Oct  2 06:45:33 np0005465987 kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct  2 06:45:33 np0005465987 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  2 06:45:33 np0005465987 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  2 06:45:33 np0005465987 kernel: BIOS-provided physical RAM map:
Oct  2 06:45:33 np0005465987 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  2 06:45:33 np0005465987 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  2 06:45:33 np0005465987 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  2 06:45:33 np0005465987 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct  2 06:45:33 np0005465987 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct  2 06:45:33 np0005465987 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  2 06:45:33 np0005465987 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  2 06:45:33 np0005465987 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct  2 06:45:33 np0005465987 kernel: NX (Execute Disable) protection: active
Oct  2 06:45:33 np0005465987 kernel: APIC: Static calls initialized
Oct  2 06:45:33 np0005465987 kernel: SMBIOS 2.8 present.
Oct  2 06:45:33 np0005465987 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct  2 06:45:33 np0005465987 kernel: Hypervisor detected: KVM
Oct  2 06:45:33 np0005465987 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  2 06:45:33 np0005465987 kernel: kvm-clock: using sched offset of 4314894420 cycles
Oct  2 06:45:33 np0005465987 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  2 06:45:33 np0005465987 kernel: tsc: Detected 2799.998 MHz processor
Oct  2 06:45:33 np0005465987 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct  2 06:45:33 np0005465987 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  2 06:45:33 np0005465987 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  2 06:45:33 np0005465987 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct  2 06:45:33 np0005465987 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct  2 06:45:33 np0005465987 kernel: Using GB pages for direct mapping
Oct  2 06:45:33 np0005465987 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  2 06:45:33 np0005465987 kernel: ACPI: Early table checksum verification disabled
Oct  2 06:45:33 np0005465987 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct  2 06:45:33 np0005465987 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:45:33 np0005465987 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:45:33 np0005465987 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:45:33 np0005465987 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct  2 06:45:33 np0005465987 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:45:33 np0005465987 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:45:33 np0005465987 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct  2 06:45:33 np0005465987 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct  2 06:45:33 np0005465987 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct  2 06:45:33 np0005465987 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct  2 06:45:33 np0005465987 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct  2 06:45:33 np0005465987 kernel: No NUMA configuration found
Oct  2 06:45:33 np0005465987 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct  2 06:45:33 np0005465987 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Oct  2 06:45:33 np0005465987 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct  2 06:45:33 np0005465987 kernel: Zone ranges:
Oct  2 06:45:33 np0005465987 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  2 06:45:33 np0005465987 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  2 06:45:33 np0005465987 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct  2 06:45:33 np0005465987 kernel:  Device   empty
Oct  2 06:45:33 np0005465987 kernel: Movable zone start for each node
Oct  2 06:45:33 np0005465987 kernel: Early memory node ranges
Oct  2 06:45:33 np0005465987 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  2 06:45:33 np0005465987 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct  2 06:45:33 np0005465987 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct  2 06:45:33 np0005465987 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct  2 06:45:33 np0005465987 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  2 06:45:33 np0005465987 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  2 06:45:33 np0005465987 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  2 06:45:33 np0005465987 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  2 06:45:33 np0005465987 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  2 06:45:33 np0005465987 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  2 06:45:33 np0005465987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  2 06:45:33 np0005465987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  2 06:45:33 np0005465987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  2 06:45:33 np0005465987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  2 06:45:33 np0005465987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  2 06:45:33 np0005465987 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  2 06:45:33 np0005465987 kernel: TSC deadline timer available
Oct  2 06:45:33 np0005465987 kernel: CPU topo: Max. logical packages:   8
Oct  2 06:45:33 np0005465987 kernel: CPU topo: Max. logical dies:       8
Oct  2 06:45:33 np0005465987 kernel: CPU topo: Max. dies per package:   1
Oct  2 06:45:33 np0005465987 kernel: CPU topo: Max. threads per core:   1
Oct  2 06:45:33 np0005465987 kernel: CPU topo: Num. cores per package:     1
Oct  2 06:45:33 np0005465987 kernel: CPU topo: Num. threads per package:   1
Oct  2 06:45:33 np0005465987 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct  2 06:45:33 np0005465987 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  2 06:45:33 np0005465987 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  2 06:45:33 np0005465987 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  2 06:45:33 np0005465987 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  2 06:45:33 np0005465987 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  2 06:45:33 np0005465987 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct  2 06:45:33 np0005465987 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct  2 06:45:33 np0005465987 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  2 06:45:33 np0005465987 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  2 06:45:33 np0005465987 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  2 06:45:33 np0005465987 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct  2 06:45:33 np0005465987 kernel: Booting paravirtualized kernel on KVM
Oct  2 06:45:33 np0005465987 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  2 06:45:33 np0005465987 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct  2 06:45:33 np0005465987 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct  2 06:45:33 np0005465987 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct  2 06:45:33 np0005465987 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  2 06:45:33 np0005465987 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  2 06:45:33 np0005465987 kernel: random: crng init done
Oct  2 06:45:33 np0005465987 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  2 06:45:33 np0005465987 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  2 06:45:33 np0005465987 kernel: Fallback order for Node 0: 0 
Oct  2 06:45:33 np0005465987 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  2 06:45:33 np0005465987 kernel: Policy zone: Normal
Oct  2 06:45:33 np0005465987 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  2 06:45:33 np0005465987 kernel: software IO TLB: area num 8.
Oct  2 06:45:33 np0005465987 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct  2 06:45:33 np0005465987 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  2 06:45:33 np0005465987 kernel: ftrace: allocated 193 pages with 3 groups
Oct  2 06:45:33 np0005465987 kernel: Dynamic Preempt: voluntary
Oct  2 06:45:33 np0005465987 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  2 06:45:33 np0005465987 kernel: rcu: 	RCU event tracing is enabled.
Oct  2 06:45:33 np0005465987 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct  2 06:45:33 np0005465987 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct  2 06:45:33 np0005465987 kernel: 	Rude variant of Tasks RCU enabled.
Oct  2 06:45:33 np0005465987 kernel: 	Tracing variant of Tasks RCU enabled.
Oct  2 06:45:33 np0005465987 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  2 06:45:33 np0005465987 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct  2 06:45:33 np0005465987 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  2 06:45:33 np0005465987 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  2 06:45:33 np0005465987 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  2 06:45:33 np0005465987 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct  2 06:45:33 np0005465987 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  2 06:45:33 np0005465987 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  2 06:45:33 np0005465987 kernel: Console: colour VGA+ 80x25
Oct  2 06:45:33 np0005465987 kernel: printk: console [ttyS0] enabled
Oct  2 06:45:33 np0005465987 kernel: ACPI: Core revision 20230331
Oct  2 06:45:33 np0005465987 kernel: APIC: Switch to symmetric I/O mode setup
Oct  2 06:45:33 np0005465987 kernel: x2apic enabled
Oct  2 06:45:33 np0005465987 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  2 06:45:33 np0005465987 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  2 06:45:33 np0005465987 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Oct  2 06:45:33 np0005465987 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  2 06:45:33 np0005465987 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  2 06:45:33 np0005465987 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  2 06:45:33 np0005465987 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  2 06:45:33 np0005465987 kernel: Spectre V2 : Mitigation: Retpolines
Oct  2 06:45:33 np0005465987 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  2 06:45:33 np0005465987 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct  2 06:45:33 np0005465987 kernel: RETBleed: Mitigation: untrained return thunk
Oct  2 06:45:33 np0005465987 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  2 06:45:33 np0005465987 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  2 06:45:33 np0005465987 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  2 06:45:33 np0005465987 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  2 06:45:33 np0005465987 kernel: x86/bugs: return thunk changed
Oct  2 06:45:33 np0005465987 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  2 06:45:33 np0005465987 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  2 06:45:33 np0005465987 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  2 06:45:33 np0005465987 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  2 06:45:33 np0005465987 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  2 06:45:33 np0005465987 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct  2 06:45:33 np0005465987 kernel: Freeing SMP alternatives memory: 40K
Oct  2 06:45:33 np0005465987 kernel: pid_max: default: 32768 minimum: 301
Oct  2 06:45:33 np0005465987 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  2 06:45:33 np0005465987 kernel: landlock: Up and running.
Oct  2 06:45:33 np0005465987 kernel: Yama: becoming mindful.
Oct  2 06:45:33 np0005465987 kernel: SELinux:  Initializing.
Oct  2 06:45:33 np0005465987 kernel: LSM support for eBPF active
Oct  2 06:45:33 np0005465987 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  2 06:45:33 np0005465987 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  2 06:45:33 np0005465987 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct  2 06:45:33 np0005465987 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  2 06:45:33 np0005465987 kernel: ... version:                0
Oct  2 06:45:33 np0005465987 kernel: ... bit width:              48
Oct  2 06:45:33 np0005465987 kernel: ... generic registers:      6
Oct  2 06:45:33 np0005465987 kernel: ... value mask:             0000ffffffffffff
Oct  2 06:45:33 np0005465987 kernel: ... max period:             00007fffffffffff
Oct  2 06:45:33 np0005465987 kernel: ... fixed-purpose events:   0
Oct  2 06:45:33 np0005465987 kernel: ... event mask:             000000000000003f
Oct  2 06:45:33 np0005465987 kernel: signal: max sigframe size: 1776
Oct  2 06:45:33 np0005465987 kernel: rcu: Hierarchical SRCU implementation.
Oct  2 06:45:33 np0005465987 kernel: rcu: 	Max phase no-delay instances is 400.
Oct  2 06:45:33 np0005465987 kernel: smp: Bringing up secondary CPUs ...
Oct  2 06:45:33 np0005465987 kernel: smpboot: x86: Booting SMP configuration:
Oct  2 06:45:33 np0005465987 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct  2 06:45:33 np0005465987 kernel: smp: Brought up 1 node, 8 CPUs
Oct  2 06:45:33 np0005465987 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Oct  2 06:45:33 np0005465987 kernel: node 0 deferred pages initialised in 27ms
Oct  2 06:45:33 np0005465987 kernel: Memory: 7765416K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616512K reserved, 0K cma-reserved)
Oct  2 06:45:33 np0005465987 kernel: devtmpfs: initialized
Oct  2 06:45:33 np0005465987 kernel: x86/mm: Memory block size: 128MB
Oct  2 06:45:33 np0005465987 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  2 06:45:33 np0005465987 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct  2 06:45:33 np0005465987 kernel: pinctrl core: initialized pinctrl subsystem
Oct  2 06:45:33 np0005465987 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  2 06:45:33 np0005465987 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  2 06:45:33 np0005465987 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  2 06:45:33 np0005465987 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  2 06:45:33 np0005465987 kernel: audit: initializing netlink subsys (disabled)
Oct  2 06:45:33 np0005465987 kernel: audit: type=2000 audit(1759401932.025:1): state=initialized audit_enabled=0 res=1
Oct  2 06:45:33 np0005465987 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  2 06:45:33 np0005465987 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  2 06:45:33 np0005465987 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  2 06:45:33 np0005465987 kernel: cpuidle: using governor menu
Oct  2 06:45:33 np0005465987 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  2 06:45:33 np0005465987 kernel: PCI: Using configuration type 1 for base access
Oct  2 06:45:33 np0005465987 kernel: PCI: Using configuration type 1 for extended access
Oct  2 06:45:33 np0005465987 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  2 06:45:33 np0005465987 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  2 06:45:33 np0005465987 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  2 06:45:33 np0005465987 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  2 06:45:33 np0005465987 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  2 06:45:33 np0005465987 kernel: Demotion targets for Node 0: null
Oct  2 06:45:33 np0005465987 kernel: cryptd: max_cpu_qlen set to 1000
Oct  2 06:45:33 np0005465987 kernel: ACPI: Added _OSI(Module Device)
Oct  2 06:45:33 np0005465987 kernel: ACPI: Added _OSI(Processor Device)
Oct  2 06:45:33 np0005465987 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  2 06:45:33 np0005465987 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  2 06:45:33 np0005465987 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  2 06:45:33 np0005465987 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  2 06:45:33 np0005465987 kernel: ACPI: Interpreter enabled
Oct  2 06:45:33 np0005465987 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct  2 06:45:33 np0005465987 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  2 06:45:33 np0005465987 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  2 06:45:33 np0005465987 kernel: PCI: Using E820 reservations for host bridge windows
Oct  2 06:45:33 np0005465987 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct  2 06:45:33 np0005465987 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  2 06:45:33 np0005465987 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [3] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [4] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [5] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [6] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [7] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [8] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [9] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [10] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [11] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [12] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [13] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [14] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [15] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [16] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [17] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [18] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [19] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [20] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [21] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [22] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [23] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [24] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [25] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [26] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [27] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [28] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [29] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [30] registered
Oct  2 06:45:33 np0005465987 kernel: acpiphp: Slot [31] registered
Oct  2 06:45:33 np0005465987 kernel: PCI host bridge to bus 0000:00
Oct  2 06:45:33 np0005465987 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  2 06:45:33 np0005465987 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  2 06:45:33 np0005465987 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  2 06:45:33 np0005465987 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  2 06:45:33 np0005465987 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct  2 06:45:33 np0005465987 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct  2 06:45:33 np0005465987 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  2 06:45:33 np0005465987 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  2 06:45:33 np0005465987 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  2 06:45:33 np0005465987 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  2 06:45:33 np0005465987 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct  2 06:45:33 np0005465987 kernel: iommu: Default domain type: Translated
Oct  2 06:45:33 np0005465987 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  2 06:45:33 np0005465987 kernel: SCSI subsystem initialized
Oct  2 06:45:33 np0005465987 kernel: ACPI: bus type USB registered
Oct  2 06:45:33 np0005465987 kernel: usbcore: registered new interface driver usbfs
Oct  2 06:45:33 np0005465987 kernel: usbcore: registered new interface driver hub
Oct  2 06:45:33 np0005465987 kernel: usbcore: registered new device driver usb
Oct  2 06:45:33 np0005465987 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  2 06:45:33 np0005465987 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  2 06:45:33 np0005465987 kernel: PTP clock support registered
Oct  2 06:45:33 np0005465987 kernel: EDAC MC: Ver: 3.0.0
Oct  2 06:45:33 np0005465987 kernel: NetLabel: Initializing
Oct  2 06:45:33 np0005465987 kernel: NetLabel:  domain hash size = 128
Oct  2 06:45:33 np0005465987 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  2 06:45:33 np0005465987 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  2 06:45:33 np0005465987 kernel: PCI: Using ACPI for IRQ routing
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  2 06:45:33 np0005465987 kernel: vgaarb: loaded
Oct  2 06:45:33 np0005465987 kernel: clocksource: Switched to clocksource kvm-clock
Oct  2 06:45:33 np0005465987 kernel: VFS: Disk quotas dquot_6.6.0
Oct  2 06:45:33 np0005465987 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  2 06:45:33 np0005465987 kernel: pnp: PnP ACPI init
Oct  2 06:45:33 np0005465987 kernel: pnp: PnP ACPI: found 5 devices
Oct  2 06:45:33 np0005465987 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  2 06:45:33 np0005465987 kernel: NET: Registered PF_INET protocol family
Oct  2 06:45:33 np0005465987 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  2 06:45:33 np0005465987 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  2 06:45:33 np0005465987 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  2 06:45:33 np0005465987 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  2 06:45:33 np0005465987 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  2 06:45:33 np0005465987 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  2 06:45:33 np0005465987 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  2 06:45:33 np0005465987 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  2 06:45:33 np0005465987 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  2 06:45:33 np0005465987 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  2 06:45:33 np0005465987 kernel: NET: Registered PF_XDP protocol family
Oct  2 06:45:33 np0005465987 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  2 06:45:33 np0005465987 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  2 06:45:33 np0005465987 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  2 06:45:33 np0005465987 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct  2 06:45:33 np0005465987 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct  2 06:45:33 np0005465987 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct  2 06:45:33 np0005465987 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 72796 usecs
Oct  2 06:45:33 np0005465987 kernel: PCI: CLS 0 bytes, default 64
Oct  2 06:45:33 np0005465987 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  2 06:45:33 np0005465987 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct  2 06:45:33 np0005465987 kernel: ACPI: bus type thunderbolt registered
Oct  2 06:45:33 np0005465987 kernel: Trying to unpack rootfs image as initramfs...
Oct  2 06:45:33 np0005465987 kernel: Initialise system trusted keyrings
Oct  2 06:45:33 np0005465987 kernel: Key type blacklist registered
Oct  2 06:45:33 np0005465987 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  2 06:45:33 np0005465987 kernel: zbud: loaded
Oct  2 06:45:33 np0005465987 kernel: integrity: Platform Keyring initialized
Oct  2 06:45:33 np0005465987 kernel: integrity: Machine keyring initialized
Oct  2 06:45:33 np0005465987 kernel: Freeing initrd memory: 86104K
Oct  2 06:45:33 np0005465987 kernel: NET: Registered PF_ALG protocol family
Oct  2 06:45:33 np0005465987 kernel: xor: automatically using best checksumming function   avx       
Oct  2 06:45:33 np0005465987 kernel: Key type asymmetric registered
Oct  2 06:45:33 np0005465987 kernel: Asymmetric key parser 'x509' registered
Oct  2 06:45:33 np0005465987 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  2 06:45:33 np0005465987 kernel: io scheduler mq-deadline registered
Oct  2 06:45:33 np0005465987 kernel: io scheduler kyber registered
Oct  2 06:45:33 np0005465987 kernel: io scheduler bfq registered
Oct  2 06:45:33 np0005465987 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  2 06:45:33 np0005465987 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  2 06:45:33 np0005465987 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  2 06:45:33 np0005465987 kernel: ACPI: button: Power Button [PWRF]
Oct  2 06:45:33 np0005465987 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct  2 06:45:33 np0005465987 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct  2 06:45:33 np0005465987 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct  2 06:45:33 np0005465987 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  2 06:45:33 np0005465987 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  2 06:45:33 np0005465987 kernel: Non-volatile memory driver v1.3
Oct  2 06:45:33 np0005465987 kernel: rdac: device handler registered
Oct  2 06:45:33 np0005465987 kernel: hp_sw: device handler registered
Oct  2 06:45:33 np0005465987 kernel: emc: device handler registered
Oct  2 06:45:33 np0005465987 kernel: alua: device handler registered
Oct  2 06:45:33 np0005465987 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct  2 06:45:33 np0005465987 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct  2 06:45:33 np0005465987 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct  2 06:45:33 np0005465987 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct  2 06:45:33 np0005465987 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  2 06:45:33 np0005465987 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  2 06:45:33 np0005465987 kernel: usb usb1: Product: UHCI Host Controller
Oct  2 06:45:33 np0005465987 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  2 06:45:33 np0005465987 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct  2 06:45:33 np0005465987 kernel: hub 1-0:1.0: USB hub found
Oct  2 06:45:33 np0005465987 kernel: hub 1-0:1.0: 2 ports detected
Oct  2 06:45:33 np0005465987 kernel: usbcore: registered new interface driver usbserial_generic
Oct  2 06:45:33 np0005465987 kernel: usbserial: USB Serial support registered for generic
Oct  2 06:45:33 np0005465987 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  2 06:45:33 np0005465987 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  2 06:45:33 np0005465987 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  2 06:45:33 np0005465987 kernel: mousedev: PS/2 mouse device common for all mice
Oct  2 06:45:33 np0005465987 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct  2 06:45:33 np0005465987 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  2 06:45:33 np0005465987 kernel: rtc_cmos 00:04: registered as rtc0
Oct  2 06:45:33 np0005465987 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  2 06:45:33 np0005465987 kernel: rtc_cmos 00:04: setting system clock to 2025-10-02T10:45:32 UTC (1759401932)
Oct  2 06:45:33 np0005465987 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct  2 06:45:33 np0005465987 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  2 06:45:33 np0005465987 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  2 06:45:33 np0005465987 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  2 06:45:33 np0005465987 kernel: usbcore: registered new interface driver usbhid
Oct  2 06:45:33 np0005465987 kernel: usbhid: USB HID core driver
Oct  2 06:45:33 np0005465987 kernel: drop_monitor: Initializing network drop monitor service
Oct  2 06:45:33 np0005465987 kernel: Initializing XFRM netlink socket
Oct  2 06:45:33 np0005465987 kernel: NET: Registered PF_INET6 protocol family
Oct  2 06:45:33 np0005465987 kernel: Segment Routing with IPv6
Oct  2 06:45:33 np0005465987 kernel: NET: Registered PF_PACKET protocol family
Oct  2 06:45:33 np0005465987 kernel: mpls_gso: MPLS GSO support
Oct  2 06:45:33 np0005465987 kernel: IPI shorthand broadcast: enabled
Oct  2 06:45:33 np0005465987 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  2 06:45:33 np0005465987 kernel: AES CTR mode by8 optimization enabled
Oct  2 06:45:33 np0005465987 kernel: sched_clock: Marking stable (1170012398, 148154863)->(1451131818, -132964557)
Oct  2 06:45:33 np0005465987 kernel: registered taskstats version 1
Oct  2 06:45:33 np0005465987 kernel: Loading compiled-in X.509 certificates
Oct  2 06:45:33 np0005465987 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  2 06:45:33 np0005465987 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  2 06:45:33 np0005465987 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  2 06:45:33 np0005465987 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  2 06:45:33 np0005465987 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  2 06:45:33 np0005465987 kernel: Demotion targets for Node 0: null
Oct  2 06:45:33 np0005465987 kernel: page_owner is disabled
Oct  2 06:45:33 np0005465987 kernel: Key type .fscrypt registered
Oct  2 06:45:33 np0005465987 kernel: Key type fscrypt-provisioning registered
Oct  2 06:45:33 np0005465987 kernel: Key type big_key registered
Oct  2 06:45:33 np0005465987 kernel: Key type encrypted registered
Oct  2 06:45:33 np0005465987 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  2 06:45:33 np0005465987 kernel: Loading compiled-in module X.509 certificates
Oct  2 06:45:33 np0005465987 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  2 06:45:33 np0005465987 kernel: ima: Allocated hash algorithm: sha256
Oct  2 06:45:33 np0005465987 kernel: ima: No architecture policies found
Oct  2 06:45:33 np0005465987 kernel: evm: Initialising EVM extended attributes:
Oct  2 06:45:33 np0005465987 kernel: evm: security.selinux
Oct  2 06:45:33 np0005465987 kernel: evm: security.SMACK64 (disabled)
Oct  2 06:45:33 np0005465987 kernel: evm: security.SMACK64EXEC (disabled)
Oct  2 06:45:33 np0005465987 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  2 06:45:33 np0005465987 kernel: evm: security.SMACK64MMAP (disabled)
Oct  2 06:45:33 np0005465987 kernel: evm: security.apparmor (disabled)
Oct  2 06:45:33 np0005465987 kernel: evm: security.ima
Oct  2 06:45:33 np0005465987 kernel: evm: security.capability
Oct  2 06:45:33 np0005465987 kernel: evm: HMAC attrs: 0x1
Oct  2 06:45:33 np0005465987 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  2 06:45:33 np0005465987 kernel: Running certificate verification RSA selftest
Oct  2 06:45:33 np0005465987 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  2 06:45:33 np0005465987 kernel: Running certificate verification ECDSA selftest
Oct  2 06:45:33 np0005465987 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  2 06:45:33 np0005465987 kernel: clk: Disabling unused clocks
Oct  2 06:45:33 np0005465987 kernel: Freeing unused decrypted memory: 2028K
Oct  2 06:45:33 np0005465987 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  2 06:45:33 np0005465987 kernel: Write protecting the kernel read-only data: 30720k
Oct  2 06:45:33 np0005465987 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  2 06:45:33 np0005465987 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  2 06:45:33 np0005465987 kernel: Run /init as init process
Oct  2 06:45:33 np0005465987 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  2 06:45:33 np0005465987 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  2 06:45:33 np0005465987 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  2 06:45:33 np0005465987 kernel: usb 1-1: Manufacturer: QEMU
Oct  2 06:45:33 np0005465987 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct  2 06:45:33 np0005465987 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  2 06:45:33 np0005465987 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct  2 06:45:33 np0005465987 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  2 06:45:33 np0005465987 systemd: Detected virtualization kvm.
Oct  2 06:45:33 np0005465987 systemd: Detected architecture x86-64.
Oct  2 06:45:33 np0005465987 systemd: Running in initrd.
Oct  2 06:45:33 np0005465987 systemd: No hostname configured, using default hostname.
Oct  2 06:45:33 np0005465987 systemd: Hostname set to <localhost>.
Oct  2 06:45:33 np0005465987 systemd: Initializing machine ID from VM UUID.
Oct  2 06:45:33 np0005465987 systemd: Queued start job for default target Initrd Default Target.
Oct  2 06:45:33 np0005465987 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  2 06:45:33 np0005465987 systemd: Reached target Local Encrypted Volumes.
Oct  2 06:45:33 np0005465987 systemd: Reached target Initrd /usr File System.
Oct  2 06:45:33 np0005465987 systemd: Reached target Local File Systems.
Oct  2 06:45:33 np0005465987 systemd: Reached target Path Units.
Oct  2 06:45:33 np0005465987 systemd: Reached target Slice Units.
Oct  2 06:45:33 np0005465987 systemd: Reached target Swaps.
Oct  2 06:45:33 np0005465987 systemd: Reached target Timer Units.
Oct  2 06:45:33 np0005465987 systemd: Listening on D-Bus System Message Bus Socket.
Oct  2 06:45:33 np0005465987 systemd: Listening on Journal Socket (/dev/log).
Oct  2 06:45:33 np0005465987 systemd: Listening on Journal Socket.
Oct  2 06:45:33 np0005465987 systemd: Listening on udev Control Socket.
Oct  2 06:45:33 np0005465987 systemd: Listening on udev Kernel Socket.
Oct  2 06:45:33 np0005465987 systemd: Reached target Socket Units.
Oct  2 06:45:33 np0005465987 systemd: Starting Create List of Static Device Nodes...
Oct  2 06:45:33 np0005465987 systemd: Starting Journal Service...
Oct  2 06:45:33 np0005465987 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  2 06:45:33 np0005465987 systemd: Starting Apply Kernel Variables...
Oct  2 06:45:33 np0005465987 systemd: Starting Create System Users...
Oct  2 06:45:33 np0005465987 systemd: Starting Setup Virtual Console...
Oct  2 06:45:33 np0005465987 systemd: Finished Create List of Static Device Nodes.
Oct  2 06:45:33 np0005465987 systemd: Finished Apply Kernel Variables.
Oct  2 06:45:33 np0005465987 systemd-journald[307]: Journal started
Oct  2 06:45:33 np0005465987 systemd-journald[307]: Runtime Journal (/run/log/journal/1113ba579df94fb0a2ed0ca31cebd82a) is 8.0M, max 153.5M, 145.5M free.
Oct  2 06:45:33 np0005465987 systemd: Started Journal Service.
Oct  2 06:45:33 np0005465987 systemd-sysusers[312]: Creating group 'users' with GID 100.
Oct  2 06:45:33 np0005465987 systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Oct  2 06:45:33 np0005465987 systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  2 06:45:33 np0005465987 systemd[1]: Finished Create System Users.
Oct  2 06:45:33 np0005465987 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  2 06:45:33 np0005465987 systemd[1]: Starting Create Volatile Files and Directories...
Oct  2 06:45:33 np0005465987 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  2 06:45:33 np0005465987 systemd[1]: Finished Setup Virtual Console.
Oct  2 06:45:33 np0005465987 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  2 06:45:33 np0005465987 systemd[1]: Starting dracut cmdline hook...
Oct  2 06:45:33 np0005465987 systemd[1]: Finished Create Volatile Files and Directories.
Oct  2 06:45:33 np0005465987 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Oct  2 06:45:33 np0005465987 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  2 06:45:33 np0005465987 systemd[1]: Finished dracut cmdline hook.
Oct  2 06:45:33 np0005465987 systemd[1]: Starting dracut pre-udev hook...
Oct  2 06:45:33 np0005465987 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  2 06:45:33 np0005465987 kernel: device-mapper: uevent: version 1.0.3
Oct  2 06:45:33 np0005465987 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  2 06:45:33 np0005465987 kernel: RPC: Registered named UNIX socket transport module.
Oct  2 06:45:33 np0005465987 kernel: RPC: Registered udp transport module.
Oct  2 06:45:33 np0005465987 kernel: RPC: Registered tcp transport module.
Oct  2 06:45:33 np0005465987 kernel: RPC: Registered tcp-with-tls transport module.
Oct  2 06:45:33 np0005465987 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  2 06:45:33 np0005465987 rpc.statd[444]: Version 2.5.4 starting
Oct  2 06:45:33 np0005465987 rpc.statd[444]: Initializing NSM state
Oct  2 06:45:33 np0005465987 rpc.idmapd[449]: Setting log level to 0
Oct  2 06:45:33 np0005465987 systemd[1]: Finished dracut pre-udev hook.
Oct  2 06:45:33 np0005465987 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  2 06:45:33 np0005465987 systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Oct  2 06:45:33 np0005465987 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  2 06:45:33 np0005465987 systemd[1]: Starting dracut pre-trigger hook...
Oct  2 06:45:33 np0005465987 systemd[1]: Finished dracut pre-trigger hook.
Oct  2 06:45:33 np0005465987 systemd[1]: Starting Coldplug All udev Devices...
Oct  2 06:45:33 np0005465987 systemd[1]: Created slice Slice /system/modprobe.
Oct  2 06:45:33 np0005465987 systemd[1]: Starting Load Kernel Module configfs...
Oct  2 06:45:33 np0005465987 systemd[1]: Finished Coldplug All udev Devices.
Oct  2 06:45:33 np0005465987 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  2 06:45:33 np0005465987 systemd[1]: Finished Load Kernel Module configfs.
Oct  2 06:45:34 np0005465987 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  2 06:45:34 np0005465987 systemd[1]: Reached target Network.
Oct  2 06:45:34 np0005465987 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  2 06:45:34 np0005465987 systemd[1]: Starting dracut initqueue hook...
Oct  2 06:45:34 np0005465987 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct  2 06:45:34 np0005465987 kernel: scsi host0: ata_piix
Oct  2 06:45:34 np0005465987 kernel: scsi host1: ata_piix
Oct  2 06:45:34 np0005465987 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct  2 06:45:34 np0005465987 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct  2 06:45:34 np0005465987 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  2 06:45:34 np0005465987 kernel: vda: vda1
Oct  2 06:45:34 np0005465987 systemd[1]: Mounting Kernel Configuration File System...
Oct  2 06:45:34 np0005465987 systemd[1]: Mounted Kernel Configuration File System.
Oct  2 06:45:34 np0005465987 systemd[1]: Reached target System Initialization.
Oct  2 06:45:34 np0005465987 systemd[1]: Reached target Basic System.
Oct  2 06:45:34 np0005465987 kernel: ata1: found unknown device (class 0)
Oct  2 06:45:34 np0005465987 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  2 06:45:34 np0005465987 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  2 06:45:34 np0005465987 systemd-udevd[487]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 06:45:34 np0005465987 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  2 06:45:34 np0005465987 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  2 06:45:34 np0005465987 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  2 06:45:34 np0005465987 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  2 06:45:34 np0005465987 systemd[1]: Reached target Initrd Root Device.
Oct  2 06:45:34 np0005465987 systemd[1]: Finished dracut initqueue hook.
Oct  2 06:45:34 np0005465987 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  2 06:45:34 np0005465987 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  2 06:45:34 np0005465987 systemd[1]: Reached target Remote File Systems.
Oct  2 06:45:34 np0005465987 systemd[1]: Starting dracut pre-mount hook...
Oct  2 06:45:34 np0005465987 systemd[1]: Finished dracut pre-mount hook.
Oct  2 06:45:34 np0005465987 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  2 06:45:34 np0005465987 systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Oct  2 06:45:34 np0005465987 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  2 06:45:34 np0005465987 systemd[1]: Mounting /sysroot...
Oct  2 06:45:34 np0005465987 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  2 06:45:34 np0005465987 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  2 06:45:34 np0005465987 kernel: XFS (vda1): Ending clean mount
Oct  2 06:45:34 np0005465987 systemd[1]: Mounted /sysroot.
Oct  2 06:45:34 np0005465987 systemd[1]: Reached target Initrd Root File System.
Oct  2 06:45:34 np0005465987 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  2 06:45:35 np0005465987 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  2 06:45:35 np0005465987 systemd[1]: Reached target Initrd File Systems.
Oct  2 06:45:35 np0005465987 systemd[1]: Reached target Initrd Default Target.
Oct  2 06:45:35 np0005465987 systemd[1]: Starting dracut mount hook...
Oct  2 06:45:35 np0005465987 systemd[1]: Finished dracut mount hook.
Oct  2 06:45:35 np0005465987 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  2 06:45:35 np0005465987 rpc.idmapd[449]: exiting on signal 15
Oct  2 06:45:35 np0005465987 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  2 06:45:35 np0005465987 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Network.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Timer Units.
Oct  2 06:45:35 np0005465987 systemd[1]: dbus.socket: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  2 06:45:35 np0005465987 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Initrd Default Target.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Basic System.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Initrd Root Device.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Initrd /usr File System.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Path Units.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Remote File Systems.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Slice Units.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Socket Units.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target System Initialization.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Local File Systems.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Swaps.
Oct  2 06:45:35 np0005465987 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped dracut mount hook.
Oct  2 06:45:35 np0005465987 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped dracut pre-mount hook.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  2 06:45:35 np0005465987 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  2 06:45:35 np0005465987 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped dracut initqueue hook.
Oct  2 06:45:35 np0005465987 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped Apply Kernel Variables.
Oct  2 06:45:35 np0005465987 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  2 06:45:35 np0005465987 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped Coldplug All udev Devices.
Oct  2 06:45:35 np0005465987 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped dracut pre-trigger hook.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  2 06:45:35 np0005465987 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped Setup Virtual Console.
Oct  2 06:45:35 np0005465987 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  2 06:45:35 np0005465987 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  2 06:45:35 np0005465987 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Closed udev Control Socket.
Oct  2 06:45:35 np0005465987 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Closed udev Kernel Socket.
Oct  2 06:45:35 np0005465987 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped dracut pre-udev hook.
Oct  2 06:45:35 np0005465987 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped dracut cmdline hook.
Oct  2 06:45:35 np0005465987 systemd[1]: Starting Cleanup udev Database...
Oct  2 06:45:35 np0005465987 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  2 06:45:35 np0005465987 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  2 06:45:35 np0005465987 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Stopped Create System Users.
Oct  2 06:45:35 np0005465987 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  2 06:45:35 np0005465987 systemd[1]: Finished Cleanup udev Database.
Oct  2 06:45:35 np0005465987 systemd[1]: Reached target Switch Root.
Oct  2 06:45:35 np0005465987 systemd[1]: Starting Switch Root...
Oct  2 06:45:35 np0005465987 systemd[1]: Switching root.
Oct  2 06:45:35 np0005465987 systemd-journald[307]: Journal stopped
Oct  2 06:45:36 np0005465987 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  2 06:45:36 np0005465987 kernel: audit: type=1404 audit(1759401935.460:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  2 06:45:36 np0005465987 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 06:45:36 np0005465987 kernel: SELinux:  policy capability open_perms=1
Oct  2 06:45:36 np0005465987 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 06:45:36 np0005465987 kernel: SELinux:  policy capability always_check_network=0
Oct  2 06:45:36 np0005465987 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 06:45:36 np0005465987 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 06:45:36 np0005465987 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 06:45:36 np0005465987 kernel: audit: type=1403 audit(1759401935.623:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  2 06:45:36 np0005465987 systemd: Successfully loaded SELinux policy in 167.659ms.
Oct  2 06:45:36 np0005465987 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.159ms.
Oct  2 06:45:36 np0005465987 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  2 06:45:36 np0005465987 systemd: Detected virtualization kvm.
Oct  2 06:45:36 np0005465987 systemd: Detected architecture x86-64.
Oct  2 06:45:36 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 06:45:36 np0005465987 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  2 06:45:36 np0005465987 systemd: Stopped Switch Root.
Oct  2 06:45:36 np0005465987 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  2 06:45:36 np0005465987 systemd: Created slice Slice /system/getty.
Oct  2 06:45:36 np0005465987 systemd: Created slice Slice /system/serial-getty.
Oct  2 06:45:36 np0005465987 systemd: Created slice Slice /system/sshd-keygen.
Oct  2 06:45:36 np0005465987 systemd: Created slice User and Session Slice.
Oct  2 06:45:36 np0005465987 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  2 06:45:36 np0005465987 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  2 06:45:36 np0005465987 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  2 06:45:36 np0005465987 systemd: Reached target Local Encrypted Volumes.
Oct  2 06:45:36 np0005465987 systemd: Stopped target Switch Root.
Oct  2 06:45:36 np0005465987 systemd: Stopped target Initrd File Systems.
Oct  2 06:45:36 np0005465987 systemd: Stopped target Initrd Root File System.
Oct  2 06:45:36 np0005465987 systemd: Reached target Local Integrity Protected Volumes.
Oct  2 06:45:36 np0005465987 systemd: Reached target Path Units.
Oct  2 06:45:36 np0005465987 systemd: Reached target rpc_pipefs.target.
Oct  2 06:45:36 np0005465987 systemd: Reached target Slice Units.
Oct  2 06:45:36 np0005465987 systemd: Reached target Swaps.
Oct  2 06:45:36 np0005465987 systemd: Reached target Local Verity Protected Volumes.
Oct  2 06:45:36 np0005465987 systemd: Listening on RPCbind Server Activation Socket.
Oct  2 06:45:36 np0005465987 systemd: Reached target RPC Port Mapper.
Oct  2 06:45:36 np0005465987 systemd: Listening on Process Core Dump Socket.
Oct  2 06:45:36 np0005465987 systemd: Listening on initctl Compatibility Named Pipe.
Oct  2 06:45:36 np0005465987 systemd: Listening on udev Control Socket.
Oct  2 06:45:36 np0005465987 systemd: Listening on udev Kernel Socket.
Oct  2 06:45:36 np0005465987 systemd: Mounting Huge Pages File System...
Oct  2 06:45:36 np0005465987 systemd: Mounting POSIX Message Queue File System...
Oct  2 06:45:36 np0005465987 systemd: Mounting Kernel Debug File System...
Oct  2 06:45:36 np0005465987 systemd: Mounting Kernel Trace File System...
Oct  2 06:45:36 np0005465987 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  2 06:45:36 np0005465987 systemd: Starting Create List of Static Device Nodes...
Oct  2 06:45:36 np0005465987 systemd: Starting Load Kernel Module configfs...
Oct  2 06:45:36 np0005465987 systemd: Starting Load Kernel Module drm...
Oct  2 06:45:36 np0005465987 systemd: Starting Load Kernel Module efi_pstore...
Oct  2 06:45:36 np0005465987 systemd: Starting Load Kernel Module fuse...
Oct  2 06:45:36 np0005465987 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  2 06:45:36 np0005465987 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  2 06:45:36 np0005465987 systemd: Stopped File System Check on Root Device.
Oct  2 06:45:36 np0005465987 systemd: Stopped Journal Service.
Oct  2 06:45:36 np0005465987 systemd: Starting Journal Service...
Oct  2 06:45:36 np0005465987 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  2 06:45:36 np0005465987 systemd: Starting Generate network units from Kernel command line...
Oct  2 06:45:36 np0005465987 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 06:45:36 np0005465987 systemd: Starting Remount Root and Kernel File Systems...
Oct  2 06:45:36 np0005465987 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  2 06:45:36 np0005465987 systemd: Starting Apply Kernel Variables...
Oct  2 06:45:36 np0005465987 systemd-journald[683]: Journal started
Oct  2 06:45:36 np0005465987 systemd-journald[683]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  2 06:45:36 np0005465987 systemd[1]: Queued start job for default target Multi-User System.
Oct  2 06:45:36 np0005465987 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  2 06:45:36 np0005465987 kernel: fuse: init (API version 7.37)
Oct  2 06:45:36 np0005465987 systemd: Starting Coldplug All udev Devices...
Oct  2 06:45:36 np0005465987 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  2 06:45:36 np0005465987 systemd: Started Journal Service.
Oct  2 06:45:36 np0005465987 systemd[1]: Mounted Huge Pages File System.
Oct  2 06:45:36 np0005465987 systemd[1]: Mounted POSIX Message Queue File System.
Oct  2 06:45:36 np0005465987 systemd[1]: Mounted Kernel Debug File System.
Oct  2 06:45:36 np0005465987 systemd[1]: Mounted Kernel Trace File System.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Create List of Static Device Nodes.
Oct  2 06:45:36 np0005465987 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Load Kernel Module configfs.
Oct  2 06:45:36 np0005465987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  2 06:45:36 np0005465987 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Load Kernel Module fuse.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Generate network units from Kernel command line.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Apply Kernel Variables.
Oct  2 06:45:36 np0005465987 kernel: ACPI: bus type drm_connector registered
Oct  2 06:45:36 np0005465987 systemd[1]: Mounting FUSE Control File System...
Oct  2 06:45:36 np0005465987 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  2 06:45:36 np0005465987 systemd[1]: Starting Rebuild Hardware Database...
Oct  2 06:45:36 np0005465987 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  2 06:45:36 np0005465987 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  2 06:45:36 np0005465987 systemd[1]: Starting Load/Save OS Random Seed...
Oct  2 06:45:36 np0005465987 systemd[1]: Starting Create System Users...
Oct  2 06:45:36 np0005465987 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Load Kernel Module drm.
Oct  2 06:45:36 np0005465987 systemd[1]: Mounted FUSE Control File System.
Oct  2 06:45:36 np0005465987 systemd-journald[683]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  2 06:45:36 np0005465987 systemd-journald[683]: Received client request to flush runtime journal.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Load/Save OS Random Seed.
Oct  2 06:45:36 np0005465987 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Coldplug All udev Devices.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Create System Users.
Oct  2 06:45:36 np0005465987 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  2 06:45:36 np0005465987 systemd[1]: Reached target Preparation for Local File Systems.
Oct  2 06:45:36 np0005465987 systemd[1]: Reached target Local File Systems.
Oct  2 06:45:36 np0005465987 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct  2 06:45:36 np0005465987 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  2 06:45:36 np0005465987 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  2 06:45:36 np0005465987 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  2 06:45:36 np0005465987 systemd[1]: Starting Automatic Boot Loader Update...
Oct  2 06:45:36 np0005465987 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  2 06:45:36 np0005465987 systemd[1]: Starting Create Volatile Files and Directories...
Oct  2 06:45:36 np0005465987 bootctl[701]: Couldn't find EFI system partition, skipping.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Automatic Boot Loader Update.
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Create Volatile Files and Directories.
Oct  2 06:45:36 np0005465987 systemd[1]: Starting Security Auditing Service...
Oct  2 06:45:36 np0005465987 systemd[1]: Starting RPC Bind...
Oct  2 06:45:36 np0005465987 systemd[1]: Starting Rebuild Journal Catalog...
Oct  2 06:45:36 np0005465987 auditd[707]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  2 06:45:36 np0005465987 auditd[707]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Rebuild Journal Catalog.
Oct  2 06:45:36 np0005465987 augenrules[712]: /sbin/augenrules: No change
Oct  2 06:45:36 np0005465987 systemd[1]: Started RPC Bind.
Oct  2 06:45:36 np0005465987 augenrules[727]: No rules
Oct  2 06:45:36 np0005465987 augenrules[727]: enabled 1
Oct  2 06:45:36 np0005465987 augenrules[727]: failure 1
Oct  2 06:45:36 np0005465987 augenrules[727]: pid 707
Oct  2 06:45:36 np0005465987 augenrules[727]: rate_limit 0
Oct  2 06:45:36 np0005465987 augenrules[727]: backlog_limit 8192
Oct  2 06:45:36 np0005465987 augenrules[727]: lost 0
Oct  2 06:45:36 np0005465987 augenrules[727]: backlog 1
Oct  2 06:45:36 np0005465987 augenrules[727]: backlog_wait_time 60000
Oct  2 06:45:36 np0005465987 augenrules[727]: backlog_wait_time_actual 0
Oct  2 06:45:36 np0005465987 augenrules[727]: enabled 1
Oct  2 06:45:36 np0005465987 augenrules[727]: failure 1
Oct  2 06:45:36 np0005465987 augenrules[727]: pid 707
Oct  2 06:45:36 np0005465987 augenrules[727]: rate_limit 0
Oct  2 06:45:36 np0005465987 augenrules[727]: backlog_limit 8192
Oct  2 06:45:36 np0005465987 augenrules[727]: lost 0
Oct  2 06:45:36 np0005465987 augenrules[727]: backlog 0
Oct  2 06:45:36 np0005465987 augenrules[727]: backlog_wait_time 60000
Oct  2 06:45:36 np0005465987 augenrules[727]: backlog_wait_time_actual 0
Oct  2 06:45:36 np0005465987 systemd[1]: Started Security Auditing Service.
Oct  2 06:45:36 np0005465987 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  2 06:45:36 np0005465987 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  2 06:45:37 np0005465987 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct  2 06:45:37 np0005465987 systemd[1]: Finished Rebuild Hardware Database.
Oct  2 06:45:37 np0005465987 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  2 06:45:37 np0005465987 systemd[1]: Starting Update is Completed...
Oct  2 06:45:37 np0005465987 systemd[1]: Finished Update is Completed.
Oct  2 06:45:37 np0005465987 systemd-udevd[735]: Using default interface naming scheme 'rhel-9.0'.
Oct  2 06:45:37 np0005465987 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  2 06:45:37 np0005465987 systemd[1]: Reached target System Initialization.
Oct  2 06:45:37 np0005465987 systemd[1]: Started dnf makecache --timer.
Oct  2 06:45:37 np0005465987 systemd[1]: Started Daily rotation of log files.
Oct  2 06:45:37 np0005465987 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  2 06:45:37 np0005465987 systemd[1]: Reached target Timer Units.
Oct  2 06:45:37 np0005465987 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  2 06:45:37 np0005465987 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  2 06:45:37 np0005465987 systemd[1]: Reached target Socket Units.
Oct  2 06:45:37 np0005465987 systemd[1]: Starting D-Bus System Message Bus...
Oct  2 06:45:37 np0005465987 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 06:45:37 np0005465987 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  2 06:45:37 np0005465987 systemd[1]: Starting Load Kernel Module configfs...
Oct  2 06:45:37 np0005465987 systemd-udevd[747]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 06:45:37 np0005465987 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  2 06:45:37 np0005465987 systemd[1]: Finished Load Kernel Module configfs.
Oct  2 06:45:37 np0005465987 systemd[1]: Started D-Bus System Message Bus.
Oct  2 06:45:37 np0005465987 systemd[1]: Reached target Basic System.
Oct  2 06:45:37 np0005465987 dbus-broker-lau[768]: Ready
Oct  2 06:45:37 np0005465987 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  2 06:45:37 np0005465987 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct  2 06:45:37 np0005465987 systemd[1]: Starting NTP client/server...
Oct  2 06:45:37 np0005465987 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  2 06:45:37 np0005465987 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  2 06:45:37 np0005465987 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  2 06:45:37 np0005465987 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  2 06:45:37 np0005465987 systemd[1]: Starting IPv4 firewall with iptables...
Oct  2 06:45:37 np0005465987 systemd[1]: Started irqbalance daemon.
Oct  2 06:45:37 np0005465987 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  2 06:45:37 np0005465987 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 06:45:37 np0005465987 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 06:45:37 np0005465987 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 06:45:37 np0005465987 systemd[1]: Reached target sshd-keygen.target.
Oct  2 06:45:37 np0005465987 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  2 06:45:37 np0005465987 systemd[1]: Reached target User and Group Name Lookups.
Oct  2 06:45:37 np0005465987 chronyd[796]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  2 06:45:37 np0005465987 chronyd[796]: Loaded 0 symmetric keys
Oct  2 06:45:37 np0005465987 systemd[1]: Starting User Login Management...
Oct  2 06:45:37 np0005465987 chronyd[796]: Using right/UTC timezone to obtain leap second data
Oct  2 06:45:37 np0005465987 chronyd[796]: Loaded seccomp filter (level 2)
Oct  2 06:45:37 np0005465987 systemd[1]: Started NTP client/server.
Oct  2 06:45:37 np0005465987 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  2 06:45:37 np0005465987 systemd-logind[794]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  2 06:45:37 np0005465987 systemd-logind[794]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  2 06:45:37 np0005465987 systemd-logind[794]: New seat seat0.
Oct  2 06:45:37 np0005465987 systemd[1]: Started User Login Management.
Oct  2 06:45:37 np0005465987 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  2 06:45:37 np0005465987 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  2 06:45:37 np0005465987 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct  2 06:45:37 np0005465987 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct  2 06:45:37 np0005465987 kernel: Console: switching to colour dummy device 80x25
Oct  2 06:45:37 np0005465987 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  2 06:45:37 np0005465987 kernel: [drm] features: -context_init
Oct  2 06:45:37 np0005465987 kernel: [drm] number of scanouts: 1
Oct  2 06:45:37 np0005465987 kernel: [drm] number of cap sets: 0
Oct  2 06:45:37 np0005465987 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct  2 06:45:37 np0005465987 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  2 06:45:37 np0005465987 kernel: Console: switching to colour frame buffer device 128x48
Oct  2 06:45:37 np0005465987 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  2 06:45:37 np0005465987 kernel: kvm_amd: TSC scaling supported
Oct  2 06:45:37 np0005465987 kernel: kvm_amd: Nested Virtualization enabled
Oct  2 06:45:37 np0005465987 kernel: kvm_amd: Nested Paging enabled
Oct  2 06:45:37 np0005465987 kernel: kvm_amd: LBR virtualization supported
Oct  2 06:45:37 np0005465987 iptables.init[787]: iptables: Applying firewall rules: [  OK  ]
Oct  2 06:45:37 np0005465987 systemd[1]: Finished IPv4 firewall with iptables.
Oct  2 06:45:38 np0005465987 cloud-init[843]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 02 Oct 2025 10:45:37 +0000. Up 6.61 seconds.
Oct  2 06:45:38 np0005465987 systemd[1]: run-cloud\x2dinit-tmp-tmpxfcnqjs3.mount: Deactivated successfully.
Oct  2 06:45:38 np0005465987 systemd[1]: Starting Hostname Service...
Oct  2 06:45:38 np0005465987 systemd[1]: Started Hostname Service.
Oct  2 06:45:38 np0005465987 systemd-hostnamed[857]: Hostname set to <np0005465987.novalocal> (static)
Oct  2 06:45:38 np0005465987 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  2 06:45:38 np0005465987 systemd[1]: Reached target Preparation for Network.
Oct  2 06:45:38 np0005465987 systemd[1]: Starting Network Manager...
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6067] NetworkManager (version 1.54.1-1.el9) is starting... (boot:a0f66d87-d4ab-463e-bf2e-1bc7883c3156)
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6073] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6238] manager[0x55d783099080]: monitoring kernel firmware directory '/lib/firmware'.
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6292] hostname: hostname: using hostnamed
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6292] hostname: static hostname changed from (none) to "np0005465987.novalocal"
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6297] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6417] manager[0x55d783099080]: rfkill: Wi-Fi hardware radio set enabled
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6418] manager[0x55d783099080]: rfkill: WWAN hardware radio set enabled
Oct  2 06:45:38 np0005465987 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6509] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6510] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6511] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6511] manager: Networking is enabled by state file
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6513] settings: Loaded settings plugin: keyfile (internal)
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6543] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6569] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6596] dhcp: init: Using DHCP client 'internal'
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6599] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6615] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6627] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6637] device (lo): Activation: starting connection 'lo' (0865d9d9-3a95-4066-a4b0-d8a9cceef64b)
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6647] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6652] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 06:45:38 np0005465987 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6694] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  2 06:45:38 np0005465987 systemd[1]: Started Network Manager.
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6705] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6712] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 06:45:38 np0005465987 systemd[1]: Reached target Network.
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6721] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6727] device (eth0): carrier: link connected
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6740] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6753] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  2 06:45:38 np0005465987 systemd[1]: Starting Network Manager Wait Online...
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6772] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6780] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6781] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 06:45:38 np0005465987 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6788] manager: NetworkManager state is now CONNECTING
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6790] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6803] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6808] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 06:45:38 np0005465987 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6852] dhcp4 (eth0): state changed new lease, address=38.129.56.107
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6862] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6888] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6898] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6901] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6914] device (lo): Activation: successful, device activated.
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6923] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6928] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6936] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6945] device (eth0): Activation: successful, device activated.
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6958] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  2 06:45:38 np0005465987 NetworkManager[861]: <info>  [1759401938.6972] manager: startup complete
Oct  2 06:45:38 np0005465987 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  2 06:45:38 np0005465987 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  2 06:45:38 np0005465987 systemd[1]: Reached target NFS client services.
Oct  2 06:45:38 np0005465987 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  2 06:45:38 np0005465987 systemd[1]: Reached target Remote File Systems.
Oct  2 06:45:38 np0005465987 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 06:45:38 np0005465987 systemd[1]: Finished Network Manager Wait Online.
Oct  2 06:45:38 np0005465987 systemd[1]: Starting Cloud-init: Network Stage...
Oct  2 06:45:39 np0005465987 cloud-init[927]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 02 Oct 2025 10:45:39 +0000. Up 7.67 seconds.
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: |  eth0  | True |        38.129.56.107         | 255.255.255.0 | global | fa:16:3e:cb:c1:6f |
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: |  eth0  | True | fe80::f816:3eff:fecb:c16f/64 |       .       |  link  | fa:16:3e:cb:c1:6f |
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct  2 06:45:39 np0005465987 cloud-init[927]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  2 06:45:40 np0005465987 cloud-init[927]: Generating public/private rsa key pair.
Oct  2 06:45:40 np0005465987 cloud-init[927]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct  2 06:45:40 np0005465987 cloud-init[927]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct  2 06:45:40 np0005465987 cloud-init[927]: The key fingerprint is:
Oct  2 06:45:40 np0005465987 cloud-init[927]: SHA256:sjEWCv0gr/RFeCrVEGyx0WJ7rSBhILfhykExaQC5BGY root@np0005465987.novalocal
Oct  2 06:45:40 np0005465987 cloud-init[927]: The key's randomart image is:
Oct  2 06:45:40 np0005465987 cloud-init[927]: +---[RSA 3072]----+
Oct  2 06:45:40 np0005465987 cloud-init[927]: |XE=.=+           |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |**++=*.          |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |o+=+*o=.         |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |o.o=oB...        |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |..o.+oB.S        |
Oct  2 06:45:40 np0005465987 cloud-init[927]: | . + o.=         |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |  . . .          |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |                 |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |                 |
Oct  2 06:45:40 np0005465987 cloud-init[927]: +----[SHA256]-----+
Oct  2 06:45:40 np0005465987 cloud-init[927]: Generating public/private ecdsa key pair.
Oct  2 06:45:40 np0005465987 cloud-init[927]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct  2 06:45:40 np0005465987 cloud-init[927]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct  2 06:45:40 np0005465987 cloud-init[927]: The key fingerprint is:
Oct  2 06:45:40 np0005465987 cloud-init[927]: SHA256:2modlMWRA3s3x+euItWHsoGPCK44XFNzQyvGOqsibV4 root@np0005465987.novalocal
Oct  2 06:45:40 np0005465987 cloud-init[927]: The key's randomart image is:
Oct  2 06:45:40 np0005465987 cloud-init[927]: +---[ECDSA 256]---+
Oct  2 06:45:40 np0005465987 cloud-init[927]: |        .o.o     |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |        ..=  .   |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |     . ..+..o o .|
Oct  2 06:45:40 np0005465987 cloud-init[927]: |      * *. . o o |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |     + =S. . . ..|
Oct  2 06:45:40 np0005465987 cloud-init[927]: |    = .o. . + o..|
Oct  2 06:45:40 np0005465987 cloud-init[927]: | o .E=.o.o + + ..|
Oct  2 06:45:40 np0005465987 cloud-init[927]: |o =o. o.o o +  . |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |.++o.o.    . ..  |
Oct  2 06:45:40 np0005465987 cloud-init[927]: +----[SHA256]-----+
Oct  2 06:45:40 np0005465987 cloud-init[927]: Generating public/private ed25519 key pair.
Oct  2 06:45:40 np0005465987 cloud-init[927]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct  2 06:45:40 np0005465987 cloud-init[927]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct  2 06:45:40 np0005465987 cloud-init[927]: The key fingerprint is:
Oct  2 06:45:40 np0005465987 cloud-init[927]: SHA256:R+rkMJQBq3ZmOHjqjFDmLixtFvwNAbsCtvRaB8qlN9U root@np0005465987.novalocal
Oct  2 06:45:40 np0005465987 cloud-init[927]: The key's randomart image is:
Oct  2 06:45:40 np0005465987 cloud-init[927]: +--[ED25519 256]--+
Oct  2 06:45:40 np0005465987 cloud-init[927]: |    ...          |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |  .  . o         |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |   o. +   .      |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |.+.=.o E o       |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |*.#.*.o S .      |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |.%+X.. * .       |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |+o=ooo  o        |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |B++ . .          |
Oct  2 06:45:40 np0005465987 cloud-init[927]: |o*.              |
Oct  2 06:45:40 np0005465987 cloud-init[927]: +----[SHA256]-----+
Oct  2 06:45:40 np0005465987 systemd[1]: Finished Cloud-init: Network Stage.
Oct  2 06:45:40 np0005465987 systemd[1]: Reached target Cloud-config availability.
Oct  2 06:45:40 np0005465987 systemd[1]: Reached target Network is Online.
Oct  2 06:45:40 np0005465987 systemd[1]: Starting Cloud-init: Config Stage...
Oct  2 06:45:40 np0005465987 systemd[1]: Starting Notify NFS peers of a restart...
Oct  2 06:45:40 np0005465987 systemd[1]: Starting System Logging Service...
Oct  2 06:45:40 np0005465987 sm-notify[1009]: Version 2.5.4 starting
Oct  2 06:45:40 np0005465987 systemd[1]: Starting OpenSSH server daemon...
Oct  2 06:45:40 np0005465987 systemd[1]: Starting Permit User Sessions...
Oct  2 06:45:40 np0005465987 systemd[1]: Started Notify NFS peers of a restart.
Oct  2 06:45:40 np0005465987 systemd[1]: Finished Permit User Sessions.
Oct  2 06:45:40 np0005465987 systemd[1]: Started Command Scheduler.
Oct  2 06:45:40 np0005465987 systemd[1]: Started Getty on tty1.
Oct  2 06:45:40 np0005465987 systemd[1]: Started Serial Getty on ttyS0.
Oct  2 06:45:40 np0005465987 systemd[1]: Reached target Login Prompts.
Oct  2 06:45:40 np0005465987 systemd[1]: Started OpenSSH server daemon.
Oct  2 06:45:40 np0005465987 rsyslogd[1010]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1010" x-info="https://www.rsyslog.com"] start
Oct  2 06:45:40 np0005465987 rsyslogd[1010]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct  2 06:45:40 np0005465987 systemd[1]: Started System Logging Service.
Oct  2 06:45:40 np0005465987 systemd[1]: Reached target Multi-User System.
Oct  2 06:45:40 np0005465987 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  2 06:45:40 np0005465987 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  2 06:45:40 np0005465987 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  2 06:45:40 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 06:45:40 np0005465987 cloud-init[1040]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 02 Oct 2025 10:45:40 +0000. Up 9.51 seconds.
Oct  2 06:45:40 np0005465987 systemd[1]: Finished Cloud-init: Config Stage.
Oct  2 06:45:41 np0005465987 systemd[1]: Starting Cloud-init: Final Stage...
Oct  2 06:45:41 np0005465987 cloud-init[1044]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 02 Oct 2025 10:45:41 +0000. Up 9.94 seconds.
Oct  2 06:45:41 np0005465987 cloud-init[1046]: #############################################################
Oct  2 06:45:41 np0005465987 cloud-init[1047]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct  2 06:45:41 np0005465987 cloud-init[1049]: 256 SHA256:2modlMWRA3s3x+euItWHsoGPCK44XFNzQyvGOqsibV4 root@np0005465987.novalocal (ECDSA)
Oct  2 06:45:41 np0005465987 cloud-init[1051]: 256 SHA256:R+rkMJQBq3ZmOHjqjFDmLixtFvwNAbsCtvRaB8qlN9U root@np0005465987.novalocal (ED25519)
Oct  2 06:45:41 np0005465987 cloud-init[1053]: 3072 SHA256:sjEWCv0gr/RFeCrVEGyx0WJ7rSBhILfhykExaQC5BGY root@np0005465987.novalocal (RSA)
Oct  2 06:45:41 np0005465987 cloud-init[1054]: -----END SSH HOST KEY FINGERPRINTS-----
Oct  2 06:45:41 np0005465987 cloud-init[1055]: #############################################################
Oct  2 06:45:41 np0005465987 cloud-init[1044]: Cloud-init v. 24.4-7.el9 finished at Thu, 02 Oct 2025 10:45:41 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.16 seconds
Oct  2 06:45:41 np0005465987 systemd[1]: Finished Cloud-init: Final Stage.
Oct  2 06:45:41 np0005465987 systemd[1]: Reached target Cloud-init target.
Oct  2 06:45:41 np0005465987 systemd[1]: Startup finished in 1.520s (kernel) + 2.561s (initrd) + 6.136s (userspace) = 10.218s.
Oct  2 06:45:43 np0005465987 chronyd[796]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Oct  2 06:45:43 np0005465987 chronyd[796]: System clock TAI offset set to 37 seconds
Oct  2 06:45:45 np0005465987 chronyd[796]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Oct  2 06:45:47 np0005465987 irqbalance[789]: Cannot change IRQ 35 affinity: Operation not permitted
Oct  2 06:45:47 np0005465987 irqbalance[789]: IRQ 35 affinity is now unmanaged
Oct  2 06:45:47 np0005465987 irqbalance[789]: Cannot change IRQ 25 affinity: Operation not permitted
Oct  2 06:45:47 np0005465987 irqbalance[789]: IRQ 25 affinity is now unmanaged
Oct  2 06:45:47 np0005465987 irqbalance[789]: Cannot change IRQ 28 affinity: Operation not permitted
Oct  2 06:45:47 np0005465987 irqbalance[789]: IRQ 28 affinity is now unmanaged
Oct  2 06:45:47 np0005465987 irqbalance[789]: Cannot change IRQ 34 affinity: Operation not permitted
Oct  2 06:45:47 np0005465987 irqbalance[789]: IRQ 34 affinity is now unmanaged
Oct  2 06:45:47 np0005465987 irqbalance[789]: Cannot change IRQ 32 affinity: Operation not permitted
Oct  2 06:45:47 np0005465987 irqbalance[789]: IRQ 32 affinity is now unmanaged
Oct  2 06:45:47 np0005465987 irqbalance[789]: Cannot change IRQ 30 affinity: Operation not permitted
Oct  2 06:45:47 np0005465987 irqbalance[789]: IRQ 30 affinity is now unmanaged
Oct  2 06:45:48 np0005465987 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 06:46:08 np0005465987 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 06:46:26 np0005465987 systemd[1]: Created slice User Slice of UID 1000.
Oct  2 06:46:26 np0005465987 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  2 06:46:26 np0005465987 systemd-logind[794]: New session 1 of user zuul.
Oct  2 06:46:26 np0005465987 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  2 06:46:26 np0005465987 systemd[1]: Starting User Manager for UID 1000...
Oct  2 06:46:26 np0005465987 systemd[1066]: Queued start job for default target Main User Target.
Oct  2 06:46:26 np0005465987 systemd[1066]: Created slice User Application Slice.
Oct  2 06:46:26 np0005465987 systemd[1066]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 06:46:26 np0005465987 systemd[1066]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 06:46:26 np0005465987 systemd[1066]: Reached target Paths.
Oct  2 06:46:26 np0005465987 systemd[1066]: Reached target Timers.
Oct  2 06:46:26 np0005465987 systemd[1066]: Starting D-Bus User Message Bus Socket...
Oct  2 06:46:26 np0005465987 systemd[1066]: Starting Create User's Volatile Files and Directories...
Oct  2 06:46:26 np0005465987 systemd[1066]: Listening on D-Bus User Message Bus Socket.
Oct  2 06:46:26 np0005465987 systemd[1066]: Reached target Sockets.
Oct  2 06:46:26 np0005465987 systemd[1066]: Finished Create User's Volatile Files and Directories.
Oct  2 06:46:26 np0005465987 systemd[1066]: Reached target Basic System.
Oct  2 06:46:26 np0005465987 systemd[1066]: Reached target Main User Target.
Oct  2 06:46:26 np0005465987 systemd[1066]: Startup finished in 143ms.
Oct  2 06:46:26 np0005465987 systemd[1]: Started User Manager for UID 1000.
Oct  2 06:46:26 np0005465987 systemd[1]: Started Session 1 of User zuul.
Oct  2 06:46:27 np0005465987 python3[1148]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 06:46:34 np0005465987 python3[1176]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 06:46:50 np0005465987 python3[1234]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 06:46:51 np0005465987 python3[1274]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct  2 06:46:54 np0005465987 python3[1300]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4HLvw7SnHU3ZU9fE0jIv/0FBRDzTnHfUtei1LSOQahXMfp0JTrMJ0Rj7BbYXxImr2WcDV3bv5FU3LkNsWWkvKZ+/YTg55vh88jhcTTSwOPfyQ0NCsgJ787HDXojmkTKqvsS4ZyAP6VcPlCZbUWNnTSSbJUSyaHZMV5ihm0q6iSgctKks2z5A9UayATNjnXUmG/mYZF8TjRztR4mgHBNFbBBfNYFztb1B2fe+vxBnNa4ls2O1rLzC/5crDuKj3ook8+1X4UTHys4s5ONjn9jxIkB3P5jnGl8ibSdVQRN46RVP9p93WUUJmVLRZdoaq0MrLEwnG1poWzqFMv/cX91UaFWkVQBmiX85KvlSQJqVymdz6LOcPNL+U30yMk55hrLdmeOMl1B9DW4VKD/rr+vtK+HpbaJYYHv9A+woTHFawd+Lkd7oFFavN5+ce0qiO8pg3vZYBLUXgFNDIcMMuu3Th/9xHdxwKfNaQJ9ESqYe+DAE5UBQd/CAmWzb1hdJnFX8= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:46:55 np0005465987 python3[1324]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:46:56 np0005465987 python3[1423]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 06:46:56 np0005465987 python3[1494]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759402015.8721182-253-20218547154696/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=ce6939fdbe5b42ab81e6b7883c8c12f2_id_rsa follow=False checksum=205f22b2d368cfc213016f6ec99460e215b81c8a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:46:57 np0005465987 python3[1617]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 06:46:57 np0005465987 python3[1688]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759402017.0476332-308-73333213809233/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=ce6939fdbe5b42ab81e6b7883c8c12f2_id_rsa.pub follow=False checksum=e1273b820e94a75c0885ab32202b04dec1652e4b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:46:59 np0005465987 python3[1736]: ansible-ping Invoked with data=pong
Oct  2 06:47:01 np0005465987 python3[1760]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 06:47:03 np0005465987 python3[1818]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct  2 06:47:05 np0005465987 python3[1850]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:05 np0005465987 python3[1874]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:05 np0005465987 python3[1898]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:06 np0005465987 python3[1922]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:06 np0005465987 python3[1946]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:06 np0005465987 python3[1970]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:08 np0005465987 python3[1996]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:09 np0005465987 python3[2074]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 06:47:10 np0005465987 python3[2147]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402029.1589239-32-164944776905400/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:11 np0005465987 python3[2195]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:11 np0005465987 python3[2219]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:11 np0005465987 python3[2243]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:12 np0005465987 python3[2267]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:12 np0005465987 python3[2291]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:12 np0005465987 python3[2315]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:13 np0005465987 python3[2339]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:13 np0005465987 python3[2363]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:13 np0005465987 python3[2387]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:13 np0005465987 python3[2411]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:14 np0005465987 python3[2435]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:14 np0005465987 python3[2459]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:14 np0005465987 python3[2483]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:15 np0005465987 python3[2507]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:15 np0005465987 python3[2531]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:15 np0005465987 python3[2555]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:16 np0005465987 python3[2579]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:16 np0005465987 python3[2603]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:16 np0005465987 python3[2627]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:16 np0005465987 python3[2651]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:17 np0005465987 python3[2675]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:17 np0005465987 python3[2699]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:17 np0005465987 python3[2723]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:18 np0005465987 python3[2747]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:18 np0005465987 python3[2771]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:18 np0005465987 python3[2795]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:47:21 np0005465987 python3[2821]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  2 06:47:21 np0005465987 systemd[1]: Starting Time & Date Service...
Oct  2 06:47:21 np0005465987 systemd[1]: Started Time & Date Service.
Oct  2 06:47:21 np0005465987 systemd-timedated[2823]: Changed time zone to 'UTC' (UTC).
Oct  2 06:47:22 np0005465987 python3[2852]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:22 np0005465987 python3[2928]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 06:47:23 np0005465987 python3[2999]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759402042.5409226-253-27782622169001/source _original_basename=tmpn3i5ksk4 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:23 np0005465987 python3[3099]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 06:47:24 np0005465987 python3[3170]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759402043.5823002-303-36130950406230/source _original_basename=tmpt4ulmmvq follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:25 np0005465987 python3[3272]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 06:47:26 np0005465987 python3[3345]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759402045.4821486-382-193269614785681/source _original_basename=tmpi15d0vaj follow=False checksum=37e5bf6967427939916d50ad64483ab30d285ff4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:27 np0005465987 python3[3393]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 06:47:27 np0005465987 python3[3419]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 06:47:27 np0005465987 python3[3499]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 06:47:28 np0005465987 python3[3572]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402047.618985-453-10523141515925/source _original_basename=tmp0zj0hp3_ follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:29 np0005465987 python3[3623]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-2436-8b23-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 06:47:30 np0005465987 python3[3651]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-2436-8b23-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct  2 06:47:32 np0005465987 python3[3679]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:47:51 np0005465987 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 06:47:54 np0005465987 python3[3707]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:48:54 np0005465987 systemd-logind[794]: Session 1 logged out. Waiting for processes to exit.
Oct  2 06:48:58 np0005465987 systemd[1066]: Starting Mark boot as successful...
Oct  2 06:48:58 np0005465987 systemd[1066]: Finished Mark boot as successful.
Oct  2 06:49:17 np0005465987 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  2 06:49:17 np0005465987 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct  2 06:49:17 np0005465987 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct  2 06:49:17 np0005465987 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct  2 06:49:17 np0005465987 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct  2 06:49:17 np0005465987 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct  2 06:49:17 np0005465987 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct  2 06:49:17 np0005465987 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct  2 06:49:17 np0005465987 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct  2 06:49:17 np0005465987 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct  2 06:49:18 np0005465987 NetworkManager[861]: <info>  [1759402158.0067] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  2 06:49:18 np0005465987 systemd-udevd[3709]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 06:49:18 np0005465987 NetworkManager[861]: <info>  [1759402158.0219] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 06:49:18 np0005465987 NetworkManager[861]: <info>  [1759402158.0249] settings: (eth1): created default wired connection 'Wired connection 1'
Oct  2 06:49:18 np0005465987 NetworkManager[861]: <info>  [1759402158.0253] device (eth1): carrier: link connected
Oct  2 06:49:18 np0005465987 NetworkManager[861]: <info>  [1759402158.0255] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  2 06:49:18 np0005465987 NetworkManager[861]: <info>  [1759402158.0261] policy: auto-activating connection 'Wired connection 1' (0dc54d3d-9dd8-3915-91cb-425e5ed9b9b7)
Oct  2 06:49:18 np0005465987 NetworkManager[861]: <info>  [1759402158.0264] device (eth1): Activation: starting connection 'Wired connection 1' (0dc54d3d-9dd8-3915-91cb-425e5ed9b9b7)
Oct  2 06:49:18 np0005465987 NetworkManager[861]: <info>  [1759402158.0265] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 06:49:18 np0005465987 NetworkManager[861]: <info>  [1759402158.0267] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 06:49:18 np0005465987 NetworkManager[861]: <info>  [1759402158.0271] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 06:49:18 np0005465987 NetworkManager[861]: <info>  [1759402158.0275] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  2 06:49:18 np0005465987 systemd-logind[794]: New session 3 of user zuul.
Oct  2 06:49:18 np0005465987 systemd[1]: Started Session 3 of User zuul.
Oct  2 06:49:18 np0005465987 python3[3740]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-3fa1-89d7-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 06:49:29 np0005465987 python3[3820]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 06:49:29 np0005465987 python3[3893]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759402168.7017167-155-66171985728442/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=56ba80647875a69fa50aa116c7993ec1b4ab76ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:49:29 np0005465987 python3[3943]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 06:49:29 np0005465987 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  2 06:49:29 np0005465987 systemd[1]: Stopped Network Manager Wait Online.
Oct  2 06:49:29 np0005465987 systemd[1]: Stopping Network Manager Wait Online...
Oct  2 06:49:29 np0005465987 systemd[1]: Stopping Network Manager...
Oct  2 06:49:29 np0005465987 NetworkManager[861]: <info>  [1759402169.9212] caught SIGTERM, shutting down normally.
Oct  2 06:49:29 np0005465987 NetworkManager[861]: <info>  [1759402169.9223] dhcp4 (eth0): canceled DHCP transaction
Oct  2 06:49:29 np0005465987 NetworkManager[861]: <info>  [1759402169.9223] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 06:49:29 np0005465987 NetworkManager[861]: <info>  [1759402169.9223] dhcp4 (eth0): state changed no lease
Oct  2 06:49:29 np0005465987 NetworkManager[861]: <info>  [1759402169.9225] manager: NetworkManager state is now CONNECTING
Oct  2 06:49:29 np0005465987 NetworkManager[861]: <info>  [1759402169.9307] dhcp4 (eth1): canceled DHCP transaction
Oct  2 06:49:29 np0005465987 NetworkManager[861]: <info>  [1759402169.9307] dhcp4 (eth1): state changed no lease
Oct  2 06:49:29 np0005465987 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 06:49:29 np0005465987 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 06:49:30 np0005465987 NetworkManager[861]: <info>  [1759402170.0125] exiting (success)
Oct  2 06:49:30 np0005465987 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  2 06:49:30 np0005465987 systemd[1]: Stopped Network Manager.
Oct  2 06:49:30 np0005465987 systemd[1]: NetworkManager.service: Consumed 1.259s CPU time, 10.0M memory peak.
Oct  2 06:49:30 np0005465987 systemd[1]: Starting Network Manager...
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.0804] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:a0f66d87-d4ab-463e-bf2e-1bc7883c3156)
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.0806] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.0869] manager[0x56035e69c070]: monitoring kernel firmware directory '/lib/firmware'.
Oct  2 06:49:30 np0005465987 systemd[1]: Starting Hostname Service...
Oct  2 06:49:30 np0005465987 systemd[1]: Started Hostname Service.
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1754] hostname: hostname: using hostnamed
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1755] hostname: static hostname changed from (none) to "np0005465987.novalocal"
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1760] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1765] manager[0x56035e69c070]: rfkill: Wi-Fi hardware radio set enabled
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1766] manager[0x56035e69c070]: rfkill: WWAN hardware radio set enabled
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1798] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1798] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1799] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1799] manager: Networking is enabled by state file
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1802] settings: Loaded settings plugin: keyfile (internal)
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1806] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1840] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1852] dhcp: init: Using DHCP client 'internal'
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1856] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1862] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1867] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1874] device (lo): Activation: starting connection 'lo' (0865d9d9-3a95-4066-a4b0-d8a9cceef64b)
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1878] device (eth0): carrier: link connected
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1882] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1885] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1886] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1891] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1897] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1903] device (eth1): carrier: link connected
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1906] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1910] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (0dc54d3d-9dd8-3915-91cb-425e5ed9b9b7) (indicated)
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1911] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1915] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1920] device (eth1): Activation: starting connection 'Wired connection 1' (0dc54d3d-9dd8-3915-91cb-425e5ed9b9b7)
Oct  2 06:49:30 np0005465987 systemd[1]: Started Network Manager.
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1932] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1936] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1939] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1941] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1943] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1945] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1947] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1949] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1951] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1957] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1959] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1968] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1970] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1983] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1988] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.1992] device (lo): Activation: successful, device activated.
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.2024] dhcp4 (eth0): state changed new lease, address=38.129.56.107
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.2030] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.2080] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 06:49:30 np0005465987 systemd[1]: Starting Network Manager Wait Online...
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.2111] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.2113] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.2115] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.2119] device (eth0): Activation: successful, device activated.
Oct  2 06:49:30 np0005465987 NetworkManager[3960]: <info>  [1759402170.2124] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  2 06:49:30 np0005465987 python3[4027]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-3fa1-89d7-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 06:49:40 np0005465987 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 06:50:00 np0005465987 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.3657] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 06:50:15 np0005465987 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 06:50:15 np0005465987 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.3888] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.3891] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.3899] device (eth1): Activation: successful, device activated.
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.3906] manager: startup complete
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.3908] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <warn>  [1759402215.3913] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.3920] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct  2 06:50:15 np0005465987 systemd[1]: Finished Network Manager Wait Online.
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.4037] dhcp4 (eth1): canceled DHCP transaction
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.4038] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.4038] dhcp4 (eth1): state changed no lease
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.4051] policy: auto-activating connection 'ci-private-network' (30642f4e-e1cf-58cb-8c3f-88e6d6b1f945)
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.4055] device (eth1): Activation: starting connection 'ci-private-network' (30642f4e-e1cf-58cb-8c3f-88e6d6b1f945)
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.4056] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.4059] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.4065] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.4072] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.4103] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.4104] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 06:50:15 np0005465987 NetworkManager[3960]: <info>  [1759402215.4110] device (eth1): Activation: successful, device activated.
Oct  2 06:50:25 np0005465987 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 06:50:30 np0005465987 systemd[1]: session-3.scope: Deactivated successfully.
Oct  2 06:50:30 np0005465987 systemd[1]: session-3.scope: Consumed 1.518s CPU time.
Oct  2 06:50:30 np0005465987 systemd-logind[794]: Session 3 logged out. Waiting for processes to exit.
Oct  2 06:50:30 np0005465987 systemd-logind[794]: Removed session 3.
Oct  2 06:51:12 np0005465987 systemd-logind[794]: New session 4 of user zuul.
Oct  2 06:51:12 np0005465987 systemd[1]: Started Session 4 of User zuul.
Oct  2 06:51:12 np0005465987 python3[4137]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 06:51:13 np0005465987 python3[4210]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402272.556658-373-212165343406977/source _original_basename=tmp9cdom2k6 follow=False checksum=21cbaa2be6010ff877e8cd89353655f79ca4b790 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:51:16 np0005465987 systemd[1]: session-4.scope: Deactivated successfully.
Oct  2 06:51:16 np0005465987 systemd-logind[794]: Session 4 logged out. Waiting for processes to exit.
Oct  2 06:51:16 np0005465987 systemd-logind[794]: Removed session 4.
Oct  2 06:51:58 np0005465987 systemd[1066]: Created slice User Background Tasks Slice.
Oct  2 06:51:58 np0005465987 systemd[1066]: Starting Cleanup of User's Temporary Files and Directories...
Oct  2 06:51:58 np0005465987 systemd[1066]: Finished Cleanup of User's Temporary Files and Directories.
Oct  2 06:53:19 np0005465987 chronyd[796]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Oct  2 06:56:45 np0005465987 systemd-logind[794]: New session 5 of user zuul.
Oct  2 06:56:45 np0005465987 systemd[1]: Started Session 5 of User zuul.
Oct  2 06:56:46 np0005465987 python3[4268]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-bd1e-1dbd-000000000ca2-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 06:56:46 np0005465987 python3[4297]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:56:47 np0005465987 python3[4323]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:56:47 np0005465987 python3[4349]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:56:47 np0005465987 python3[4375]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:56:48 np0005465987 python3[4401]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:56:48 np0005465987 python3[4401]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct  2 06:56:49 np0005465987 python3[4427]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 06:56:49 np0005465987 systemd[1]: Reloading.
Oct  2 06:56:49 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 06:56:50 np0005465987 python3[4483]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct  2 06:56:51 np0005465987 python3[4510]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 06:56:51 np0005465987 python3[4538]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 06:56:51 np0005465987 python3[4566]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 06:56:52 np0005465987 python3[4594]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 06:56:52 np0005465987 python3[4621]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-bd1e-1dbd-000000000ca8-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 06:56:53 np0005465987 python3[4651]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 06:56:56 np0005465987 systemd[1]: session-5.scope: Deactivated successfully.
Oct  2 06:56:56 np0005465987 systemd[1]: session-5.scope: Consumed 3.273s CPU time.
Oct  2 06:56:56 np0005465987 systemd-logind[794]: Session 5 logged out. Waiting for processes to exit.
Oct  2 06:56:56 np0005465987 systemd-logind[794]: Removed session 5.
Oct  2 06:56:58 np0005465987 systemd-logind[794]: New session 6 of user zuul.
Oct  2 06:56:58 np0005465987 systemd[1]: Started Session 6 of User zuul.
Oct  2 06:56:58 np0005465987 python3[4686]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  2 06:57:14 np0005465987 kernel: SELinux:  Converting 363 SID table entries...
Oct  2 06:57:14 np0005465987 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 06:57:14 np0005465987 kernel: SELinux:  policy capability open_perms=1
Oct  2 06:57:14 np0005465987 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 06:57:14 np0005465987 kernel: SELinux:  policy capability always_check_network=0
Oct  2 06:57:14 np0005465987 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 06:57:14 np0005465987 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 06:57:14 np0005465987 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 06:57:23 np0005465987 kernel: SELinux:  Converting 363 SID table entries...
Oct  2 06:57:23 np0005465987 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 06:57:23 np0005465987 kernel: SELinux:  policy capability open_perms=1
Oct  2 06:57:23 np0005465987 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 06:57:23 np0005465987 kernel: SELinux:  policy capability always_check_network=0
Oct  2 06:57:23 np0005465987 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 06:57:23 np0005465987 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 06:57:23 np0005465987 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 06:57:32 np0005465987 kernel: SELinux:  Converting 363 SID table entries...
Oct  2 06:57:32 np0005465987 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 06:57:32 np0005465987 kernel: SELinux:  policy capability open_perms=1
Oct  2 06:57:32 np0005465987 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 06:57:32 np0005465987 kernel: SELinux:  policy capability always_check_network=0
Oct  2 06:57:32 np0005465987 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 06:57:32 np0005465987 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 06:57:32 np0005465987 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 06:57:33 np0005465987 setsebool[4762]: The virt_use_nfs policy boolean was changed to 1 by root
Oct  2 06:57:33 np0005465987 setsebool[4762]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct  2 06:57:44 np0005465987 kernel: SELinux:  Converting 366 SID table entries...
Oct  2 06:57:44 np0005465987 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 06:57:44 np0005465987 kernel: SELinux:  policy capability open_perms=1
Oct  2 06:57:44 np0005465987 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 06:57:44 np0005465987 kernel: SELinux:  policy capability always_check_network=0
Oct  2 06:57:44 np0005465987 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 06:57:44 np0005465987 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 06:57:44 np0005465987 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 06:58:01 np0005465987 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  2 06:58:02 np0005465987 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 06:58:02 np0005465987 systemd[1]: Starting man-db-cache-update.service...
Oct  2 06:58:02 np0005465987 systemd[1]: Reloading.
Oct  2 06:58:02 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 06:58:02 np0005465987 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 06:58:02 np0005465987 systemd[1]: Starting PackageKit Daemon...
Oct  2 06:58:03 np0005465987 systemd[1]: Starting Authorization Manager...
Oct  2 06:58:03 np0005465987 polkitd[6433]: Started polkitd version 0.117
Oct  2 06:58:03 np0005465987 systemd[1]: Started Authorization Manager.
Oct  2 06:58:03 np0005465987 systemd[1]: Started PackageKit Daemon.
Oct  2 06:58:14 np0005465987 python3[13249]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-e981-cfdf-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 06:58:15 np0005465987 kernel: evm: overlay not supported
Oct  2 06:58:16 np0005465987 systemd[1066]: Starting D-Bus User Message Bus...
Oct  2 06:58:16 np0005465987 dbus-broker-launch[13714]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  2 06:58:16 np0005465987 dbus-broker-launch[13714]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  2 06:58:16 np0005465987 systemd[1066]: Started D-Bus User Message Bus.
Oct  2 06:58:16 np0005465987 dbus-broker-lau[13714]: Ready
Oct  2 06:58:16 np0005465987 systemd[1066]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  2 06:58:16 np0005465987 systemd[1066]: Created slice Slice /user.
Oct  2 06:58:16 np0005465987 systemd[1066]: podman-13593.scope: unit configures an IP firewall, but not running as root.
Oct  2 06:58:16 np0005465987 systemd[1066]: (This warning is only shown for the first unit using IP firewalling.)
Oct  2 06:58:16 np0005465987 systemd[1066]: Started podman-13593.scope.
Oct  2 06:58:16 np0005465987 systemd[1066]: Started podman-pause-9cacb39d.scope.
Oct  2 06:58:16 np0005465987 python3[14031]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.59:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.59:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:58:17 np0005465987 irqbalance[789]: Cannot change IRQ 27 affinity: Operation not permitted
Oct  2 06:58:17 np0005465987 irqbalance[789]: IRQ 27 affinity is now unmanaged
Oct  2 06:58:17 np0005465987 systemd[1]: session-6.scope: Deactivated successfully.
Oct  2 06:58:17 np0005465987 systemd[1]: session-6.scope: Consumed 59.534s CPU time.
Oct  2 06:58:17 np0005465987 systemd-logind[794]: Session 6 logged out. Waiting for processes to exit.
Oct  2 06:58:17 np0005465987 systemd-logind[794]: Removed session 6.
Oct  2 06:58:41 np0005465987 systemd-logind[794]: New session 7 of user zuul.
Oct  2 06:58:41 np0005465987 systemd[1]: Started Session 7 of User zuul.
Oct  2 06:58:42 np0005465987 python3[24639]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoWb+X4zjyiL1C8i00X7uqnMpK2nqXgv8anwAVqGS5xltCc1+WIIAtSsS512sgaVFfbCTDrCO98+/EA3bNR7Uo= zuul@np0005465985.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:58:42 np0005465987 python3[24882]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoWb+X4zjyiL1C8i00X7uqnMpK2nqXgv8anwAVqGS5xltCc1+WIIAtSsS512sgaVFfbCTDrCO98+/EA3bNR7Uo= zuul@np0005465985.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:58:43 np0005465987 python3[25412]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005465987.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct  2 06:58:44 np0005465987 python3[25740]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoWb+X4zjyiL1C8i00X7uqnMpK2nqXgv8anwAVqGS5xltCc1+WIIAtSsS512sgaVFfbCTDrCO98+/EA3bNR7Uo= zuul@np0005465985.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 06:58:45 np0005465987 python3[26021]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 06:58:45 np0005465987 python3[26280]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402724.8810697-168-210541133516558/source _original_basename=tmp2d50y2qv follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 06:58:46 np0005465987 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 06:58:46 np0005465987 systemd[1]: Finished man-db-cache-update.service.
Oct  2 06:58:46 np0005465987 systemd[1]: man-db-cache-update.service: Consumed 51.655s CPU time.
Oct  2 06:58:46 np0005465987 systemd[1]: run-r13d94e6440d342eaad52f66459246416.service: Deactivated successfully.
Oct  2 06:58:46 np0005465987 python3[26564]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Oct  2 06:58:46 np0005465987 systemd[1]: Starting Hostname Service...
Oct  2 06:58:46 np0005465987 systemd[1]: Started Hostname Service.
Oct  2 06:58:46 np0005465987 systemd-hostnamed[26568]: Changed pretty hostname to 'compute-1'
Oct  2 06:58:46 np0005465987 systemd-hostnamed[26568]: Hostname set to <compute-1> (static)
Oct  2 06:58:46 np0005465987 NetworkManager[3960]: <info>  [1759402726.6960] hostname: static hostname changed from "np0005465987.novalocal" to "compute-1"
Oct  2 06:58:46 np0005465987 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 06:58:46 np0005465987 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 06:58:47 np0005465987 systemd[1]: session-7.scope: Deactivated successfully.
Oct  2 06:58:47 np0005465987 systemd[1]: session-7.scope: Consumed 2.102s CPU time.
Oct  2 06:58:47 np0005465987 systemd-logind[794]: Session 7 logged out. Waiting for processes to exit.
Oct  2 06:58:47 np0005465987 systemd-logind[794]: Removed session 7.
Oct  2 06:58:56 np0005465987 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 06:59:16 np0005465987 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 07:00:58 np0005465987 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  2 07:00:58 np0005465987 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  2 07:00:58 np0005465987 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  2 07:00:58 np0005465987 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  2 07:03:08 np0005465987 systemd[1]: packagekit.service: Deactivated successfully.
Oct  2 07:03:50 np0005465987 systemd-logind[794]: New session 8 of user zuul.
Oct  2 07:03:50 np0005465987 systemd[1]: Started Session 8 of User zuul.
Oct  2 07:03:51 np0005465987 python3[26682]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:03:54 np0005465987 python3[26798]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:03:54 np0005465987 python3[26871]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7865472-30614-268323103045360/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb4c2ff9dad546f135d54d9729ea11b84117755d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:03:54 np0005465987 python3[26897]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:03:55 np0005465987 python3[26970]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7865472-30614-268323103045360/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:03:55 np0005465987 python3[26996]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:03:55 np0005465987 python3[27069]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7865472-30614-268323103045360/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:03:55 np0005465987 python3[27095]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:03:56 np0005465987 python3[27168]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7865472-30614-268323103045360/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:03:56 np0005465987 python3[27194]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:03:56 np0005465987 python3[27267]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7865472-30614-268323103045360/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:03:56 np0005465987 python3[27293]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:03:57 np0005465987 python3[27366]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7865472-30614-268323103045360/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:03:57 np0005465987 python3[27392]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:03:57 np0005465987 python3[27465]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403033.7865472-30614-268323103045360/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=d911291791b114a72daf18f370e91cb1ae300933 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:04:11 np0005465987 python3[27513]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:09:10 np0005465987 systemd[1]: session-8.scope: Deactivated successfully.
Oct  2 07:09:10 np0005465987 systemd[1]: session-8.scope: Consumed 4.348s CPU time.
Oct  2 07:09:10 np0005465987 systemd-logind[794]: Session 8 logged out. Waiting for processes to exit.
Oct  2 07:09:10 np0005465987 systemd-logind[794]: Removed session 8.
Oct  2 07:19:32 np0005465987 systemd-logind[794]: New session 9 of user zuul.
Oct  2 07:19:32 np0005465987 systemd[1]: Started Session 9 of User zuul.
Oct  2 07:19:33 np0005465987 python3.9[27678]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:19:35 np0005465987 python3.9[27859]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:19:48 np0005465987 systemd[1]: session-9.scope: Deactivated successfully.
Oct  2 07:19:48 np0005465987 systemd[1]: session-9.scope: Consumed 7.510s CPU time.
Oct  2 07:19:48 np0005465987 systemd-logind[794]: Session 9 logged out. Waiting for processes to exit.
Oct  2 07:19:48 np0005465987 systemd-logind[794]: Removed session 9.
Oct  2 07:20:04 np0005465987 systemd-logind[794]: New session 10 of user zuul.
Oct  2 07:20:04 np0005465987 systemd[1]: Started Session 10 of User zuul.
Oct  2 07:20:04 np0005465987 python3.9[28069]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  2 07:20:06 np0005465987 python3.9[28243]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:20:07 np0005465987 python3.9[28395]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:20:08 np0005465987 python3.9[28548]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:20:09 np0005465987 python3.9[28700]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:20:09 np0005465987 python3.9[28852]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:20:10 np0005465987 python3.9[28975]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404009.332793-183-273219824146115/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:20:11 np0005465987 systemd[1]: Starting dnf makecache...
Oct  2 07:20:11 np0005465987 dnf[29099]: Failed determining last makecache time.
Oct  2 07:20:11 np0005465987 dnf[29099]: delorean-openstack-barbican-42b4c41831408a8e323 250 kB/s |  13 kB     00:00
Oct  2 07:20:11 np0005465987 dnf[29099]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 1.5 MB/s |  65 kB     00:00
Oct  2 07:20:11 np0005465987 dnf[29099]: delorean-openstack-cinder-1c00d6490d88e436f26ef 788 kB/s |  32 kB     00:00
Oct  2 07:20:11 np0005465987 python3.9[29128]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:20:11 np0005465987 dnf[29099]: delorean-python-stevedore-c4acc5639fd2329372142 2.8 MB/s | 131 kB     00:00
Oct  2 07:20:11 np0005465987 dnf[29099]: delorean-python-cloudkitty-tests-tempest-3961dc 576 kB/s |  25 kB     00:00
Oct  2 07:20:11 np0005465987 dnf[29099]: delorean-os-net-config-28598c2978b9e2207dd19fc4 7.3 MB/s | 356 kB     00:00
Oct  2 07:20:11 np0005465987 dnf[29099]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 1.0 MB/s |  42 kB     00:00
Oct  2 07:20:11 np0005465987 dnf[29099]: delorean-python-designate-tests-tempest-347fdbc 381 kB/s |  18 kB     00:00
Oct  2 07:20:11 np0005465987 dnf[29099]: delorean-openstack-glance-1fd12c29b339f30fe823e 349 kB/s |  18 kB     00:00
Oct  2 07:20:11 np0005465987 dnf[29099]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 671 kB/s |  29 kB     00:00
Oct  2 07:20:12 np0005465987 dnf[29099]: delorean-openstack-manila-3c01b7181572c95dac462 548 kB/s |  25 kB     00:00
Oct  2 07:20:12 np0005465987 dnf[29099]: delorean-python-whitebox-neutron-tests-tempest- 2.9 MB/s | 154 kB     00:00
Oct  2 07:20:12 np0005465987 dnf[29099]: delorean-openstack-octavia-ba397f07a7331190208c 555 kB/s |  26 kB     00:00
Oct  2 07:20:12 np0005465987 dnf[29099]: delorean-openstack-watcher-c014f81a8647287f6dcc 311 kB/s |  16 kB     00:00
Oct  2 07:20:12 np0005465987 dnf[29099]: delorean-edpm-image-builder-55ba53cf215b14ed95b 163 kB/s | 7.4 kB     00:00
Oct  2 07:20:12 np0005465987 python3.9[29323]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:20:12 np0005465987 dnf[29099]: delorean-puppet-ceph-b0c245ccde541a63fde0564366 3.2 MB/s | 144 kB     00:00
Oct  2 07:20:12 np0005465987 dnf[29099]: delorean-openstack-swift-dc98a8463506ac520c469a 301 kB/s |  14 kB     00:00
Oct  2 07:20:12 np0005465987 dnf[29099]: delorean-python-tempestconf-8515371b7cceebd4282 1.2 MB/s |  53 kB     00:00
Oct  2 07:20:12 np0005465987 dnf[29099]: delorean-openstack-heat-ui-013accbfd179753bc3f0 2.0 MB/s |  96 kB     00:00
Oct  2 07:20:13 np0005465987 python3.9[29496]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:20:13 np0005465987 dnf[29099]: CentOS Stream 9 - BaseOS                         10 MB/s | 8.8 MB     00:00
Oct  2 07:20:16 np0005465987 dnf[29099]: CentOS Stream 9 - AppStream                      17 MB/s |  25 MB     00:01
Oct  2 07:20:17 np0005465987 python3.9[29756]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:20:18 np0005465987 python3.9[29906]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:20:19 np0005465987 python3.9[30060]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:20:20 np0005465987 python3.9[30218]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:20:21 np0005465987 python3.9[30302]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:20:23 np0005465987 dnf[29099]: CentOS Stream 9 - CRB                           7.5 MB/s | 7.1 MB     00:00
Oct  2 07:20:25 np0005465987 dnf[29099]: CentOS Stream 9 - Extras packages                43 kB/s |  20 kB     00:00
Oct  2 07:20:25 np0005465987 dnf[29099]: dlrn-antelope-testing                            17 MB/s | 1.1 MB     00:00
Oct  2 07:20:25 np0005465987 dnf[29099]: dlrn-antelope-build-deps                        8.1 MB/s | 461 kB     00:00
Oct  2 07:20:26 np0005465987 dnf[29099]: centos9-rabbitmq                                484 kB/s | 123 kB     00:00
Oct  2 07:20:26 np0005465987 dnf[29099]: centos9-storage                                 4.9 MB/s | 415 kB     00:00
Oct  2 07:20:26 np0005465987 dnf[29099]: centos9-opstools                                747 kB/s |  51 kB     00:00
Oct  2 07:20:26 np0005465987 dnf[29099]: NFV SIG OpenvSwitch                             5.8 MB/s | 447 kB     00:00
Oct  2 07:20:27 np0005465987 dnf[29099]: repo-setup-centos-appstream                      63 MB/s |  25 MB     00:00
Oct  2 07:20:33 np0005465987 dnf[29099]: repo-setup-centos-baseos                         52 MB/s | 8.8 MB     00:00
Oct  2 07:20:34 np0005465987 dnf[29099]: repo-setup-centos-highavailability              8.8 MB/s | 744 kB     00:00
Oct  2 07:20:35 np0005465987 dnf[29099]: repo-setup-centos-powertools                     52 MB/s | 7.1 MB     00:00
Oct  2 07:20:37 np0005465987 dnf[29099]: Extra Packages for Enterprise Linux 9 - x86_64   19 MB/s |  20 MB     00:01
Oct  2 07:20:51 np0005465987 dnf[29099]: Metadata cache created.
Oct  2 07:20:51 np0005465987 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  2 07:20:51 np0005465987 systemd[1]: Finished dnf makecache.
Oct  2 07:20:51 np0005465987 systemd[1]: dnf-makecache.service: Consumed 35.025s CPU time.
Oct  2 07:20:52 np0005465987 systemd[1]: Reloading.
Oct  2 07:20:52 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:20:52 np0005465987 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct  2 07:20:53 np0005465987 systemd[1]: Reloading.
Oct  2 07:20:53 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:20:53 np0005465987 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  2 07:20:53 np0005465987 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  2 07:20:53 np0005465987 systemd[1]: Reloading.
Oct  2 07:20:53 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:20:53 np0005465987 systemd[1]: Listening on LVM2 poll daemon socket.
Oct  2 07:20:53 np0005465987 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct  2 07:20:53 np0005465987 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct  2 07:20:53 np0005465987 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct  2 07:21:57 np0005465987 kernel: SELinux:  Converting 2715 SID table entries...
Oct  2 07:21:57 np0005465987 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:21:57 np0005465987 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:21:57 np0005465987 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:21:57 np0005465987 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:21:57 np0005465987 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:21:57 np0005465987 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:21:57 np0005465987 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:21:57 np0005465987 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct  2 07:21:58 np0005465987 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:21:58 np0005465987 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:21:58 np0005465987 systemd[1]: Reloading.
Oct  2 07:21:58 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:21:58 np0005465987 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:21:58 np0005465987 systemd[1]: Starting PackageKit Daemon...
Oct  2 07:21:58 np0005465987 systemd[1]: Started PackageKit Daemon.
Oct  2 07:21:59 np0005465987 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:21:59 np0005465987 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:21:59 np0005465987 systemd[1]: man-db-cache-update.service: Consumed 1.026s CPU time.
Oct  2 07:21:59 np0005465987 systemd[1]: run-r4cf3df1744b3481da1326bc10cf46dfe.service: Deactivated successfully.
Oct  2 07:22:48 np0005465987 python3.9[31753]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:22:52 np0005465987 python3.9[32034]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  2 07:22:53 np0005465987 python3.9[32186]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  2 07:22:56 np0005465987 python3.9[32341]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:23:04 np0005465987 python3.9[32494]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  2 07:23:08 np0005465987 python3.9[32646]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:23:09 np0005465987 python3.9[32798]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:23:09 np0005465987 python3.9[32921]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404188.8318903-645-83780764951494/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:23:11 np0005465987 python3.9[33073]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  2 07:23:12 np0005465987 python3.9[33226]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:23:12 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:23:12 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:23:13 np0005465987 python3.9[33385]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 07:23:14 np0005465987 python3.9[33545]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  2 07:23:15 np0005465987 python3.9[33698]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:23:16 np0005465987 python3.9[33856]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  2 07:23:17 np0005465987 python3.9[34008]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:23:20 np0005465987 python3.9[34161]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:23:21 np0005465987 python3.9[34313]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:23:21 np0005465987 python3.9[34436]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759404200.7190773-930-169088307760826/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:23:23 np0005465987 python3.9[34588]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:23:23 np0005465987 systemd[1]: Starting Load Kernel Modules...
Oct  2 07:23:23 np0005465987 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  2 07:23:23 np0005465987 kernel: Bridge firewalling registered
Oct  2 07:23:23 np0005465987 systemd-modules-load[34592]: Inserted module 'br_netfilter'
Oct  2 07:23:23 np0005465987 systemd[1]: Finished Load Kernel Modules.
Oct  2 07:23:24 np0005465987 python3.9[34747]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:23:24 np0005465987 python3.9[34870]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759404203.575742-999-7325849922903/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:23:25 np0005465987 python3.9[35022]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:23:28 np0005465987 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct  2 07:23:28 np0005465987 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct  2 07:23:29 np0005465987 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:23:29 np0005465987 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:23:29 np0005465987 systemd[1]: Reloading.
Oct  2 07:23:29 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:23:29 np0005465987 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:23:32 np0005465987 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:23:32 np0005465987 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:23:32 np0005465987 systemd[1]: man-db-cache-update.service: Consumed 4.575s CPU time.
Oct  2 07:23:32 np0005465987 systemd[1]: run-rbb7e80d87e1a40b597404aabb8a257c7.service: Deactivated successfully.
Oct  2 07:23:35 np0005465987 python3.9[38732]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:23:36 np0005465987 python3.9[38884]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  2 07:23:37 np0005465987 python3.9[39034]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:23:38 np0005465987 python3.9[39186]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:23:38 np0005465987 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  2 07:23:39 np0005465987 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  2 07:23:40 np0005465987 python3.9[39559]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:23:40 np0005465987 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  2 07:23:40 np0005465987 systemd[1]: tuned.service: Deactivated successfully.
Oct  2 07:23:40 np0005465987 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  2 07:23:40 np0005465987 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  2 07:23:40 np0005465987 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  2 07:23:41 np0005465987 python3.9[39720]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  2 07:23:45 np0005465987 python3.9[39872]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:23:45 np0005465987 systemd[1]: Reloading.
Oct  2 07:23:45 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:23:46 np0005465987 python3.9[40060]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:23:46 np0005465987 systemd[1]: Reloading.
Oct  2 07:23:46 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:23:47 np0005465987 python3.9[40250]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:23:48 np0005465987 python3.9[40403]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:23:48 np0005465987 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  2 07:23:49 np0005465987 python3.9[40556]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:23:51 np0005465987 python3.9[40718]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:23:52 np0005465987 python3.9[40872]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:23:53 np0005465987 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  2 07:23:53 np0005465987 systemd[1]: Stopped Apply Kernel Variables.
Oct  2 07:23:53 np0005465987 systemd[1]: Stopping Apply Kernel Variables...
Oct  2 07:23:53 np0005465987 systemd[1]: Starting Apply Kernel Variables...
Oct  2 07:23:53 np0005465987 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  2 07:23:53 np0005465987 systemd[1]: Finished Apply Kernel Variables.
Oct  2 07:23:53 np0005465987 systemd[1]: session-10.scope: Deactivated successfully.
Oct  2 07:23:53 np0005465987 systemd[1]: session-10.scope: Consumed 1min 35.650s CPU time.
Oct  2 07:23:53 np0005465987 systemd-logind[794]: Session 10 logged out. Waiting for processes to exit.
Oct  2 07:23:53 np0005465987 systemd-logind[794]: Removed session 10.
Oct  2 07:23:58 np0005465987 systemd-logind[794]: New session 11 of user zuul.
Oct  2 07:23:58 np0005465987 systemd[1]: Started Session 11 of User zuul.
Oct  2 07:23:59 np0005465987 python3.9[41055]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:24:01 np0005465987 python3.9[41211]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  2 07:24:02 np0005465987 python3.9[41364]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:24:03 np0005465987 python3.9[41522]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 07:24:05 np0005465987 python3.9[41682]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:24:06 np0005465987 python3.9[41766]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 07:24:11 np0005465987 python3.9[41929]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:24:24 np0005465987 kernel: SELinux:  Converting 2725 SID table entries...
Oct  2 07:24:24 np0005465987 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:24:24 np0005465987 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:24:24 np0005465987 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:24:24 np0005465987 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:24:24 np0005465987 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:24:24 np0005465987 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:24:24 np0005465987 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:24:24 np0005465987 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct  2 07:24:24 np0005465987 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  2 07:24:26 np0005465987 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:24:26 np0005465987 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:24:26 np0005465987 systemd[1]: Reloading.
Oct  2 07:24:26 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:24:26 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:24:26 np0005465987 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:24:27 np0005465987 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:24:27 np0005465987 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:24:27 np0005465987 systemd[1]: run-r9bc9339657614c4da4e2f358355ad333.service: Deactivated successfully.
Oct  2 07:24:28 np0005465987 python3.9[43031]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:24:28 np0005465987 systemd[1]: Reloading.
Oct  2 07:24:28 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:24:28 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:24:28 np0005465987 systemd[1]: Starting Open vSwitch Database Unit...
Oct  2 07:24:28 np0005465987 chown[43073]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  2 07:24:28 np0005465987 ovs-ctl[43078]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct  2 07:24:28 np0005465987 ovs-ctl[43078]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct  2 07:24:28 np0005465987 ovs-ctl[43078]: Starting ovsdb-server [  OK  ]
Oct  2 07:24:28 np0005465987 ovs-vsctl[43127]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  2 07:24:28 np0005465987 ovs-vsctl[43147]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"e4c8874a-a81b-4869-98e4-aca2e3f3bf40\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  2 07:24:28 np0005465987 ovs-ctl[43078]: Configuring Open vSwitch system IDs [  OK  ]
Oct  2 07:24:28 np0005465987 ovs-ctl[43078]: Enabling remote OVSDB managers [  OK  ]
Oct  2 07:24:28 np0005465987 ovs-vsctl[43153]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct  2 07:24:28 np0005465987 systemd[1]: Started Open vSwitch Database Unit.
Oct  2 07:24:28 np0005465987 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  2 07:24:28 np0005465987 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  2 07:24:28 np0005465987 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  2 07:24:28 np0005465987 kernel: openvswitch: Open vSwitch switching datapath
Oct  2 07:24:28 np0005465987 ovs-ctl[43199]: Inserting openvswitch module [  OK  ]
Oct  2 07:24:28 np0005465987 ovs-ctl[43167]: Starting ovs-vswitchd [  OK  ]
Oct  2 07:24:29 np0005465987 ovs-vsctl[43218]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct  2 07:24:29 np0005465987 ovs-ctl[43167]: Enabling remote OVSDB managers [  OK  ]
Oct  2 07:24:29 np0005465987 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  2 07:24:29 np0005465987 systemd[1]: Starting Open vSwitch...
Oct  2 07:24:29 np0005465987 systemd[1]: Finished Open vSwitch.
Oct  2 07:24:30 np0005465987 python3.9[43369]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:24:31 np0005465987 python3.9[43521]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  2 07:24:33 np0005465987 kernel: SELinux:  Converting 2739 SID table entries...
Oct  2 07:24:33 np0005465987 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:24:33 np0005465987 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:24:33 np0005465987 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:24:33 np0005465987 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:24:33 np0005465987 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:24:33 np0005465987 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:24:33 np0005465987 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:24:34 np0005465987 python3.9[43677]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:24:35 np0005465987 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct  2 07:24:36 np0005465987 python3.9[43835]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:24:38 np0005465987 python3.9[43988]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:24:39 np0005465987 python3.9[44275]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 07:24:40 np0005465987 python3.9[44425]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:24:41 np0005465987 python3.9[44579]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:24:43 np0005465987 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:24:43 np0005465987 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:24:43 np0005465987 systemd[1]: Reloading.
Oct  2 07:24:43 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:24:43 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:24:44 np0005465987 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:24:44 np0005465987 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:24:44 np0005465987 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:24:44 np0005465987 systemd[1]: run-r49de1d0b1e004bd2a08136757e550e14.service: Deactivated successfully.
Oct  2 07:24:45 np0005465987 python3.9[44896]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:24:46 np0005465987 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  2 07:24:46 np0005465987 systemd[1]: Stopped Network Manager Wait Online.
Oct  2 07:24:46 np0005465987 systemd[1]: Stopping Network Manager Wait Online...
Oct  2 07:24:46 np0005465987 systemd[1]: Stopping Network Manager...
Oct  2 07:24:46 np0005465987 NetworkManager[3960]: <info>  [1759404286.0130] caught SIGTERM, shutting down normally.
Oct  2 07:24:46 np0005465987 NetworkManager[3960]: <info>  [1759404286.0143] dhcp4 (eth0): canceled DHCP transaction
Oct  2 07:24:46 np0005465987 NetworkManager[3960]: <info>  [1759404286.0144] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:24:46 np0005465987 NetworkManager[3960]: <info>  [1759404286.0144] dhcp4 (eth0): state changed no lease
Oct  2 07:24:46 np0005465987 NetworkManager[3960]: <info>  [1759404286.0146] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 07:24:46 np0005465987 NetworkManager[3960]: <info>  [1759404286.0245] exiting (success)
Oct  2 07:24:46 np0005465987 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 07:24:46 np0005465987 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 07:24:46 np0005465987 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  2 07:24:46 np0005465987 systemd[1]: Stopped Network Manager.
Oct  2 07:24:46 np0005465987 systemd[1]: NetworkManager.service: Consumed 11.010s CPU time, 4.1M memory peak, read 0B from disk, written 36.5K to disk.
Oct  2 07:24:46 np0005465987 systemd[1]: Starting Network Manager...
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.0756] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:a0f66d87-d4ab-463e-bf2e-1bc7883c3156)
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.0761] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.0812] manager[0x564398dd7090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  2 07:24:46 np0005465987 systemd[1]: Starting Hostname Service...
Oct  2 07:24:46 np0005465987 systemd[1]: Started Hostname Service.
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1588] hostname: hostname: using hostnamed
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1589] hostname: static hostname changed from (none) to "compute-1"
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1592] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1597] manager[0x564398dd7090]: rfkill: Wi-Fi hardware radio set enabled
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1597] manager[0x564398dd7090]: rfkill: WWAN hardware radio set enabled
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1614] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1622] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1622] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1623] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1623] manager: Networking is enabled by state file
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1624] settings: Loaded settings plugin: keyfile (internal)
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1628] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1648] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1657] dhcp: init: Using DHCP client 'internal'
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1660] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1664] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1669] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1675] device (lo): Activation: starting connection 'lo' (0865d9d9-3a95-4066-a4b0-d8a9cceef64b)
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1681] device (eth0): carrier: link connected
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1684] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1687] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1688] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1693] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1698] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1703] device (eth1): carrier: link connected
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1706] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1710] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (30642f4e-e1cf-58cb-8c3f-88e6d6b1f945) (indicated)
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1710] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1715] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1720] device (eth1): Activation: starting connection 'ci-private-network' (30642f4e-e1cf-58cb-8c3f-88e6d6b1f945)
Oct  2 07:24:46 np0005465987 systemd[1]: Started Network Manager.
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1725] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1732] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1734] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1737] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1739] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1742] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1744] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1746] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1753] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1760] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1762] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1771] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1782] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1792] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1795] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1801] device (lo): Activation: successful, device activated.
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1808] dhcp4 (eth0): state changed new lease, address=38.129.56.107
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.1816] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  2 07:24:46 np0005465987 systemd[1]: Starting Network Manager Wait Online...
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.2054] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.2067] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.2071] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.2073] manager: NetworkManager state is now CONNECTED_LOCAL
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.2075] device (eth1): Activation: successful, device activated.
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.2082] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.2083] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.2085] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.2087] device (eth0): Activation: successful, device activated.
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.2091] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  2 07:24:46 np0005465987 NetworkManager[44910]: <info>  [1759404286.2093] manager: startup complete
Oct  2 07:24:46 np0005465987 systemd[1]: Finished Network Manager Wait Online.
Oct  2 07:24:47 np0005465987 python3.9[45122]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:24:47 np0005465987 irqbalance[789]: Cannot change IRQ 26 affinity: Operation not permitted
Oct  2 07:24:47 np0005465987 irqbalance[789]: IRQ 26 affinity is now unmanaged
Oct  2 07:24:52 np0005465987 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:24:52 np0005465987 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:24:52 np0005465987 systemd[1]: Reloading.
Oct  2 07:24:52 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:24:52 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:24:52 np0005465987 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:24:53 np0005465987 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:24:53 np0005465987 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:24:53 np0005465987 systemd[1]: run-r5c496ab88445474190c1407c768bbb1b.service: Deactivated successfully.
Oct  2 07:24:55 np0005465987 python3.9[45584]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:24:55 np0005465987 python3.9[45736]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:24:56 np0005465987 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 07:24:56 np0005465987 python3.9[45890]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:24:57 np0005465987 python3.9[46042]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:24:58 np0005465987 python3.9[46194]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:24:58 np0005465987 python3.9[46346]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:24:59 np0005465987 python3.9[46498]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:25:00 np0005465987 python3.9[46621]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404299.0679219-653-17462896883913/.source _original_basename=.9q6fw2qi follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:25:00 np0005465987 python3.9[46773]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:25:02 np0005465987 python3.9[46925]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct  2 07:25:02 np0005465987 python3.9[47077]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:25:05 np0005465987 python3.9[47504]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct  2 07:25:06 np0005465987 ansible-async_wrapper.py[47679]: Invoked with j806278942150 300 /home/zuul/.ansible/tmp/ansible-tmp-1759404305.7618291-851-72588193882064/AnsiballZ_edpm_os_net_config.py _
Oct  2 07:25:06 np0005465987 ansible-async_wrapper.py[47682]: Starting module and watcher
Oct  2 07:25:06 np0005465987 ansible-async_wrapper.py[47682]: Start watching 47683 (300)
Oct  2 07:25:06 np0005465987 ansible-async_wrapper.py[47683]: Start module (47683)
Oct  2 07:25:06 np0005465987 ansible-async_wrapper.py[47679]: Return async_wrapper task started.
Oct  2 07:25:06 np0005465987 python3.9[47684]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct  2 07:25:07 np0005465987 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  2 07:25:07 np0005465987 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  2 07:25:07 np0005465987 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  2 07:25:07 np0005465987 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  2 07:25:07 np0005465987 kernel: cfg80211: failed to load regulatory.db
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.4697] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.4720] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5193] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5195] audit: op="connection-add" uuid="99b02e8a-86ca-418b-a216-ec211d43ece4" name="br-ex-br" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5208] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5210] audit: op="connection-add" uuid="ceb30783-35d4-40f7-a9f7-5c0dac777385" name="br-ex-port" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5220] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5221] audit: op="connection-add" uuid="dc40c348-9dab-40ee-a4db-7822fc83c03d" name="eth1-port" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5231] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5233] audit: op="connection-add" uuid="11cfe687-2347-4b07-8477-4699badd9dc2" name="vlan20-port" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5243] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5245] audit: op="connection-add" uuid="7f6d10b5-84da-4b05-bd38-638d5a4fdb40" name="vlan21-port" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5257] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5258] audit: op="connection-add" uuid="ccafbbeb-c1a8-412e-8dcb-77f952cc723e" name="vlan22-port" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5268] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5269] audit: op="connection-add" uuid="73ce51ad-f983-44e5-9f2d-31a56d9fc382" name="vlan23-port" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5288] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5303] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5304] audit: op="connection-add" uuid="8ec2e032-8a7a-46ae-b7b2-bfde21b5eb06" name="br-ex-if" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5369] audit: op="connection-update" uuid="30642f4e-e1cf-58cb-8c3f-88e6d6b1f945" name="ci-private-network" args="ovs-interface.type,connection.controller,connection.master,connection.slave-type,connection.timestamp,connection.port-type,ipv4.routes,ipv4.addresses,ipv4.dns,ipv4.never-default,ipv4.method,ipv4.routing-rules,ovs-external-ids.data,ipv6.addr-gen-mode,ipv6.addresses,ipv6.routes,ipv6.dns,ipv6.method,ipv6.routing-rules" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5383] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5385] audit: op="connection-add" uuid="d2be450b-3265-40f4-a3df-547867cafdda" name="vlan20-if" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5398] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5400] audit: op="connection-add" uuid="4c725979-9bc0-4313-a9d6-db4ae9d06d15" name="vlan21-if" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5413] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5415] audit: op="connection-add" uuid="27c5cb21-cb0e-4f07-89f6-28d4b15ba8fa" name="vlan22-if" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5430] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5432] audit: op="connection-add" uuid="37cde1eb-26a6-452c-9f80-81e8717d1ba8" name="vlan23-if" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5442] audit: op="connection-delete" uuid="0dc54d3d-9dd8-3915-91cb-425e5ed9b9b7" name="Wired connection 1" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5451] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5462] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5465] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (99b02e8a-86ca-418b-a216-ec211d43ece4)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5466] audit: op="connection-activate" uuid="99b02e8a-86ca-418b-a216-ec211d43ece4" name="br-ex-br" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5469] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5475] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5480] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (ceb30783-35d4-40f7-a9f7-5c0dac777385)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5482] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5487] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5491] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (dc40c348-9dab-40ee-a4db-7822fc83c03d)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5493] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5499] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5502] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (11cfe687-2347-4b07-8477-4699badd9dc2)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5504] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5510] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5513] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (7f6d10b5-84da-4b05-bd38-638d5a4fdb40)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5515] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5521] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5525] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (ccafbbeb-c1a8-412e-8dcb-77f952cc723e)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5527] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5533] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5536] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (73ce51ad-f983-44e5-9f2d-31a56d9fc382)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5538] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5540] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5542] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5547] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5551] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5555] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (8ec2e032-8a7a-46ae-b7b2-bfde21b5eb06)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5556] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5559] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5561] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5562] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5564] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5572] device (eth1): disconnecting for new activation request.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5573] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5576] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5578] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5580] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5582] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5586] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5589] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (d2be450b-3265-40f4-a3df-547867cafdda)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5591] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5593] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5595] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5597] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5599] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5603] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5606] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (4c725979-9bc0-4313-a9d6-db4ae9d06d15)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5608] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5611] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5613] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5614] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5616] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5621] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5625] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (27c5cb21-cb0e-4f07-89f6-28d4b15ba8fa)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5626] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5629] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5631] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5632] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5635] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5639] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5642] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (37cde1eb-26a6-452c-9f80-81e8717d1ba8)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5643] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5646] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5649] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5650] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5652] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5661] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5663] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5665] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5668] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5673] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5676] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 kernel: ovs-system: entered promiscuous mode
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5687] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5690] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5691] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5697] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5701] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5703] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 kernel: Timeout policy base is empty
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5704] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5708] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5712] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5716] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 systemd-udevd[47691]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5718] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5724] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5729] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5733] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5736] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5741] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5745] dhcp4 (eth0): canceled DHCP transaction
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5745] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5746] dhcp4 (eth0): state changed no lease
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5747] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  2 07:25:08 np0005465987 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5758] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5763] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47685 uid=0 result="fail" reason="Device is not activated"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5793] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5800] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5805] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5814] device (eth1): disconnecting for new activation request.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5814] audit: op="connection-activate" uuid="30642f4e-e1cf-58cb-8c3f-88e6d6b1f945" name="ci-private-network" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5816] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5818] dhcp4 (eth0): state changed new lease, address=38.129.56.107
Oct  2 07:25:08 np0005465987 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5874] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47685 uid=0 result="success"
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.5875] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  2 07:25:08 np0005465987 kernel: br-ex: entered promiscuous mode
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6004] device (eth1): Activation: starting connection 'ci-private-network' (30642f4e-e1cf-58cb-8c3f-88e6d6b1f945)
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6008] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6017] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6020] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6025] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6029] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6037] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6039] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6041] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6042] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6043] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6044] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6055] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6063] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6066] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6070] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6073] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6076] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6080] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6083] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6087] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6090] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6094] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6098] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6102] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6109] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6114] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6122] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:25:08 np0005465987 kernel: vlan22: entered promiscuous mode
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6144] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6157] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6162] device (eth1): Activation: successful, device activated.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6169] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6211] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6213] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6220] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:25:08 np0005465987 kernel: vlan23: entered promiscuous mode
Oct  2 07:25:08 np0005465987 systemd-udevd[47690]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6246] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6264] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 kernel: vlan21: entered promiscuous mode
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6312] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6314] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6319] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6360] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6377] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 kernel: vlan20: entered promiscuous mode
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6398] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6413] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6416] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6422] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6427] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:25:08 np0005465987 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6444] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6446] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6452] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6504] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6523] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6546] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6548] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:25:08 np0005465987 NetworkManager[44910]: <info>  [1759404308.6556] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:25:09 np0005465987 NetworkManager[44910]: <info>  [1759404309.7684] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47685 uid=0 result="success"
Oct  2 07:25:09 np0005465987 NetworkManager[44910]: <info>  [1759404309.8985] checkpoint[0x564398dae950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct  2 07:25:09 np0005465987 NetworkManager[44910]: <info>  [1759404309.8986] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47685 uid=0 result="success"
Oct  2 07:25:10 np0005465987 NetworkManager[44910]: <info>  [1759404310.2035] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47685 uid=0 result="success"
Oct  2 07:25:10 np0005465987 NetworkManager[44910]: <info>  [1759404310.2048] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47685 uid=0 result="success"
Oct  2 07:25:10 np0005465987 python3.9[48044]: ansible-ansible.legacy.async_status Invoked with jid=j806278942150.47679 mode=status _async_dir=/root/.ansible_async
Oct  2 07:25:10 np0005465987 NetworkManager[44910]: <info>  [1759404310.4173] audit: op="networking-control" arg="global-dns-configuration" pid=47685 uid=0 result="success"
Oct  2 07:25:10 np0005465987 NetworkManager[44910]: <info>  [1759404310.4199] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct  2 07:25:10 np0005465987 NetworkManager[44910]: <info>  [1759404310.4219] audit: op="networking-control" arg="global-dns-configuration" pid=47685 uid=0 result="success"
Oct  2 07:25:10 np0005465987 NetworkManager[44910]: <info>  [1759404310.4684] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47685 uid=0 result="success"
Oct  2 07:25:10 np0005465987 NetworkManager[44910]: <info>  [1759404310.6185] checkpoint[0x564398daea20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct  2 07:25:10 np0005465987 NetworkManager[44910]: <info>  [1759404310.6190] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47685 uid=0 result="success"
Oct  2 07:25:10 np0005465987 ansible-async_wrapper.py[47683]: Module complete (47683)
Oct  2 07:25:11 np0005465987 ansible-async_wrapper.py[47682]: Done in kid B.
Oct  2 07:25:13 np0005465987 python3.9[48148]: ansible-ansible.legacy.async_status Invoked with jid=j806278942150.47679 mode=status _async_dir=/root/.ansible_async
Oct  2 07:25:14 np0005465987 python3.9[48248]: ansible-ansible.legacy.async_status Invoked with jid=j806278942150.47679 mode=cleanup _async_dir=/root/.ansible_async
Oct  2 07:25:15 np0005465987 python3.9[48400]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:25:15 np0005465987 python3.9[48523]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404314.9499824-932-252652295294087/.source.returncode _original_basename=.sesgz9nw follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:25:16 np0005465987 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 07:25:16 np0005465987 python3.9[48677]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:25:17 np0005465987 python3.9[48801]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404316.4956694-980-64715793607747/.source.cfg _original_basename=.m7ubqvou follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:25:18 np0005465987 python3.9[48953]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:25:18 np0005465987 systemd[1]: Reloading Network Manager...
Oct  2 07:25:18 np0005465987 NetworkManager[44910]: <info>  [1759404318.8525] audit: op="reload" arg="0" pid=48957 uid=0 result="success"
Oct  2 07:25:18 np0005465987 NetworkManager[44910]: <info>  [1759404318.8536] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct  2 07:25:18 np0005465987 systemd[1]: Reloaded Network Manager.
Oct  2 07:25:19 np0005465987 systemd[1]: session-11.scope: Deactivated successfully.
Oct  2 07:25:19 np0005465987 systemd[1]: session-11.scope: Consumed 47.100s CPU time.
Oct  2 07:25:19 np0005465987 systemd-logind[794]: Session 11 logged out. Waiting for processes to exit.
Oct  2 07:25:19 np0005465987 systemd-logind[794]: Removed session 11.
Oct  2 07:25:24 np0005465987 systemd-logind[794]: New session 12 of user zuul.
Oct  2 07:25:24 np0005465987 systemd[1]: Started Session 12 of User zuul.
Oct  2 07:25:25 np0005465987 python3.9[49141]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:25:26 np0005465987 python3.9[49295]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:25:28 np0005465987 python3.9[49489]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:25:28 np0005465987 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 07:25:28 np0005465987 systemd[1]: session-12.scope: Deactivated successfully.
Oct  2 07:25:28 np0005465987 systemd[1]: session-12.scope: Consumed 2.076s CPU time.
Oct  2 07:25:28 np0005465987 systemd-logind[794]: Session 12 logged out. Waiting for processes to exit.
Oct  2 07:25:28 np0005465987 systemd-logind[794]: Removed session 12.
Oct  2 07:25:35 np0005465987 systemd-logind[794]: New session 13 of user zuul.
Oct  2 07:25:35 np0005465987 systemd[1]: Started Session 13 of User zuul.
Oct  2 07:25:36 np0005465987 python3.9[49673]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:25:37 np0005465987 python3.9[49828]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:25:38 np0005465987 python3.9[49984]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:25:39 np0005465987 python3.9[50068]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:25:41 np0005465987 python3.9[50222]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:25:42 np0005465987 python3.9[50417]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:25:43 np0005465987 python3.9[50569]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:25:43 np0005465987 systemd[1]: var-lib-containers-storage-overlay-compat2018711798-merged.mount: Deactivated successfully.
Oct  2 07:25:43 np0005465987 podman[50570]: 2025-10-02 11:25:43.770446483 +0000 UTC m=+0.052438084 system refresh
Oct  2 07:25:44 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:25:45 np0005465987 python3.9[50732]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:25:45 np0005465987 python3.9[50855]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404344.562795-203-187278753386241/.source.json follow=False _original_basename=podman_network_config.j2 checksum=54c02c78e157b14b53e73298288ba8f8f6bb3236 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:25:46 np0005465987 python3.9[51007]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:25:47 np0005465987 python3.9[51130]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759404346.2341151-248-171812771968997/.source.conf follow=False _original_basename=registries.conf.j2 checksum=b0997da0dac7c72916bfa4feb1650346bde4dfbe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:25:48 np0005465987 python3.9[51282]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:25:48 np0005465987 python3.9[51434]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:25:49 np0005465987 python3.9[51586]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:25:50 np0005465987 python3.9[51738]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:25:51 np0005465987 python3.9[51890]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:25:53 np0005465987 python3.9[52043]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:25:54 np0005465987 python3.9[52197]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:25:55 np0005465987 python3.9[52349]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:25:56 np0005465987 python3.9[52501]: ansible-service_facts Invoked
Oct  2 07:25:56 np0005465987 network[52518]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:25:56 np0005465987 network[52519]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:25:56 np0005465987 network[52520]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:26:01 np0005465987 python3.9[52974]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:26:04 np0005465987 python3.9[53127]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  2 07:26:05 np0005465987 python3.9[53279]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:26:06 np0005465987 python3.9[53404]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404365.3908482-645-7614833908286/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:26:07 np0005465987 python3.9[53558]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:26:07 np0005465987 python3.9[53683]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404366.8502667-690-224113176975273/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:26:09 np0005465987 python3.9[53837]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:26:11 np0005465987 python3.9[53991]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:26:12 np0005465987 python3.9[54075]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:26:13 np0005465987 python3.9[54229]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:26:14 np0005465987 python3.9[54313]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:26:14 np0005465987 chronyd[796]: chronyd exiting
Oct  2 07:26:14 np0005465987 systemd[1]: Stopping NTP client/server...
Oct  2 07:26:14 np0005465987 systemd[1]: chronyd.service: Deactivated successfully.
Oct  2 07:26:14 np0005465987 systemd[1]: Stopped NTP client/server.
Oct  2 07:26:14 np0005465987 systemd[1]: Starting NTP client/server...
Oct  2 07:26:14 np0005465987 chronyd[54321]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  2 07:26:14 np0005465987 chronyd[54321]: Frequency -27.200 +/- 0.369 ppm read from /var/lib/chrony/drift
Oct  2 07:26:14 np0005465987 chronyd[54321]: Loaded seccomp filter (level 2)
Oct  2 07:26:14 np0005465987 systemd[1]: Started NTP client/server.
Oct  2 07:26:15 np0005465987 systemd[1]: session-13.scope: Deactivated successfully.
Oct  2 07:26:15 np0005465987 systemd[1]: session-13.scope: Consumed 23.177s CPU time.
Oct  2 07:26:15 np0005465987 systemd-logind[794]: Session 13 logged out. Waiting for processes to exit.
Oct  2 07:26:15 np0005465987 systemd-logind[794]: Removed session 13.
Oct  2 07:26:21 np0005465987 systemd-logind[794]: New session 14 of user zuul.
Oct  2 07:26:21 np0005465987 systemd[1]: Started Session 14 of User zuul.
Oct  2 07:26:21 np0005465987 python3.9[54502]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:26:22 np0005465987 python3.9[54654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:26:23 np0005465987 python3.9[54777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404382.2918794-68-250665662013694/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:26:24 np0005465987 systemd[1]: session-14.scope: Deactivated successfully.
Oct  2 07:26:24 np0005465987 systemd[1]: session-14.scope: Consumed 1.434s CPU time.
Oct  2 07:26:24 np0005465987 systemd-logind[794]: Session 14 logged out. Waiting for processes to exit.
Oct  2 07:26:24 np0005465987 systemd-logind[794]: Removed session 14.
Oct  2 07:26:30 np0005465987 systemd-logind[794]: New session 15 of user zuul.
Oct  2 07:26:30 np0005465987 systemd[1]: Started Session 15 of User zuul.
Oct  2 07:26:31 np0005465987 python3.9[54955]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:26:32 np0005465987 python3.9[55111]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:26:33 np0005465987 python3.9[55286]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:26:34 np0005465987 python3.9[55409]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759404392.6484003-89-111858164568112/.source.json _original_basename=.kafymgjf follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:26:35 np0005465987 python3.9[55561]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:26:35 np0005465987 python3.9[55684]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404394.6524293-158-62288627788899/.source _original_basename=.7koc8x2z follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:26:36 np0005465987 python3.9[55836]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:26:37 np0005465987 python3.9[55988]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:26:37 np0005465987 python3.9[56111]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759404396.6764884-230-263160125320577/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:26:38 np0005465987 python3.9[56263]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:26:38 np0005465987 python3.9[56386]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759404397.7795758-230-132200490509379/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:26:39 np0005465987 python3.9[56538]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:26:40 np0005465987 python3.9[56690]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:26:41 np0005465987 python3.9[56813]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404400.1209137-341-29008348038951/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:26:41 np0005465987 python3.9[56965]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:26:42 np0005465987 python3.9[57088]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404401.5266187-386-13271675662990/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:26:43 np0005465987 python3.9[57240]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:26:43 np0005465987 systemd[1]: Reloading.
Oct  2 07:26:44 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:26:44 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:26:44 np0005465987 systemd[1]: Reloading.
Oct  2 07:26:44 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:26:44 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:26:44 np0005465987 systemd[1]: Starting EDPM Container Shutdown...
Oct  2 07:26:44 np0005465987 systemd[1]: Finished EDPM Container Shutdown.
Oct  2 07:26:45 np0005465987 python3.9[57467]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:26:45 np0005465987 python3.9[57590]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404404.8506546-455-40718745680241/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:26:46 np0005465987 python3.9[57742]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:26:47 np0005465987 python3.9[57865]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404406.2634752-500-270655309741381/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:26:48 np0005465987 python3.9[58017]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:26:48 np0005465987 systemd[1]: Reloading.
Oct  2 07:26:48 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:26:48 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:26:48 np0005465987 systemd[1]: Reloading.
Oct  2 07:26:48 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:26:48 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:26:48 np0005465987 systemd[1]: Starting Create netns directory...
Oct  2 07:26:48 np0005465987 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:26:48 np0005465987 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:26:48 np0005465987 systemd[1]: Finished Create netns directory.
Oct  2 07:26:49 np0005465987 python3.9[58244]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:26:49 np0005465987 network[58261]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:26:49 np0005465987 network[58262]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:26:49 np0005465987 network[58263]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:26:54 np0005465987 python3.9[58527]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:26:54 np0005465987 systemd[1]: Reloading.
Oct  2 07:26:54 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:26:54 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:26:54 np0005465987 systemd[1]: Stopping IPv4 firewall with iptables...
Oct  2 07:26:54 np0005465987 iptables.init[58566]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct  2 07:26:54 np0005465987 iptables.init[58566]: iptables: Flushing firewall rules: [  OK  ]
Oct  2 07:26:54 np0005465987 systemd[1]: iptables.service: Deactivated successfully.
Oct  2 07:26:54 np0005465987 systemd[1]: Stopped IPv4 firewall with iptables.
Oct  2 07:26:55 np0005465987 python3.9[58763]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:26:56 np0005465987 python3.9[58917]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:26:56 np0005465987 systemd[1]: Reloading.
Oct  2 07:26:56 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:26:56 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:26:56 np0005465987 systemd[1]: Starting Netfilter Tables...
Oct  2 07:26:56 np0005465987 systemd[1]: Finished Netfilter Tables.
Oct  2 07:26:57 np0005465987 python3.9[59109]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:26:59 np0005465987 python3.9[59262]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:26:59 np0005465987 python3.9[59387]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404418.6888514-707-9560750909767/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:27:00 np0005465987 python3.9[59538]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:27:26 np0005465987 systemd[1]: session-15.scope: Deactivated successfully.
Oct  2 07:27:26 np0005465987 systemd[1]: session-15.scope: Consumed 17.063s CPU time.
Oct  2 07:27:26 np0005465987 systemd-logind[794]: Session 15 logged out. Waiting for processes to exit.
Oct  2 07:27:26 np0005465987 systemd-logind[794]: Removed session 15.
Oct  2 07:27:39 np0005465987 systemd-logind[794]: New session 16 of user zuul.
Oct  2 07:27:39 np0005465987 systemd[1]: Started Session 16 of User zuul.
Oct  2 07:27:40 np0005465987 python3.9[59731]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:27:42 np0005465987 python3.9[59887]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:27:43 np0005465987 python3.9[60062]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:27:43 np0005465987 python3.9[60140]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.wunfj5kh recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:27:44 np0005465987 python3.9[60292]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:27:44 np0005465987 python3.9[60370]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.eylpcguu recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:27:45 np0005465987 python3.9[60522]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:27:46 np0005465987 python3.9[60674]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:27:46 np0005465987 python3.9[60752]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:27:47 np0005465987 python3.9[60904]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:27:47 np0005465987 python3.9[60982]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:27:48 np0005465987 python3.9[61134]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:27:49 np0005465987 python3.9[61286]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:27:50 np0005465987 python3.9[61364]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:27:50 np0005465987 python3.9[61516]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:27:51 np0005465987 python3.9[61594]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:27:52 np0005465987 python3.9[61746]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:27:52 np0005465987 systemd[1]: Reloading.
Oct  2 07:27:52 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:27:52 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:27:53 np0005465987 python3.9[61935]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:27:53 np0005465987 python3.9[62013]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:27:54 np0005465987 python3.9[62165]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:27:55 np0005465987 python3.9[62243]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:27:56 np0005465987 python3.9[62395]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:27:56 np0005465987 systemd[1]: Reloading.
Oct  2 07:27:56 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:27:56 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:27:56 np0005465987 systemd[1]: Starting Create netns directory...
Oct  2 07:27:56 np0005465987 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:27:56 np0005465987 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:27:56 np0005465987 systemd[1]: Finished Create netns directory.
Oct  2 07:27:57 np0005465987 python3.9[62586]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:27:57 np0005465987 network[62603]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:27:57 np0005465987 network[62604]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:27:57 np0005465987 network[62605]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:28:02 np0005465987 python3.9[62870]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:28:02 np0005465987 python3.9[62948]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:03 np0005465987 python3.9[63100]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:04 np0005465987 python3.9[63252]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:28:05 np0005465987 python3.9[63375]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404484.4410763-614-236533064206004/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:06 np0005465987 python3.9[63527]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  2 07:28:06 np0005465987 systemd[1]: Starting Time & Date Service...
Oct  2 07:28:06 np0005465987 systemd[1]: Started Time & Date Service.
Oct  2 07:28:07 np0005465987 python3.9[63683]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:08 np0005465987 python3.9[63835]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:28:08 np0005465987 python3.9[63958]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404488.061544-720-79375243412166/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:10 np0005465987 python3.9[64110]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:28:10 np0005465987 python3.9[64233]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404489.6361883-764-199276062884587/.source.yaml _original_basename=.f2symxhy follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:11 np0005465987 python3.9[64385]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:28:12 np0005465987 python3.9[64508]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404491.0761902-809-177715143051142/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:13 np0005465987 python3.9[64660]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:28:13 np0005465987 python3.9[64813]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:28:15 np0005465987 python3[64966]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 07:28:15 np0005465987 python3.9[65118]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:28:16 np0005465987 python3.9[65241]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404495.395336-926-84673464125407/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:17 np0005465987 python3.9[65393]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:28:17 np0005465987 python3.9[65516]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404496.8522081-971-34602336957053/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:18 np0005465987 python3.9[65668]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:28:19 np0005465987 python3.9[65791]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404498.4643774-1016-34789665320136/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:20 np0005465987 python3.9[65943]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:28:20 np0005465987 python3.9[66066]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404499.915132-1061-10833055289509/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:22 np0005465987 python3.9[66218]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:28:22 np0005465987 python3.9[66341]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759404501.777771-1106-13960371166937/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:23 np0005465987 chronyd[54321]: Selected source 172.97.210.214 (pool.ntp.org)
Oct  2 07:28:23 np0005465987 python3.9[66493]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:24 np0005465987 python3.9[66645]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:28:25 np0005465987 python3.9[66804]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:26 np0005465987 python3.9[66957]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:27 np0005465987 python3.9[67109]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:28 np0005465987 python3.9[67261]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  2 07:28:28 np0005465987 python3.9[67414]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  2 07:28:29 np0005465987 systemd[1]: session-16.scope: Deactivated successfully.
Oct  2 07:28:29 np0005465987 systemd[1]: session-16.scope: Consumed 28.514s CPU time.
Oct  2 07:28:29 np0005465987 systemd-logind[794]: Session 16 logged out. Waiting for processes to exit.
Oct  2 07:28:29 np0005465987 systemd-logind[794]: Removed session 16.
Oct  2 07:28:35 np0005465987 systemd-logind[794]: New session 17 of user zuul.
Oct  2 07:28:35 np0005465987 systemd[1]: Started Session 17 of User zuul.
Oct  2 07:28:35 np0005465987 python3.9[67595]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  2 07:28:36 np0005465987 python3.9[67747]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:28:36 np0005465987 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 07:28:37 np0005465987 python3.9[67901]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:28:38 np0005465987 python3.9[68053]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpTq9G8ymc65djWd0YMUA1/KMKQbBxw7LoyOCyAnPotUx6UyYfBtjYX5I4TzqzEugao1w+4AHDZ5XKSwr8sv9kaSGm0ERmNxz22+5cmKwWxcvUfNGQQXbk6gk6z5p0qpH/Ue9e19xDUC+RDUMGcwrysoGQ05aVcGDaEmNUxvYjj0UUfs45KX/pHPk5xQ4c0WjiL0BfzPJmphY2PAj6O9b4iFA3HjIJgvQ3+i3jEOkvA1FsXm5s7O1/wEjqwsdfKPlX0LUuCqXyxI4uhWY16Ofi89lEtsdQRwFyoZcDMJUDHMH8oJSopUNwwMEe7UBD1MHJSIzrd6NUGnvRjhqH6dE/IoT2X3f4JN/Six+J9ayDqiIkd1QNsJzPBr6G2Lj/dQbUusb3nXhPk5TXKMOXm5i+J940nYQv8/Y9rf2H1qltGaDEOS95ktKpcL6EVplOsQand/Qmb/ShKbiAo2dr3YC3v/FFE2AAj+0Dnh4xob14bhivkYHDhIF0zyzcVGhHZXc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICufzWCrq7lQCIqxq8UNP+WfGRQD+uOEPLr+ZneqofrM#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHP494uEOdMq07v1W25s7bKFki4bQuHkde7xWzYJuUT44SD4tSCrPbQiOkLCqtg9H5yxKL0Ovnl22PYLf1HMKAs=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ry6wCZyHJmZsI4Z83U5DYzCaQhL5JmDbykEZokepEcnLFJt20bbnTU0eQzXJylkCgp7rhmpZo7V7qVNnZUMI3aLUHK30Yr5jzQVofHBRg6ZnAIq1MAwqwGH1s6vfNo+//zth4OMHvolMSEO6zSmOWeAsuHM2DTEJ6IdRasKfhOCc3oI/Tcf5vOUyVGg/BH+fFOHKzPiyJNXozsvw2u4ppfdkMJvVC9w2oTNHMIGcDxSsx0zD2bLdYe5l23tFIOaBM149ktg5KPPsKYyQFymOi5qJHHnf9027MqZ4N7Z9SYuQrqt2nY4C/XmaVFOmUIFNNMZ5qMWDsc38V9cHCgurSaMsQ4em1srXr9nzADLh9bw4WksIRfrtt3twMp7FG9fMsw8rdmFt0+4/IdHr/3wCmHeF07qp10kJPXa5z9dApoIKiQlbIl+UCzlaN5tHD6vb4q0MyhqAtU4mSA1zGz67c2lLSGbF4FTgU9yza15FZjHzQ0ArNu/1KIheA8nrpkE=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHwAEaDivXDvJkCgJw1MVhYQArg6qfdDb4SKBZRPdoOc#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNMghqQyWdigdn5yyuBSIQ3tHLq/tZwQO222aoRtckuDI9Ml6snE/xKJ7YWmTvRTsqj2tqCqXIllFFfreYY7Apw=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFg5rufAFy1itLjBBGlAJUDsQsaZUavZeI3stNJBLolkBBMB4sBpwAvQFbu2iUhtVavUC7q9xD2LsX0DVBu9DCaQn6tETqUUvMQqzvmaXd34gwo5fH6vo+bjqVdZEih0pIVI1O2OfOUvnv2MFLdKx8MWLQd54beGjWQsC3xCnYVuh0W/aAQtRC2EA77nBo+r40u5V3HXOhdmUbFNvL0r6I8FwP4IvbKC5jkBTtqIzewh+/cyJrURCh0aCpeUjBqNqw3ADhtuR2h5n3ioq+IwPXbhHViJUWQyJ5XKmlSzupEEYA+RV8i1Y3eHJK2RuYlCXkpRP3MEsyBxmISTPhVdQwfxClvyi/mTQkl6k5XFGyZher7KbE6lx4qzp8iCOyOWkw32N3tG0AlnOtPI5HJw8uKbwWl2Apb7RncDQ5fpNOKNFcB1sg61g2Vvew7xJs62OxhkOTiSkEEUYoFfXAqNLiH8gC0+Go12qYleZKbfzL00BDT2boQ2UxYn2rWK7YifU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB29M+5Yr1BRNmm2RoLe921umFtraZRFTbdptrBdgsAV#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAv7vUjfSNyE5eIqsBh1jfLF/N1YKOXT7KtCRIxAQ1i9+ljB9j4j/dQgL6TGk3m+hQRPyAVxTDwUpeBxHWIpFjU=#012 create=True mode=0644 path=/tmp/ansible.qupq6s2l state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:39 np0005465987 python3.9[68205]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.qupq6s2l' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:28:40 np0005465987 python3.9[68359]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.qupq6s2l state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:28:41 np0005465987 systemd[1]: session-17.scope: Deactivated successfully.
Oct  2 07:28:41 np0005465987 systemd[1]: session-17.scope: Consumed 3.178s CPU time.
Oct  2 07:28:41 np0005465987 systemd-logind[794]: Session 17 logged out. Waiting for processes to exit.
Oct  2 07:28:41 np0005465987 systemd-logind[794]: Removed session 17.
Oct  2 07:28:57 np0005465987 systemd-logind[794]: New session 18 of user zuul.
Oct  2 07:28:57 np0005465987 systemd[1]: Started Session 18 of User zuul.
Oct  2 07:28:59 np0005465987 python3.9[68537]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:29:00 np0005465987 python3.9[68693]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  2 07:29:01 np0005465987 python3.9[68847]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:29:02 np0005465987 python3.9[69000]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:29:03 np0005465987 python3.9[69153]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:29:04 np0005465987 python3.9[69307]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:29:05 np0005465987 python3.9[69462]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:29:05 np0005465987 systemd-logind[794]: Session 18 logged out. Waiting for processes to exit.
Oct  2 07:29:05 np0005465987 systemd[1]: session-18.scope: Deactivated successfully.
Oct  2 07:29:05 np0005465987 systemd[1]: session-18.scope: Consumed 4.094s CPU time.
Oct  2 07:29:05 np0005465987 systemd-logind[794]: Removed session 18.
Oct  2 07:29:13 np0005465987 systemd-logind[794]: New session 19 of user zuul.
Oct  2 07:29:13 np0005465987 systemd[1]: Started Session 19 of User zuul.
Oct  2 07:29:15 np0005465987 python3.9[69640]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:29:16 np0005465987 python3.9[69796]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:29:17 np0005465987 python3.9[69880]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 07:29:19 np0005465987 python3.9[70031]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:29:21 np0005465987 python3.9[70183]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 07:29:22 np0005465987 python3.9[70333]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:29:22 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:29:22 np0005465987 python3.9[70484]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:29:23 np0005465987 systemd-logind[794]: Session 19 logged out. Waiting for processes to exit.
Oct  2 07:29:23 np0005465987 systemd[1]: session-19.scope: Deactivated successfully.
Oct  2 07:29:23 np0005465987 systemd[1]: session-19.scope: Consumed 5.559s CPU time.
Oct  2 07:29:23 np0005465987 systemd-logind[794]: Removed session 19.
Oct  2 07:29:32 np0005465987 systemd-logind[794]: New session 20 of user zuul.
Oct  2 07:29:32 np0005465987 systemd[1]: Started Session 20 of User zuul.
Oct  2 07:29:39 np0005465987 python3[71250]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:29:41 np0005465987 python3[71346]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  2 07:29:43 np0005465987 python3[71373]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:29:43 np0005465987 python3[71399]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:29:43 np0005465987 kernel: loop: module loaded
Oct  2 07:29:43 np0005465987 kernel: loop3: detected capacity change from 0 to 14680064
Oct  2 07:29:43 np0005465987 python3[71434]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:29:43 np0005465987 lvm[71437]: PV /dev/loop3 not used.
Oct  2 07:29:44 np0005465987 lvm[71439]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 07:29:44 np0005465987 lvm[71445]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 07:29:44 np0005465987 lvm[71445]: VG ceph_vg0 finished
Oct  2 07:29:44 np0005465987 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct  2 07:29:44 np0005465987 lvm[71446]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct  2 07:29:44 np0005465987 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct  2 07:29:45 np0005465987 python3[71527]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:29:45 np0005465987 python3[71600]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759404584.8502269-33530-186688361168386/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:29:46 np0005465987 python3[71650]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:29:46 np0005465987 systemd[1]: Reloading.
Oct  2 07:29:46 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:29:46 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:29:46 np0005465987 systemd[1]: Starting Ceph OSD losetup...
Oct  2 07:29:46 np0005465987 bash[71690]: /dev/loop3: [64513]:4349018 (/var/lib/ceph-osd-0.img)
Oct  2 07:29:46 np0005465987 systemd[1]: Finished Ceph OSD losetup.
Oct  2 07:29:46 np0005465987 lvm[71692]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 07:29:46 np0005465987 lvm[71692]: VG ceph_vg0 finished
Oct  2 07:29:48 np0005465987 python3[71716]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:29:58 np0005465987 systemd[1]: packagekit.service: Deactivated successfully.
Oct  2 07:32:12 np0005465987 systemd[1]: Created slice User Slice of UID 42477.
Oct  2 07:32:12 np0005465987 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  2 07:32:12 np0005465987 systemd-logind[794]: New session 21 of user ceph-admin.
Oct  2 07:32:12 np0005465987 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  2 07:32:12 np0005465987 systemd[1]: Starting User Manager for UID 42477...
Oct  2 07:32:12 np0005465987 systemd[71766]: Queued start job for default target Main User Target.
Oct  2 07:32:12 np0005465987 systemd[71766]: Created slice User Application Slice.
Oct  2 07:32:12 np0005465987 systemd[71766]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 07:32:12 np0005465987 systemd[71766]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 07:32:12 np0005465987 systemd[71766]: Reached target Paths.
Oct  2 07:32:12 np0005465987 systemd[71766]: Reached target Timers.
Oct  2 07:32:12 np0005465987 systemd[71766]: Starting D-Bus User Message Bus Socket...
Oct  2 07:32:12 np0005465987 systemd[71766]: Starting Create User's Volatile Files and Directories...
Oct  2 07:32:12 np0005465987 systemd-logind[794]: New session 23 of user ceph-admin.
Oct  2 07:32:12 np0005465987 systemd[71766]: Listening on D-Bus User Message Bus Socket.
Oct  2 07:32:12 np0005465987 systemd[71766]: Reached target Sockets.
Oct  2 07:32:12 np0005465987 systemd[71766]: Finished Create User's Volatile Files and Directories.
Oct  2 07:32:12 np0005465987 systemd[71766]: Reached target Basic System.
Oct  2 07:32:12 np0005465987 systemd[71766]: Reached target Main User Target.
Oct  2 07:32:12 np0005465987 systemd[71766]: Startup finished in 114ms.
Oct  2 07:32:12 np0005465987 systemd[1]: Started User Manager for UID 42477.
Oct  2 07:32:12 np0005465987 systemd[1]: Started Session 21 of User ceph-admin.
Oct  2 07:32:12 np0005465987 systemd[1]: Started Session 23 of User ceph-admin.
Oct  2 07:32:12 np0005465987 systemd-logind[794]: New session 24 of user ceph-admin.
Oct  2 07:32:12 np0005465987 systemd[1]: Started Session 24 of User ceph-admin.
Oct  2 07:32:13 np0005465987 systemd-logind[794]: New session 25 of user ceph-admin.
Oct  2 07:32:13 np0005465987 systemd[1]: Started Session 25 of User ceph-admin.
Oct  2 07:32:13 np0005465987 systemd-logind[794]: New session 26 of user ceph-admin.
Oct  2 07:32:13 np0005465987 systemd[1]: Started Session 26 of User ceph-admin.
Oct  2 07:32:13 np0005465987 systemd-logind[794]: New session 27 of user ceph-admin.
Oct  2 07:32:13 np0005465987 systemd[1]: Started Session 27 of User ceph-admin.
Oct  2 07:32:14 np0005465987 systemd-logind[794]: New session 28 of user ceph-admin.
Oct  2 07:32:14 np0005465987 systemd[1]: Started Session 28 of User ceph-admin.
Oct  2 07:32:14 np0005465987 systemd-logind[794]: New session 29 of user ceph-admin.
Oct  2 07:32:14 np0005465987 systemd[1]: Started Session 29 of User ceph-admin.
Oct  2 07:32:14 np0005465987 systemd-logind[794]: New session 30 of user ceph-admin.
Oct  2 07:32:14 np0005465987 systemd[1]: Started Session 30 of User ceph-admin.
Oct  2 07:32:15 np0005465987 systemd-logind[794]: New session 31 of user ceph-admin.
Oct  2 07:32:15 np0005465987 systemd[1]: Started Session 31 of User ceph-admin.
Oct  2 07:32:15 np0005465987 systemd-logind[794]: New session 32 of user ceph-admin.
Oct  2 07:32:15 np0005465987 systemd[1]: Started Session 32 of User ceph-admin.
Oct  2 07:32:16 np0005465987 systemd-logind[794]: New session 33 of user ceph-admin.
Oct  2 07:32:16 np0005465987 systemd[1]: Started Session 33 of User ceph-admin.
Oct  2 07:32:16 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:16 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:17 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:17 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:17 np0005465987 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 72738 (sysctl)
Oct  2 07:32:17 np0005465987 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct  2 07:32:17 np0005465987 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct  2 07:32:18 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:18 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:21 np0005465987 systemd[1]: var-lib-containers-storage-overlay-compat2925873349-merged.mount: Deactivated successfully.
Oct  2 07:32:21 np0005465987 systemd[1]: var-lib-containers-storage-overlay-compat2925873349-lower\x2dmapped.mount: Deactivated successfully.
Oct  2 07:32:35 np0005465987 podman[73016]: 2025-10-02 11:32:35.172932053 +0000 UTC m=+16.136615248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:35 np0005465987 podman[73016]: 2025-10-02 11:32:35.190302 +0000 UTC m=+16.153985165 container create 7f5930c34ff79a915612fe5dd0bd54fed53060eb93bd6d17f7410afa6b5d441c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tu, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 07:32:35 np0005465987 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck203436425-merged.mount: Deactivated successfully.
Oct  2 07:32:35 np0005465987 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct  2 07:32:35 np0005465987 systemd[1]: Started libpod-conmon-7f5930c34ff79a915612fe5dd0bd54fed53060eb93bd6d17f7410afa6b5d441c.scope.
Oct  2 07:32:35 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:32:35 np0005465987 podman[73016]: 2025-10-02 11:32:35.274104389 +0000 UTC m=+16.237787564 container init 7f5930c34ff79a915612fe5dd0bd54fed53060eb93bd6d17f7410afa6b5d441c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 07:32:35 np0005465987 podman[73016]: 2025-10-02 11:32:35.281494958 +0000 UTC m=+16.245178123 container start 7f5930c34ff79a915612fe5dd0bd54fed53060eb93bd6d17f7410afa6b5d441c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tu, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 07:32:35 np0005465987 podman[73016]: 2025-10-02 11:32:35.285993988 +0000 UTC m=+16.249677173 container attach 7f5930c34ff79a915612fe5dd0bd54fed53060eb93bd6d17f7410afa6b5d441c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tu, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 07:32:35 np0005465987 festive_tu[73078]: 167 167
Oct  2 07:32:35 np0005465987 systemd[1]: libpod-7f5930c34ff79a915612fe5dd0bd54fed53060eb93bd6d17f7410afa6b5d441c.scope: Deactivated successfully.
Oct  2 07:32:35 np0005465987 podman[73016]: 2025-10-02 11:32:35.287195351 +0000 UTC m=+16.250878516 container died 7f5930c34ff79a915612fe5dd0bd54fed53060eb93bd6d17f7410afa6b5d441c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tu, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:32:35 np0005465987 systemd[1]: var-lib-containers-storage-overlay-0d876e49c0a8c265dc24e627cb88a86e9027b2d5202ce3883e5fb99e5ae618b4-merged.mount: Deactivated successfully.
Oct  2 07:32:35 np0005465987 podman[73016]: 2025-10-02 11:32:35.328194691 +0000 UTC m=+16.291877856 container remove 7f5930c34ff79a915612fe5dd0bd54fed53060eb93bd6d17f7410afa6b5d441c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tu, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 07:32:35 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:35 np0005465987 systemd[1]: libpod-conmon-7f5930c34ff79a915612fe5dd0bd54fed53060eb93bd6d17f7410afa6b5d441c.scope: Deactivated successfully.
Oct  2 07:32:35 np0005465987 podman[73105]: 2025-10-02 11:32:35.472229847 +0000 UTC m=+0.038562096 container create fad2628558085321fa39c1c4b8c56e2dda692276b245b97842781ec915035618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 07:32:35 np0005465987 systemd[1]: Started libpod-conmon-fad2628558085321fa39c1c4b8c56e2dda692276b245b97842781ec915035618.scope.
Oct  2 07:32:35 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:32:35 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd29714b2404e576e6437bbaf596be883caa53d8b6d3816d605403dc357cfcae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:35 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd29714b2404e576e6437bbaf596be883caa53d8b6d3816d605403dc357cfcae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:35 np0005465987 podman[73105]: 2025-10-02 11:32:35.546644744 +0000 UTC m=+0.112976923 container init fad2628558085321fa39c1c4b8c56e2dda692276b245b97842781ec915035618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:32:35 np0005465987 podman[73105]: 2025-10-02 11:32:35.453305279 +0000 UTC m=+0.019637458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:35 np0005465987 podman[73105]: 2025-10-02 11:32:35.554893516 +0000 UTC m=+0.121225665 container start fad2628558085321fa39c1c4b8c56e2dda692276b245b97842781ec915035618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:32:35 np0005465987 podman[73105]: 2025-10-02 11:32:35.559064438 +0000 UTC m=+0.125396587 container attach fad2628558085321fa39c1c4b8c56e2dda692276b245b97842781ec915035618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]: [
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:    {
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:        "available": false,
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:        "ceph_device": false,
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:        "lsm_data": {},
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:        "lvs": [],
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:        "path": "/dev/sr0",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:        "rejected_reasons": [
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "Insufficient space (<5GB)",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "Has a FileSystem"
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:        ],
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:        "sys_api": {
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "actuators": null,
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "device_nodes": "sr0",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "devname": "sr0",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "human_readable_size": "482.00 KB",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "id_bus": "ata",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "model": "QEMU DVD-ROM",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "nr_requests": "2",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "parent": "/dev/sr0",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "partitions": {},
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "path": "/dev/sr0",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "removable": "1",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "rev": "2.5+",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "ro": "0",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "rotational": "0",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "sas_address": "",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "sas_device_handle": "",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "scheduler_mode": "mq-deadline",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "sectors": 0,
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "sectorsize": "2048",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "size": 493568.0,
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "support_discard": "2048",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "type": "disk",
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:            "vendor": "QEMU"
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:        }
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]:    }
Oct  2 07:32:36 np0005465987 heuristic_burnell[73121]: ]
Oct  2 07:32:36 np0005465987 systemd[1]: libpod-fad2628558085321fa39c1c4b8c56e2dda692276b245b97842781ec915035618.scope: Deactivated successfully.
Oct  2 07:32:36 np0005465987 podman[73105]: 2025-10-02 11:32:36.602561156 +0000 UTC m=+1.168893335 container died fad2628558085321fa39c1c4b8c56e2dda692276b245b97842781ec915035618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 07:32:36 np0005465987 systemd[1]: libpod-fad2628558085321fa39c1c4b8c56e2dda692276b245b97842781ec915035618.scope: Consumed 1.035s CPU time.
Oct  2 07:32:36 np0005465987 systemd[1]: var-lib-containers-storage-overlay-bd29714b2404e576e6437bbaf596be883caa53d8b6d3816d605403dc357cfcae-merged.mount: Deactivated successfully.
Oct  2 07:32:36 np0005465987 podman[73105]: 2025-10-02 11:32:36.722008063 +0000 UTC m=+1.288340212 container remove fad2628558085321fa39c1c4b8c56e2dda692276b245b97842781ec915035618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 07:32:36 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:36 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:36 np0005465987 systemd[1]: libpod-conmon-fad2628558085321fa39c1c4b8c56e2dda692276b245b97842781ec915035618.scope: Deactivated successfully.
Oct  2 07:32:41 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:41 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:41 np0005465987 podman[75911]: 2025-10-02 11:32:41.365295614 +0000 UTC m=+0.040027275 container create 75001e7e27f123a6ca7fa23fada69844fb29eee5252c46c14eaf342dc5ca26f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:32:41 np0005465987 systemd[1]: Started libpod-conmon-75001e7e27f123a6ca7fa23fada69844fb29eee5252c46c14eaf342dc5ca26f4.scope.
Oct  2 07:32:41 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:32:41 np0005465987 podman[75911]: 2025-10-02 11:32:41.434680097 +0000 UTC m=+0.109411778 container init 75001e7e27f123a6ca7fa23fada69844fb29eee5252c46c14eaf342dc5ca26f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 07:32:41 np0005465987 podman[75911]: 2025-10-02 11:32:41.440491153 +0000 UTC m=+0.115222814 container start 75001e7e27f123a6ca7fa23fada69844fb29eee5252c46c14eaf342dc5ca26f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 07:32:41 np0005465987 podman[75911]: 2025-10-02 11:32:41.34609657 +0000 UTC m=+0.020828241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:41 np0005465987 podman[75911]: 2025-10-02 11:32:41.443421651 +0000 UTC m=+0.118153332 container attach 75001e7e27f123a6ca7fa23fada69844fb29eee5252c46c14eaf342dc5ca26f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:32:41 np0005465987 happy_booth[75928]: 167 167
Oct  2 07:32:41 np0005465987 systemd[1]: libpod-75001e7e27f123a6ca7fa23fada69844fb29eee5252c46c14eaf342dc5ca26f4.scope: Deactivated successfully.
Oct  2 07:32:41 np0005465987 conmon[75928]: conmon 75001e7e27f123a6ca7f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75001e7e27f123a6ca7fa23fada69844fb29eee5252c46c14eaf342dc5ca26f4.scope/container/memory.events
Oct  2 07:32:41 np0005465987 podman[75911]: 2025-10-02 11:32:41.446274058 +0000 UTC m=+0.121005709 container died 75001e7e27f123a6ca7fa23fada69844fb29eee5252c46c14eaf342dc5ca26f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 07:32:41 np0005465987 podman[75911]: 2025-10-02 11:32:41.487645219 +0000 UTC m=+0.162376900 container remove 75001e7e27f123a6ca7fa23fada69844fb29eee5252c46c14eaf342dc5ca26f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:32:41 np0005465987 systemd[1]: libpod-conmon-75001e7e27f123a6ca7fa23fada69844fb29eee5252c46c14eaf342dc5ca26f4.scope: Deactivated successfully.
Oct  2 07:32:41 np0005465987 systemd[1]: Reloading.
Oct  2 07:32:41 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:32:41 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:32:41 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:41 np0005465987 systemd[1]: Reloading.
Oct  2 07:32:41 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:32:41 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:32:42 np0005465987 systemd[1]: Reached target All Ceph clusters and services.
Oct  2 07:32:42 np0005465987 systemd[1]: Reloading.
Oct  2 07:32:42 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:32:42 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:32:42 np0005465987 systemd[1]: Reached target Ceph cluster fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct  2 07:32:42 np0005465987 systemd[1]: Reloading.
Oct  2 07:32:42 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:32:42 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:32:42 np0005465987 systemd[1]: Reloading.
Oct  2 07:32:42 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:32:42 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:32:42 np0005465987 systemd[1]: Created slice Slice /system/ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct  2 07:32:42 np0005465987 systemd[1]: Reached target System Time Set.
Oct  2 07:32:42 np0005465987 systemd[1]: Reached target System Time Synchronized.
Oct  2 07:32:42 np0005465987 systemd[1]: Starting Ceph crash.compute-1 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct  2 07:32:42 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:42 np0005465987 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:32:42 np0005465987 podman[76188]: 2025-10-02 11:32:42.952686722 +0000 UTC m=+0.035739520 container create 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:32:42 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae108e696c9de5af841410be63f981bdb729af9227fa6e95cd5d041c541678b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:42 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae108e696c9de5af841410be63f981bdb729af9227fa6e95cd5d041c541678b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:42 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae108e696c9de5af841410be63f981bdb729af9227fa6e95cd5d041c541678b/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:43 np0005465987 podman[76188]: 2025-10-02 11:32:43.001116862 +0000 UTC m=+0.084169660 container init 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 07:32:43 np0005465987 podman[76188]: 2025-10-02 11:32:43.00510944 +0000 UTC m=+0.088162238 container start 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 07:32:43 np0005465987 bash[76188]: 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257
Oct  2 07:32:43 np0005465987 podman[76188]: 2025-10-02 11:32:42.937814044 +0000 UTC m=+0.020866862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:43 np0005465987 systemd[1]: Started Ceph crash.compute-1 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct  2 07:32:43 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1[76204]: INFO:ceph-crash:pinging cluster to exercise our key
Oct  2 07:32:43 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1[76204]: 2025-10-02T11:32:43.408+0000 7ff5de74b640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  2 07:32:43 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1[76204]: 2025-10-02T11:32:43.408+0000 7ff5de74b640 -1 AuthRegistry(0x7ff5d8066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  2 07:32:43 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1[76204]: 2025-10-02T11:32:43.408+0000 7ff5de74b640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  2 07:32:43 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1[76204]: 2025-10-02T11:32:43.408+0000 7ff5de74b640 -1 AuthRegistry(0x7ff5de74a000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  2 07:32:43 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1[76204]: 2025-10-02T11:32:43.411+0000 7ff5d7fff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  2 07:32:43 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1[76204]: 2025-10-02T11:32:43.411+0000 7ff5de74b640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct  2 07:32:43 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1[76204]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct  2 07:32:43 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1[76204]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct  2 07:32:43 np0005465987 podman[76359]: 2025-10-02 11:32:43.585764225 +0000 UTC m=+0.039356597 container create cfb74bcbe57747c7cd248b3fe0b5333b69f8b3196aae12889279e13085ae4130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_babbage, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:32:43 np0005465987 systemd[1]: Started libpod-conmon-cfb74bcbe57747c7cd248b3fe0b5333b69f8b3196aae12889279e13085ae4130.scope.
Oct  2 07:32:43 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:32:43 np0005465987 podman[76359]: 2025-10-02 11:32:43.567796303 +0000 UTC m=+0.021388705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:43 np0005465987 podman[76359]: 2025-10-02 11:32:43.666844982 +0000 UTC m=+0.120437354 container init cfb74bcbe57747c7cd248b3fe0b5333b69f8b3196aae12889279e13085ae4130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_babbage, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 07:32:43 np0005465987 podman[76359]: 2025-10-02 11:32:43.673178611 +0000 UTC m=+0.126770983 container start cfb74bcbe57747c7cd248b3fe0b5333b69f8b3196aae12889279e13085ae4130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_babbage, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:32:43 np0005465987 podman[76359]: 2025-10-02 11:32:43.676133271 +0000 UTC m=+0.129725673 container attach cfb74bcbe57747c7cd248b3fe0b5333b69f8b3196aae12889279e13085ae4130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:32:43 np0005465987 systemd[1]: libpod-cfb74bcbe57747c7cd248b3fe0b5333b69f8b3196aae12889279e13085ae4130.scope: Deactivated successfully.
Oct  2 07:32:43 np0005465987 brave_babbage[76375]: 167 167
Oct  2 07:32:43 np0005465987 podman[76359]: 2025-10-02 11:32:43.67981726 +0000 UTC m=+0.133409632 container died cfb74bcbe57747c7cd248b3fe0b5333b69f8b3196aae12889279e13085ae4130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:32:43 np0005465987 conmon[76375]: conmon cfb74bcbe57747c7cd24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfb74bcbe57747c7cd248b3fe0b5333b69f8b3196aae12889279e13085ae4130.scope/container/memory.events
Oct  2 07:32:43 np0005465987 systemd[1]: var-lib-containers-storage-overlay-2bd666924e2d3e7bc3589dc7a01726697879f32f3f9bafe0e734efe23b3b2296-merged.mount: Deactivated successfully.
Oct  2 07:32:43 np0005465987 podman[76359]: 2025-10-02 11:32:43.720937953 +0000 UTC m=+0.174530335 container remove cfb74bcbe57747c7cd248b3fe0b5333b69f8b3196aae12889279e13085ae4130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_babbage, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:32:43 np0005465987 systemd[1]: libpod-conmon-cfb74bcbe57747c7cd248b3fe0b5333b69f8b3196aae12889279e13085ae4130.scope: Deactivated successfully.
Oct  2 07:32:43 np0005465987 podman[76399]: 2025-10-02 11:32:43.86832978 +0000 UTC m=+0.036533022 container create 0d087b52679309933d51c2aa7240aa4a102250f8907b3c981cb35c82585ab45a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jang, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:32:43 np0005465987 systemd[1]: Started libpod-conmon-0d087b52679309933d51c2aa7240aa4a102250f8907b3c981cb35c82585ab45a.scope.
Oct  2 07:32:43 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:32:43 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb5e643593f3b0878ff3b4d1c9781c5fcc30a62d8277f589d6d520c11ae0466/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:43 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb5e643593f3b0878ff3b4d1c9781c5fcc30a62d8277f589d6d520c11ae0466/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:43 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb5e643593f3b0878ff3b4d1c9781c5fcc30a62d8277f589d6d520c11ae0466/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:43 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb5e643593f3b0878ff3b4d1c9781c5fcc30a62d8277f589d6d520c11ae0466/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:43 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb5e643593f3b0878ff3b4d1c9781c5fcc30a62d8277f589d6d520c11ae0466/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:43 np0005465987 podman[76399]: 2025-10-02 11:32:43.945951634 +0000 UTC m=+0.114154896 container init 0d087b52679309933d51c2aa7240aa4a102250f8907b3c981cb35c82585ab45a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jang, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:32:43 np0005465987 podman[76399]: 2025-10-02 11:32:43.850319127 +0000 UTC m=+0.018522369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:43 np0005465987 podman[76399]: 2025-10-02 11:32:43.953633479 +0000 UTC m=+0.121836721 container start 0d087b52679309933d51c2aa7240aa4a102250f8907b3c981cb35c82585ab45a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:32:43 np0005465987 podman[76399]: 2025-10-02 11:32:43.95699584 +0000 UTC m=+0.125199082 container attach 0d087b52679309933d51c2aa7240aa4a102250f8907b3c981cb35c82585ab45a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jang, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 07:32:44 np0005465987 youthful_jang[76416]: --> passed data devices: 0 physical, 1 LVM
Oct  2 07:32:44 np0005465987 youthful_jang[76416]: --> relative data size: 1.0
Oct  2 07:32:44 np0005465987 youthful_jang[76416]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  2 07:32:44 np0005465987 youthful_jang[76416]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a3ccd8b9-4533-4913-ba5b-13fae1978607
Oct  2 07:32:45 np0005465987 youthful_jang[76416]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  2 07:32:45 np0005465987 lvm[76463]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 07:32:45 np0005465987 lvm[76463]: VG ceph_vg0 finished
Oct  2 07:32:45 np0005465987 youthful_jang[76416]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct  2 07:32:45 np0005465987 youthful_jang[76416]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct  2 07:32:45 np0005465987 youthful_jang[76416]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 07:32:45 np0005465987 youthful_jang[76416]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  2 07:32:45 np0005465987 youthful_jang[76416]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct  2 07:32:45 np0005465987 youthful_jang[76416]: stderr: got monmap epoch 1
Oct  2 07:32:45 np0005465987 youthful_jang[76416]: --> Creating keyring file for osd.0
Oct  2 07:32:45 np0005465987 youthful_jang[76416]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct  2 07:32:45 np0005465987 youthful_jang[76416]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct  2 07:32:45 np0005465987 youthful_jang[76416]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid a3ccd8b9-4533-4913-ba5b-13fae1978607 --setuser ceph --setgroup ceph
Oct  2 07:32:47 np0005465987 youthful_jang[76416]: stderr: 2025-10-02T11:32:45.783+0000 7f3896646740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 07:32:47 np0005465987 youthful_jang[76416]: stderr: 2025-10-02T11:32:45.783+0000 7f3896646740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 07:32:47 np0005465987 youthful_jang[76416]: stderr: 2025-10-02T11:32:45.783+0000 7f3896646740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 07:32:47 np0005465987 youthful_jang[76416]: stderr: 2025-10-02T11:32:45.783+0000 7f3896646740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct  2 07:32:47 np0005465987 youthful_jang[76416]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct  2 07:32:48 np0005465987 youthful_jang[76416]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 07:32:48 np0005465987 youthful_jang[76416]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct  2 07:32:48 np0005465987 youthful_jang[76416]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  2 07:32:48 np0005465987 youthful_jang[76416]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct  2 07:32:48 np0005465987 youthful_jang[76416]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 07:32:48 np0005465987 youthful_jang[76416]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 07:32:48 np0005465987 youthful_jang[76416]: --> ceph-volume lvm activate successful for osd ID: 0
Oct  2 07:32:48 np0005465987 youthful_jang[76416]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct  2 07:32:48 np0005465987 systemd[1]: libpod-0d087b52679309933d51c2aa7240aa4a102250f8907b3c981cb35c82585ab45a.scope: Deactivated successfully.
Oct  2 07:32:48 np0005465987 systemd[1]: libpod-0d087b52679309933d51c2aa7240aa4a102250f8907b3c981cb35c82585ab45a.scope: Consumed 2.201s CPU time.
Oct  2 07:32:48 np0005465987 podman[76399]: 2025-10-02 11:32:48.089276916 +0000 UTC m=+4.257480158 container died 0d087b52679309933d51c2aa7240aa4a102250f8907b3c981cb35c82585ab45a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 07:32:48 np0005465987 systemd[1]: var-lib-containers-storage-overlay-4bb5e643593f3b0878ff3b4d1c9781c5fcc30a62d8277f589d6d520c11ae0466-merged.mount: Deactivated successfully.
Oct  2 07:32:48 np0005465987 podman[76399]: 2025-10-02 11:32:48.14718257 +0000 UTC m=+4.315385812 container remove 0d087b52679309933d51c2aa7240aa4a102250f8907b3c981cb35c82585ab45a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jang, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:32:48 np0005465987 systemd[1]: libpod-conmon-0d087b52679309933d51c2aa7240aa4a102250f8907b3c981cb35c82585ab45a.scope: Deactivated successfully.
Oct  2 07:32:48 np0005465987 podman[77524]: 2025-10-02 11:32:48.668928795 +0000 UTC m=+0.038025842 container create 72e2288724aee273d54486c8cdd36f6ed3e00747895efe649b47b2bfaaf46d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:32:48 np0005465987 systemd[1]: Started libpod-conmon-72e2288724aee273d54486c8cdd36f6ed3e00747895efe649b47b2bfaaf46d41.scope.
Oct  2 07:32:48 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:32:48 np0005465987 podman[77524]: 2025-10-02 11:32:48.747331319 +0000 UTC m=+0.116428386 container init 72e2288724aee273d54486c8cdd36f6ed3e00747895efe649b47b2bfaaf46d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_einstein, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:32:48 np0005465987 podman[77524]: 2025-10-02 11:32:48.650845029 +0000 UTC m=+0.019942096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:48 np0005465987 podman[77524]: 2025-10-02 11:32:48.753085474 +0000 UTC m=+0.122182521 container start 72e2288724aee273d54486c8cdd36f6ed3e00747895efe649b47b2bfaaf46d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_einstein, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  2 07:32:48 np0005465987 podman[77524]: 2025-10-02 11:32:48.755961271 +0000 UTC m=+0.125058328 container attach 72e2288724aee273d54486c8cdd36f6ed3e00747895efe649b47b2bfaaf46d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_einstein, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 07:32:48 np0005465987 affectionate_einstein[77540]: 167 167
Oct  2 07:32:48 np0005465987 podman[77524]: 2025-10-02 11:32:48.758560181 +0000 UTC m=+0.127657228 container died 72e2288724aee273d54486c8cdd36f6ed3e00747895efe649b47b2bfaaf46d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_einstein, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 07:32:48 np0005465987 systemd[1]: libpod-72e2288724aee273d54486c8cdd36f6ed3e00747895efe649b47b2bfaaf46d41.scope: Deactivated successfully.
Oct  2 07:32:48 np0005465987 systemd[1]: var-lib-containers-storage-overlay-dd11e23b1b96626c7490aec2f657d6382484a14b89291047f0dcf27c780de705-merged.mount: Deactivated successfully.
Oct  2 07:32:48 np0005465987 podman[77524]: 2025-10-02 11:32:48.792502682 +0000 UTC m=+0.161599729 container remove 72e2288724aee273d54486c8cdd36f6ed3e00747895efe649b47b2bfaaf46d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:32:48 np0005465987 systemd[1]: libpod-conmon-72e2288724aee273d54486c8cdd36f6ed3e00747895efe649b47b2bfaaf46d41.scope: Deactivated successfully.
Oct  2 07:32:48 np0005465987 podman[77565]: 2025-10-02 11:32:48.926430967 +0000 UTC m=+0.034756845 container create faaa52e80e2ddfc10aa6481b3bbd609acaade992a7357d2a5501904b15d27eae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_khorana, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:32:48 np0005465987 systemd[1]: Started libpod-conmon-faaa52e80e2ddfc10aa6481b3bbd609acaade992a7357d2a5501904b15d27eae.scope.
Oct  2 07:32:48 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:32:48 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3906f144e108a65755f97fd259bcef1e942eb010c5da33a6c7636e8bbc66ba8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:48 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3906f144e108a65755f97fd259bcef1e942eb010c5da33a6c7636e8bbc66ba8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:48 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3906f144e108a65755f97fd259bcef1e942eb010c5da33a6c7636e8bbc66ba8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:48 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3906f144e108a65755f97fd259bcef1e942eb010c5da33a6c7636e8bbc66ba8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:49 np0005465987 podman[77565]: 2025-10-02 11:32:48.910579321 +0000 UTC m=+0.018905199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:49 np0005465987 podman[77565]: 2025-10-02 11:32:49.007867862 +0000 UTC m=+0.116193760 container init faaa52e80e2ddfc10aa6481b3bbd609acaade992a7357d2a5501904b15d27eae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_khorana, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 07:32:49 np0005465987 podman[77565]: 2025-10-02 11:32:49.014027728 +0000 UTC m=+0.122353606 container start faaa52e80e2ddfc10aa6481b3bbd609acaade992a7357d2a5501904b15d27eae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_khorana, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:32:49 np0005465987 podman[77565]: 2025-10-02 11:32:49.01709802 +0000 UTC m=+0.125423898 container attach faaa52e80e2ddfc10aa6481b3bbd609acaade992a7357d2a5501904b15d27eae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]: {
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:    "0": [
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:        {
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:            "devices": [
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:                "/dev/loop3"
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:            ],
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:            "lv_name": "ceph_lv0",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:            "lv_size": "7511998464",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=mFRfk0-yS3t-BjM2-lrlU-2QNt-S7EQ-c0FWiA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=fd4c5763-22d1-50ea-ad0b-96a3dc3040b2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a3ccd8b9-4533-4913-ba5b-13fae1978607,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:            "lv_uuid": "mFRfk0-yS3t-BjM2-lrlU-2QNt-S7EQ-c0FWiA",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:            "name": "ceph_lv0",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:            "tags": {
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:                "ceph.block_uuid": "mFRfk0-yS3t-BjM2-lrlU-2QNt-S7EQ-c0FWiA",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:                "ceph.cephx_lockbox_secret": "",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:                "ceph.cluster_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:                "ceph.cluster_name": "ceph",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:                "ceph.crush_device_class": "",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:                "ceph.encrypted": "0",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:                "ceph.osd_fsid": "a3ccd8b9-4533-4913-ba5b-13fae1978607",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:                "ceph.osd_id": "0",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:                "ceph.type": "block",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:                "ceph.vdo": "0"
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:            },
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:            "type": "block",
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:            "vg_name": "ceph_vg0"
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:        }
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]:    ]
Oct  2 07:32:49 np0005465987 xenodochial_khorana[77581]: }
Oct  2 07:32:49 np0005465987 systemd[1]: libpod-faaa52e80e2ddfc10aa6481b3bbd609acaade992a7357d2a5501904b15d27eae.scope: Deactivated successfully.
Oct  2 07:32:49 np0005465987 podman[77565]: 2025-10-02 11:32:49.736913001 +0000 UTC m=+0.845238879 container died faaa52e80e2ddfc10aa6481b3bbd609acaade992a7357d2a5501904b15d27eae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_khorana, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:32:49 np0005465987 systemd[1]: var-lib-containers-storage-overlay-3906f144e108a65755f97fd259bcef1e942eb010c5da33a6c7636e8bbc66ba8c-merged.mount: Deactivated successfully.
Oct  2 07:32:49 np0005465987 podman[77565]: 2025-10-02 11:32:49.798393521 +0000 UTC m=+0.906719399 container remove faaa52e80e2ddfc10aa6481b3bbd609acaade992a7357d2a5501904b15d27eae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 07:32:49 np0005465987 systemd[1]: libpod-conmon-faaa52e80e2ddfc10aa6481b3bbd609acaade992a7357d2a5501904b15d27eae.scope: Deactivated successfully.
Oct  2 07:32:50 np0005465987 podman[77743]: 2025-10-02 11:32:50.318995225 +0000 UTC m=+0.037754205 container create bfefccd5f4e71672c32a579d9ae05a1d6259aa85d66ffedf1e2b62234fad2123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cohen, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 07:32:50 np0005465987 systemd[1]: Started libpod-conmon-bfefccd5f4e71672c32a579d9ae05a1d6259aa85d66ffedf1e2b62234fad2123.scope.
Oct  2 07:32:50 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:32:50 np0005465987 podman[77743]: 2025-10-02 11:32:50.397137392 +0000 UTC m=+0.115896392 container init bfefccd5f4e71672c32a579d9ae05a1d6259aa85d66ffedf1e2b62234fad2123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cohen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 07:32:50 np0005465987 podman[77743]: 2025-10-02 11:32:50.302963644 +0000 UTC m=+0.021722644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:50 np0005465987 podman[77743]: 2025-10-02 11:32:50.402805255 +0000 UTC m=+0.121564235 container start bfefccd5f4e71672c32a579d9ae05a1d6259aa85d66ffedf1e2b62234fad2123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 07:32:50 np0005465987 elegant_cohen[77759]: 167 167
Oct  2 07:32:50 np0005465987 podman[77743]: 2025-10-02 11:32:50.406656907 +0000 UTC m=+0.125415887 container attach bfefccd5f4e71672c32a579d9ae05a1d6259aa85d66ffedf1e2b62234fad2123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cohen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 07:32:50 np0005465987 systemd[1]: libpod-bfefccd5f4e71672c32a579d9ae05a1d6259aa85d66ffedf1e2b62234fad2123.scope: Deactivated successfully.
Oct  2 07:32:50 np0005465987 conmon[77759]: conmon bfefccd5f4e71672c32a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bfefccd5f4e71672c32a579d9ae05a1d6259aa85d66ffedf1e2b62234fad2123.scope/container/memory.events
Oct  2 07:32:50 np0005465987 podman[77743]: 2025-10-02 11:32:50.408128026 +0000 UTC m=+0.126887006 container died bfefccd5f4e71672c32a579d9ae05a1d6259aa85d66ffedf1e2b62234fad2123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:32:50 np0005465987 systemd[1]: var-lib-containers-storage-overlay-c4417b84301b1b3822952f5f87f7736368d353494826a69f90c62e77d29e6b9b-merged.mount: Deactivated successfully.
Oct  2 07:32:50 np0005465987 podman[77743]: 2025-10-02 11:32:50.440256099 +0000 UTC m=+0.159015079 container remove bfefccd5f4e71672c32a579d9ae05a1d6259aa85d66ffedf1e2b62234fad2123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 07:32:50 np0005465987 systemd[1]: libpod-conmon-bfefccd5f4e71672c32a579d9ae05a1d6259aa85d66ffedf1e2b62234fad2123.scope: Deactivated successfully.
Oct  2 07:32:50 np0005465987 podman[77791]: 2025-10-02 11:32:50.65713063 +0000 UTC m=+0.034754733 container create 68e8d0eea297cdae033abfde9ebb58f8b32803ed30cd897859082194640bfee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate-test, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:32:50 np0005465987 systemd[1]: Started libpod-conmon-68e8d0eea297cdae033abfde9ebb58f8b32803ed30cd897859082194640bfee7.scope.
Oct  2 07:32:50 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:32:50 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80634cddba739be50ca5f6dc91d673b148d13439da1d2a3de610d0f18aa6c5dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:50 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80634cddba739be50ca5f6dc91d673b148d13439da1d2a3de610d0f18aa6c5dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:50 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80634cddba739be50ca5f6dc91d673b148d13439da1d2a3de610d0f18aa6c5dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:50 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80634cddba739be50ca5f6dc91d673b148d13439da1d2a3de610d0f18aa6c5dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:50 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80634cddba739be50ca5f6dc91d673b148d13439da1d2a3de610d0f18aa6c5dd/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:50 np0005465987 podman[77791]: 2025-10-02 11:32:50.721291972 +0000 UTC m=+0.098916095 container init 68e8d0eea297cdae033abfde9ebb58f8b32803ed30cd897859082194640bfee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate-test, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 07:32:50 np0005465987 podman[77791]: 2025-10-02 11:32:50.727914251 +0000 UTC m=+0.105538354 container start 68e8d0eea297cdae033abfde9ebb58f8b32803ed30cd897859082194640bfee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate-test, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 07:32:50 np0005465987 podman[77791]: 2025-10-02 11:32:50.730826819 +0000 UTC m=+0.108450922 container attach 68e8d0eea297cdae033abfde9ebb58f8b32803ed30cd897859082194640bfee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct  2 07:32:50 np0005465987 podman[77791]: 2025-10-02 11:32:50.643165345 +0000 UTC m=+0.020789478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:51 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate-test[77807]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  2 07:32:51 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate-test[77807]:                            [--no-systemd] [--no-tmpfs]
Oct  2 07:32:51 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate-test[77807]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  2 07:32:51 np0005465987 systemd[1]: libpod-68e8d0eea297cdae033abfde9ebb58f8b32803ed30cd897859082194640bfee7.scope: Deactivated successfully.
Oct  2 07:32:51 np0005465987 podman[77791]: 2025-10-02 11:32:51.405818257 +0000 UTC m=+0.783442370 container died 68e8d0eea297cdae033abfde9ebb58f8b32803ed30cd897859082194640bfee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate-test, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:32:51 np0005465987 systemd[1]: var-lib-containers-storage-overlay-80634cddba739be50ca5f6dc91d673b148d13439da1d2a3de610d0f18aa6c5dd-merged.mount: Deactivated successfully.
Oct  2 07:32:51 np0005465987 podman[77791]: 2025-10-02 11:32:51.461020688 +0000 UTC m=+0.838644791 container remove 68e8d0eea297cdae033abfde9ebb58f8b32803ed30cd897859082194640bfee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate-test, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 07:32:51 np0005465987 systemd[1]: libpod-conmon-68e8d0eea297cdae033abfde9ebb58f8b32803ed30cd897859082194640bfee7.scope: Deactivated successfully.
Oct  2 07:32:51 np0005465987 systemd[1]: Reloading.
Oct  2 07:32:51 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:32:51 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:32:51 np0005465987 systemd[1]: Reloading.
Oct  2 07:32:51 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:32:51 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:32:52 np0005465987 systemd[1]: Starting Ceph osd.0 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct  2 07:32:52 np0005465987 podman[77967]: 2025-10-02 11:32:52.346930187 +0000 UTC m=+0.036875061 container create 49d495f9c8b5fd6a4a3f0ff17dccc310db9512568a01d1b848016e87dc0f2808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:32:52 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:32:52 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36cca6a9ef5fb2f5af045d036306464562f490f586f66d477e1489468de727e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:52 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36cca6a9ef5fb2f5af045d036306464562f490f586f66d477e1489468de727e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:52 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36cca6a9ef5fb2f5af045d036306464562f490f586f66d477e1489468de727e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:52 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36cca6a9ef5fb2f5af045d036306464562f490f586f66d477e1489468de727e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:52 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36cca6a9ef5fb2f5af045d036306464562f490f586f66d477e1489468de727e0/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:52 np0005465987 podman[77967]: 2025-10-02 11:32:52.416486203 +0000 UTC m=+0.106431107 container init 49d495f9c8b5fd6a4a3f0ff17dccc310db9512568a01d1b848016e87dc0f2808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:32:52 np0005465987 podman[77967]: 2025-10-02 11:32:52.42192928 +0000 UTC m=+0.111874164 container start 49d495f9c8b5fd6a4a3f0ff17dccc310db9512568a01d1b848016e87dc0f2808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 07:32:52 np0005465987 podman[77967]: 2025-10-02 11:32:52.327618359 +0000 UTC m=+0.017563253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:52 np0005465987 podman[77967]: 2025-10-02 11:32:52.425552518 +0000 UTC m=+0.115497422 container attach 49d495f9c8b5fd6a4a3f0ff17dccc310db9512568a01d1b848016e87dc0f2808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 07:32:53 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate[77982]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 07:32:53 np0005465987 bash[77967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 07:32:53 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate[77982]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 07:32:53 np0005465987 bash[77967]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 07:32:53 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate[77982]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 07:32:53 np0005465987 bash[77967]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 07:32:53 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate[77982]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 07:32:53 np0005465987 bash[77967]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 07:32:53 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate[77982]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  2 07:32:53 np0005465987 bash[77967]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  2 07:32:53 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate[77982]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 07:32:53 np0005465987 bash[77967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 07:32:53 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate[77982]: --> ceph-volume raw activate successful for osd ID: 0
Oct  2 07:32:53 np0005465987 bash[77967]: --> ceph-volume raw activate successful for osd ID: 0
Oct  2 07:32:53 np0005465987 systemd[1]: libpod-49d495f9c8b5fd6a4a3f0ff17dccc310db9512568a01d1b848016e87dc0f2808.scope: Deactivated successfully.
Oct  2 07:32:53 np0005465987 podman[77967]: 2025-10-02 11:32:53.334637268 +0000 UTC m=+1.024582142 container died 49d495f9c8b5fd6a4a3f0ff17dccc310db9512568a01d1b848016e87dc0f2808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 07:32:53 np0005465987 systemd[1]: var-lib-containers-storage-overlay-36cca6a9ef5fb2f5af045d036306464562f490f586f66d477e1489468de727e0-merged.mount: Deactivated successfully.
Oct  2 07:32:53 np0005465987 podman[77967]: 2025-10-02 11:32:53.391488774 +0000 UTC m=+1.081433648 container remove 49d495f9c8b5fd6a4a3f0ff17dccc310db9512568a01d1b848016e87dc0f2808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0-activate, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:32:53 np0005465987 podman[78140]: 2025-10-02 11:32:53.590500565 +0000 UTC m=+0.038605756 container create 070e0f0e9cfdb9b4569905c3230d4d23ed5700cf9ebe7af4554e497b2d74847f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 07:32:53 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d59fbed55ef3a98854d04101b515869f4ba9bf7dd83d5781c43d73b24e87187/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:53 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d59fbed55ef3a98854d04101b515869f4ba9bf7dd83d5781c43d73b24e87187/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:53 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d59fbed55ef3a98854d04101b515869f4ba9bf7dd83d5781c43d73b24e87187/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:53 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d59fbed55ef3a98854d04101b515869f4ba9bf7dd83d5781c43d73b24e87187/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:53 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d59fbed55ef3a98854d04101b515869f4ba9bf7dd83d5781c43d73b24e87187/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:53 np0005465987 podman[78140]: 2025-10-02 11:32:53.644965548 +0000 UTC m=+0.093070739 container init 070e0f0e9cfdb9b4569905c3230d4d23ed5700cf9ebe7af4554e497b2d74847f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct  2 07:32:53 np0005465987 podman[78140]: 2025-10-02 11:32:53.651827922 +0000 UTC m=+0.099933113 container start 070e0f0e9cfdb9b4569905c3230d4d23ed5700cf9ebe7af4554e497b2d74847f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 07:32:53 np0005465987 bash[78140]: 070e0f0e9cfdb9b4569905c3230d4d23ed5700cf9ebe7af4554e497b2d74847f
Oct  2 07:32:53 np0005465987 podman[78140]: 2025-10-02 11:32:53.575177795 +0000 UTC m=+0.023282986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:53 np0005465987 systemd[1]: Started Ceph osd.0 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct  2 07:32:53 np0005465987 ceph-osd[78159]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:32:53 np0005465987 ceph-osd[78159]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  2 07:32:53 np0005465987 ceph-osd[78159]: pidfile_write: ignore empty --pid-file
Oct  2 07:32:53 np0005465987 ceph-osd[78159]: bdev(0x558e08a8b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:32:53 np0005465987 ceph-osd[78159]: bdev(0x558e08a8b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:32:53 np0005465987 ceph-osd[78159]: bdev(0x558e08a8b800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:32:53 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 07:32:53 np0005465987 ceph-osd[78159]: bdev(0x558e098c3800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:32:53 np0005465987 ceph-osd[78159]: bdev(0x558e098c3800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:32:53 np0005465987 ceph-osd[78159]: bdev(0x558e098c3800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:32:53 np0005465987 ceph-osd[78159]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Oct  2 07:32:53 np0005465987 ceph-osd[78159]: bdev(0x558e098c3800 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 07:32:53 np0005465987 ceph-osd[78159]: bdev(0x558e08a8b800 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: load: jerasure load: lrc 
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09944c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09944c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09944c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09944c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09944c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09944c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09944c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09944c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09944c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09944c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09944c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09945400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09945400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09945400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluefs mount
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluefs mount shared_bdev_used = 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: RocksDB version: 7.9.2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Git sha 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: DB SUMMARY
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: DB Session ID:  ITPCEAFN4DGLC2DAG8P5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: CURRENT file:  CURRENT
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                         Options.error_if_exists: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.create_if_missing: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                                     Options.env: 0x558e09915c70
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                                Options.info_log: 0x558e08b08ba0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                              Options.statistics: (nil)
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.use_fsync: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                              Options.db_log_dir: 
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                                 Options.wal_dir: db.wal
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.write_buffer_manager: 0x558e09a1e460
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.unordered_write: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.row_cache: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                              Options.wal_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.two_write_queues: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.wal_compression: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.atomic_flush: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.max_background_jobs: 4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.max_background_compactions: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.max_subcompactions: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.max_open_files: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Compression algorithms supported:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: 	kZSTD supported: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: 	kXpressCompression supported: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: 	kBZip2Compression supported: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: 	kLZ4Compression supported: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: 	kZlibCompression supported: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: 	kSnappyCompression supported: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08b08600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e08afedd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08b08600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08afedd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08b08600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08afedd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08b08600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08afedd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08b08600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08afedd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08b08600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08afedd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08b08600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08afedd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08b085c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e08afe430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08b085c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e08afe430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08b085c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08afe430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 77221839-4815-4e70-9e36-b35d96e6cbd6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404774556764, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404774556990, "job": 1, "event": "recovery_finished"}
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: freelist init
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: freelist _read_cfg
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluefs umount
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09945400 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09945400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09945400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bdev(0x558e09945400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluefs mount
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluefs mount shared_bdev_used = 4718592
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: RocksDB version: 7.9.2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Git sha 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: DB SUMMARY
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: DB Session ID:  ITPCEAFN4DGLC2DAG8P4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: CURRENT file:  CURRENT
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                         Options.error_if_exists: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.create_if_missing: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                                     Options.env: 0x558e08b4a690
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                                Options.info_log: 0x558e08b098a0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                              Options.statistics: (nil)
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.use_fsync: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                              Options.db_log_dir: 
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                                 Options.wal_dir: db.wal
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.write_buffer_manager: 0x558e09a1e460
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.unordered_write: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.row_cache: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                              Options.wal_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.two_write_queues: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.wal_compression: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.atomic_flush: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.max_background_jobs: 4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.max_background_compactions: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.max_subcompactions: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.max_open_files: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Compression algorithms supported:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: #011kZSTD supported: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: #011kXpressCompression supported: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: #011kBZip2Compression supported: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: #011kLZ4Compression supported: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: #011kZlibCompression supported: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: #011kSnappyCompression supported: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08ae5b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08aff610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08ae5b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08aff610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08ae5b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08aff610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08ae5b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e08aff610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08ae5b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e08aff610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08ae5b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e08aff610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08ae5b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08aff610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08b09e40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08aff770#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08b09e40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08aff770#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:           Options.merge_operator: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e08b09e40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e08aff770#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.compression: LZ4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.num_levels: 7
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 77221839-4815-4e70-9e36-b35d96e6cbd6
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404774826692, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404774840333, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404774, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "77221839-4815-4e70-9e36-b35d96e6cbd6", "db_session_id": "ITPCEAFN4DGLC2DAG8P4", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:32:54 np0005465987 podman[78517]: 2025-10-02 11:32:54.842774678 +0000 UTC m=+0.042843800 container create 05e59e2bdec339dd9b89d7af16b66cd4fcd579f6f16aea89126177349597b48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404774846456, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404774, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "77221839-4815-4e70-9e36-b35d96e6cbd6", "db_session_id": "ITPCEAFN4DGLC2DAG8P4", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404774857128, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404774, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "77221839-4815-4e70-9e36-b35d96e6cbd6", "db_session_id": "ITPCEAFN4DGLC2DAG8P4", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404774858829, "job": 1, "event": "recovery_finished"}
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558e08bbdc00
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: DB pointer 0x558e09a07a00
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e08aff610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e08aff610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e08aff610#2 capacity: 460.80 MB usag
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  2 07:32:54 np0005465987 systemd[1]: Started libpod-conmon-05e59e2bdec339dd9b89d7af16b66cd4fcd579f6f16aea89126177349597b48e.scope.
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: _get_class not permitted to load lua
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: _get_class not permitted to load sdk
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: _get_class not permitted to load test_remote_reads
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: osd.0 0 load_pgs
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: osd.0 0 load_pgs opened 0 pgs
Oct  2 07:32:54 np0005465987 ceph-osd[78159]: osd.0 0 log_to_monitors true
Oct  2 07:32:54 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0[78155]: 2025-10-02T11:32:54.887+0000 7f24f0e3e740 -1 osd.0 0 log_to_monitors true
Oct  2 07:32:54 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:32:54 np0005465987 podman[78517]: 2025-10-02 11:32:54.821261091 +0000 UTC m=+0.021330233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:54 np0005465987 podman[78517]: 2025-10-02 11:32:54.923203157 +0000 UTC m=+0.123272299 container init 05e59e2bdec339dd9b89d7af16b66cd4fcd579f6f16aea89126177349597b48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:32:54 np0005465987 podman[78517]: 2025-10-02 11:32:54.928836278 +0000 UTC m=+0.128905400 container start 05e59e2bdec339dd9b89d7af16b66cd4fcd579f6f16aea89126177349597b48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:32:54 np0005465987 podman[78517]: 2025-10-02 11:32:54.931575282 +0000 UTC m=+0.131644414 container attach 05e59e2bdec339dd9b89d7af16b66cd4fcd579f6f16aea89126177349597b48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_khorana, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  2 07:32:54 np0005465987 infallible_khorana[78720]: 167 167
Oct  2 07:32:54 np0005465987 systemd[1]: libpod-05e59e2bdec339dd9b89d7af16b66cd4fcd579f6f16aea89126177349597b48e.scope: Deactivated successfully.
Oct  2 07:32:54 np0005465987 podman[78517]: 2025-10-02 11:32:54.934147941 +0000 UTC m=+0.134217063 container died 05e59e2bdec339dd9b89d7af16b66cd4fcd579f6f16aea89126177349597b48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 07:32:54 np0005465987 systemd[1]: var-lib-containers-storage-overlay-0bb65438b4a376cfadfe2720932af214e6fb62d770f3cdb49f90f9f56eb3c76a-merged.mount: Deactivated successfully.
Oct  2 07:32:54 np0005465987 podman[78517]: 2025-10-02 11:32:54.969579223 +0000 UTC m=+0.169648345 container remove 05e59e2bdec339dd9b89d7af16b66cd4fcd579f6f16aea89126177349597b48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_khorana, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:32:54 np0005465987 systemd[1]: libpod-conmon-05e59e2bdec339dd9b89d7af16b66cd4fcd579f6f16aea89126177349597b48e.scope: Deactivated successfully.
Oct  2 07:32:55 np0005465987 podman[78773]: 2025-10-02 11:32:55.117526054 +0000 UTC m=+0.048198706 container create eeb4228b1a57faf7fec2a3b6754327efad9c64fe3b0d2152f2c73fde7cc046d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 07:32:55 np0005465987 systemd[1]: Started libpod-conmon-eeb4228b1a57faf7fec2a3b6754327efad9c64fe3b0d2152f2c73fde7cc046d5.scope.
Oct  2 07:32:55 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:32:55 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dce67d2dba8f6faf7fb433b91364fe061cbea6e62556d7dd369de98b85f6fcaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:55 np0005465987 podman[78773]: 2025-10-02 11:32:55.094740641 +0000 UTC m=+0.025413313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:32:55 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dce67d2dba8f6faf7fb433b91364fe061cbea6e62556d7dd369de98b85f6fcaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:55 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dce67d2dba8f6faf7fb433b91364fe061cbea6e62556d7dd369de98b85f6fcaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:55 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dce67d2dba8f6faf7fb433b91364fe061cbea6e62556d7dd369de98b85f6fcaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:32:55 np0005465987 podman[78773]: 2025-10-02 11:32:55.201017174 +0000 UTC m=+0.131689826 container init eeb4228b1a57faf7fec2a3b6754327efad9c64fe3b0d2152f2c73fde7cc046d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_allen, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:32:55 np0005465987 podman[78773]: 2025-10-02 11:32:55.208032443 +0000 UTC m=+0.138705095 container start eeb4228b1a57faf7fec2a3b6754327efad9c64fe3b0d2152f2c73fde7cc046d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_allen, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 07:32:55 np0005465987 podman[78773]: 2025-10-02 11:32:55.213982872 +0000 UTC m=+0.144655534 container attach eeb4228b1a57faf7fec2a3b6754327efad9c64fe3b0d2152f2c73fde7cc046d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 07:32:55 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  2 07:32:55 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  2 07:32:55 np0005465987 exciting_allen[78789]: {
Oct  2 07:32:55 np0005465987 exciting_allen[78789]:    "a3ccd8b9-4533-4913-ba5b-13fae1978607": {
Oct  2 07:32:55 np0005465987 exciting_allen[78789]:        "ceph_fsid": "fd4c5763-22d1-50ea-ad0b-96a3dc3040b2",
Oct  2 07:32:55 np0005465987 exciting_allen[78789]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 07:32:55 np0005465987 exciting_allen[78789]:        "osd_id": 0,
Oct  2 07:32:55 np0005465987 exciting_allen[78789]:        "osd_uuid": "a3ccd8b9-4533-4913-ba5b-13fae1978607",
Oct  2 07:32:55 np0005465987 exciting_allen[78789]:        "type": "bluestore"
Oct  2 07:32:55 np0005465987 exciting_allen[78789]:    }
Oct  2 07:32:55 np0005465987 exciting_allen[78789]: }
Oct  2 07:32:56 np0005465987 systemd[1]: libpod-eeb4228b1a57faf7fec2a3b6754327efad9c64fe3b0d2152f2c73fde7cc046d5.scope: Deactivated successfully.
Oct  2 07:32:56 np0005465987 conmon[78789]: conmon eeb4228b1a57faf7fec2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eeb4228b1a57faf7fec2a3b6754327efad9c64fe3b0d2152f2c73fde7cc046d5.scope/container/memory.events
Oct  2 07:32:56 np0005465987 podman[78773]: 2025-10-02 11:32:56.011447498 +0000 UTC m=+0.942120150 container died eeb4228b1a57faf7fec2a3b6754327efad9c64fe3b0d2152f2c73fde7cc046d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 07:32:56 np0005465987 systemd[1]: var-lib-containers-storage-overlay-dce67d2dba8f6faf7fb433b91364fe061cbea6e62556d7dd369de98b85f6fcaa-merged.mount: Deactivated successfully.
Oct  2 07:32:56 np0005465987 ceph-osd[78159]: osd.0 0 done with init, starting boot process
Oct  2 07:32:56 np0005465987 ceph-osd[78159]: osd.0 0 start_boot
Oct  2 07:32:56 np0005465987 ceph-osd[78159]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  2 07:32:56 np0005465987 ceph-osd[78159]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  2 07:32:56 np0005465987 ceph-osd[78159]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  2 07:32:56 np0005465987 ceph-osd[78159]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  2 07:32:56 np0005465987 ceph-osd[78159]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct  2 07:32:57 np0005465987 podman[78773]: 2025-10-02 11:32:57.134525063 +0000 UTC m=+2.065197715 container remove eeb4228b1a57faf7fec2a3b6754327efad9c64fe3b0d2152f2c73fde7cc046d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_allen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:32:57 np0005465987 systemd[1]: libpod-conmon-eeb4228b1a57faf7fec2a3b6754327efad9c64fe3b0d2152f2c73fde7cc046d5.scope: Deactivated successfully.
Oct  2 07:33:00 np0005465987 podman[79042]: 2025-10-02 11:33:00.184863256 +0000 UTC m=+0.105248345 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:33:00 np0005465987 podman[79042]: 2025-10-02 11:33:00.368093875 +0000 UTC m=+0.288478974 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:33:01 np0005465987 podman[79235]: 2025-10-02 11:33:01.224929326 +0000 UTC m=+0.087777110 container create 6e0396ef55652603937a4d8081700236c8055fc53544468a586b66192695dafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 07:33:01 np0005465987 podman[79235]: 2025-10-02 11:33:01.159157298 +0000 UTC m=+0.022005102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:33:01 np0005465987 systemd[1]: Started libpod-conmon-6e0396ef55652603937a4d8081700236c8055fc53544468a586b66192695dafd.scope.
Oct  2 07:33:01 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:33:01 np0005465987 podman[79235]: 2025-10-02 11:33:01.442824782 +0000 UTC m=+0.305672566 container init 6e0396ef55652603937a4d8081700236c8055fc53544468a586b66192695dafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 07:33:01 np0005465987 podman[79235]: 2025-10-02 11:33:01.456390676 +0000 UTC m=+0.319238460 container start 6e0396ef55652603937a4d8081700236c8055fc53544468a586b66192695dafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_heyrovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 07:33:01 np0005465987 hungry_heyrovsky[79251]: 167 167
Oct  2 07:33:01 np0005465987 systemd[1]: libpod-6e0396ef55652603937a4d8081700236c8055fc53544468a586b66192695dafd.scope: Deactivated successfully.
Oct  2 07:33:01 np0005465987 podman[79235]: 2025-10-02 11:33:01.482299772 +0000 UTC m=+0.345147556 container attach 6e0396ef55652603937a4d8081700236c8055fc53544468a586b66192695dafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_heyrovsky, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:33:01 np0005465987 podman[79235]: 2025-10-02 11:33:01.483049802 +0000 UTC m=+0.345897616 container died 6e0396ef55652603937a4d8081700236c8055fc53544468a586b66192695dafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_heyrovsky, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 07:33:01 np0005465987 systemd[1]: var-lib-containers-storage-overlay-2e41d5d6b25d9448f3a5be7a34307054ad0dc91ebe26156e07c6528a66624e6d-merged.mount: Deactivated successfully.
Oct  2 07:33:01 np0005465987 podman[79235]: 2025-10-02 11:33:01.596027584 +0000 UTC m=+0.458875368 container remove 6e0396ef55652603937a4d8081700236c8055fc53544468a586b66192695dafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 07:33:01 np0005465987 systemd[1]: libpod-conmon-6e0396ef55652603937a4d8081700236c8055fc53544468a586b66192695dafd.scope: Deactivated successfully.
Oct  2 07:33:01 np0005465987 podman[79277]: 2025-10-02 11:33:01.754955141 +0000 UTC m=+0.057028922 container create 7dec59ddc1b2c70e70ea9f72ac10b718d8e494f35db3f7fbd7fa919c82d1c428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 07:33:01 np0005465987 systemd[1]: Started libpod-conmon-7dec59ddc1b2c70e70ea9f72ac10b718d8e494f35db3f7fbd7fa919c82d1c428.scope.
Oct  2 07:33:01 np0005465987 podman[79277]: 2025-10-02 11:33:01.721437581 +0000 UTC m=+0.023511372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:33:01 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:33:01 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d3934e8f7af00e4ca4f9e64c52a03897a99abef1e45d1f2b3e2357a5f81a0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:01 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d3934e8f7af00e4ca4f9e64c52a03897a99abef1e45d1f2b3e2357a5f81a0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:01 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d3934e8f7af00e4ca4f9e64c52a03897a99abef1e45d1f2b3e2357a5f81a0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:01 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d3934e8f7af00e4ca4f9e64c52a03897a99abef1e45d1f2b3e2357a5f81a0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:01 np0005465987 podman[79277]: 2025-10-02 11:33:01.94421601 +0000 UTC m=+0.246289791 container init 7dec59ddc1b2c70e70ea9f72ac10b718d8e494f35db3f7fbd7fa919c82d1c428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_robinson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 07:33:01 np0005465987 podman[79277]: 2025-10-02 11:33:01.951362062 +0000 UTC m=+0.253435843 container start 7dec59ddc1b2c70e70ea9f72ac10b718d8e494f35db3f7fbd7fa919c82d1c428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_robinson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:33:01 np0005465987 podman[79277]: 2025-10-02 11:33:01.970999419 +0000 UTC m=+0.273073290 container attach 7dec59ddc1b2c70e70ea9f72ac10b718d8e494f35db3f7fbd7fa919c82d1c428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:33:02 np0005465987 ceph-osd[78159]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 17.016 iops: 4356.019 elapsed_sec: 0.689
Oct  2 07:33:02 np0005465987 ceph-osd[78159]: log_channel(cluster) log [WRN] : OSD bench result of 4356.019216 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
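[editor's note] The warning above names its own remediation: measure the device's real IOPS with an external benchmark and pin the result via `osd_mclock_max_capacity_iops_[hdd|ssd]`. A minimal sketch of that procedure, assuming a flash-backed OSD whose data device is `/dev/vdb` and a measured capacity of 4000 IOPS (both values are illustrative, not taken from this log):

```shell
# Benchmark the raw OSD device with fio (destructive on raw devices --
# run only before the OSD is deployed, or against a scratch file).
fio --name=osdbench --filename=/dev/vdb --rw=randwrite --bs=4k \
    --iodepth=32 --runtime=30 --ioengine=libaio --direct=1 --group_reporting

# Pin the measured capacity for osd.0 so the mClock scheduler stops
# falling back to the 315 IOPS default seen in the warning.
ceph config set osd.0 osd_mclock_max_capacity_iops_ssd 4000
```

The override is per-OSD here; setting it at the `osd` section level would apply one value cluster-wide, which is only sensible when the backing devices are uniform.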
Oct  2 07:33:02 np0005465987 ceph-osd[78159]: osd.0 0 waiting for initial osdmap
Oct  2 07:33:02 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0[78155]: 2025-10-02T11:33:02.490+0000 7f24ecdbe640 -1 osd.0 0 waiting for initial osdmap
Oct  2 07:33:02 np0005465987 ceph-osd[78159]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct  2 07:33:02 np0005465987 ceph-osd[78159]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct  2 07:33:02 np0005465987 ceph-osd[78159]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct  2 07:33:02 np0005465987 ceph-osd[78159]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Oct  2 07:33:02 np0005465987 ceph-osd[78159]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  2 07:33:02 np0005465987 ceph-osd[78159]: osd.0 9 set_numa_affinity not setting numa affinity
Oct  2 07:33:02 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-osd-0[78155]: 2025-10-02T11:33:02.528+0000 7f24e83e6640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  2 07:33:02 np0005465987 ceph-osd[78159]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]: [
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:    {
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:        "available": false,
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:        "ceph_device": false,
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:        "lsm_data": {},
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:        "lvs": [],
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:        "path": "/dev/sr0",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:        "rejected_reasons": [
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "Has a FileSystem",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "Insufficient space (<5GB)"
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:        ],
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:        "sys_api": {
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "actuators": null,
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "device_nodes": "sr0",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "devname": "sr0",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "human_readable_size": "482.00 KB",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "id_bus": "ata",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "model": "QEMU DVD-ROM",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "nr_requests": "2",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "parent": "/dev/sr0",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "partitions": {},
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "path": "/dev/sr0",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "removable": "1",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "rev": "2.5+",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "ro": "0",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "rotational": "0",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "sas_address": "",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "sas_device_handle": "",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "scheduler_mode": "mq-deadline",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "sectors": 0,
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "sectorsize": "2048",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "size": 493568.0,
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "support_discard": "2048",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "type": "disk",
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:            "vendor": "QEMU"
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:        }
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]:    }
Oct  2 07:33:02 np0005465987 upbeat_robinson[79294]: ]
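[editor's note] The JSON array emitted by the `upbeat_robinson` container above is a `ceph-volume inventory` report: one record per block device, with `available` and `rejected_reasons` explaining why `/dev/sr0` (the QEMU DVD-ROM) cannot host an OSD. A small sketch of how such a report can be filtered, using a sample record reduced from the log entry:

```python
import json

# Sample inventory record mirroring the /dev/sr0 entry logged above,
# trimmed to the fields this sketch actually inspects.
report = json.loads("""
[
  {
    "available": false,
    "ceph_device": false,
    "device_id": "QEMU_DVD-ROM_QM00001",
    "path": "/dev/sr0",
    "rejected_reasons": ["Has a FileSystem", "Insufficient space (<5GB)"]
  }
]
""")

# Devices cephadm could turn into OSDs.
usable = [dev["path"] for dev in report if dev["available"]]

# Map each rejected device to the reasons ceph-volume gave.
rejected = {
    dev["path"]: dev["rejected_reasons"]
    for dev in report
    if not dev["available"]
}

print(usable)     # empty: the only device in this sample was rejected
print(rejected)
```

In this host's log the report contains only the DVD-ROM, so the usable list is empty, which is consistent with the node's OSD already running on a separate loop-backed device (`loop3`, mentioned a few lines earlier).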
Oct  2 07:33:02 np0005465987 systemd[1]: libpod-7dec59ddc1b2c70e70ea9f72ac10b718d8e494f35db3f7fbd7fa919c82d1c428.scope: Deactivated successfully.
Oct  2 07:33:03 np0005465987 systemd[1]: libpod-7dec59ddc1b2c70e70ea9f72ac10b718d8e494f35db3f7fbd7fa919c82d1c428.scope: Consumed 1.049s CPU time.
Oct  2 07:33:03 np0005465987 podman[79277]: 2025-10-02 11:33:02.999403043 +0000 UTC m=+1.301476814 container died 7dec59ddc1b2c70e70ea9f72ac10b718d8e494f35db3f7fbd7fa919c82d1c428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_robinson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:33:03 np0005465987 systemd[1]: var-lib-containers-storage-overlay-88d3934e8f7af00e4ca4f9e64c52a03897a99abef1e45d1f2b3e2357a5f81a0c-merged.mount: Deactivated successfully.
Oct  2 07:33:03 np0005465987 podman[79277]: 2025-10-02 11:33:03.050844693 +0000 UTC m=+1.352918474 container remove 7dec59ddc1b2c70e70ea9f72ac10b718d8e494f35db3f7fbd7fa919c82d1c428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:33:03 np0005465987 systemd[1]: libpod-conmon-7dec59ddc1b2c70e70ea9f72ac10b718d8e494f35db3f7fbd7fa919c82d1c428.scope: Deactivated successfully.
Oct  2 07:33:03 np0005465987 ceph-osd[78159]: osd.0 10 state: booting -> active
Oct  2 07:33:10 np0005465987 ceph-osd[78159]: osd.0 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  2 07:33:10 np0005465987 ceph-osd[78159]: osd.0 14 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct  2 07:33:10 np0005465987 ceph-osd[78159]: osd.0 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  2 07:33:29 np0005465987 podman[80586]: 2025-10-02 11:33:29.777122973 +0000 UTC m=+0.037949750 container create a6eb21e94db82c7ed9f042d507d0c2608e11e426e6595d05365c2c9e9df4023e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hamilton, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 07:33:29 np0005465987 systemd[1]: Started libpod-conmon-a6eb21e94db82c7ed9f042d507d0c2608e11e426e6595d05365c2c9e9df4023e.scope.
Oct  2 07:33:29 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:33:29 np0005465987 podman[80586]: 2025-10-02 11:33:29.846965428 +0000 UTC m=+0.107792235 container init a6eb21e94db82c7ed9f042d507d0c2608e11e426e6595d05365c2c9e9df4023e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hamilton, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:33:29 np0005465987 podman[80586]: 2025-10-02 11:33:29.853462292 +0000 UTC m=+0.114289069 container start a6eb21e94db82c7ed9f042d507d0c2608e11e426e6595d05365c2c9e9df4023e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:33:29 np0005465987 podman[80586]: 2025-10-02 11:33:29.761397451 +0000 UTC m=+0.022224238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:33:29 np0005465987 podman[80586]: 2025-10-02 11:33:29.85712371 +0000 UTC m=+0.117950507 container attach a6eb21e94db82c7ed9f042d507d0c2608e11e426e6595d05365c2c9e9df4023e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:33:29 np0005465987 happy_hamilton[80603]: 167 167
Oct  2 07:33:29 np0005465987 systemd[1]: libpod-a6eb21e94db82c7ed9f042d507d0c2608e11e426e6595d05365c2c9e9df4023e.scope: Deactivated successfully.
Oct  2 07:33:29 np0005465987 podman[80586]: 2025-10-02 11:33:29.859250837 +0000 UTC m=+0.120077604 container died a6eb21e94db82c7ed9f042d507d0c2608e11e426e6595d05365c2c9e9df4023e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hamilton, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 07:33:29 np0005465987 systemd[1]: var-lib-containers-storage-overlay-94990b8c18eea10a5e1ebdb397c8167051a78549b1a8214628bb22f295e0d931-merged.mount: Deactivated successfully.
Oct  2 07:33:29 np0005465987 podman[80586]: 2025-10-02 11:33:29.900243418 +0000 UTC m=+0.161070215 container remove a6eb21e94db82c7ed9f042d507d0c2608e11e426e6595d05365c2c9e9df4023e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hamilton, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:33:29 np0005465987 systemd[1]: libpod-conmon-a6eb21e94db82c7ed9f042d507d0c2608e11e426e6595d05365c2c9e9df4023e.scope: Deactivated successfully.
Oct  2 07:33:29 np0005465987 podman[80620]: 2025-10-02 11:33:29.976387272 +0000 UTC m=+0.045025900 container create fc0d1bb6a75d7d513f904a9762f184e434f2149c93141b00545728f44849ddf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chatelet, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 07:33:30 np0005465987 systemd[1]: Started libpod-conmon-fc0d1bb6a75d7d513f904a9762f184e434f2149c93141b00545728f44849ddf5.scope.
Oct  2 07:33:30 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:33:30 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3a4e0b71bf38adf185a6621f819d63d483f75b4f72b454151a729e85d5770e/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:30 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3a4e0b71bf38adf185a6621f819d63d483f75b4f72b454151a729e85d5770e/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:30 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3a4e0b71bf38adf185a6621f819d63d483f75b4f72b454151a729e85d5770e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:30 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3a4e0b71bf38adf185a6621f819d63d483f75b4f72b454151a729e85d5770e/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:30 np0005465987 podman[80620]: 2025-10-02 11:33:29.955950563 +0000 UTC m=+0.024589281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:33:30 np0005465987 podman[80620]: 2025-10-02 11:33:30.057301494 +0000 UTC m=+0.125940142 container init fc0d1bb6a75d7d513f904a9762f184e434f2149c93141b00545728f44849ddf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 07:33:30 np0005465987 podman[80620]: 2025-10-02 11:33:30.064952569 +0000 UTC m=+0.133591197 container start fc0d1bb6a75d7d513f904a9762f184e434f2149c93141b00545728f44849ddf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chatelet, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:33:30 np0005465987 podman[80620]: 2025-10-02 11:33:30.068683019 +0000 UTC m=+0.137321647 container attach fc0d1bb6a75d7d513f904a9762f184e434f2149c93141b00545728f44849ddf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chatelet, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:33:30 np0005465987 systemd[1]: libpod-fc0d1bb6a75d7d513f904a9762f184e434f2149c93141b00545728f44849ddf5.scope: Deactivated successfully.
Oct  2 07:33:30 np0005465987 podman[80663]: 2025-10-02 11:33:30.20878656 +0000 UTC m=+0.027590572 container died fc0d1bb6a75d7d513f904a9762f184e434f2149c93141b00545728f44849ddf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 07:33:30 np0005465987 systemd[1]: var-lib-containers-storage-overlay-2a3a4e0b71bf38adf185a6621f819d63d483f75b4f72b454151a729e85d5770e-merged.mount: Deactivated successfully.
Oct  2 07:33:30 np0005465987 podman[80663]: 2025-10-02 11:33:30.244669933 +0000 UTC m=+0.063473925 container remove fc0d1bb6a75d7d513f904a9762f184e434f2149c93141b00545728f44849ddf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chatelet, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:33:30 np0005465987 systemd[1]: libpod-conmon-fc0d1bb6a75d7d513f904a9762f184e434f2149c93141b00545728f44849ddf5.scope: Deactivated successfully.
Oct  2 07:33:30 np0005465987 systemd[1]: Reloading.
Oct  2 07:33:30 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:33:30 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:33:30 np0005465987 systemd[1]: Reloading.
Oct  2 07:33:30 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:33:30 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:33:30 np0005465987 systemd[1]: Starting Ceph mon.compute-1 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct  2 07:33:31 np0005465987 podman[80803]: 2025-10-02 11:33:31.088298317 +0000 UTC m=+0.037502017 container create fe37bd460a2cb797d641abfa5ca1fd0437c50012965511584ddfafb135286cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-1, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 07:33:31 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c2691f37ec4f3fab3a340e0c9114505ee86c889337270a1a14afd0de4f88ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:31 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c2691f37ec4f3fab3a340e0c9114505ee86c889337270a1a14afd0de4f88ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:31 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c2691f37ec4f3fab3a340e0c9114505ee86c889337270a1a14afd0de4f88ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:31 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c2691f37ec4f3fab3a340e0c9114505ee86c889337270a1a14afd0de4f88ce/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:31 np0005465987 podman[80803]: 2025-10-02 11:33:31.156204739 +0000 UTC m=+0.105408479 container init fe37bd460a2cb797d641abfa5ca1fd0437c50012965511584ddfafb135286cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:33:31 np0005465987 podman[80803]: 2025-10-02 11:33:31.166080585 +0000 UTC m=+0.115284295 container start fe37bd460a2cb797d641abfa5ca1fd0437c50012965511584ddfafb135286cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct  2 07:33:31 np0005465987 podman[80803]: 2025-10-02 11:33:31.070972002 +0000 UTC m=+0.020175722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:33:31 np0005465987 bash[80803]: fe37bd460a2cb797d641abfa5ca1fd0437c50012965511584ddfafb135286cdd
Oct  2 07:33:31 np0005465987 systemd[1]: Started Ceph mon.compute-1 for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: pidfile_write: ignore empty --pid-file
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: load: jerasure load: lrc 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: RocksDB version: 7.9.2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Git sha 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: DB SUMMARY
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: DB Session ID:  N1IM92GH3TJZCQ2U874J
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: CURRENT file:  CURRENT
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                         Options.error_if_exists: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                       Options.create_if_missing: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                                     Options.env: 0x55f96367bc40
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                                Options.info_log: 0x55f96448efc0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                              Options.statistics: (nil)
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                               Options.use_fsync: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                              Options.db_log_dir: 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                                 Options.wal_dir: 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                    Options.write_buffer_manager: 0x55f96449eb40
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                  Options.unordered_write: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                               Options.row_cache: None
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                              Options.wal_filter: None
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.two_write_queues: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.wal_compression: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.atomic_flush: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.max_background_jobs: 2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.max_background_compactions: -1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.max_subcompactions: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.max_total_wal_size: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                          Options.max_open_files: -1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:       Options.compaction_readahead_size: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Compression algorithms supported:
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: #011kZSTD supported: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: #011kXpressCompression supported: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: #011kBZip2Compression supported: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: #011kLZ4Compression supported: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: #011kZlibCompression supported: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: #011kSnappyCompression supported: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:           Options.merge_operator: 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:        Options.compaction_filter: None
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f96448ec00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f9644871f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:        Options.write_buffer_size: 33554432
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:  Options.max_write_buffer_number: 2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:          Options.compression: NoCompression
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.num_levels: 7
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1a73ed96-995a-4c76-99da-82d621f02865
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404811207760, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404811209433, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759404811209537, "job": 1, "event": "recovery_finished"}
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f9644b0e00
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: DB pointer 0x55f9645c2000
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9644871f0#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(???) e0 preinit fsid fd4c5763-22d1-50ea-ad0b-96a3dc3040b2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).mds e1 new map
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 0 up, 2 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e14 crush map has features 3314933000852226048, adjusting msgr requires
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e14 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e14 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).osd e14 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Added host compute-2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Saving service mon spec with placement compute-0;compute-1;compute-2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Marking host: compute-0 for OSDSpec preview refresh.
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Marking host: compute-1 for OSDSpec preview refresh.
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Updating compute-1:/etc/ceph/ceph.conf
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Updating compute-1:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Updating compute-1:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Deploying daemon crash.compute-1 on compute-1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.101:0/1556391589' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a3ccd8b9-4533-4913-ba5b-13fae1978607"}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.101:0/1556391589' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a3ccd8b9-4533-4913-ba5b-13fae1978607"}]': finished
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/3545942114' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201"}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/3545942114' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ca035294-3f2d-465d-b3e6-43971a2c0201"}]': finished
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Deploying daemon osd.0 on compute-1
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Deploying daemon osd.1 on compute-0
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: OSD bench result of 4356.019216 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: osd.0 [v2:192.168.122.101:6800/721300389,v1:192.168.122.101:6801/721300389] boot
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Adjusting osd_memory_target on compute-1 to  5247M
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: OSD bench result of 10020.515737 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Adjusting osd_memory_target on compute-0 to 127.8M
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Unable to set osd_memory_target on compute-0 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: osd.1 [v2:192.168.122.100:6802/1702934741,v1:192.168.122.100:6803/1702934741] boot
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Updating compute-2:/etc/ceph/ceph.conf
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Updating compute-2:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Updating compute-2:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.client.admin.keyring
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Deploying daemon mon.compute-2 on compute-2
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: Cluster is now healthy
Oct  2 07:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Oct  2 07:33:37 np0005465987 ceph-mon[80822]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Oct  2 07:33:37 np0005465987 ceph-mon[80822]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Oct  2 07:33:37 np0005465987 ceph-mon[80822]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Oct  2 07:33:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:33:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:33:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct  2 07:33:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct  2 07:33:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e15 e15: 2 total, 2 up, 2 in
Oct  2 07:33:40 np0005465987 ceph-mon[80822]: Deploying daemon mon.compute-1 on compute-1
Oct  2 07:33:40 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/4219378585' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:33:40 np0005465987 ceph-mon[80822]: mon.compute-0 calling monitor election
Oct  2 07:33:40 np0005465987 ceph-mon[80822]: mon.compute-2 calling monitor election
Oct  2 07:33:40 np0005465987 ceph-mon[80822]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct  2 07:33:40 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 07:33:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:33:40 np0005465987 ceph-mon[80822]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2025-10-02T11:33:30.114215Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864096,os=Linux}
Oct  2 07:33:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Oct  2 07:33:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e16 e16: 2 total, 2 up, 2 in
Oct  2 07:33:41 np0005465987 ceph-mon[80822]: mon.compute-0 calling monitor election
Oct  2 07:33:41 np0005465987 ceph-mon[80822]: mon.compute-2 calling monitor election
Oct  2 07:33:41 np0005465987 ceph-mon[80822]: mon.compute-1 calling monitor election
Oct  2 07:33:41 np0005465987 ceph-mon[80822]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct  2 07:33:41 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 07:33:41 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e16 _set_new_cache_sizes cache_size:1019939393 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:33:41 np0005465987 podman[81001]: 2025-10-02 11:33:41.984753333 +0000 UTC m=+0.050835427 container create 26c9652e5eb9a81c73d63ea028a025afb6e3f4d8bd2c715e96c90878e0d58b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dijkstra, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:33:42 np0005465987 systemd[1]: Started libpod-conmon-26c9652e5eb9a81c73d63ea028a025afb6e3f4d8bd2c715e96c90878e0d58b48.scope.
Oct  2 07:33:42 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:33:42 np0005465987 podman[81001]: 2025-10-02 11:33:41.957522592 +0000 UTC m=+0.023604716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:33:42 np0005465987 podman[81001]: 2025-10-02 11:33:42.069869087 +0000 UTC m=+0.135951201 container init 26c9652e5eb9a81c73d63ea028a025afb6e3f4d8bd2c715e96c90878e0d58b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:33:42 np0005465987 podman[81001]: 2025-10-02 11:33:42.083835682 +0000 UTC m=+0.149917766 container start 26c9652e5eb9a81c73d63ea028a025afb6e3f4d8bd2c715e96c90878e0d58b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dijkstra, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct  2 07:33:42 np0005465987 podman[81001]: 2025-10-02 11:33:42.086957686 +0000 UTC m=+0.153039780 container attach 26c9652e5eb9a81c73d63ea028a025afb6e3f4d8bd2c715e96c90878e0d58b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dijkstra, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct  2 07:33:42 np0005465987 amazing_dijkstra[81016]: 167 167
Oct  2 07:33:42 np0005465987 systemd[1]: libpod-26c9652e5eb9a81c73d63ea028a025afb6e3f4d8bd2c715e96c90878e0d58b48.scope: Deactivated successfully.
Oct  2 07:33:42 np0005465987 podman[81001]: 2025-10-02 11:33:42.095099104 +0000 UTC m=+0.161181208 container died 26c9652e5eb9a81c73d63ea028a025afb6e3f4d8bd2c715e96c90878e0d58b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dijkstra, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:33:42 np0005465987 systemd[1]: var-lib-containers-storage-overlay-496ffd012cb8e752bd487b4eb08861a07ffc62f5bddc23151a1abf5d66aefb13-merged.mount: Deactivated successfully.
Oct  2 07:33:42 np0005465987 ceph-mon[80822]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:33:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ypnrbl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  2 07:33:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.ypnrbl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  2 07:33:42 np0005465987 ceph-mon[80822]: Deploying daemon mgr.compute-1.ypnrbl on compute-1
Oct  2 07:33:42 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/1347565839' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:33:42 np0005465987 podman[81001]: 2025-10-02 11:33:42.133413693 +0000 UTC m=+0.199495787 container remove 26c9652e5eb9a81c73d63ea028a025afb6e3f4d8bd2c715e96c90878e0d58b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 07:33:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e17 e17: 2 total, 2 up, 2 in
Oct  2 07:33:42 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:42 np0005465987 systemd[1]: libpod-conmon-26c9652e5eb9a81c73d63ea028a025afb6e3f4d8bd2c715e96c90878e0d58b48.scope: Deactivated successfully.
Oct  2 07:33:42 np0005465987 systemd[1]: Reloading.
Oct  2 07:33:42 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:33:42 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:33:42 np0005465987 systemd[1]: Reloading.
Oct  2 07:33:42 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:33:42 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:33:42 np0005465987 systemd[1]: Starting Ceph mgr.compute-1.ypnrbl for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct  2 07:33:42 np0005465987 podman[81161]: 2025-10-02 11:33:42.900299836 +0000 UTC m=+0.046396915 container create 01c00c066c6a1992c5663888da9517483ab4477f50d8d8f6d3a35fd48923a9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 07:33:42 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dff43c2b2babcd4165f0f731308aae2993fa0bd7004d3d28cd97818d7af3bae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:42 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dff43c2b2babcd4165f0f731308aae2993fa0bd7004d3d28cd97818d7af3bae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:42 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dff43c2b2babcd4165f0f731308aae2993fa0bd7004d3d28cd97818d7af3bae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:42 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dff43c2b2babcd4165f0f731308aae2993fa0bd7004d3d28cd97818d7af3bae/merged/var/lib/ceph/mgr/ceph-compute-1.ypnrbl supports timestamps until 2038 (0x7fffffff)
Oct  2 07:33:42 np0005465987 podman[81161]: 2025-10-02 11:33:42.969150685 +0000 UTC m=+0.115247764 container init 01c00c066c6a1992c5663888da9517483ab4477f50d8d8f6d3a35fd48923a9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 07:33:42 np0005465987 podman[81161]: 2025-10-02 11:33:42.876494438 +0000 UTC m=+0.022591537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:33:42 np0005465987 podman[81161]: 2025-10-02 11:33:42.976519603 +0000 UTC m=+0.122616672 container start 01c00c066c6a1992c5663888da9517483ab4477f50d8d8f6d3a35fd48923a9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:33:42 np0005465987 bash[81161]: 01c00c066c6a1992c5663888da9517483ab4477f50d8d8f6d3a35fd48923a9ae
Oct  2 07:33:42 np0005465987 systemd[1]: Started Ceph mgr.compute-1.ypnrbl for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct  2 07:33:43 np0005465987 ceph-mgr[81180]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:33:43 np0005465987 ceph-mgr[81180]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  2 07:33:43 np0005465987 ceph-mgr[81180]: pidfile_write: ignore empty --pid-file
Oct  2 07:33:43 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'alerts'
Oct  2 07:33:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e18 e18: 2 total, 2 up, 2 in
Oct  2 07:33:43 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/1347565839' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:33:43 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:33:43 np0005465987 ceph-mgr[81180]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  2 07:33:43 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'balancer'
Oct  2 07:33:43 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:43.416+0000 7f9b5962f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  2 07:33:43 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:43 np0005465987 ceph-mgr[81180]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  2 07:33:43 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'cephadm'
Oct  2 07:33:43 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:43.683+0000 7f9b5962f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  2 07:33:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e19 e19: 2 total, 2 up, 2 in
Oct  2 07:33:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:44 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/2188088816' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:33:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:33:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:33:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 07:33:44 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:45 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'crash'
Oct  2 07:33:46 np0005465987 ceph-mgr[81180]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  2 07:33:46 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'dashboard'
Oct  2 07:33:46 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:46.013+0000 7f9b5962f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  2 07:33:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  2 07:33:46 np0005465987 ceph-mon[80822]: Deploying daemon crash.compute-2 on compute-2
Oct  2 07:33:46 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/2188088816' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:33:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:33:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e19 _set_new_cache_sizes cache_size:1020053343 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:33:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e20 e20: 2 total, 2 up, 2 in
Oct  2 07:33:46 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:47 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'devicehealth'
Oct  2 07:33:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:33:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:33:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:47 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/4165571221' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:33:47 np0005465987 ceph-mgr[81180]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  2 07:33:47 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'diskprediction_local'
Oct  2 07:33:47 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:47.762+0000 7f9b5962f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  2 07:33:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e21 e21: 2 total, 2 up, 2 in
Oct  2 07:33:48 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  2 07:33:48 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  2 07:33:48 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]:  from numpy import show_config as show_numpy_config
Oct  2 07:33:48 np0005465987 ceph-mgr[81180]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  2 07:33:48 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:48.316+0000 7f9b5962f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  2 07:33:48 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'influx'
Oct  2 07:33:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 21 pg[5.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 21 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=21 pruub=11.247097969s) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active pruub 64.751884460s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 21 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=21 pruub=11.247097969s) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown pruub 64.751884460s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:48 np0005465987 ceph-mgr[81180]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  2 07:33:48 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'insights'
Oct  2 07:33:48 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:48.559+0000 7f9b5962f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  2 07:33:48 np0005465987 ceph-mon[80822]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:33:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:33:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:33:48 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/4165571221' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:33:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:33:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:33:48 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'iostat'
Oct  2 07:33:49 np0005465987 ceph-mgr[81180]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  2 07:33:49 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'k8sevents'
Oct  2 07:33:49 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:49.072+0000 7f9b5962f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  2 07:33:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e22 e22: 2 total, 2 up, 2 in
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1f( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1c( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1d( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1e( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1b( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.a( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.5( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.4( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.2( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.6( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.9( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.3( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.8( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.7( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.b( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.c( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.d( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.f( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.10( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.11( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.12( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.13( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.e( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.15( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.16( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.14( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.17( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.18( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.19( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1a( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1f( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1c( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1d( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1e( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.a( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.5( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.4( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.2( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.3( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.0( empty local-lis/les=21/22 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.7( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.b( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.d( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.c( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.f( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.10( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.11( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.6( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.9( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.13( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.e( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.12( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.15( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.14( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.18( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.1a( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.16( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.17( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:49 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 22 pg[3.19( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:50 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/3857097005' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:33:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e23 e23: 2 total, 2 up, 2 in
Oct  2 07:33:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:50 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'localpool'
Oct  2 07:33:51 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'mds_autoscaler'
Oct  2 07:33:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e23 _set_new_cache_sizes cache_size:1020054714 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:33:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e24 e24: 3 total, 2 up, 3 in
Oct  2 07:33:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  2 07:33:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4255282364' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:33:51 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/3857097005' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:33:51 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.102:0/1986060367' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4adc6a3a-57df-44c5-8148-0263723a70e6"}]: dispatch
Oct  2 07:33:51 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4adc6a3a-57df-44c5-8148-0263723a70e6"}]: dispatch
Oct  2 07:33:51 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'mirroring'
Oct  2 07:33:52 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'nfs'
Oct  2 07:33:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Oct  2 07:33:52 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4adc6a3a-57df-44c5-8148-0263723a70e6"}]': finished
Oct  2 07:33:52 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/4255282364' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:33:52 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:33:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:52 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:33:52 np0005465987 ceph-mgr[81180]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  2 07:33:52 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'orchestrator'
Oct  2 07:33:52 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:52.864+0000 7f9b5962f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  2 07:33:53 np0005465987 ceph-mgr[81180]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  2 07:33:53 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'osd_perf_query'
Oct  2 07:33:53 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:53.561+0000 7f9b5962f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  2 07:33:53 np0005465987 ceph-mgr[81180]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  2 07:33:53 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'osd_support'
Oct  2 07:33:53 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:53.838+0000 7f9b5962f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  2 07:33:54 np0005465987 ceph-mon[80822]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:33:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Oct  2 07:33:54 np0005465987 ceph-mgr[81180]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  2 07:33:54 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'pg_autoscaler'
Oct  2 07:33:54 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:54.093+0000 7f9b5962f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  2 07:33:54 np0005465987 ceph-mgr[81180]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  2 07:33:54 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:54.383+0000 7f9b5962f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  2 07:33:54 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'progress'
Oct  2 07:33:54 np0005465987 ceph-mgr[81180]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  2 07:33:54 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:54.646+0000 7f9b5962f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  2 07:33:54 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'prometheus'
Oct  2 07:33:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Oct  2 07:33:55 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/3376646480' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  2 07:33:55 np0005465987 ceph-mgr[81180]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  2 07:33:55 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'rbd_support'
Oct  2 07:33:55 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:55.709+0000 7f9b5962f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  2 07:33:56 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:56.031+0000 7f9b5962f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  2 07:33:56 np0005465987 ceph-mgr[81180]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  2 07:33:56 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'restful'
Oct  2 07:33:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:33:56 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:56 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/3376646480' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  2 07:33:56 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:33:56 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:33:56 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:33:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.622350693s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.414695740s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.622241974s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.414695740s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621875763s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.414703369s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621881485s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.414718628s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.622421265s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.415321350s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.622368813s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.415321350s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621756554s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.414703369s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621730804s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.414718628s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621742249s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.414810181s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621697426s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.414810181s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621869087s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.415054321s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621843338s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.415054321s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621833801s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.415161133s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621814728s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.415161133s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621708870s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.415122986s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621908188s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.415344238s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621676445s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.415122986s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621888161s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.415344238s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621089935s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.415283203s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.620978355s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.415252686s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621026993s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.415283203s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621044159s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.415344238s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.620950699s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.415252686s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621075630s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.415412903s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621055603s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.415412903s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.620998383s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.415344238s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.621006012s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.415458679s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.620942116s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.415397644s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.620989799s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.415458679s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.620920181s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.415397644s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.620914459s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.415435791s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.620893478s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.415435791s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.620625496s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active pruub 70.415229797s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=28 pruub=8.620609283s) [1] r=-1 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.415229797s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.1f( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.1e( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.a( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.9( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.6( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.4( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.1( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.d( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.c( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.e( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.10( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.13( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.19( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.15( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 28 pg[2.1b( empty local-lis/les=0/0 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:33:56 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'rgw'
Oct  2 07:33:57 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/2736872602' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  2 07:33:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:33:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:33:57 np0005465987 ceph-mgr[81180]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  2 07:33:57 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'rook'
Oct  2 07:33:57 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:57.510+0000 7f9b5962f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  2 07:33:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.c( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.a( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.1( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.4( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.d( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.9( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.1e( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.6( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.e( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.10( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.15( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.1b( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.13( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:57 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=21/21 les/c/f=22/22/0 sis=28) [0] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:33:58 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/2736872602' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  2 07:33:59 np0005465987 ceph-mgr[81180]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  2 07:33:59 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:59.727+0000 7f9b5962f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  2 07:33:59 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'selftest'
Oct  2 07:33:59 np0005465987 ceph-mon[80822]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:33:59 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/2891127426' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  2 07:33:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  2 07:33:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Oct  2 07:33:59 np0005465987 ceph-mgr[81180]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  2 07:33:59 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:33:59.977+0000 7f9b5962f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  2 07:33:59 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'snap_schedule'
Oct  2 07:34:00 np0005465987 ceph-mgr[81180]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  2 07:34:00 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:34:00.257+0000 7f9b5962f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  2 07:34:00 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'stats'
Oct  2 07:34:00 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'status'
Oct  2 07:34:00 np0005465987 ceph-mon[80822]: Deploying daemon osd.2 on compute-2
Oct  2 07:34:00 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/2891127426' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  2 07:34:00 np0005465987 ceph-mgr[81180]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  2 07:34:00 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'telegraf'
Oct  2 07:34:00 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:34:00.833+0000 7f9b5962f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  2 07:34:01 np0005465987 ceph-mgr[81180]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  2 07:34:01 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:34:01.082+0000 7f9b5962f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  2 07:34:01 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'telemetry'
Oct  2 07:34:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:34:01 np0005465987 ceph-mgr[81180]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  2 07:34:01 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'test_orchestrator'
Oct  2 07:34:01 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:34:01.757+0000 7f9b5962f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  2 07:34:01 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct  2 07:34:01 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct  2 07:34:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e31 e31: 3 total, 2 up, 3 in
Oct  2 07:34:02 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/3239065561' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  2 07:34:02 np0005465987 ceph-mgr[81180]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  2 07:34:02 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'volumes'
Oct  2 07:34:02 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:34:02.493+0000 7f9b5962f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  2 07:34:02 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct  2 07:34:02 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct  2 07:34:03 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/3239065561' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  2 07:34:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:03 np0005465987 ceph-mgr[81180]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  2 07:34:03 np0005465987 ceph-mgr[81180]: mgr[py] Loading python module 'zabbix'
Oct  2 07:34:03 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:34:03.282+0000 7f9b5962f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  2 07:34:03 np0005465987 ceph-mgr[81180]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  2 07:34:03 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mgr-compute-1-ypnrbl[81176]: 2025-10-02T11:34:03.536+0000 7f9b5962f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  2 07:34:03 np0005465987 ceph-mgr[81180]: ms_deliver_dispatch: unhandled message 0x55e5e24f71e0 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Oct  2 07:34:03 np0005465987 ceph-mgr[81180]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3158772141
Oct  2 07:34:04 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/817273948' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  2 07:34:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e32 e32: 3 total, 2 up, 3 in
Oct  2 07:34:05 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/817273948' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  2 07:34:05 np0005465987 ceph-mon[80822]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:34:05 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:05 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:34:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e33 e33: 3 total, 2 up, 3 in
Oct  2 07:34:06 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/1622950684' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  2 07:34:06 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/1622950684' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  2 07:34:07 np0005465987 ceph-mon[80822]: from='osd.2 [v2:192.168.122.102:6800/1863586281,v1:192.168.122.102:6801/1863586281]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  2 07:34:07 np0005465987 ceph-mon[80822]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  2 07:34:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e34 e34: 3 total, 2 up, 3 in
Oct  2 07:34:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e35 e35: 3 total, 2 up, 3 in
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=12.500672340s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 86.415489197s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.a( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.962555885s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 86.877471924s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[3.0( empty local-lis/les=21/22 n=0 ec=17/17 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=12.500422478s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 86.415351868s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=12.500672340s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.415489197s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=12.500379562s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 86.415344238s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.a( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.962555885s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.877471924s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[3.0( empty local-lis/les=21/22 n=0 ec=17/17 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=12.500422478s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.415351868s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.c( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.962208748s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 86.877464294s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.d( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.965340614s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 86.880622864s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.10( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.965513229s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 86.880821228s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.d( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.965340614s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880622864s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.10( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.965513229s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880821228s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.c( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.962208748s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.877464294s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.13( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.965372086s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 86.880813599s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.15( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.965321541s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 86.880805969s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.13( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.965372086s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880813599s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.1b( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.965351105s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active pruub 86.880851746s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.15( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.965321541s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880805969s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[2.1b( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=35 pruub=12.965351105s) [] r=-1 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880851746s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=12.499523163s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 86.415092468s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=12.500379562s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.415344238s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 35 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=12.499523163s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.415092468s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct  2 07:34:08 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct  2 07:34:08 np0005465987 ceph-mon[80822]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  2 07:34:08 np0005465987 ceph-mon[80822]: from='osd.2 [v2:192.168.122.102:6800/1863586281,v1:192.168.122.102:6801/1863586281]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  2 07:34:08 np0005465987 ceph-mon[80822]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  2 07:34:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:09 np0005465987 podman[81438]: 2025-10-02 11:34:09.081449508 +0000 UTC m=+0.059545532 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:34:09 np0005465987 podman[81438]: 2025-10-02 11:34:09.17300575 +0000 UTC m=+0.151101764 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 07:34:10 np0005465987 ceph-mon[80822]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Oct  2 07:34:10 np0005465987 ceph-mon[80822]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  2 07:34:10 np0005465987 ceph-mon[80822]: Cluster is now healthy
Oct  2 07:34:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:10 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct  2 07:34:10 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct  2 07:34:11 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/220112243' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  2 07:34:11 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/220112243' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  2 07:34:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:34:12 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/2085867460' entity='client.admin' 
Oct  2 07:34:12 np0005465987 ceph-mon[80822]: OSD bench result of 6028.959553 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 07:34:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=8.525080681s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.415092468s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=8.525462151s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.415489197s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=8.525255203s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.415344238s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=8.524978638s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.415092468s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=8.525207520s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.415344238s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=8.525363922s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.415489197s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[3.0( empty local-lis/les=21/22 n=0 ec=17/17 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=8.525119781s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.415351868s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[3.0( empty local-lis/les=21/22 n=0 ec=17/17 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=8.525086403s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.415351868s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.d( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.990297318s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880622864s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.a( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.987140656s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.877471924s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.c( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.987126350s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.877464294s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.10( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.990412712s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880821228s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.a( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.987083435s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.877471924s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.d( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.990250587s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880622864s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.10( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.990392685s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880821228s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.13( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.990358353s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880813599s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.13( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.990328789s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880813599s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.15( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.990283966s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880805969s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.1b( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.990310669s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880851746s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.15( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.990262032s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880805969s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.c( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.987101555s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.877464294s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 36 pg[2.1b( empty local-lis/les=28/29 n=0 ec=21/15 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=8.990296364s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.880851746s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: osd.2 [v2:192.168.122.102:6800/1863586281,v1:192.168.122.102:6801/1863586281] boot
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: Saving service ingress.rgw.default spec with placement count:2
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: Adjusting osd_memory_target on compute-2 to 127.8M
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: Updating compute-0:/etc/ceph/ceph.conf
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: Updating compute-1:/etc/ceph/ceph.conf
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: Updating compute-2:/etc/ceph/ceph.conf
Oct  2 07:34:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Oct  2 07:34:13 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct  2 07:34:14 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct  2 07:34:14 np0005465987 ceph-mon[80822]: Updating compute-1:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct  2 07:34:14 np0005465987 ceph-mon[80822]: Updating compute-0:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct  2 07:34:15 np0005465987 ceph-mon[80822]: Updating compute-2:/var/lib/ceph/fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/config/ceph.conf
Oct  2 07:34:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e2 new map
Oct  2 07:34:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:34:16.058920+0000#012modified#0112025-10-02T11:34:16.058958+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Oct  2 07:34:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Oct  2 07:34:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:34:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:34:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  2 07:34:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  2 07:34:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  2 07:34:17 np0005465987 ceph-mon[80822]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  2 07:34:17 np0005465987 ceph-mon[80822]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  2 07:34:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  2 07:34:17 np0005465987 ceph-mon[80822]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  2 07:34:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:18 np0005465987 ceph-mon[80822]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  2 07:34:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:19 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct  2 07:34:19 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct  2 07:34:20 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/3861259132' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  2 07:34:20 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/3861259132' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  2 07:34:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:34:23 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct  2 07:34:23 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct  2 07:34:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mwuxwy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  2 07:34:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mwuxwy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  2 07:34:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:24 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct  2 07:34:24 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct  2 07:34:24 np0005465987 ceph-mon[80822]: Deploying daemon rgw.rgw.compute-2.mwuxwy on compute-2
Oct  2 07:34:26 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/3798139046' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  2 07:34:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:34:27 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct  2 07:34:27 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct  2 07:34:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.tijdss", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  2 07:34:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.tijdss", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  2 07:34:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Oct  2 07:34:27 np0005465987 podman[82641]: 2025-10-02 11:34:27.449868208 +0000 UTC m=+0.049885475 container create d647f8b2d04f18a21c10fbd55b7cef2f75c8d9ca6661fced6f675364f73c3993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 07:34:27 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 39 pg[8.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [0] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:27 np0005465987 systemd[71766]: Starting Mark boot as successful...
Oct  2 07:34:27 np0005465987 systemd[71766]: Finished Mark boot as successful.
Oct  2 07:34:27 np0005465987 systemd[1]: Started libpod-conmon-d647f8b2d04f18a21c10fbd55b7cef2f75c8d9ca6661fced6f675364f73c3993.scope.
Oct  2 07:34:27 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:34:27 np0005465987 podman[82641]: 2025-10-02 11:34:27.426202117 +0000 UTC m=+0.026219414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:34:27 np0005465987 podman[82641]: 2025-10-02 11:34:27.528982248 +0000 UTC m=+0.128999525 container init d647f8b2d04f18a21c10fbd55b7cef2f75c8d9ca6661fced6f675364f73c3993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 07:34:27 np0005465987 podman[82641]: 2025-10-02 11:34:27.534430348 +0000 UTC m=+0.134447615 container start d647f8b2d04f18a21c10fbd55b7cef2f75c8d9ca6661fced6f675364f73c3993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pike, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 07:34:27 np0005465987 podman[82641]: 2025-10-02 11:34:27.536744632 +0000 UTC m=+0.136761899 container attach d647f8b2d04f18a21c10fbd55b7cef2f75c8d9ca6661fced6f675364f73c3993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:34:27 np0005465987 xenodochial_pike[82659]: 167 167
Oct  2 07:34:27 np0005465987 systemd[1]: libpod-d647f8b2d04f18a21c10fbd55b7cef2f75c8d9ca6661fced6f675364f73c3993.scope: Deactivated successfully.
Oct  2 07:34:27 np0005465987 podman[82641]: 2025-10-02 11:34:27.539173849 +0000 UTC m=+0.139191116 container died d647f8b2d04f18a21c10fbd55b7cef2f75c8d9ca6661fced6f675364f73c3993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 07:34:27 np0005465987 systemd[1]: var-lib-containers-storage-overlay-d0c2ec35c694aa1850593475a7dd1210b6158855de2402d06a0da0dc845d3cb3-merged.mount: Deactivated successfully.
Oct  2 07:34:27 np0005465987 podman[82641]: 2025-10-02 11:34:27.570693097 +0000 UTC m=+0.170710354 container remove d647f8b2d04f18a21c10fbd55b7cef2f75c8d9ca6661fced6f675364f73c3993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pike, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:34:27 np0005465987 systemd[1]: libpod-conmon-d647f8b2d04f18a21c10fbd55b7cef2f75c8d9ca6661fced6f675364f73c3993.scope: Deactivated successfully.
Oct  2 07:34:27 np0005465987 systemd[1]: Reloading.
Oct  2 07:34:27 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:34:27 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:34:27 np0005465987 systemd[1]: Reloading.
Oct  2 07:34:27 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:34:27 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:34:28 np0005465987 systemd[1]: Starting Ceph rgw.rgw.compute-1.tijdss for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct  2 07:34:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Oct  2 07:34:28 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 40 pg[8.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [0] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:28 np0005465987 ceph-mon[80822]: Deploying daemon rgw.rgw.compute-1.tijdss on compute-1
Oct  2 07:34:28 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.102:0/3022269828' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  2 07:34:28 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  2 07:34:28 np0005465987 podman[82805]: 2025-10-02 11:34:28.392346234 +0000 UTC m=+0.107717269 container create e2d52c94dbfa8819abb02a5e58647e3bc02048ffc6852bf685910bfad2c0a7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-rgw-rgw-compute-1-tijdss, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:34:28 np0005465987 podman[82805]: 2025-10-02 11:34:28.307832255 +0000 UTC m=+0.023203320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:34:28 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c94d67d1df4d42647248d4d9efc292aeae665c72eb49a312277ebffc66dd80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:34:28 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c94d67d1df4d42647248d4d9efc292aeae665c72eb49a312277ebffc66dd80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:34:28 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c94d67d1df4d42647248d4d9efc292aeae665c72eb49a312277ebffc66dd80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:34:28 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c94d67d1df4d42647248d4d9efc292aeae665c72eb49a312277ebffc66dd80/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.tijdss supports timestamps until 2038 (0x7fffffff)
Oct  2 07:34:28 np0005465987 podman[82805]: 2025-10-02 11:34:28.607031449 +0000 UTC m=+0.322402504 container init e2d52c94dbfa8819abb02a5e58647e3bc02048ffc6852bf685910bfad2c0a7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-rgw-rgw-compute-1-tijdss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:34:28 np0005465987 podman[82805]: 2025-10-02 11:34:28.611481602 +0000 UTC m=+0.326852637 container start e2d52c94dbfa8819abb02a5e58647e3bc02048ffc6852bf685910bfad2c0a7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-rgw-rgw-compute-1-tijdss, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:34:28 np0005465987 bash[82805]: e2d52c94dbfa8819abb02a5e58647e3bc02048ffc6852bf685910bfad2c0a7d2
Oct  2 07:34:28 np0005465987 systemd[1]: Started Ceph rgw.rgw.compute-1.tijdss for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct  2 07:34:28 np0005465987 radosgw[82824]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:34:28 np0005465987 radosgw[82824]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct  2 07:34:28 np0005465987 radosgw[82824]: framework: beast
Oct  2 07:34:28 np0005465987 radosgw[82824]: framework conf key: endpoint, val: 192.168.122.101:8082
Oct  2 07:34:28 np0005465987 radosgw[82824]: init_numa not setting numa affinity
Oct  2 07:34:29 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct  2 07:34:29 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct  2 07:34:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Oct  2 07:34:29 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 41 pg[9.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct  2 07:34:29 np0005465987 ceph-mon[80822]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1957857602' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:34:29 np0005465987 ceph-mon[80822]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:34:29 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  2 07:34:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mcnfdf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  2 07:34:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mcnfdf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  2 07:34:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Oct  2 07:34:30 np0005465987 ceph-mon[80822]: Deploying daemon rgw.rgw.compute-0.mcnfdf on compute-0
Oct  2 07:34:30 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.101:0/1957857602' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:34:30 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.102:0/3022269828' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:34:30 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:34:30 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:34:30 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 42 pg[9.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: Deploying daemon haproxy.rgw.default.compute-0.qdmsoe on compute-0
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct  2 07:34:31 np0005465987 ceph-mon[80822]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1957857602' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:34:33 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/308578156' entity='client.rgw.rgw.compute-0.mcnfdf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:34:33 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.101:0/1957857602' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:34:33 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:34:33 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.102:0/3022269828' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:34:33 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:34:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:33 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct  2 07:34:33 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct  2 07:34:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Oct  2 07:34:34 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/308578156' entity='client.rgw.rgw.compute-0.mcnfdf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  2 07:34:34 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  2 07:34:34 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  2 07:34:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Oct  2 07:34:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct  2 07:34:34 np0005465987 ceph-mon[80822]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1317742668' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:34:35 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 45 pg[11.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [0] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Oct  2 07:34:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct  2 07:34:35 np0005465987 ceph-mon[80822]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1317742668' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:34:35 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 46 pg[11.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [0] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:35 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/4060792636' entity='client.rgw.rgw.compute-0.mcnfdf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:34:35 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.101:0/1317742668' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:34:35 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:34:35 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.102:0/2401809657' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:34:35 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:34:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Oct  2 07:34:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:34:36 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/4060792636' entity='client.rgw.rgw.compute-0.mcnfdf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  2 07:34:36 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  2 07:34:36 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  2 07:34:36 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.102:0/2401809657' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:34:36 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.101:0/1317742668' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:34:36 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:34:36 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:34:36 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/4060792636' entity='client.rgw.rgw.compute-0.mcnfdf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:34:36 np0005465987 radosgw[82824]: LDAP not started since no server URIs were provided in the configuration.
Oct  2 07:34:36 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-rgw-rgw-compute-1-tijdss[82820]: 2025-10-02T11:34:36.511+0000 7f11530da940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct  2 07:34:36 np0005465987 radosgw[82824]: framework: beast
Oct  2 07:34:36 np0005465987 radosgw[82824]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct  2 07:34:36 np0005465987 radosgw[82824]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct  2 07:34:36 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct  2 07:34:36 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  2 07:34:36 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct  2 07:34:36 np0005465987 radosgw[82824]: starting handler: beast
Oct  2 07:34:36 np0005465987 radosgw[82824]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:34:36 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct  2 07:34:36 np0005465987 radosgw[82824]: mgrc service_daemon_register rgw.24140 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.tijdss,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864096,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=d51d2c37-0c52-473d-a68f-705b7a7c6947,zone_name=default,zonegroup_id=46b6f139-f98e-4349-978c-7decf8294e94,zonegroup_name=default}
Oct  2 07:34:36 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct  2 07:34:36 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct  2 07:34:36 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct  2 07:34:36 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Oct  2 07:34:37 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct  2 07:34:37 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct  2 07:34:37 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-2.mwuxwy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  2 07:34:37 np0005465987 ceph-mon[80822]: from='client.? ' entity='client.rgw.rgw.compute-1.tijdss' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  2 07:34:37 np0005465987 ceph-mon[80822]: from='client.? 192.168.122.100:0/4060792636' entity='client.rgw.rgw.compute-0.mcnfdf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  2 07:34:37 np0005465987 ceph-mon[80822]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  2 07:34:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000054s ======
Oct  2 07:34:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:38.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct  2 07:34:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:38 np0005465987 ceph-mon[80822]: Deploying daemon haproxy.rgw.default.compute-2.jycvzz on compute-2
Oct  2 07:34:39 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct  2 07:34:39 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct  2 07:34:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:40.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:34:42 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct  2 07:34:42 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct  2 07:34:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:34:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:42.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:34:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:43.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:44.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:44 np0005465987 ceph-mon[80822]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  2 07:34:44 np0005465987 ceph-mon[80822]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  2 07:34:44 np0005465987 ceph-mon[80822]: Deploying daemon keepalived.rgw.default.compute-2.ahfyxt on compute-2
Oct  2 07:34:45 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct  2 07:34:45 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct  2 07:34:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:34:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:45.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Oct  2 07:34:46 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct  2 07:34:46 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct  2 07:34:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:34:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:46.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:34:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:34:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Oct  2 07:34:46 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 49 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=49 pruub=11.843959808s) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active pruub 123.743438721s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:46 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 49 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=49 pruub=11.843959808s) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown pruub 123.743438721s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:47 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct  2 07:34:47 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct  2 07:34:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:34:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:34:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:34:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:34:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:47.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:34:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1d( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1a( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1b( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.17( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.14( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.16( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.13( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.12( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.11( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.e( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1c( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.a( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.9( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.4( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.d( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.b( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.18( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.19( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.f( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.2( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.5( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.3( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.6( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.7( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.c( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.15( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.10( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.8( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1e( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1f( empty local-lis/les=19/20 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1a( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.17( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.16( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.14( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.11( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.13( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.12( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1c( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.e( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.a( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1b( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.4( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.b( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.d( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.19( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.9( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1d( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.18( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.f( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.2( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.5( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.3( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.6( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.0( empty local-lis/les=49/50 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.c( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.7( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.15( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.10( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1e( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1f( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.8( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 50 pg[4.1( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=19/19 les/c/f=20/20/0 sis=49) [0] r=0 lpr=49 pi=[19,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:48.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:34:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:34:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct  2 07:34:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct  2 07:34:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:34:48 np0005465987 systemd[1]: session-20.scope: Deactivated successfully.
Oct  2 07:34:48 np0005465987 systemd[1]: session-20.scope: Consumed 8.071s CPU time.
Oct  2 07:34:48 np0005465987 systemd-logind[794]: Session 20 logged out. Waiting for processes to exit.
Oct  2 07:34:48 np0005465987 systemd-logind[794]: Removed session 20.
Oct  2 07:34:48 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct  2 07:34:49 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct  2 07:34:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Oct  2 07:34:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:49.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:49 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct  2 07:34:49 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:34:49 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct  2 07:34:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:50.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Oct  2 07:34:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:34:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct  2 07:34:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:34:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:34:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:34:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:34:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:34:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 51 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=51 pruub=12.418788910s) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active pruub 128.465850830s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=51 pruub=12.418788910s) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown pruub 128.465850830s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.3( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.6( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.1( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.9( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.a( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.b( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.7( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.4( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.c( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.2( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.8( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.e( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 52 pg[6.f( empty local-lis/les=23/24 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Oct  2 07:34:50 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Oct  2 07:34:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:34:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:34:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:51.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:34:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.e( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.5( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.3( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.2( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.7( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.0( empty local-lis/les=51/53 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.1( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.4( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.d( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.f( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.6( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.c( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.9( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.a( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.8( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 53 pg[6.b( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=23/23 les/c/f=24/24/0 sis=51) [0] r=0 lpr=51 pi=[23,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:52.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Oct  2 07:34:52 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 54 pg[9.0( v 47'1065 (0'0,47'1065] local-lis/les=41/42 n=177 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=54 pruub=10.057437897s) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 47'1064 mlcod 47'1064 active pruub 127.772308350s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:52 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 54 pg[8.0( v 40'6 (0'0,40'6] local-lis/les=39/40 n=5 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=54 pruub=15.726395607s) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 lcod 40'5 mlcod 40'5 active pruub 133.442138672s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:52 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 54 pg[8.0( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=54 pruub=15.726395607s) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 lcod 40'5 mlcod 0'0 unknown pruub 133.442138672s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:52 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 54 pg[9.0( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=54 pruub=10.057437897s) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 47'1064 mlcod 0'0 unknown pruub 127.772308350s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:52 np0005465987 ceph-mon[80822]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  2 07:34:52 np0005465987 ceph-mon[80822]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  2 07:34:52 np0005465987 ceph-mon[80822]: Deploying daemon keepalived.rgw.default.compute-0.dcvgot on compute-0
Oct  2 07:34:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:34:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:34:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:34:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:34:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:34:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:34:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:34:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:34:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:53.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.10( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.10( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.11( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.11( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.17( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.16( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.16( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.17( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1a( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1b( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1b( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.18( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.19( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1e( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1f( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1f( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1e( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1a( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1c( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1d( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.2( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.3( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.7( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.6( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.7( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.6( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.5( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.4( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.9( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1( v 40'6 (0'0,40'6] local-lis/les=39/40 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.8( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.14( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.15( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.15( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.14( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.2( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.3( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.e( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.f( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.f( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.9( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.8( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.e( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.b( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.a( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.d( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.c( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.c( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.d( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.b( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.a( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.4( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.5( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.19( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.18( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1c( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.13( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.12( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.12( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'5 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.13( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1d( v 47'1065 lc 0'0 (0'0,47'1065] local-lis/les=41/42 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.11( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.11( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.10( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.17( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.17( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.16( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1b( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1b( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.19( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.18( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1f( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1e( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1a( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.10( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1d( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.2( v 40'6 (0'0,40'6] local-lis/les=54/55 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.7( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.5( v 40'6 (0'0,40'6] local-lis/les=54/55 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.6( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.7( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1c( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.4( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.0( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 47'1064 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.9( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.8( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1( v 40'6 (0'0,40'6] local-lis/les=54/55 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.15( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.14( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.15( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.14( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.2( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.3( v 40'6 (0'0,40'6] local-lis/les=54/55 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.f( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.6( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.e( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.3( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.8( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.9( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.b( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.a( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.d( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.c( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.c( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.f( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.d( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.e( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.0( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 40'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.a( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.4( v 40'6 (0'0,40'6] local-lis/les=54/55 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.b( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.1c( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.19( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.5( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.13( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[8.12( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.12( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.13( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 55 pg[9.18( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=41/41 les/c/f=42/42/0 sis=54) [0] r=0 lpr=54 pi=[41,54)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:53 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.2 deep-scrub starts
Oct  2 07:34:53 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.2 deep-scrub ok
Oct  2 07:34:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:54.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:34:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:34:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Oct  2 07:34:55 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 56 pg[11.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=11.903372765s) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 132.328475952s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:34:55 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 56 pg[11.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=11.903372765s) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown pruub 132.328475952s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:34:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:55.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:34:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:56.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:56 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:34:56 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:34:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:34:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.12( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.13( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.15( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1b( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.18( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1c( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1d( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1e( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.5( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.b( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.2( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.16( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.d( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.c( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.a( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.9( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.8( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.3( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.7( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1a( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1f( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.11( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.13( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.15( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1d( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.18( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.5( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.b( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.2( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.16( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.d( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.0( empty local-lis/les=56/57 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.c( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.a( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.7( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.1f( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.9( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.11( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct  2 07:34:56 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct  2 07:34:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:57.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:58 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct  2 07:34:58 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct  2 07:34:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:34:58.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:58 np0005465987 podman[83661]: 2025-10-02 11:34:58.203948875 +0000 UTC m=+0.065674714 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 07:34:58 np0005465987 podman[83661]: 2025-10-02 11:34:58.302188232 +0000 UTC m=+0.163914071 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:34:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:34:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:34:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:34:59.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:34:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:34:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:00.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:35:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[5.1e( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[5.14( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[5.6( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[5.3( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[5.19( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[5.c( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[5.5( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[5.a( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[5.17( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[5.1d( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675501823s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.674224854s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.1d( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.253491402s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252258301s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.11( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.845501900s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.844284058s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.13( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.676827431s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.675643921s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.13( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.676780701s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.675643921s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.1c( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.253042221s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252014160s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675426483s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.674224854s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.1c( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.253005028s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252014160s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.11( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.845455170s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.844284058s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.10( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.845134735s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.844299316s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.676481247s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.675659180s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.1b( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.252989769s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252182007s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.17( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.845114708s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.844314575s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.1b( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.252945900s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252182007s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.17( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.845071793s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.844314575s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.676407814s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.675659180s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.10( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.845058441s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.844299316s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.1d( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.253459930s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252258301s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.16( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.844941139s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.844360352s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.16( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.844920158s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.844360352s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.1a( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.248095512s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.247543335s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.1b( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.852503777s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852005005s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.676181793s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.675689697s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.1a( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.248076439s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.247543335s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.1b( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.852481842s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852005005s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.676145554s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.675720215s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.676125526s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.675720215s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.18( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.852400780s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852066040s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.18( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.852383614s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852066040s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675973892s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.675720215s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.13( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.252238274s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252014160s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675955772s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.675720215s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.13( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.252219200s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252014160s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.14( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.252112389s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.251922607s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.1f( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.852241516s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852096558s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.1d( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675809860s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.675735474s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.1f( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.852225304s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852096558s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.1d( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675793648s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.675735474s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.14( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.252065659s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.251922607s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675890923s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.675918579s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.e( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251972198s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252044678s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675847054s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.675918579s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.e( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251954079s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252044678s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.2( v 40'6 (0'0,40'6] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.852131844s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852233887s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675823212s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.675918579s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675865173s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.675918579s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.2( v 40'6 (0'0,40'6] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.852113724s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852233887s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675721169s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.675933838s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675693512s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.675933838s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.a( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251833916s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252136230s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675397873s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.675689697s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.8( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.654774666s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 140.655120850s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.6( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.852188110s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852523804s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.a( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251814842s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252136230s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.6( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.852171898s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852523804s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.5( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675572395s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.675949097s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.8( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.654735565s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.655120850s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.5( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675547600s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.675949097s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.9( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251749992s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252258301s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.9( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251732826s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252258301s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.5( v 40'6 (0'0,40'6] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851765633s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852310181s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.5( v 40'6 (0'0,40'6] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851741791s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852310181s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.8( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851823807s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852401733s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.8( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851798058s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852401733s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.18( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251604080s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252258301s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.d( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251587868s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252227783s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675340652s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.676010132s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.18( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251588821s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252258301s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675321579s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.676010132s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.14( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851703644s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852432251s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.14( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851686478s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852432251s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.16( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675250053s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.676025391s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.16( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675231934s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.676025391s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.19( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251412392s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252227783s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.19( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251384735s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252227783s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.15( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851575851s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852447510s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.d( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.653894424s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 140.654800415s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.3( v 40'6 (0'0,40'6] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851552010s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852462769s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.15( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851535797s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852447510s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.d( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.653878212s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.654800415s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.3( v 40'6 (0'0,40'6] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851531982s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852462769s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.d( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251530647s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252227783s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.2( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251237869s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252273560s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.2( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251203537s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252273560s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.3( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251179695s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252288818s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.f( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851434708s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852554321s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.3( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251158714s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252288818s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.f( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851417542s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852554321s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.5( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251033783s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252273560s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.653400421s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 140.654663086s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.5( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251014709s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252273560s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.653381348s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.654663086s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.9( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851293564s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852584839s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.9( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851273537s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852584839s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.1( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.653409004s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 140.654769897s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.1( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.653388023s) [2] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.654769897s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.6( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.250895500s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252288818s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.6( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.250882149s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252288818s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.a( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851148605s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852584839s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.676313400s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.677780151s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.a( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.851123810s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852584839s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.653195381s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 140.654708862s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.676292419s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.677780151s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.653181076s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.654708862s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.1( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251215935s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252746582s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.1( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.251195908s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252746582s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.a( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.674574852s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.676147461s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.a( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.674558640s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.676147461s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.d( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850985527s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852600098s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.676143646s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.677780151s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.d( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850970268s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852600098s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.676116943s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.677780151s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.2( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.652959824s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 140.654663086s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.2( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.652937889s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.654663086s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.c( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850868225s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852630615s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.c( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850852013s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852630615s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.676004410s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.677795410s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.5( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.648104668s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 140.649932861s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.5( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.648080826s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.649932861s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.b( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850753784s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852645874s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675911903s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.677825928s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.b( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850728989s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852645874s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675957680s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.677795410s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675896645s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.677825928s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.c( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.250350952s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252319336s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.c( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.250303268s) [1] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252319336s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.e( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.647812843s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 140.649871826s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.7( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675778389s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.677841187s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.e( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.647785187s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.649871826s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.8( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.250646591s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252746582s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.7( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675758362s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.677841187s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.8( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.250629425s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252746582s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.a( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.652966499s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 140.655105591s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[6.a( empty local-lis/les=51/53 n=0 ec=51/23 lis/c=51/51 les/c/f=53/53/0 sis=58 pruub=14.652931213s) [1] r=-1 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 140.655105591s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.4( v 40'6 (0'0,40'6] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850476265s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852706909s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675597191s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 137.677841187s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.4( v 40'6 (0'0,40'6] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850451469s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852706909s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=11.675577164s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.677841187s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.15( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.250073433s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252365112s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.15( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.250058174s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252365112s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.19( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850358009s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852722168s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.19( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850338936s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852722168s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.1c( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850323677s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852767944s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.1f( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.250003815s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 137.252471924s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.1c( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850304604s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852767944s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[4.1f( empty local-lis/les=49/50 n=0 ec=49/19 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.249984741s) [2] r=-1 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 137.252471924s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.12( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850253105s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 active pruub 134.852828979s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[8.12( v 40'6 (0'0,40'6] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.850232124s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.852828979s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.1e( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.1b( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[10.18( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.4( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.18( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.3( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.e( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.2( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.f( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.6( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.9( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.8( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[10.5( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.b( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.10( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 58 pg[7.13( empty local-lis/les=0/0 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct  2 07:35:00 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct  2 07:35:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:35:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:35:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:01.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:35:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:35:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:35:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:35:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  2 07:35:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:35:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  2 07:35:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:35:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:35:01 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct  2 07:35:01 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct  2 07:35:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[10.1b( v 44'48 (0'0,44'48] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.3( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.1b( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[10.18( v 44'48 (0'0,44'48] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.e( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.f( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[10.14( v 57'51 lc 44'45 (0'0,57'51] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=57'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.4( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.1e( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[10.13( v 44'48 (0'0,44'48] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[5.1d( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[10.15( v 57'51 lc 44'26 (0'0,57'51] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=57'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[10.2( v 44'48 (0'0,44'48] local-lis/les=58/59 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.18( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[10.19( v 44'48 (0'0,44'48] local-lis/les=58/59 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.10( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.8( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[5.17( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.9( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[10.5( v 44'48 (0'0,44'48] local-lis/les=58/59 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[5.5( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[5.19( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[5.c( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[5.3( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.6( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[10.8( v 44'48 (0'0,44'48] local-lis/les=58/59 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.2( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[5.a( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[5.6( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.13( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[7.b( empty local-lis/les=58/59 n=0 ec=52/25 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[5.14( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 59 pg[5.1e( empty local-lis/les=58/59 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:02.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct  2 07:35:02 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct  2 07:35:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Oct  2 07:35:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:03.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  2 07:35:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:04.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.17( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.807553291s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 142.844528198s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.17( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.807507515s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.844528198s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.1b( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.814998627s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 142.852172852s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.1b( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.814947128s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.852172852s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.815112114s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 142.852386475s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.3( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.815595627s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 142.852920532s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.815066338s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.852386475s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.3( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.815571785s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.852920532s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.7( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.815161705s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 142.852630615s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.7( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.815134048s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.852630615s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.f( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.815122604s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 142.852890015s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.f( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.815100670s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.852890015s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.b( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.815067291s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 142.852966309s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.b( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.815052032s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.852966309s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.13( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.814964294s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 142.853256226s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:04 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 61 pg[9.13( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=12.814891815s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.853256226s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:04 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  2 07:35:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:05.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.7( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.13( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.7( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.b( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.13( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.b( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.f( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.f( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.3( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.3( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.1b( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.1b( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.17( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:06 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 62 pg[9.17( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:06.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:35:07 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct  2 07:35:07 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct  2 07:35:07 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  2 07:35:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:07.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Oct  2 07:35:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:08.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 63 pg[9.7( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 63 pg[9.b( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 63 pg[9.3( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 63 pg[9.f( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 63 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 63 pg[9.13( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 63 pg[9.17( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:08 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 63 pg[9.1b( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  2 07:35:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  2 07:35:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  2 07:35:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.13( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=5 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=15.022944450s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 149.447952271s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.b( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=6 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.986941338s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 149.411956787s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.f( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=6 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=15.022883415s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 149.447937012s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.7( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=6 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.986851692s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 149.411911011s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.13( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=5 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=15.022858620s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.447952271s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.b( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=6 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.986832619s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.411956787s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.7( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=6 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.986743927s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.411911011s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.3( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=6 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=15.022701263s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 149.447906494s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.f( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=6 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=15.022730827s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.447937012s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.3( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=6 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=15.022642136s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.447906494s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=5 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=15.022494316s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 149.447921753s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.17( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=5 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=15.022672653s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 149.448104858s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.1b( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=5 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=15.022668839s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 149.448104858s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=5 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=15.022455215s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.447921753s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.17( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=5 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=15.022627831s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.448104858s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.1b( v 47'1065 (0'0,47'1065] local-lis/les=62/63 n=5 ec=54/41 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=15.022628784s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.448104858s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.15( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=8.426814079s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 142.852752686s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.15( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=8.426789284s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.852752686s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.d( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=8.426812172s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 142.853042603s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.d( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=8.426790237s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.853042603s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=8.426867485s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 142.853240967s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.5( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=8.426773071s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 142.853164673s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=8.426836014s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.853240967s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 64 pg[9.5( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=8.426748276s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.853164673s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:09.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 65 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 65 pg[9.d( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 65 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 65 pg[9.d( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 65 pg[9.5( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 65 pg[9.5( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 65 pg[9.15( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:09 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 65 pg[9.15( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gpiyct", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  2 07:35:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gpiyct", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  2 07:35:09 np0005465987 ceph-mon[80822]: Deploying daemon mds.cephfs.compute-2.gpiyct on compute-2
Oct  2 07:35:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  2 07:35:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:35:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:10.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:35:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Oct  2 07:35:10 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 66 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=15.125089645s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 150.844680786s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:10 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 66 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=15.125044823s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.844680786s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:10 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 66 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=15.132583618s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 150.852310181s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:10 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 66 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=15.132517815s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.852310181s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:10 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 66 pg[9.6( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=15.132799149s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 150.852737427s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:10 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 66 pg[9.6( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=15.132758141s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.852737427s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:10 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 66 pg[9.e( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=15.132855415s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 150.853179932s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:10 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 66 pg[9.e( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=15.132835388s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.853179932s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:10 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 66 pg[9.5( v 47'1065 (0'0,47'1065] local-lis/les=65/66 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:10 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 66 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=65/66 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:10 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 66 pg[9.15( v 47'1065 (0'0,47'1065] local-lis/les=65/66 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:10 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 66 pg[9.d( v 47'1065 (0'0,47'1065] local-lis/les=65/66 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  2 07:35:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  2 07:35:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.odxjnj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  2 07:35:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.odxjnj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  2 07:35:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e3 new map
Oct  2 07:35:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:34:16.058920+0000#012modified#0112025-10-02T11:34:16.058958+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.gpiyct{-1:24148} state up:standby seq 1 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:35:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e4 new map
Oct  2 07:35:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:34:16.058920+0000#012modified#0112025-10-02T11:35:10.977247+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24148}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.gpiyct{0:24148} state up:creating seq 1 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct  2 07:35:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:35:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:11.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.6( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.6( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.15( v 47'1065 (0'0,47'1065] local-lis/les=65/66 n=5 ec=54/41 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.026546478s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 151.798217773s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.e( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.e( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.15( v 47'1065 (0'0,47'1065] local-lis/les=65/66 n=5 ec=54/41 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.026473999s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.798217773s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.5( v 47'1065 (0'0,47'1065] local-lis/les=65/66 n=6 ec=54/41 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.988095284s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 151.759979248s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.d( v 47'1065 (0'0,47'1065] local-lis/les=65/66 n=6 ec=54/41 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.026318550s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 151.798217773s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.5( v 47'1065 (0'0,47'1065] local-lis/les=65/66 n=6 ec=54/41 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.988038063s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.759979248s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.d( v 47'1065 (0'0,47'1065] local-lis/les=65/66 n=6 ec=54/41 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.026261330s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.798217773s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=65/66 n=5 ec=54/41 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.026101112s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 151.798202515s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 67 pg[9.1d( v 47'1065 (0'0,47'1065] local-lis/les=65/66 n=5 ec=54/41 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=15.026062012s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.798202515s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:11 np0005465987 ceph-mon[80822]: Deploying daemon mds.cephfs.compute-0.odxjnj on compute-0
Oct  2 07:35:11 np0005465987 ceph-mon[80822]: daemon mds.cephfs.compute-2.gpiyct assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  2 07:35:11 np0005465987 ceph-mon[80822]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  2 07:35:11 np0005465987 ceph-mon[80822]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  2 07:35:11 np0005465987 ceph-mon[80822]: Cluster is now healthy
Oct  2 07:35:11 np0005465987 ceph-mon[80822]: daemon mds.cephfs.compute-2.gpiyct is now active in filesystem cephfs as rank 0
Oct  2 07:35:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e5 new map
Oct  2 07:35:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:34:16.058920+0000#012modified#0112025-10-02T11:35:12.043439+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24148}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 2 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct  2 07:35:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:12.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Oct  2 07:35:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 68 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=67/68 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 68 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=67/68 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 68 pg[9.e( v 47'1065 (0'0,47'1065] local-lis/les=67/68 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 68 pg[9.6( v 47'1065 (0'0,47'1065] local-lis/les=67/68 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[54,67)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e6 new map
Oct  2 07:35:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e6 print_map#012e6#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:34:16.058920+0000#012modified#0112025-10-02T11:35:12.043439+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24148}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 2 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.odxjnj{-1:14421} state up:standby seq 1 addr [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:35:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e7 new map
Oct  2 07:35:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  2 07:35:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e7 print_map#012e7#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:34:16.058920+0000#012modified#0112025-10-02T11:35:12.043439+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24148}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 2 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.odxjnj{-1:14421} state up:standby seq 1 addr [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:35:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.zfhmgy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  2 07:35:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  2 07:35:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.zfhmgy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  2 07:35:13 np0005465987 podman[83908]: 2025-10-02 11:35:13.194005339 +0000 UTC m=+0.035967551 container create 842acbd57ddb1a6b12e2a9dfa09513f4646f49639a0cc07531998c0b967c2fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wozniak, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:35:13 np0005465987 systemd[1]: Started libpod-conmon-842acbd57ddb1a6b12e2a9dfa09513f4646f49639a0cc07531998c0b967c2fc8.scope.
Oct  2 07:35:13 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:35:13 np0005465987 podman[83908]: 2025-10-02 11:35:13.268558162 +0000 UTC m=+0.110520394 container init 842acbd57ddb1a6b12e2a9dfa09513f4646f49639a0cc07531998c0b967c2fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wozniak, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 07:35:13 np0005465987 podman[83908]: 2025-10-02 11:35:13.178154195 +0000 UTC m=+0.020116427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:35:13 np0005465987 podman[83908]: 2025-10-02 11:35:13.277105731 +0000 UTC m=+0.119067943 container start 842acbd57ddb1a6b12e2a9dfa09513f4646f49639a0cc07531998c0b967c2fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:35:13 np0005465987 podman[83908]: 2025-10-02 11:35:13.280145247 +0000 UTC m=+0.122107459 container attach 842acbd57ddb1a6b12e2a9dfa09513f4646f49639a0cc07531998c0b967c2fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wozniak, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 07:35:13 np0005465987 great_wozniak[83924]: 167 167
Oct  2 07:35:13 np0005465987 systemd[1]: libpod-842acbd57ddb1a6b12e2a9dfa09513f4646f49639a0cc07531998c0b967c2fc8.scope: Deactivated successfully.
Oct  2 07:35:13 np0005465987 conmon[83924]: conmon 842acbd57ddb1a6b12e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-842acbd57ddb1a6b12e2a9dfa09513f4646f49639a0cc07531998c0b967c2fc8.scope/container/memory.events
Oct  2 07:35:13 np0005465987 podman[83908]: 2025-10-02 11:35:13.282623776 +0000 UTC m=+0.124585988 container died 842acbd57ddb1a6b12e2a9dfa09513f4646f49639a0cc07531998c0b967c2fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wozniak, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:35:13 np0005465987 systemd[1]: var-lib-containers-storage-overlay-eff2de8d4e2c77fe2ba27096b942e115f3b521f62863c22cd556d424c56cec85-merged.mount: Deactivated successfully.
Oct  2 07:35:13 np0005465987 podman[83908]: 2025-10-02 11:35:13.315570061 +0000 UTC m=+0.157532273 container remove 842acbd57ddb1a6b12e2a9dfa09513f4646f49639a0cc07531998c0b967c2fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:35:13 np0005465987 systemd[1]: libpod-conmon-842acbd57ddb1a6b12e2a9dfa09513f4646f49639a0cc07531998c0b967c2fc8.scope: Deactivated successfully.
Oct  2 07:35:13 np0005465987 systemd[1]: Reloading.
Oct  2 07:35:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:13.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:13 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:35:13 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:35:13 np0005465987 systemd[1]: Reloading.
Oct  2 07:35:13 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:35:13 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:35:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Oct  2 07:35:13 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 69 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=67/68 n=5 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=15.106184959s) [1] async=[1] r=-1 lpr=69 pi=[54,69)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 153.956237793s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:13 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 69 pg[9.16( v 47'1065 (0'0,47'1065] local-lis/les=67/68 n=5 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=15.106101036s) [1] r=-1 lpr=69 pi=[54,69)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.956237793s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:13 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 69 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=67/68 n=5 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=15.094266891s) [1] async=[1] r=-1 lpr=69 pi=[54,69)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 153.944656372s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:13 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 69 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=67/68 n=5 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=15.094209671s) [1] r=-1 lpr=69 pi=[54,69)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.944656372s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:13 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 69 pg[9.6( v 47'1065 (0'0,47'1065] local-lis/les=67/68 n=6 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=15.105556488s) [1] async=[1] r=-1 lpr=69 pi=[54,69)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 153.956298828s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:13 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 69 pg[9.e( v 47'1065 (0'0,47'1065] local-lis/les=67/68 n=6 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=15.105401039s) [1] async=[1] r=-1 lpr=69 pi=[54,69)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 153.956237793s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:13 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 69 pg[9.e( v 47'1065 (0'0,47'1065] local-lis/les=67/68 n=6 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=15.105358124s) [1] r=-1 lpr=69 pi=[54,69)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.956237793s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:13 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 69 pg[9.6( v 47'1065 (0'0,47'1065] local-lis/les=67/68 n=6 ec=54/41 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=15.105153084s) [1] r=-1 lpr=69 pi=[54,69)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.956298828s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:13 np0005465987 systemd[1]: Starting Ceph mds.cephfs.compute-1.zfhmgy for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2...
Oct  2 07:35:13 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct  2 07:35:13 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct  2 07:35:14 np0005465987 podman[84069]: 2025-10-02 11:35:14.06098138 +0000 UTC m=+0.042174744 container create 882db16f13f96001c53b786cb847264bd75b69463bea3560583ea93e88e7dc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mds-cephfs-compute-1-zfhmgy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:35:14 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f428e3764cf50290e2ffcc3e22ef56b26b4a08aa30f33020f709cd8c60ee534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:35:14 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f428e3764cf50290e2ffcc3e22ef56b26b4a08aa30f33020f709cd8c60ee534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:35:14 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f428e3764cf50290e2ffcc3e22ef56b26b4a08aa30f33020f709cd8c60ee534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:35:14 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f428e3764cf50290e2ffcc3e22ef56b26b4a08aa30f33020f709cd8c60ee534/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.zfhmgy supports timestamps until 2038 (0x7fffffff)
Oct  2 07:35:14 np0005465987 podman[84069]: 2025-10-02 11:35:14.124196864 +0000 UTC m=+0.105390228 container init 882db16f13f96001c53b786cb847264bd75b69463bea3560583ea93e88e7dc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mds-cephfs-compute-1-zfhmgy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 07:35:14 np0005465987 podman[84069]: 2025-10-02 11:35:14.13153119 +0000 UTC m=+0.112724554 container start 882db16f13f96001c53b786cb847264bd75b69463bea3560583ea93e88e7dc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mds-cephfs-compute-1-zfhmgy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:35:14 np0005465987 bash[84069]: 882db16f13f96001c53b786cb847264bd75b69463bea3560583ea93e88e7dc77
Oct  2 07:35:14 np0005465987 podman[84069]: 2025-10-02 11:35:14.042421259 +0000 UTC m=+0.023614643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:35:14 np0005465987 systemd[1]: Started Ceph mds.cephfs.compute-1.zfhmgy for fd4c5763-22d1-50ea-ad0b-96a3dc3040b2.
Oct  2 07:35:14 np0005465987 ceph-mon[80822]: Deploying daemon mds.cephfs.compute-1.zfhmgy on compute-1
Oct  2 07:35:14 np0005465987 ceph-mds[84089]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:35:14 np0005465987 ceph-mds[84089]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct  2 07:35:14 np0005465987 ceph-mds[84089]: main not setting numa affinity
Oct  2 07:35:14 np0005465987 ceph-mds[84089]: pidfile_write: ignore empty --pid-file
Oct  2 07:35:14 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mds-cephfs-compute-1-zfhmgy[84085]: starting mds.cephfs.compute-1.zfhmgy at 
Oct  2 07:35:14 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Updating MDS map to version 7 from mon.2
Oct  2 07:35:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:14.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Oct  2 07:35:14 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 70 pg[9.8( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=10.754250526s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 150.853027344s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:14 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 70 pg[9.8( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=10.754172325s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.853027344s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:14 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 70 pg[9.18( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=10.753870964s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 150.853302002s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:14 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 70 pg[9.18( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=10.753805161s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.853302002s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e8 new map
Oct  2 07:35:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e8 print_map#012e8#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:34:16.058920+0000#012modified#0112025-10-02T11:35:12.043439+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24148}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 2 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.odxjnj{-1:14421} state up:standby seq 1 addr [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.zfhmgy{-1:24158} state up:standby seq 1 addr [v2:192.168.122.101:6804/2486237496,v1:192.168.122.101:6805/2486237496] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:35:15 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Updating MDS map to version 8 from mon.2
Oct  2 07:35:15 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Monitors have assigned me to become a standby.
Oct  2 07:35:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  2 07:35:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  2 07:35:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:15.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:15 np0005465987 podman[84332]: 2025-10-02 11:35:15.531762476 +0000 UTC m=+0.052573717 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:35:15 np0005465987 podman[84332]: 2025-10-02 11:35:15.628999614 +0000 UTC m=+0.149810805 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 07:35:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Oct  2 07:35:15 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 71 pg[9.18( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[0] r=0 lpr=71 pi=[54,71)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:15 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 71 pg[9.18( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[0] r=0 lpr=71 pi=[54,71)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:15 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 71 pg[9.8( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[0] r=0 lpr=71 pi=[54,71)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:15 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 71 pg[9.8( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[0] r=0 lpr=71 pi=[54,71)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:15 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct  2 07:35:16 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct  2 07:35:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:16.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e9 new map
Oct  2 07:35:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:34:16.058920+0000#012modified#0112025-10-02T11:35:16.189190+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24148}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.odxjnj{-1:14421} state up:standby seq 1 addr [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.zfhmgy{-1:24158} state up:standby seq 1 addr [v2:192.168.122.101:6804/2486237496,v1:192.168.122.101:6805/2486237496] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:35:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:35:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  2 07:35:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Oct  2 07:35:17 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 72 pg[9.9( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=8.630183220s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 150.852783203s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:17 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 72 pg[9.19( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=8.629774094s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 150.852310181s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:17 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 72 pg[9.19( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=8.629478455s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.852310181s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:17 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 72 pg[9.9( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=8.629908562s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.852783203s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:17 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 72 pg[9.18( v 47'1065 (0'0,47'1065] local-lis/les=71/72 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[54,71)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:17 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 72 pg[9.8( v 47'1065 (0'0,47'1065] local-lis/les=71/72 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[54,71)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e10 new map
Oct  2 07:35:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e10 print_map#012e10#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:34:16.058920+0000#012modified#0112025-10-02T11:35:16.189190+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24148}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.odxjnj{-1:14421} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.zfhmgy{-1:24158} state up:standby seq 1 addr [v2:192.168.122.101:6804/2486237496,v1:192.168.122.101:6805/2486237496] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:35:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:17.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  2 07:35:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:18.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Oct  2 07:35:18 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 73 pg[9.9( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] r=0 lpr=73 pi=[54,73)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:18 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 73 pg[9.9( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] r=0 lpr=73 pi=[54,73)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:18 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 73 pg[9.19( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] r=0 lpr=73 pi=[54,73)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:18 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 73 pg[9.8( v 47'1065 (0'0,47'1065] local-lis/les=71/72 n=6 ec=54/41 lis/c=71/54 les/c/f=72/55/0 sis=73 pruub=14.825916290s) [2] async=[2] r=-1 lpr=73 pi=[54,73)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 158.240478516s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:18 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 73 pg[9.19( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] r=0 lpr=73 pi=[54,73)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:18 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 73 pg[9.8( v 47'1065 (0'0,47'1065] local-lis/les=71/72 n=6 ec=54/41 lis/c=71/54 les/c/f=72/55/0 sis=73 pruub=14.825849533s) [2] r=-1 lpr=73 pi=[54,73)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.240478516s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:18 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 73 pg[9.18( v 47'1065 (0'0,47'1065] local-lis/les=71/72 n=5 ec=54/41 lis/c=71/54 les/c/f=72/55/0 sis=73 pruub=14.825138092s) [2] async=[2] r=-1 lpr=73 pi=[54,73)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 158.240478516s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:18 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 73 pg[9.18( v 47'1065 (0'0,47'1065] local-lis/les=71/72 n=5 ec=54/41 lis/c=71/54 les/c/f=72/55/0 sis=73 pruub=14.824993134s) [2] r=-1 lpr=73 pi=[54,73)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.240478516s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:35:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:35:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:19 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct  2 07:35:19 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct  2 07:35:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e11 new map
Oct  2 07:35:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).mds e11 print_map
e11
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1
 
Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	9
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-10-02T11:34:16.058920+0000
modified	2025-10-02T11:35:16.189190+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24148}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-2.gpiyct{0:24148} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1224979121,v1:192.168.122.102:6805/1224979121] compat {c=[1],r=[1],i=[7ff]}]
 
 
Standby daemons:
 
[mds.cephfs.compute-0.odxjnj{-1:14421} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3281017148,v1:192.168.122.100:6807/3281017148] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-1.zfhmgy{-1:24158} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2486237496,v1:192.168.122.101:6805/2486237496] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:35:19 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Updating MDS map to version 11 from mon.2
Oct  2 07:35:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:35:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:19.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:35:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Oct  2 07:35:19 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 74 pg[9.9( v 47'1065 (0'0,47'1065] local-lis/les=73/74 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[54,73)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:19 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 74 pg[9.19( v 47'1065 (0'0,47'1065] local-lis/les=73/74 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[54,73)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:20 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct  2 07:35:20 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct  2 07:35:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:20.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Oct  2 07:35:20 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 75 pg[9.19( v 47'1065 (0'0,47'1065] local-lis/les=73/74 n=5 ec=54/41 lis/c=73/54 les/c/f=74/55/0 sis=75 pruub=14.990023613s) [2] async=[2] r=-1 lpr=75 pi=[54,75)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 160.619918823s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:20 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 75 pg[9.9( v 47'1065 (0'0,47'1065] local-lis/les=73/74 n=6 ec=54/41 lis/c=73/54 les/c/f=74/55/0 sis=75 pruub=14.988512039s) [2] async=[2] r=-1 lpr=75 pi=[54,75)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 160.619018555s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:20 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 75 pg[9.9( v 47'1065 (0'0,47'1065] local-lis/les=73/74 n=6 ec=54/41 lis/c=73/54 les/c/f=74/55/0 sis=75 pruub=14.988194466s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.619018555s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:20 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 75 pg[9.19( v 47'1065 (0'0,47'1065] local-lis/les=73/74 n=5 ec=54/41 lis/c=73/54 les/c/f=74/55/0 sis=75 pruub=14.988989830s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.619918823s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:21 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct  2 07:35:21 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct  2 07:35:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:35:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:21.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Oct  2 07:35:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:22.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:23 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct  2 07:35:23 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct  2 07:35:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:23.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:24.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Oct  2 07:35:24 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  2 07:35:24 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:24 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:25 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct  2 07:35:25 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct  2 07:35:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:35:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:25.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:35:25 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 77 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=8.135324478s) [1] r=-1 lpr=77 pi=[54,77)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 158.845016479s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:25 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 77 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=8.135273933s) [1] r=-1 lpr=77 pi=[54,77)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.845016479s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:25 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 77 pg[9.a( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=8.142728806s) [1] r=-1 lpr=77 pi=[54,77)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 158.853332520s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:25 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 77 pg[9.a( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=8.142702103s) [1] r=-1 lpr=77 pi=[54,77)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.853332520s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:25 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  2 07:35:25 np0005465987 ceph-mon[80822]: Reconfiguring mon.compute-0 (monmap changed)...
Oct  2 07:35:25 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 07:35:25 np0005465987 ceph-mon[80822]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  2 07:35:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Oct  2 07:35:25 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 78 pg[9.a( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=0 lpr=78 pi=[54,78)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:25 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 78 pg[9.a( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=0 lpr=78 pi=[54,78)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:25 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 78 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=0 lpr=78 pi=[54,78)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:25 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 78 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=0 lpr=78 pi=[54,78)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:25 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct  2 07:35:26 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct  2 07:35:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:35:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:26.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:35:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:35:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:27 np0005465987 ceph-mon[80822]: Reconfiguring mgr.compute-0.fmcstn (monmap changed)...
Oct  2 07:35:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fmcstn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  2 07:35:27 np0005465987 ceph-mon[80822]: Reconfiguring daemon mgr.compute-0.fmcstn on compute-0
Oct  2 07:35:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  2 07:35:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:27.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Oct  2 07:35:27 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 79 pg[9.a( v 47'1065 (0'0,47'1065] local-lis/les=78/79 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[54,78)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:27 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 79 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=78/79 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[54,78)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  2 07:35:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 07:35:28 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct  2 07:35:28 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct  2 07:35:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:28.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Oct  2 07:35:28 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 80 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=78/79 n=5 ec=54/41 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=14.942640305s) [1] async=[1] r=-1 lpr=80 pi=[54,80)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 168.929565430s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:28 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 80 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=78/79 n=5 ec=54/41 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=14.942508698s) [1] r=-1 lpr=80 pi=[54,80)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.929565430s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:28 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 80 pg[9.a( v 47'1065 (0'0,47'1065] local-lis/les=78/79 n=6 ec=54/41 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=14.933870316s) [1] async=[1] r=-1 lpr=80 pi=[54,80)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 168.921600342s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:28 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 80 pg[9.a( v 47'1065 (0'0,47'1065] local-lis/les=78/79 n=6 ec=54/41 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=14.933619499s) [1] r=-1 lpr=80 pi=[54,80)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.921600342s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:29 np0005465987 ceph-mon[80822]: Reconfiguring crash.compute-0 (monmap changed)...
Oct  2 07:35:29 np0005465987 ceph-mon[80822]: Reconfiguring daemon crash.compute-0 on compute-0
Oct  2 07:35:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  2 07:35:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:29 np0005465987 ceph-mon[80822]: Reconfiguring osd.1 (monmap changed)...
Oct  2 07:35:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  2 07:35:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  2 07:35:29 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct  2 07:35:29 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct  2 07:35:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:35:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:29.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:35:29 np0005465987 podman[84751]: 2025-10-02 11:35:29.822291543 +0000 UTC m=+0.079589234 container create bfba4233f4a73bd04685e45d8f11529e4037f07bcf7123ca1f51827555f4bba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 07:35:29 np0005465987 podman[84751]: 2025-10-02 11:35:29.765902881 +0000 UTC m=+0.023200612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:35:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Oct  2 07:35:29 np0005465987 systemd[1]: Started libpod-conmon-bfba4233f4a73bd04685e45d8f11529e4037f07bcf7123ca1f51827555f4bba8.scope.
Oct  2 07:35:29 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:35:29 np0005465987 podman[84751]: 2025-10-02 11:35:29.934327797 +0000 UTC m=+0.191625518 container init bfba4233f4a73bd04685e45d8f11529e4037f07bcf7123ca1f51827555f4bba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:35:29 np0005465987 podman[84751]: 2025-10-02 11:35:29.942877387 +0000 UTC m=+0.200175088 container start bfba4233f4a73bd04685e45d8f11529e4037f07bcf7123ca1f51827555f4bba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:35:29 np0005465987 relaxed_germain[84767]: 167 167
Oct  2 07:35:29 np0005465987 systemd[1]: libpod-bfba4233f4a73bd04685e45d8f11529e4037f07bcf7123ca1f51827555f4bba8.scope: Deactivated successfully.
Oct  2 07:35:29 np0005465987 podman[84751]: 2025-10-02 11:35:29.98397415 +0000 UTC m=+0.241271871 container attach bfba4233f4a73bd04685e45d8f11529e4037f07bcf7123ca1f51827555f4bba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 07:35:29 np0005465987 podman[84751]: 2025-10-02 11:35:29.984485785 +0000 UTC m=+0.241783496 container died bfba4233f4a73bd04685e45d8f11529e4037f07bcf7123ca1f51827555f4bba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 07:35:30 np0005465987 systemd[1]: var-lib-containers-storage-overlay-417ccdd436242f8cb61a78ba1c3caea5dcb29723a2a4d9e4e63051b33ba382a9-merged.mount: Deactivated successfully.
Oct  2 07:35:30 np0005465987 podman[84751]: 2025-10-02 11:35:30.103331689 +0000 UTC m=+0.360629430 container remove bfba4233f4a73bd04685e45d8f11529e4037f07bcf7123ca1f51827555f4bba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 07:35:30 np0005465987 systemd[1]: libpod-conmon-bfba4233f4a73bd04685e45d8f11529e4037f07bcf7123ca1f51827555f4bba8.scope: Deactivated successfully.
Oct  2 07:35:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:30.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:30 np0005465987 ceph-mon[80822]: Reconfiguring daemon osd.1 on compute-0
Oct  2 07:35:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:30 np0005465987 ceph-mon[80822]: Reconfiguring crash.compute-1 (monmap changed)...
Oct  2 07:35:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 07:35:30 np0005465987 ceph-mon[80822]: Reconfiguring daemon crash.compute-1 on compute-1
Oct  2 07:35:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  2 07:35:30 np0005465987 podman[84902]: 2025-10-02 11:35:30.832674698 +0000 UTC m=+0.036790744 container create 223602062d5512d188369ba27475b7c33f6e0cde84fc41eae63e42b2caf4a31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 07:35:30 np0005465987 systemd[1]: Started libpod-conmon-223602062d5512d188369ba27475b7c33f6e0cde84fc41eae63e42b2caf4a31b.scope.
Oct  2 07:35:30 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:35:30 np0005465987 podman[84902]: 2025-10-02 11:35:30.896403036 +0000 UTC m=+0.100519082 container init 223602062d5512d188369ba27475b7c33f6e0cde84fc41eae63e42b2caf4a31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:35:30 np0005465987 podman[84902]: 2025-10-02 11:35:30.901316364 +0000 UTC m=+0.105432410 container start 223602062d5512d188369ba27475b7c33f6e0cde84fc41eae63e42b2caf4a31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:35:30 np0005465987 charming_chaplygin[84918]: 167 167
Oct  2 07:35:30 np0005465987 podman[84902]: 2025-10-02 11:35:30.904932325 +0000 UTC m=+0.109048371 container attach 223602062d5512d188369ba27475b7c33f6e0cde84fc41eae63e42b2caf4a31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:35:30 np0005465987 systemd[1]: libpod-223602062d5512d188369ba27475b7c33f6e0cde84fc41eae63e42b2caf4a31b.scope: Deactivated successfully.
Oct  2 07:35:30 np0005465987 podman[84902]: 2025-10-02 11:35:30.90616171 +0000 UTC m=+0.110277756 container died 223602062d5512d188369ba27475b7c33f6e0cde84fc41eae63e42b2caf4a31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 07:35:30 np0005465987 podman[84902]: 2025-10-02 11:35:30.817058869 +0000 UTC m=+0.021174925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:35:30 np0005465987 systemd[1]: var-lib-containers-storage-overlay-6440fb218cad88a01808a30f81d1ae0dc540463528d63bf2a7b0febd3e7c59a6-merged.mount: Deactivated successfully.
Oct  2 07:35:30 np0005465987 podman[84902]: 2025-10-02 11:35:30.949267569 +0000 UTC m=+0.153383615 container remove 223602062d5512d188369ba27475b7c33f6e0cde84fc41eae63e42b2caf4a31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:35:30 np0005465987 systemd[1]: libpod-conmon-223602062d5512d188369ba27475b7c33f6e0cde84fc41eae63e42b2caf4a31b.scope: Deactivated successfully.
Oct  2 07:35:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:35:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:35:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:31.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:35:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Oct  2 07:35:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:32 np0005465987 ceph-mon[80822]: Reconfiguring osd.0 (monmap changed)...
Oct  2 07:35:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  2 07:35:32 np0005465987 ceph-mon[80822]: Reconfiguring daemon osd.0 on compute-1
Oct  2 07:35:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  2 07:35:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:35:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:32.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:35:32 np0005465987 podman[85062]: 2025-10-02 11:35:32.45442296 +0000 UTC m=+0.049007096 container create cf599ec454071da68ced65259fc74fd4476ece2ac2a1c692741f791f3b3d030b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 07:35:32 np0005465987 systemd[1]: Started libpod-conmon-cf599ec454071da68ced65259fc74fd4476ece2ac2a1c692741f791f3b3d030b.scope.
Oct  2 07:35:32 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:35:32 np0005465987 podman[85062]: 2025-10-02 11:35:32.51785254 +0000 UTC m=+0.112436706 container init cf599ec454071da68ced65259fc74fd4476ece2ac2a1c692741f791f3b3d030b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_davinci, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:35:32 np0005465987 podman[85062]: 2025-10-02 11:35:32.526642336 +0000 UTC m=+0.121226472 container start cf599ec454071da68ced65259fc74fd4476ece2ac2a1c692741f791f3b3d030b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_davinci, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 07:35:32 np0005465987 podman[85062]: 2025-10-02 11:35:32.529002703 +0000 UTC m=+0.123586859 container attach cf599ec454071da68ced65259fc74fd4476ece2ac2a1c692741f791f3b3d030b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_davinci, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:35:32 np0005465987 nostalgic_davinci[85078]: 167 167
Oct  2 07:35:32 np0005465987 systemd[1]: libpod-cf599ec454071da68ced65259fc74fd4476ece2ac2a1c692741f791f3b3d030b.scope: Deactivated successfully.
Oct  2 07:35:32 np0005465987 podman[85062]: 2025-10-02 11:35:32.532011397 +0000 UTC m=+0.126595533 container died cf599ec454071da68ced65259fc74fd4476ece2ac2a1c692741f791f3b3d030b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_davinci, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:35:32 np0005465987 podman[85062]: 2025-10-02 11:35:32.439012547 +0000 UTC m=+0.033596703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:35:32 np0005465987 systemd[1]: var-lib-containers-storage-overlay-7de28f745406c154225cee9a4858f7ea3ad0425eae215dbd541ae042f6760598-merged.mount: Deactivated successfully.
Oct  2 07:35:32 np0005465987 podman[85062]: 2025-10-02 11:35:32.563681287 +0000 UTC m=+0.158265423 container remove cf599ec454071da68ced65259fc74fd4476ece2ac2a1c692741f791f3b3d030b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:35:32 np0005465987 systemd[1]: libpod-conmon-cf599ec454071da68ced65259fc74fd4476ece2ac2a1c692741f791f3b3d030b.scope: Deactivated successfully.
Oct  2 07:35:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Oct  2 07:35:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:33 np0005465987 ceph-mon[80822]: Reconfiguring mon.compute-1 (monmap changed)...
Oct  2 07:35:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 07:35:33 np0005465987 ceph-mon[80822]: Reconfiguring daemon mon.compute-1 on compute-1
Oct  2 07:35:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  2 07:35:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 07:35:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  2 07:35:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:35:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:33.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:35:34 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct  2 07:35:34 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct  2 07:35:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:34.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Oct  2 07:35:34 np0005465987 ceph-mon[80822]: Reconfiguring mon.compute-2 (monmap changed)...
Oct  2 07:35:34 np0005465987 ceph-mon[80822]: Reconfiguring daemon mon.compute-2 on compute-2
Oct  2 07:35:34 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:34 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:34 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:34 np0005465987 podman[85269]: 2025-10-02 11:35:34.667131166 +0000 UTC m=+0.047899895 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 07:35:34 np0005465987 podman[85269]: 2025-10-02 11:35:34.756182866 +0000 UTC m=+0.136951595 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:35:35 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct  2 07:35:35 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct  2 07:35:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:35.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Oct  2 07:35:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  2 07:35:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:36 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct  2 07:35:36 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct  2 07:35:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:36.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:35:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Oct  2 07:35:36 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 86 pg[9.10( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=86 pruub=13.096716881s) [1] r=-1 lpr=86 pi=[54,86)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 174.853271484s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:36 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 86 pg[9.10( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=86 pruub=13.096295357s) [1] r=-1 lpr=86 pi=[54,86)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.853271484s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  2 07:35:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:35:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:35:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  2 07:35:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  2 07:35:37 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct  2 07:35:37 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct  2 07:35:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:37.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Oct  2 07:35:37 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 87 pg[9.10( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=87) [1]/[0] r=0 lpr=87 pi=[54,87)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:37 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 87 pg[9.10( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=87) [1]/[0] r=0 lpr=87 pi=[54,87)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:38 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct  2 07:35:38 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct  2 07:35:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:38.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Oct  2 07:35:38 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 88 pg[9.11( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=88 pruub=11.056040764s) [1] r=-1 lpr=88 pi=[54,88)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 174.846466064s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:38 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 88 pg[9.11( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=88 pruub=11.055992126s) [1] r=-1 lpr=88 pi=[54,88)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.846466064s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:38 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 88 pg[9.10( v 47'1065 (0'0,47'1065] local-lis/les=87/88 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=87) [1]/[0] async=[1] r=0 lpr=87 pi=[54,87)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  2 07:35:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  2 07:35:39 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct  2 07:35:39 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct  2 07:35:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:39.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Oct  2 07:35:39 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 89 pg[9.10( v 47'1065 (0'0,47'1065] local-lis/les=87/88 n=6 ec=54/41 lis/c=87/54 les/c/f=88/55/0 sis=89 pruub=15.115469933s) [1] async=[1] r=-1 lpr=89 pi=[54,89)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 179.810653687s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:39 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 89 pg[9.10( v 47'1065 (0'0,47'1065] local-lis/les=87/88 n=6 ec=54/41 lis/c=87/54 les/c/f=88/55/0 sis=89 pruub=15.115393639s) [1] r=-1 lpr=89 pi=[54,89)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.810653687s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:39 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 89 pg[9.11( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=89) [1]/[0] r=0 lpr=89 pi=[54,89)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:39 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 89 pg[9.11( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=89) [1]/[0] r=0 lpr=89 pi=[54,89)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:40.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Oct  2 07:35:40 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 90 pg[9.11( v 47'1065 (0'0,47'1065] local-lis/les=89/90 n=6 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[54,89)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:35:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:41.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Oct  2 07:35:41 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 91 pg[9.11( v 47'1065 (0'0,47'1065] local-lis/les=89/90 n=6 ec=54/41 lis/c=89/54 les/c/f=90/55/0 sis=91 pruub=14.968729973s) [1] async=[1] r=-1 lpr=91 pi=[54,91)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 181.719192505s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:41 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 91 pg[9.11( v 47'1065 (0'0,47'1065] local-lis/les=89/90 n=6 ec=54/41 lis/c=89/54 les/c/f=90/55/0 sis=91 pruub=14.968657494s) [1] r=-1 lpr=91 pi=[54,91)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.719192505s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:42.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:35:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Oct  2 07:35:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:43.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:44 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct  2 07:35:44 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct  2 07:35:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:44.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:44 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct  2 07:35:45 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct  2 07:35:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:45.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:46 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct  2 07:35:46 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct  2 07:35:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:46.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:35:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Oct  2 07:35:47 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct  2 07:35:47 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 93 pg[9.12( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=93 pruub=10.479509354s) [1] r=-1 lpr=93 pi=[54,93)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 182.853912354s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:47 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 93 pg[9.12( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=93 pruub=10.479393005s) [1] r=-1 lpr=93 pi=[54,93)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.853912354s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:47 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct  2 07:35:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:47.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  2 07:35:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Oct  2 07:35:47 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 94 pg[9.12( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:47 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 94 pg[9.12( v 47'1065 (0'0,47'1065] local-lis/les=54/55 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=47'1065 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:35:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:48.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:49 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  2 07:35:49 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  2 07:35:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:35:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:49.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:35:49 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct  2 07:35:49 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct  2 07:35:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:35:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:50.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:35:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Oct  2 07:35:50 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct  2 07:35:51 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct  2 07:35:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:35:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:51.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:35:51 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct  2 07:35:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:52.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:52 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 95 pg[9.12( v 47'1065 (0'0,47'1065] local-lis/les=94/95 n=5 ec=54/41 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:35:52 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct  2 07:35:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:35:52 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct  2 07:35:53 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct  2 07:35:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:35:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:53.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:35:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Oct  2 07:35:53 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct  2 07:35:53 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  2 07:35:54 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 96 pg[9.12( v 47'1065 (0'0,47'1065] local-lis/les=94/95 n=5 ec=54/41 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=14.226063728s) [1] async=[1] r=-1 lpr=96 pi=[54,96)/1 crt=47'1065 lcod 0'0 mlcod 0'0 active pruub 193.461730957s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:35:54 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 96 pg[9.12( v 47'1065 (0'0,47'1065] local-lis/les=94/95 n=5 ec=54/41 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=14.225980759s) [1] r=-1 lpr=96 pi=[54,96)/1 crt=47'1065 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.461730957s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:35:54 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct  2 07:35:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:35:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:54.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:35:54 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct  2 07:35:54 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct  2 07:35:55 np0005465987 systemd-logind[794]: New session 34 of user zuul.
Oct  2 07:35:55 np0005465987 systemd[1]: Started Session 34 of User zuul.
Oct  2 07:35:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:55.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Oct  2 07:35:56 np0005465987 python3.9[85594]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:35:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:56.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:56 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct  2 07:35:56 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct  2 07:35:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:35:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:57.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:35:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:35:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:35:58.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:58 np0005465987 python3.9[85808]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:35:58 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct  2 07:35:58 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct  2 07:35:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:35:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:35:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:35:59.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:35:59 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct  2 07:35:59 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct  2 07:36:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:00.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Oct  2 07:36:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  2 07:36:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:01.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:02.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  2 07:36:02 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct  2 07:36:02 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct  2 07:36:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:36:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Oct  2 07:36:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  2 07:36:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  2 07:36:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:03.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Oct  2 07:36:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:04.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:04 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  2 07:36:04 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.10 deep-scrub starts
Oct  2 07:36:04 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.10 deep-scrub ok
Oct  2 07:36:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Oct  2 07:36:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:05.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:05 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  2 07:36:05 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct  2 07:36:05 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct  2 07:36:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Oct  2 07:36:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:06.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:06 np0005465987 systemd[1]: session-34.scope: Deactivated successfully.
Oct  2 07:36:06 np0005465987 systemd[1]: session-34.scope: Consumed 7.991s CPU time.
Oct  2 07:36:06 np0005465987 systemd-logind[794]: Session 34 logged out. Waiting for processes to exit.
Oct  2 07:36:06 np0005465987 systemd-logind[794]: Removed session 34.
Oct  2 07:36:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  2 07:36:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Oct  2 07:36:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:07.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:07 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  2 07:36:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:36:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:08.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:08 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct  2 07:36:08 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct  2 07:36:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  2 07:36:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Oct  2 07:36:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:09.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  2 07:36:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Oct  2 07:36:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:10.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  2 07:36:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Oct  2 07:36:11 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=75/75 les/c/f=76/76/0 sis=106) [0] r=0 lpr=106 pi=[75,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:36:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:11.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Oct  2 07:36:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=75/75 les/c/f=76/76/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[75,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:36:12 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=75/75 les/c/f=76/76/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[75,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:36:12 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  2 07:36:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:12.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:36:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  2 07:36:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Oct  2 07:36:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:13.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:13 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 108 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=80/80 les/c/f=81/81/0 sis=108) [0] r=0 lpr=108 pi=[80,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:36:13 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct  2 07:36:13 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct  2 07:36:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:14.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:14 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  2 07:36:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Oct  2 07:36:15 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 109 pg[9.19( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=107/75 les/c/f=108/76/0 sis=109) [0] r=0 lpr=109 pi=[75,109)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:36:15 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 109 pg[9.19( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=107/75 les/c/f=108/76/0 sis=109) [0] r=0 lpr=109 pi=[75,109)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:36:15 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 109 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[1] r=-1 lpr=109 pi=[80,109)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:36:15 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 109 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[1] r=-1 lpr=109 pi=[80,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:36:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:15.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Oct  2 07:36:16 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 110 pg[9.19( v 47'1065 (0'0,47'1065] local-lis/les=109/110 n=5 ec=54/41 lis/c=107/75 les/c/f=108/76/0 sis=109) [0] r=0 lpr=109 pi=[75,109)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:36:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:16.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Oct  2 07:36:17 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 111 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=109/80 les/c/f=110/81/0 sis=111) [0] r=0 lpr=111 pi=[80,111)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:36:17 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 111 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=109/80 les/c/f=110/81/0 sis=111) [0] r=0 lpr=111 pi=[80,111)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:36:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:17.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:36:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:36:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:18.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:36:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Oct  2 07:36:18 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 112 pg[9.1a( v 47'1065 (0'0,47'1065] local-lis/les=111/112 n=5 ec=54/41 lis/c=109/80 les/c/f=110/81/0 sis=111) [0] r=0 lpr=111 pi=[80,111)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:36:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:19.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:20.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Oct  2 07:36:20 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 113 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=64/64 les/c/f=65/65/0 sis=113) [0] r=0 lpr=113 pi=[64,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:36:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  2 07:36:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:21.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Oct  2 07:36:21 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 114 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=64/64 les/c/f=65/65/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[64,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:36:21 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 114 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=64/64 les/c/f=65/65/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[64,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:36:21 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  2 07:36:21 np0005465987 systemd-logind[794]: New session 35 of user zuul.
Oct  2 07:36:21 np0005465987 systemd[1]: Started Session 35 of User zuul.
Oct  2 07:36:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:22.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  2 07:36:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Oct  2 07:36:22 np0005465987 python3.9[86018]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  2 07:36:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:36:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:23.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  2 07:36:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Oct  2 07:36:23 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 116 pg[9.1b( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=114/64 les/c/f=115/65/0 sis=116) [0] r=0 lpr=116 pi=[64,116)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:36:23 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 116 pg[9.1b( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=114/64 les/c/f=115/65/0 sis=116) [0] r=0 lpr=116 pi=[64,116)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:36:23 np0005465987 python3.9[86192]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:36:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:24.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Oct  2 07:36:24 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 117 pg[9.1b( v 47'1065 (0'0,47'1065] local-lis/les=116/117 n=5 ec=54/41 lis/c=114/64 les/c/f=115/65/0 sis=116) [0] r=0 lpr=116 pi=[64,116)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:36:24 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct  2 07:36:24 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct  2 07:36:25 np0005465987 python3.9[86348]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:36:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:25.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:25 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct  2 07:36:25 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct  2 07:36:26 np0005465987 python3.9[86501]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:36:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:26.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:27 np0005465987 python3.9[86655]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:36:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:27.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:27 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct  2 07:36:27 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct  2 07:36:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:36:28 np0005465987 python3.9[86806]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:36:28 np0005465987 network[86823]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:36:28 np0005465987 network[86824]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:36:28 np0005465987 network[86825]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:36:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:28.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:28 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct  2 07:36:28 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct  2 07:36:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:36:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:29.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:36:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:30.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Oct  2 07:36:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  2 07:36:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Oct  2 07:36:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  2 07:36:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:31.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:32.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Oct  2 07:36:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  2 07:36:32 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 120 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=69/69 les/c/f=70/70/0 sis=120) [0] r=0 lpr=120 pi=[69,120)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:36:32 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct  2 07:36:32 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct  2 07:36:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:36:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  2 07:36:33 np0005465987 python3.9[87088]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:36:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:33.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Oct  2 07:36:33 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 121 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=69/69 les/c/f=70/70/0 sis=121) [0]/[1] r=-1 lpr=121 pi=[69,121)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:36:33 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 121 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=69/69 les/c/f=70/70/0 sis=121) [0]/[1] r=-1 lpr=121 pi=[69,121)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:36:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:34.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:34 np0005465987 python3.9[87238]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:36:34 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:36:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Oct  2 07:36:34 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct  2 07:36:34 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct  2 07:36:34 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 122 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=88/88 les/c/f=89/89/0 sis=122) [0] r=0 lpr=122 pi=[88,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:36:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:35.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:36:35 np0005465987 python3.9[87392]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:36:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Oct  2 07:36:35 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 123 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=121/69 les/c/f=122/70/0 sis=123) [0] r=0 lpr=123 pi=[69,123)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:36:35 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 123 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=88/88 les/c/f=89/89/0 sis=123) [0]/[1] r=-1 lpr=123 pi=[88,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:36:35 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 123 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=121/69 les/c/f=122/70/0 sis=123) [0] r=0 lpr=123 pi=[69,123)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:36:35 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 123 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/41 lis/c=88/88 les/c/f=89/89/0 sis=123) [0]/[1] r=-1 lpr=123 pi=[88,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:36:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:36:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:36:36 np0005465987 python3.9[87550]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:36:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Oct  2 07:36:36 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 124 pg[9.1e( v 47'1065 (0'0,47'1065] local-lis/les=123/124 n=5 ec=54/41 lis/c=121/69 les/c/f=122/70/0 sis=123) [0] r=0 lpr=123 pi=[69,123)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:36:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:37.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:37 np0005465987 python3.9[87634]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:36:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Oct  2 07:36:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:36:37 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 125 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=123/88 les/c/f=124/89/0 sis=125) [0] r=0 lpr=125 pi=[88,125)/1 luod=0'0 crt=47'1065 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:36:37 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 125 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=0/0 n=5 ec=54/41 lis/c=123/88 les/c/f=124/89/0 sis=125) [0] r=0 lpr=125 pi=[88,125)/1 crt=47'1065 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:36:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:36:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:38.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:36:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Oct  2 07:36:39 np0005465987 ceph-osd[78159]: osd.0 pg_epoch: 126 pg[9.1f( v 47'1065 (0'0,47'1065] local-lis/les=125/126 n=5 ec=54/41 lis/c=123/88 les/c/f=124/89/0 sis=125) [0] r=0 lpr=125 pi=[88,125)/1 crt=47'1065 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:36:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:39.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:40.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:41.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:42.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:36:43 np0005465987 podman[87876]: 2025-10-02 11:36:43.4396743 +0000 UTC m=+0.053832239 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:36:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:43.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:43 np0005465987 podman[87876]: 2025-10-02 11:36:43.531026114 +0000 UTC m=+0.145184053 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:36:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:44.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:44 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct  2 07:36:44 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:36:45.335971) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405005336058, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7220, "num_deletes": 255, "total_data_size": 13568401, "memory_usage": 13755232, "flush_reason": "Manual Compaction"}
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405005389373, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 7990411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 241, "largest_seqno": 7225, "table_properties": {"data_size": 7962232, "index_size": 18464, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 81060, "raw_average_key_size": 23, "raw_value_size": 7895206, "raw_average_value_size": 2295, "num_data_blocks": 817, "num_entries": 3439, "num_filter_entries": 3439, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 1759404811, "file_creation_time": 1759405005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 53560 microseconds, and 15370 cpu microseconds.
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:36:45.389533) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 7990411 bytes OK
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:36:45.389580) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:36:45.391327) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:36:45.391341) EVENT_LOG_v1 {"time_micros": 1759405005391337, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:36:45.391356) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13530542, prev total WAL file size 13591110, number of live WAL files 2.
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:36:45.394077) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(7803KB) 8(1648B)]
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405005394243, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 7992059, "oldest_snapshot_seqno": -1}
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3188 keys, 7986926 bytes, temperature: kUnknown
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405005430986, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 7986926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7959405, "index_size": 18451, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 76882, "raw_average_key_size": 24, "raw_value_size": 7895486, "raw_average_value_size": 2476, "num_data_blocks": 817, "num_entries": 3188, "num_filter_entries": 3188, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759405005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:36:45.431167) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 7986926 bytes
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:36:45.432131) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.1 rd, 217.0 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(7.6, 0.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3444, records dropped: 256 output_compression: NoCompression
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:36:45.432146) EVENT_LOG_v1 {"time_micros": 1759405005432139, "job": 4, "event": "compaction_finished", "compaction_time_micros": 36809, "compaction_time_cpu_micros": 15338, "output_level": 6, "num_output_files": 1, "total_output_size": 7986926, "num_input_records": 3444, "num_output_records": 3188, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405005433332, "job": 4, "event": "table_file_deletion", "file_number": 14}
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405005433414, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct  2 07:36:45 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:36:45.393971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:36:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:45.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:45 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct  2 07:36:45 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct  2 07:36:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:36:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:36:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:36:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:36:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:46.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:36:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:36:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:36:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:47.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:36:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:48.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:49.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:49 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct  2 07:36:49 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct  2 07:36:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:50.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:51.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:52.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:36:53 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:36:53 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:36:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:53.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:53 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct  2 07:36:53 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct  2 07:36:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:36:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:54.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:36:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:55.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:55 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct  2 07:36:55 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct  2 07:36:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:56.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:57.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:36:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:36:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:36:58.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:36:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:36:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:36:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:36:59.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:37:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:00.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:01.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:37:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:02.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:37:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:37:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:03.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:04.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:04 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct  2 07:37:04 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct  2 07:37:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:05.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:05 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct  2 07:37:05 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct  2 07:37:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:06.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:06 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct  2 07:37:06 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct  2 07:37:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:07.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:37:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:37:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:08.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:37:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:09.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:10.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:11.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:37:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:12.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:37:12 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct  2 07:37:12 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct  2 07:37:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:37:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:13.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:14.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:15.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:15 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.1b deep-scrub starts
Oct  2 07:37:15 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.1b deep-scrub ok
Oct  2 07:37:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:16.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:17.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:17 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct  2 07:37:17 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct  2 07:37:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:37:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:18.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:18 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct  2 07:37:18 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct  2 07:37:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:19.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:20.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:37:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:21.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:37:21 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct  2 07:37:21 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct  2 07:37:21 np0005465987 python3.9[88404]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:37:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:22.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:22 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct  2 07:37:22 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct  2 07:37:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:37:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:23.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:23 np0005465987 python3.9[88691]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  2 07:37:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:37:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:24.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:37:24 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct  2 07:37:24 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct  2 07:37:24 np0005465987 python3.9[88843]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  2 07:37:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:25.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:25 np0005465987 python3.9[88995]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:37:25 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct  2 07:37:25 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct  2 07:37:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:26.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:26 np0005465987 python3.9[89147]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  2 07:37:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:27.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:37:28 np0005465987 python3.9[89299]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:37:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:28.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:29 np0005465987 python3.9[89451]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:37:29 np0005465987 python3.9[89529]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:37:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:29.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:30.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:30 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct  2 07:37:30 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct  2 07:37:31 np0005465987 python3.9[89681]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  2 07:37:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:31.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:32 np0005465987 python3.9[89834]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  2 07:37:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:37:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:32.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:37:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:37:33 np0005465987 python3.9[89987]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:37:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:33.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:33 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct  2 07:37:33 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct  2 07:37:34 np0005465987 python3.9[90139]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  2 07:37:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:34.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:34 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct  2 07:37:34 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct  2 07:37:35 np0005465987 python3.9[90291]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:37:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:35.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:36.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:36 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct  2 07:37:36 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct  2 07:37:37 np0005465987 python3.9[90444]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:37:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:37.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:37:37 np0005465987 python3.9[90596]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:37:38 np0005465987 python3.9[90674]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:37:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:38.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:38 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct  2 07:37:38 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct  2 07:37:39 np0005465987 python3.9[90826]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:37:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:39.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:39 np0005465987 python3.9[90904]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:37:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:37:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:40.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:37:40 np0005465987 python3.9[91056]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:37:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:41.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:41 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.8 deep-scrub starts
Oct  2 07:37:41 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.8 deep-scrub ok
Oct  2 07:37:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:42.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:37:43 np0005465987 python3.9[91207]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:37:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:37:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:43.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:37:43 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.10 deep-scrub starts
Oct  2 07:37:43 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.10 deep-scrub ok
Oct  2 07:37:43 np0005465987 python3.9[91359]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  2 07:37:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:44.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:44 np0005465987 python3.9[91509]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:37:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:37:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:45.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:37:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:37:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:46.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:37:46 np0005465987 python3.9[91661]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:37:46 np0005465987 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  2 07:37:46 np0005465987 systemd[1]: tuned.service: Deactivated successfully.
Oct  2 07:37:46 np0005465987 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  2 07:37:46 np0005465987 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  2 07:37:46 np0005465987 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  2 07:37:46 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct  2 07:37:46 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct  2 07:37:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:47.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:47 np0005465987 python3.9[91823]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  2 07:37:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:37:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:48.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:49.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:49 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct  2 07:37:49 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct  2 07:37:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:37:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:50.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:37:51 np0005465987 python3.9[91975]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:37:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:51.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:51 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct  2 07:37:51 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct  2 07:37:52 np0005465987 python3.9[92129]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:37:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:52.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:37:53 np0005465987 systemd-logind[794]: Session 35 logged out. Waiting for processes to exit.
Oct  2 07:37:53 np0005465987 systemd[1]: session-35.scope: Deactivated successfully.
Oct  2 07:37:53 np0005465987 systemd[1]: session-35.scope: Consumed 59.640s CPU time.
Oct  2 07:37:53 np0005465987 systemd-logind[794]: Removed session 35.
Oct  2 07:37:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:53.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:53 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.19 deep-scrub starts
Oct  2 07:37:53 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.19 deep-scrub ok
Oct  2 07:37:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:54.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:54 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Oct  2 07:37:54 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Oct  2 07:37:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:55.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:55 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.1b deep-scrub starts
Oct  2 07:37:55 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.1b deep-scrub ok
Oct  2 07:37:56 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:37:56 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:37:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:56.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:56 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct  2 07:37:56 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct  2 07:37:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:37:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:37:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:37:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:37:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:57.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:37:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:37:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:37:58.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:37:58 np0005465987 systemd[71766]: Created slice User Background Tasks Slice.
Oct  2 07:37:58 np0005465987 systemd[71766]: Starting Cleanup of User's Temporary Files and Directories...
Oct  2 07:37:58 np0005465987 systemd[71766]: Finished Cleanup of User's Temporary Files and Directories.
Oct  2 07:37:58 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct  2 07:37:58 np0005465987 ceph-osd[78159]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct  2 07:37:59 np0005465987 systemd-logind[794]: New session 36 of user zuul.
Oct  2 07:37:59 np0005465987 systemd[1]: Started Session 36 of User zuul.
Oct  2 07:37:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:37:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:37:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:37:59.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:00 np0005465987 python3.9[92443]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:38:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:00.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:01 np0005465987 python3.9[92599]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  2 07:38:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:01.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:38:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:02.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:38:02 np0005465987 python3.9[92802]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:38:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:38:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:38:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:38:03 np0005465987 python3.9[92886]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 07:38:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:03.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:04.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:38:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:05.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:38:05 np0005465987 python3.9[93039]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:06.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:07.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:38:08 np0005465987 python3.9[93192]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:38:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:08.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:09 np0005465987 python3.9[93345]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:38:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:09.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:10.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:10 np0005465987 python3.9[93497]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  2 07:38:11 np0005465987 python3.9[93647]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:38:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:12.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:12 np0005465987 python3.9[93805]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:38:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:13.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:14.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:14 np0005465987 python3.9[93958]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:38:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:38:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:15.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:38:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:38:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:16.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:38:16 np0005465987 python3.9[94246]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 07:38:17 np0005465987 python3.9[94396]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:38:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:17.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:38:18 np0005465987 python3.9[94550]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:18.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:38:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:19.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:38:20 np0005465987 python3.9[94703]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:20.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:21.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:22 np0005465987 python3.9[94856]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:38:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:22.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:38:23 np0005465987 python3.9[95010]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct  2 07:38:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:38:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:23.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:38:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:24 np0005465987 systemd[1]: session-36.scope: Deactivated successfully.
Oct  2 07:38:24 np0005465987 systemd[1]: session-36.scope: Consumed 17.057s CPU time.
Oct  2 07:38:24 np0005465987 systemd-logind[794]: Session 36 logged out. Waiting for processes to exit.
Oct  2 07:38:24 np0005465987 systemd-logind[794]: Removed session 36.
Oct  2 07:38:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:25.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:26.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:27.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:38:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:28.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:29.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:29 np0005465987 systemd-logind[794]: New session 37 of user zuul.
Oct  2 07:38:29 np0005465987 systemd[1]: Started Session 37 of User zuul.
Oct  2 07:38:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:30.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:30 np0005465987 python3.9[95188]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:38:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:31.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:31 np0005465987 python3.9[95342]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:38:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:32.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:38:33 np0005465987 python3.9[95535]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:38:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:33.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:33 np0005465987 systemd-logind[794]: Session 37 logged out. Waiting for processes to exit.
Oct  2 07:38:33 np0005465987 systemd[1]: session-37.scope: Deactivated successfully.
Oct  2 07:38:33 np0005465987 systemd[1]: session-37.scope: Consumed 2.348s CPU time.
Oct  2 07:38:33 np0005465987 systemd-logind[794]: Removed session 37.
Oct  2 07:38:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:34.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:35.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:36.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:37.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:38:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:38.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:39 np0005465987 systemd-logind[794]: New session 38 of user zuul.
Oct  2 07:38:39 np0005465987 systemd[1]: Started Session 38 of User zuul.
Oct  2 07:38:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:39.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:40 np0005465987 python3.9[95715]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:38:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:38:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:40.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:38:41 np0005465987 python3.9[95869]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:38:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:41.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:42 np0005465987 python3.9[96025]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:38:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:42.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:38:43 np0005465987 python3.9[96109]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:43.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:44.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:45 np0005465987 python3.9[96262]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:38:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:45.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:46.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:46 np0005465987 python3.9[96457]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:47.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:47 np0005465987 python3.9[96609]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:38:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:38:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:48.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:48 np0005465987 python3.9[96774]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:38:49 np0005465987 python3.9[96852]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:49.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:50 np0005465987 python3.9[97004]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:38:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000054s ======
Oct  2 07:38:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:50.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct  2 07:38:50 np0005465987 python3.9[97082]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:38:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:51.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:51 np0005465987 python3.9[97234]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:38:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:52.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:52 np0005465987 python3.9[97386]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:38:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:38:53 np0005465987 python3.9[97538]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:38:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:38:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:53.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:38:54 np0005465987 python3.9[97690]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:38:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:54.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:55 np0005465987 python3.9[97842]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:55.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:56.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:38:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:57.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:38:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:38:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:38:58.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:38:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:38:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:38:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:38:59.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:00.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:01.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:02.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:39:03 np0005465987 python3.9[98097]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:39:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:03.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:04 np0005465987 python3.9[98280]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:39:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:04.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:04 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:39:04 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:39:04 np0005465987 python3.9[98432]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:39:05 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:39:05 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:39:05 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:39:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:05.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:05 np0005465987 python3.9[98584]: ansible-service_facts Invoked
Oct  2 07:39:05 np0005465987 network[98601]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:39:05 np0005465987 network[98602]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:39:05 np0005465987 network[98603]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:39:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:06.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:07.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:39:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:08.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:09.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:39:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:10.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:39:11 np0005465987 python3.9[99058]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:39:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:11.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:12.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:39:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:39:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:39:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:13.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:14 np0005465987 python3.9[99261]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  2 07:39:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:14.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:15.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:15 np0005465987 python3.9[99413]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:16 np0005465987 python3.9[99491]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:16.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:17 np0005465987 python3.9[99643]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:17 np0005465987 python3.9[99721]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:39:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:17.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:39:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:39:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:18.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:19 np0005465987 python3.9[99873]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:39:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:19.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:39:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:39:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:20.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:39:21 np0005465987 python3.9[100025]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:39:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:39:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:21.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:39:22 np0005465987 python3.9[100109]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:39:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:22.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:39:23 np0005465987 systemd[1]: session-38.scope: Deactivated successfully.
Oct  2 07:39:23 np0005465987 systemd[1]: session-38.scope: Consumed 21.384s CPU time.
Oct  2 07:39:23 np0005465987 systemd-logind[794]: Session 38 logged out. Waiting for processes to exit.
Oct  2 07:39:23 np0005465987 systemd-logind[794]: Removed session 38.
Oct  2 07:39:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:23.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:24.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:25.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:26.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:27.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:39:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:28.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:28 np0005465987 systemd-logind[794]: New session 39 of user zuul.
Oct  2 07:39:28 np0005465987 systemd[1]: Started Session 39 of User zuul.
Oct  2 07:39:29 np0005465987 python3.9[100292]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:29.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:30.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:30 np0005465987 python3.9[100444]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:31 np0005465987 python3.9[100522]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:31 np0005465987 systemd[1]: session-39.scope: Deactivated successfully.
Oct  2 07:39:31 np0005465987 systemd[1]: session-39.scope: Consumed 1.417s CPU time.
Oct  2 07:39:31 np0005465987 systemd-logind[794]: Session 39 logged out. Waiting for processes to exit.
Oct  2 07:39:31 np0005465987 systemd-logind[794]: Removed session 39.
Oct  2 07:39:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:31.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:32.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:39:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:33.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:34.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:35.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:36 np0005465987 systemd-logind[794]: New session 40 of user zuul.
Oct  2 07:39:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:39:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:36.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:39:36 np0005465987 systemd[1]: Started Session 40 of User zuul.
Oct  2 07:39:37 np0005465987 python3.9[100701]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:39:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:37.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:39:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:38.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:38 np0005465987 python3.9[100857]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:39 np0005465987 python3.9[101032]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:39.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:40 np0005465987 python3.9[101110]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.k89hwhnt recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:40.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:41 np0005465987 python3.9[101262]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:39:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:41.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:39:41 np0005465987 python3.9[101340]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.sc4yeu99 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:42.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:42 np0005465987 python3.9[101492]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:39:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:39:43 np0005465987 python3.9[101644]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:43.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:43 np0005465987 python3.9[101722]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:39:44 np0005465987 python3.9[101874]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:44.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:45 np0005465987 python3.9[101952]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:39:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:45.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:46 np0005465987 python3.9[102104]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:39:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:39:46 np0005465987 python3.9[102256]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:47 np0005465987 python3.9[102334]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:39:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:47.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:39:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:39:48 np0005465987 python3.9[102486]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:48.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:48 np0005465987 python3.9[102564]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:39:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:49.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:39:49 np0005465987 python3.9[102716]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:39:49 np0005465987 systemd[1]: Reloading.
Oct  2 07:39:49 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:39:49 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:39:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:50.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:51 np0005465987 python3.9[102907]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:51 np0005465987 python3.9[102985]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:51.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:52 np0005465987 python3.9[103137]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:52.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:52 np0005465987 python3.9[103215]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:39:53 np0005465987 python3.9[103367]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:39:53 np0005465987 systemd[1]: Reloading.
Oct  2 07:39:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:53.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:53 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:39:53 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:39:54 np0005465987 systemd[1]: Starting Create netns directory...
Oct  2 07:39:54 np0005465987 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:39:54 np0005465987 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:39:54 np0005465987 systemd[1]: Finished Create netns directory.
Oct  2 07:39:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:54.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:55 np0005465987 python3.9[103560]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:39:55 np0005465987 network[103577]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:39:55 np0005465987 network[103578]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:39:55 np0005465987 network[103579]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:39:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:39:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:55.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:39:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:56.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:57.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:39:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:39:58.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:39:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:39:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:39:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:39:59.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:00 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 07:40:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:00.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:01.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:40:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:02.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:40:02 np0005465987 python3.9[103844]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:02.913155) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405202913228, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2496, "num_deletes": 251, "total_data_size": 5462768, "memory_usage": 5543744, "flush_reason": "Manual Compaction"}
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405202934238, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3561289, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7230, "largest_seqno": 9721, "table_properties": {"data_size": 3551728, "index_size": 5671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23520, "raw_average_key_size": 20, "raw_value_size": 3530765, "raw_average_value_size": 3149, "num_data_blocks": 256, "num_entries": 1121, "num_filter_entries": 1121, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405005, "oldest_key_time": 1759405005, "file_creation_time": 1759405202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 21102 microseconds, and 7660 cpu microseconds.
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:02.934270) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3561289 bytes OK
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:02.934284) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:02.935436) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:02.935449) EVENT_LOG_v1 {"time_micros": 1759405202935444, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:02.935462) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5451297, prev total WAL file size 5451933, number of live WAL files 2.
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:02.936774) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3477KB)], [15(7799KB)]
Oct  2 07:40:02 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405202936835, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11548215, "oldest_snapshot_seqno": -1}
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3792 keys, 9852102 bytes, temperature: kUnknown
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405203007934, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 9852102, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9820590, "index_size": 20906, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 91334, "raw_average_key_size": 24, "raw_value_size": 9746017, "raw_average_value_size": 2570, "num_data_blocks": 917, "num_entries": 3792, "num_filter_entries": 3792, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759405202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:03.008159) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9852102 bytes
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:03.009386) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.3 rd, 138.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 7.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 4309, records dropped: 517 output_compression: NoCompression
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:03.009404) EVENT_LOG_v1 {"time_micros": 1759405203009394, "job": 6, "event": "compaction_finished", "compaction_time_micros": 71159, "compaction_time_cpu_micros": 29591, "output_level": 6, "num_output_files": 1, "total_output_size": 9852102, "num_input_records": 4309, "num_output_records": 3792, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405203010007, "job": 6, "event": "table_file_deletion", "file_number": 17}
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405203011147, "job": 6, "event": "table_file_deletion", "file_number": 15}
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:02.936632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:03.011199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:03.011205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:03.011207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:03.011209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:40:03 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:03.011211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:40:03 np0005465987 python3.9[103922]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:03.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:04 np0005465987 python3.9[104074]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:04.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:04 np0005465987 python3.9[104226]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:05 np0005465987 python3.9[104304]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:05.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:06 np0005465987 python3.9[104456]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  2 07:40:06 np0005465987 systemd[1]: Starting Time & Date Service...
Oct  2 07:40:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:06.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:06 np0005465987 systemd[1]: Started Time & Date Service.
Oct  2 07:40:07 np0005465987 python3.9[104612]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:07.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:40:08 np0005465987 python3.9[104764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:08.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:09 np0005465987 python3.9[104842]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:09.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:09 np0005465987 python3.9[104994]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:10 np0005465987 python3.9[105072]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9tc7yamw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:10.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:11 np0005465987 python3.9[105224]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:11 np0005465987 python3.9[105302]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:11.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:12.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:12 np0005465987 python3.9[105454]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:40:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:40:13 np0005465987 python3[105725]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 07:40:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:13.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:14 np0005465987 python3.9[105889]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:14.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:14 np0005465987 python3.9[105967]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:15 np0005465987 python3.9[106119]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:15.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:16 np0005465987 python3.9[106197]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:40:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:40:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:40:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:40:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:40:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:16.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:17 np0005465987 python3.9[106349]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:17 np0005465987 python3.9[106427]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:40:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:17.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:40:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:40:18 np0005465987 python3.9[106579]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:18 np0005465987 python3.9[106657]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:18.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:19 np0005465987 python3.9[106809]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:19.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:20 np0005465987 python3.9[106887]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:20.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:21 np0005465987 python3.9[107039]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:40:21 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:40:21 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:40:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:21.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:21 np0005465987 python3.9[107244]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:22.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:22 np0005465987 python3.9[107396]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:40:23 np0005465987 python3.9[107548]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:40:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:23.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:40:24 np0005465987 python3.9[107700]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  2 07:40:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:24.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:24 np0005465987 python3.9[107852]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  2 07:40:25 np0005465987 systemd[1]: session-40.scope: Deactivated successfully.
Oct  2 07:40:25 np0005465987 systemd[1]: session-40.scope: Consumed 27.943s CPU time.
Oct  2 07:40:25 np0005465987 systemd-logind[794]: Session 40 logged out. Waiting for processes to exit.
Oct  2 07:40:25 np0005465987 systemd-logind[794]: Removed session 40.
Oct  2 07:40:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:25.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:26.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:27.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:40:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:28.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:29.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:40:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:30.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:40:31 np0005465987 systemd-logind[794]: New session 41 of user zuul.
Oct  2 07:40:31 np0005465987 systemd[1]: Started Session 41 of User zuul.
Oct  2 07:40:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:31.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:32 np0005465987 python3.9[108032]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  2 07:40:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:32.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:40:32 np0005465987 python3.9[108184]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:40:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:33.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:33 np0005465987 python3.9[108338]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct  2 07:40:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:34.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:34 np0005465987 python3.9[108490]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.e0xqry4x follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:35 np0005465987 python3.9[108615]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.e0xqry4x mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405234.1489065-108-10241542547664/.source.e0xqry4x _original_basename=.intufk49 follow=False checksum=279e85be4c03ef167cd5d1085e11bbf46760f3f2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:40:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:35.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:40:36 np0005465987 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 07:40:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:36.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:36 np0005465987 python3.9[108767]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:40:37 np0005465987 python3.9[108921]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFg5rufAFy1itLjBBGlAJUDsQsaZUavZeI3stNJBLolkBBMB4sBpwAvQFbu2iUhtVavUC7q9xD2LsX0DVBu9DCaQn6tETqUUvMQqzvmaXd34gwo5fH6vo+bjqVdZEih0pIVI1O2OfOUvnv2MFLdKx8MWLQd54beGjWQsC3xCnYVuh0W/aAQtRC2EA77nBo+r40u5V3HXOhdmUbFNvL0r6I8FwP4IvbKC5jkBTtqIzewh+/cyJrURCh0aCpeUjBqNqw3ADhtuR2h5n3ioq+IwPXbhHViJUWQyJ5XKmlSzupEEYA+RV8i1Y3eHJK2RuYlCXkpRP3MEsyBxmISTPhVdQwfxClvyi/mTQkl6k5XFGyZher7KbE6lx4qzp8iCOyOWkw32N3tG0AlnOtPI5HJw8uKbwWl2Apb7RncDQ5fpNOKNFcB1sg61g2Vvew7xJs62OxhkOTiSkEEUYoFfXAqNLiH8gC0+Go12qYleZKbfzL00BDT2boQ2UxYn2rWK7YifU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB29M+5Yr1BRNmm2RoLe921umFtraZRFTbdptrBdgsAV#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAv7vUjfSNyE5eIqsBh1jfLF/N1YKOXT7KtCRIxAQ1i9+ljB9j4j/dQgL6TGk3m+hQRPyAVxTDwUpeBxHWIpFjU=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ry6wCZyHJmZsI4Z83U5DYzCaQhL5JmDbykEZokepEcnLFJt20bbnTU0eQzXJylkCgp7rhmpZo7V7qVNnZUMI3aLUHK30Yr5jzQVofHBRg6ZnAIq1MAwqwGH1s6vfNo+//zth4OMHvolMSEO6zSmOWeAsuHM2DTEJ6IdRasKfhOCc3oI/Tcf5vOUyVGg/BH+fFOHKzPiyJNXozsvw2u4ppfdkMJvVC9w2oTNHMIGcDxSsx0zD2bLdYe5l23tFIOaBM149ktg5KPPsKYyQFymOi5qJHHnf9027MqZ4N7Z9SYuQrqt2nY4C/XmaVFOmUIFNNMZ5qMWDsc38V9cHCgurSaMsQ4em1srXr9nzADLh9bw4WksIRfrtt3twMp7FG9fMsw8rdmFt0+4/IdHr/3wCmHeF07qp10kJPXa5z9dApoIKiQlbIl+UCzlaN5tHD6vb4q0MyhqAtU4mSA1zGz67c2lLSGbF4FTgU9yza15FZjHzQ0ArNu/1KIheA8nrpkE=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHwAEaDivXDvJkCgJw1MVhYQArg6qfdDb4SKBZRPdoOc#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNMghqQyWdigdn5yyuBSIQ3tHLq/tZwQO222aoRtckuDI9Ml6snE/xKJ7YWmTvRTsqj2tqCqXIllFFfreYY7Apw=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpTq9G8ymc65djWd0YMUA1/KMKQbBxw7LoyOCyAnPotUx6UyYfBtjYX5I4TzqzEugao1w+4AHDZ5XKSwr8sv9kaSGm0ERmNxz22+5cmKwWxcvUfNGQQXbk6gk6z5p0qpH/Ue9e19xDUC+RDUMGcwrysoGQ05aVcGDaEmNUxvYjj0UUfs45KX/pHPk5xQ4c0WjiL0BfzPJmphY2PAj6O9b4iFA3HjIJgvQ3+i3jEOkvA1FsXm5s7O1/wEjqwsdfKPlX0LUuCqXyxI4uhWY16Ofi89lEtsdQRwFyoZcDMJUDHMH8oJSopUNwwMEe7UBD1MHJSIzrd6NUGnvRjhqH6dE/IoT2X3f4JN/Six+J9ayDqiIkd1QNsJzPBr6G2Lj/dQbUusb3nXhPk5TXKMOXm5i+J940nYQv8/Y9rf2H1qltGaDEOS95ktKpcL6EVplOsQand/Qmb/ShKbiAo2dr3YC3v/FFE2AAj+0Dnh4xob14bhivkYHDhIF0zyzcVGhHZXc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICufzWCrq7lQCIqxq8UNP+WfGRQD+uOEPLr+ZneqofrM#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHP494uEOdMq07v1W25s7bKFki4bQuHkde7xWzYJuUT44SD4tSCrPbQiOkLCqtg9H5yxKL0Ovnl22PYLf1HMKAs=#012 create=True mode=0644 path=/tmp/ansible.e0xqry4x state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:37.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:40:38 np0005465987 python3.9[109073]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.e0xqry4x' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:40:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:38.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:39 np0005465987 python3.9[109227]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.e0xqry4x state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.657261) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405239657292, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 616, "num_deletes": 251, "total_data_size": 1040534, "memory_usage": 1051784, "flush_reason": "Manual Compaction"}
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405239661831, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 503823, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9727, "largest_seqno": 10337, "table_properties": {"data_size": 500929, "index_size": 866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7335, "raw_average_key_size": 19, "raw_value_size": 494999, "raw_average_value_size": 1334, "num_data_blocks": 37, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405202, "oldest_key_time": 1759405202, "file_creation_time": 1759405239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 4642 microseconds, and 1768 cpu microseconds.
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.661901) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 503823 bytes OK
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.661922) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.663389) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.663409) EVENT_LOG_v1 {"time_micros": 1759405239663403, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.663428) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1037075, prev total WAL file size 1037075, number of live WAL files 2.
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.664092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(492KB)], [18(9621KB)]
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405239664126, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10355925, "oldest_snapshot_seqno": -1}
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3658 keys, 7502407 bytes, temperature: kUnknown
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405239715637, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 7502407, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7475152, "index_size": 17033, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9157, "raw_key_size": 89039, "raw_average_key_size": 24, "raw_value_size": 7406146, "raw_average_value_size": 2024, "num_data_blocks": 747, "num_entries": 3658, "num_filter_entries": 3658, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759405239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.716011) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7502407 bytes
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.717515) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.5 rd, 145.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.4 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(35.4) write-amplify(14.9) OK, records in: 4163, records dropped: 505 output_compression: NoCompression
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.717547) EVENT_LOG_v1 {"time_micros": 1759405239717532, "job": 8, "event": "compaction_finished", "compaction_time_micros": 51652, "compaction_time_cpu_micros": 18126, "output_level": 6, "num_output_files": 1, "total_output_size": 7502407, "num_input_records": 4163, "num_output_records": 3658, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405239717892, "job": 8, "event": "table_file_deletion", "file_number": 20}
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405239721637, "job": 8, "event": "table_file_deletion", "file_number": 18}
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.664025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.721696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.721706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.721805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.721808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:40:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:40:39.721810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:40:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:39.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:40 np0005465987 systemd[1]: session-41.scope: Deactivated successfully.
Oct  2 07:40:40 np0005465987 systemd[1]: session-41.scope: Consumed 5.200s CPU time.
Oct  2 07:40:40 np0005465987 systemd-logind[794]: Session 41 logged out. Waiting for processes to exit.
Oct  2 07:40:40 np0005465987 systemd-logind[794]: Removed session 41.
Oct  2 07:40:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:40.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:41.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:40:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:42.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:40:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:40:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:43.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:44.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:45 np0005465987 systemd-logind[794]: New session 42 of user zuul.
Oct  2 07:40:45 np0005465987 systemd[1]: Started Session 42 of User zuul.
Oct  2 07:40:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:45.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:46 np0005465987 python3.9[109406]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:40:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:46.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:47 np0005465987 python3.9[109562]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  2 07:40:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:47.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:40:48 np0005465987 python3.9[109716]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:40:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:48.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:49 np0005465987 python3.9[109869]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:40:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:40:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:49.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:40:50 np0005465987 python3.9[110022]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:40:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:40:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:50.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:40:51 np0005465987 python3.9[110174]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:51 np0005465987 systemd[1]: session-42.scope: Deactivated successfully.
Oct  2 07:40:51 np0005465987 systemd[1]: session-42.scope: Consumed 3.818s CPU time.
Oct  2 07:40:51 np0005465987 systemd-logind[794]: Session 42 logged out. Waiting for processes to exit.
Oct  2 07:40:51 np0005465987 systemd-logind[794]: Removed session 42.
Oct  2 07:40:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:51.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:52.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:40:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:40:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:53.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:40:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:54.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:55.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:55 np0005465987 systemd-logind[794]: New session 43 of user zuul.
Oct  2 07:40:55 np0005465987 systemd[1]: Started Session 43 of User zuul.
Oct  2 07:40:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:56.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:56 np0005465987 python3.9[110352]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:40:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:40:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:57.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:40:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:40:57 np0005465987 python3.9[110508]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:40:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:40:58.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:40:58 np0005465987 python3.9[110592]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 07:40:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:40:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:40:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:40:59.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:00.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:01 np0005465987 python3.9[110743]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:41:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:01.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:02 np0005465987 python3.9[110894]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 07:41:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:02.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:41:03 np0005465987 python3.9[111044]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:41:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:03.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:03 np0005465987 python3.9[111194]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:41:04 np0005465987 systemd[1]: session-43.scope: Deactivated successfully.
Oct  2 07:41:04 np0005465987 systemd[1]: session-43.scope: Consumed 5.505s CPU time.
Oct  2 07:41:04 np0005465987 systemd-logind[794]: Session 43 logged out. Waiting for processes to exit.
Oct  2 07:41:04 np0005465987 systemd-logind[794]: Removed session 43.
Oct  2 07:41:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:41:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:04.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:41:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:05.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:41:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:06.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:41:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:41:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:41:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:07.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:41:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:08.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:09 np0005465987 systemd-logind[794]: New session 44 of user zuul.
Oct  2 07:41:09 np0005465987 systemd[1]: Started Session 44 of User zuul.
Oct  2 07:41:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:09.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:10 np0005465987 python3.9[111372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:41:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:41:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:10.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:41:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:11.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:12 np0005465987 python3.9[111528]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:12.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:12 np0005465987 python3.9[111680]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:41:13 np0005465987 python3.9[111832]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:13.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:14 np0005465987 python3.9[111955]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405273.0900512-162-68342568285376/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=10d14e99cbf8373b46e0c6e478297ac786bfec44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:41:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:14.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:41:15 np0005465987 python3.9[112107]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:15 np0005465987 python3.9[112230]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405274.4961314-162-144937074618611/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=7abde3fe8c59ace3eb9cb99b75c7e56e5fa913ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:15.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:16 np0005465987 python3.9[112382]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:41:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:16.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:41:16 np0005465987 python3.9[112505]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405275.7510152-162-265531108300609/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=cc25c64052d1d5a3398d05f1e6c3f94f82d7cb26 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:17 np0005465987 python3.9[112657]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:41:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:17.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:18 np0005465987 python3.9[112809]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:18.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:18 np0005465987 python3.9[112961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:19 np0005465987 python3.9[113084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405278.4022071-341-182550109445545/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=4f45699ca780e1ce7a4eb5967664fcff3899fd06 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:19.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:20 np0005465987 python3.9[113236]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:41:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:20.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:41:20 np0005465987 python3.9[113359]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405279.7141778-341-17947553122409/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=c2cb2163d8d8d6ce387c5934cff1fd20c9d087eb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:21 np0005465987 python3.9[113511]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:41:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:21.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:41:22 np0005465987 python3.9[113675]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405280.987741-341-149248128057874/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=394726f43b6cce91056a5292d3b7fd6af87cbb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:41:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:41:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:41:22 np0005465987 python3.9[113917]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:22.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:41:23 np0005465987 python3.9[114069]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:23.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:23 np0005465987 python3.9[114221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:24 np0005465987 python3.9[114344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405283.463333-513-242986780345953/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=c633bfd16210751b883e8558aa78abc68914fbbf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:24.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:25 np0005465987 python3.9[114496]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:25 np0005465987 python3.9[114619]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405284.6313796-513-249002209706431/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=c2cb2163d8d8d6ce387c5934cff1fd20c9d087eb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:25.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:26 np0005465987 python3.9[114771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:26 np0005465987 python3.9[114894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405285.679874-513-242353177376646/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=29aed63f41c512d5bbc64b5a923bf4a3cff00195 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:26.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:27 np0005465987 python3.9[115046]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:41:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:27.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:28 np0005465987 python3.9[115198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:28.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:29 np0005465987 python3.9[115321]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405287.9860961-695-95900871707156/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:29 np0005465987 python3.9[115473]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:29.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:30 np0005465987 python3.9[115625]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:30.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:30 np0005465987 python3.9[115798]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405289.8983305-772-189306481624818/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:41:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:41:31 np0005465987 python3.9[115950]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:41:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:31.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:41:32 np0005465987 python3.9[116102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:32.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:32 np0005465987 python3.9[116225]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405291.7267947-841-176779407284701/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:41:33 np0005465987 python3.9[116377]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:33.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:34 np0005465987 python3.9[116529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:34 np0005465987 python3.9[116652]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405293.6583009-910-59356283992950/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:41:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:34.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:41:35 np0005465987 python3.9[116804]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:35.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:36 np0005465987 python3.9[116956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:36.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:36 np0005465987 python3.9[117079]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405295.7758124-980-220421351293389/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:37 np0005465987 python3.9[117231]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:41:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:38 np0005465987 python3.9[117383]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:38.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:38 np0005465987 python3.9[117506]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405297.818336-1058-221740729764015/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=131465f77d8d4faa1442e1beada7324e1814ff9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:39.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:40.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:41 np0005465987 systemd[1]: session-44.scope: Deactivated successfully.
Oct  2 07:41:41 np0005465987 systemd[1]: session-44.scope: Consumed 22.735s CPU time.
Oct  2 07:41:41 np0005465987 systemd-logind[794]: Session 44 logged out. Waiting for processes to exit.
Oct  2 07:41:41 np0005465987 systemd-logind[794]: Removed session 44.
Oct  2 07:41:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:41:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:41.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:41:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:42.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:41:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:43.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:41:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:44.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:41:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:45.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:46.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:41:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:48.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:49 np0005465987 systemd-logind[794]: New session 45 of user zuul.
Oct  2 07:41:49 np0005465987 systemd[1]: Started Session 45 of User zuul.
Oct  2 07:41:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:49 np0005465987 python3.9[117686]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:50.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:50 np0005465987 python3.9[117838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:51 np0005465987 python3.9[117961]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405310.259486-67-381637396125/.source.conf _original_basename=ceph.conf follow=False checksum=a063547ed46c9b567daa2ad5bf469f0aff0b35ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:41:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:51.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:41:52 np0005465987 python3.9[118113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:52 np0005465987 python3.9[118236]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405311.720378-67-272004457583484/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=879f4ae20801e566b8dfcda89b2df304e135843d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:52.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:41:53 np0005465987 systemd[1]: session-45.scope: Deactivated successfully.
Oct  2 07:41:53 np0005465987 systemd[1]: session-45.scope: Consumed 2.550s CPU time.
Oct  2 07:41:53 np0005465987 systemd-logind[794]: Session 45 logged out. Waiting for processes to exit.
Oct  2 07:41:53 np0005465987 systemd-logind[794]: Removed session 45.
Oct  2 07:41:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:53.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:54.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:41:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:56.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:41:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:41:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:56.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:41:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:41:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:41:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:41:58.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:41:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:41:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:41:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:41:58.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:41:59 np0005465987 systemd-logind[794]: New session 46 of user zuul.
Oct  2 07:41:59 np0005465987 systemd[1]: Started Session 46 of User zuul.
Oct  2 07:42:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:00.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:00 np0005465987 python3.9[118414]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:42:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:00.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:01 np0005465987 python3.9[118570]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:42:01 np0005465987 python3.9[118722]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:42:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:02.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:02.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:02 np0005465987 python3.9[118872]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:42:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:42:03 np0005465987 python3.9[119024]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  2 07:42:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:04.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:04.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:05 np0005465987 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct  2 07:42:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:06.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:06 np0005465987 python3.9[119180]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:42:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:06.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:07 np0005465987 python3.9[119264]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:42:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:42:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:08.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:08.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:09 np0005465987 python3.9[119417]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:42:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:10.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:10 np0005465987 python3[119572]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct  2 07:42:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:10.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:11 np0005465987 python3.9[119724]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:12.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:12 np0005465987 python3.9[119876]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:12 np0005465987 python3.9[119954]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:12.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:42:13 np0005465987 python3.9[120106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:13 np0005465987 python3.9[120184]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.3h69lo6t recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:14.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:14 np0005465987 python3.9[120336]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:14.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:14 np0005465987 python3.9[120414]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:16.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:16 np0005465987 python3.9[120566]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:16.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:17 np0005465987 python3[120719]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 07:42:17 np0005465987 python3.9[120871]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:42:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:18.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:18 np0005465987 python3.9[120996]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405337.2954426-437-56213484180563/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:18.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:19 np0005465987 python3.9[121148]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:19 np0005465987 python3.9[121273]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405338.8279145-482-264913809762852/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:20.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:20.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:20 np0005465987 python3.9[121425]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:21 np0005465987 python3.9[121550]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405340.3896496-527-111829070485909/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:22.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:22 np0005465987 python3.9[121702]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:22.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:42:22 np0005465987 python3.9[121827]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405341.8888953-573-105476741951940/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:23 np0005465987 python3.9[121979]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:24.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:24 np0005465987 python3.9[122104]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405343.2546463-617-272480281265242/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:24.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:25 np0005465987 python3.9[122256]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:25 np0005465987 python3.9[122408]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:26.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:26.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:26 np0005465987 python3.9[122563]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:27 np0005465987 python3.9[122715]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:42:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:28.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:28 np0005465987 python3.9[122868]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:42:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:28.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:29 np0005465987 python3.9[123022]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:30.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:30 np0005465987 python3.9[123177]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:30.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:31 np0005465987 python3.9[123448]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:42:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:42:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:42:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:42:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:42:32 np0005465987 python3.9[123723]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:32 np0005465987 ovs-vsctl[123731]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct  2 07:42:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:32.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:42:33 np0005465987 python3.9[123883]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:42:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:42:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:42:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:34.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:34 np0005465987 python3.9[124038]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:34 np0005465987 ovs-vsctl[124039]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct  2 07:42:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:34.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:34 np0005465987 python3.9[124189]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:42:35 np0005465987 python3.9[124343]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:42:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:36.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:36 np0005465987 python3.9[124495]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:36.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:36 np0005465987 python3.9[124573]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:42:37 np0005465987 python3.9[124725]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:42:37 np0005465987 python3.9[124803]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:42:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:38.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:38.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:38 np0005465987 python3.9[124955]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:39 np0005465987 python3.9[125157]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:39 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:42:39 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:42:40 np0005465987 python3.9[125235]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:40.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:40 np0005465987 python3.9[125387]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:40.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:41 np0005465987 python3.9[125465]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:42.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:42 np0005465987 python3.9[125617]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:42:42 np0005465987 systemd[1]: Reloading.
Oct  2 07:42:42 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:42:42 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:42:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:42.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:42:43 np0005465987 python3.9[125807]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:44 np0005465987 python3.9[125885]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:44.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:44 np0005465987 python3.9[126037]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:44.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:45 np0005465987 python3.9[126115]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:46.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:46 np0005465987 python3.9[126267]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:42:46 np0005465987 systemd[1]: Reloading.
Oct  2 07:42:46 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:42:46 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:42:46 np0005465987 systemd[1]: Starting Create netns directory...
Oct  2 07:42:46 np0005465987 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:42:46 np0005465987 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:42:46 np0005465987 systemd[1]: Finished Create netns directory.
Oct  2 07:42:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:46.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:47 np0005465987 python3.9[126461]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:42:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:42:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:48.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:48 np0005465987 python3.9[126613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:48 np0005465987 python3.9[126736]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405367.789942-1370-75539150355904/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:42:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:48.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:49 np0005465987 python3.9[126888]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:42:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:50.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:50 np0005465987 python3.9[127040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:50.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:51 np0005465987 python3.9[127163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405370.2830486-1445-126412884706079/.source.json _original_basename=.c2xmq70x follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:52.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:52 np0005465987 python3.9[127315]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:52.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:42:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:54.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:54 np0005465987 python3.9[127742]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct  2 07:42:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:54.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:42:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 7624 writes, 31K keys, 7624 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 7624 writes, 1439 syncs, 5.30 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7624 writes, 31K keys, 7624 commit groups, 1.0 writes per commit group, ingest: 20.40 MB, 0.03 MB/s#012Interval WAL: 7624 writes, 1439 syncs, 5.30 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e08aff610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e08aff610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt
Oct  2 07:42:55 np0005465987 python3.9[127894]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 07:42:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:56.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:56 np0005465987 python3.9[128046]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 07:42:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:56.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:42:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:42:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:42:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:42:58.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:42:58 np0005465987 python3[128225]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 07:42:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:42:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:42:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:42:58.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:00.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:00.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:02.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:02 np0005465987 podman[128238]: 2025-10-02 11:43:02.62919304 +0000 UTC m=+4.291785604 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  2 07:43:02 np0005465987 podman[128356]: 2025-10-02 11:43:02.78411949 +0000 UTC m=+0.049106369 container create 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 07:43:02 np0005465987 podman[128356]: 2025-10-02 11:43:02.758322781 +0000 UTC m=+0.023309690 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  2 07:43:02 np0005465987 python3[128225]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  2 07:43:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:02.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:43:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:43:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:04.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:43:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:43:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:04.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:43:04 np0005465987 python3.9[128547]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:43:05 np0005465987 python3.9[128701]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:43:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:06.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:06 np0005465987 python3.9[128777]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:43:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:06.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:06 np0005465987 python3.9[128928]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405386.3489957-1709-265424403179306/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:43:07 np0005465987 python3.9[129004]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:43:07 np0005465987 systemd[1]: Reloading.
Oct  2 07:43:07 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:43:07 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:43:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:43:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:08.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:08 np0005465987 python3.9[129116]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:43:08 np0005465987 systemd[1]: Reloading.
Oct  2 07:43:08 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:43:08 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:43:08 np0005465987 systemd[1]: Starting ovn_controller container...
Oct  2 07:43:08 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:43:08 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/977c416c1ac866686e04661c260f70a9de4075024f6ab6f01cde5f981a24984f/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  2 07:43:08 np0005465987 systemd[1]: Started /usr/bin/podman healthcheck run 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517.
Oct  2 07:43:08 np0005465987 podman[129157]: 2025-10-02 11:43:08.85133284 +0000 UTC m=+0.158488322 container init 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct  2 07:43:08 np0005465987 ovn_controller[129172]: + sudo -E kolla_set_configs
Oct  2 07:43:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:08.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:08 np0005465987 podman[129157]: 2025-10-02 11:43:08.892792893 +0000 UTC m=+0.199948315 container start 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:43:08 np0005465987 edpm-start-podman-container[129157]: ovn_controller
Oct  2 07:43:08 np0005465987 systemd[1]: Created slice User Slice of UID 0.
Oct  2 07:43:08 np0005465987 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  2 07:43:08 np0005465987 edpm-start-podman-container[129156]: Creating additional drop-in dependency for "ovn_controller" (110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517)
Oct  2 07:43:08 np0005465987 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  2 07:43:08 np0005465987 systemd[1]: Starting User Manager for UID 0...
Oct  2 07:43:08 np0005465987 systemd[1]: Reloading.
Oct  2 07:43:09 np0005465987 podman[129179]: 2025-10-02 11:43:09.010595093 +0000 UTC m=+0.096486589 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 07:43:09 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:43:09 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:43:09 np0005465987 systemd[1]: Started ovn_controller container.
Oct  2 07:43:09 np0005465987 systemd[1]: 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517-67fa26d3c6c0f551.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 07:43:09 np0005465987 systemd[1]: 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517-67fa26d3c6c0f551.service: Failed with result 'exit-code'.
Oct  2 07:43:09 np0005465987 systemd[129212]: Queued start job for default target Main User Target.
Oct  2 07:43:09 np0005465987 systemd[129212]: Created slice User Application Slice.
Oct  2 07:43:09 np0005465987 systemd[129212]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  2 07:43:09 np0005465987 systemd[129212]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 07:43:09 np0005465987 systemd[129212]: Reached target Paths.
Oct  2 07:43:09 np0005465987 systemd[129212]: Reached target Timers.
Oct  2 07:43:09 np0005465987 systemd[129212]: Starting D-Bus User Message Bus Socket...
Oct  2 07:43:09 np0005465987 systemd[129212]: Starting Create User's Volatile Files and Directories...
Oct  2 07:43:09 np0005465987 systemd[129212]: Listening on D-Bus User Message Bus Socket.
Oct  2 07:43:09 np0005465987 systemd[129212]: Finished Create User's Volatile Files and Directories.
Oct  2 07:43:09 np0005465987 systemd[129212]: Reached target Sockets.
Oct  2 07:43:09 np0005465987 systemd[129212]: Reached target Basic System.
Oct  2 07:43:09 np0005465987 systemd[129212]: Reached target Main User Target.
Oct  2 07:43:09 np0005465987 systemd[129212]: Startup finished in 121ms.
Oct  2 07:43:09 np0005465987 systemd[1]: Started User Manager for UID 0.
Oct  2 07:43:09 np0005465987 systemd[1]: Started Session c1 of User root.
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: INFO:__main__:Validating config file
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: INFO:__main__:Writing out command to execute
Oct  2 07:43:09 np0005465987 systemd[1]: session-c1.scope: Deactivated successfully.
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: ++ cat /run_command
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: + ARGS=
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: + sudo kolla_copy_cacerts
Oct  2 07:43:09 np0005465987 systemd[1]: Started Session c2 of User root.
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: + [[ ! -n '' ]]
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: + . kolla_extend_start
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: + umask 0022
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct  2 07:43:09 np0005465987 systemd[1]: session-c2.scope: Deactivated successfully.
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct  2 07:43:09 np0005465987 NetworkManager[44910]: <info>  [1759405389.5146] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct  2 07:43:09 np0005465987 NetworkManager[44910]: <info>  [1759405389.5158] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:43:09 np0005465987 NetworkManager[44910]: <info>  [1759405389.5176] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct  2 07:43:09 np0005465987 NetworkManager[44910]: <info>  [1759405389.5185] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct  2 07:43:09 np0005465987 NetworkManager[44910]: <info>  [1759405389.5191] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  2 07:43:09 np0005465987 kernel: br-int: entered promiscuous mode
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00024|main|INFO|OVS feature set changed, force recompute.
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 07:43:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:09Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 07:43:09 np0005465987 NetworkManager[44910]: <info>  [1759405389.5437] manager: (ovn-672825-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct  2 07:43:09 np0005465987 systemd-udevd[129307]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:43:09 np0005465987 kernel: genev_sys_6081: entered promiscuous mode
Oct  2 07:43:09 np0005465987 systemd-udevd[129309]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:43:09 np0005465987 NetworkManager[44910]: <info>  [1759405389.5607] device (genev_sys_6081): carrier: link connected
Oct  2 07:43:09 np0005465987 NetworkManager[44910]: <info>  [1759405389.5612] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct  2 07:43:09 np0005465987 NetworkManager[44910]: <info>  [1759405389.6202] manager: (ovn-6718a9-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Oct  2 07:43:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:10.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:10 np0005465987 NetworkManager[44910]: <info>  [1759405390.2546] manager: (ovn-900289-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Oct  2 07:43:10 np0005465987 python3.9[129439]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:43:10 np0005465987 ovs-vsctl[129440]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct  2 07:43:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:10.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:11 np0005465987 python3.9[129592]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:43:11 np0005465987 ovs-vsctl[129594]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct  2 07:43:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:12.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:12 np0005465987 python3.9[129747]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:43:12 np0005465987 ovs-vsctl[129748]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct  2 07:43:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:43:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:12.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:43:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:43:13 np0005465987 systemd[1]: session-46.scope: Deactivated successfully.
Oct  2 07:43:13 np0005465987 systemd[1]: session-46.scope: Consumed 52.301s CPU time.
Oct  2 07:43:13 np0005465987 systemd-logind[794]: Session 46 logged out. Waiting for processes to exit.
Oct  2 07:43:13 np0005465987 systemd-logind[794]: Removed session 46.
Oct  2 07:43:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:14.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:14.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:16.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:16.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:43:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:18.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:18 np0005465987 systemd-logind[794]: New session 48 of user zuul.
Oct  2 07:43:18 np0005465987 systemd[1]: Started Session 48 of User zuul.
Oct  2 07:43:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:18.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:19 np0005465987 systemd[1]: Stopping User Manager for UID 0...
Oct  2 07:43:19 np0005465987 systemd[129212]: Activating special unit Exit the Session...
Oct  2 07:43:19 np0005465987 systemd[129212]: Stopped target Main User Target.
Oct  2 07:43:19 np0005465987 systemd[129212]: Stopped target Basic System.
Oct  2 07:43:19 np0005465987 systemd[129212]: Stopped target Paths.
Oct  2 07:43:19 np0005465987 systemd[129212]: Stopped target Sockets.
Oct  2 07:43:19 np0005465987 systemd[129212]: Stopped target Timers.
Oct  2 07:43:19 np0005465987 systemd[129212]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 07:43:19 np0005465987 systemd[129212]: Closed D-Bus User Message Bus Socket.
Oct  2 07:43:19 np0005465987 systemd[129212]: Stopped Create User's Volatile Files and Directories.
Oct  2 07:43:19 np0005465987 systemd[129212]: Removed slice User Application Slice.
Oct  2 07:43:19 np0005465987 systemd[129212]: Reached target Shutdown.
Oct  2 07:43:19 np0005465987 systemd[129212]: Finished Exit the Session.
Oct  2 07:43:19 np0005465987 systemd[129212]: Reached target Exit the Session.
Oct  2 07:43:19 np0005465987 systemd[1]: user@0.service: Deactivated successfully.
Oct  2 07:43:19 np0005465987 systemd[1]: Stopped User Manager for UID 0.
Oct  2 07:43:19 np0005465987 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  2 07:43:19 np0005465987 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  2 07:43:19 np0005465987 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  2 07:43:19 np0005465987 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  2 07:43:19 np0005465987 systemd[1]: Removed slice User Slice of UID 0.
Oct  2 07:43:19 np0005465987 python3.9[129927]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:43:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:20.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:20 np0005465987 python3.9[130084]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:20.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:21 np0005465987 python3.9[130236]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:43:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:22.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:43:22 np0005465987 python3.9[130388]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:22 np0005465987 python3.9[130540]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000056s ======
Oct  2 07:43:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:22.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Oct  2 07:43:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:43:23 np0005465987 python3.9[130692]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:24.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:24 np0005465987 python3.9[130842]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:43:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:43:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:24.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:43:25 np0005465987 python3.9[130994]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  2 07:43:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:26.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:43:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:26.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:43:27 np0005465987 python3.9[131144]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:43:27 np0005465987 python3.9[131266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405406.4479635-224-56666058804097/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:43:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:43:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:28.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:43:28 np0005465987 python3.9[131416]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:43:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:28.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:29 np0005465987 python3.9[131537]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405408.0985012-269-2921841467679/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:43:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:30.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:43:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:30.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:43:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2073 writes, 11K keys, 2073 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 2073 writes, 2073 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2073 writes, 11K keys, 2073 commit groups, 1.0 writes per commit group, ingest: 23.25 MB, 0.04 MB/s#012Interval WAL: 2073 writes, 2073 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    142.0      0.08              0.02         4    0.020       0      0       0.0       0.0#012  L6      1/0    7.15 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    178.6    151.4      0.16              0.06         3    0.053     11K   1278       0.0       0.0#012 Sum      1/0    7.15 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1    118.5    148.3      0.24              0.09         7    0.034     11K   1278       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1    119.3    149.3      0.24              0.09         6    0.040     11K   1278       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    178.6    151.4      0.16              0.06         3    0.053     11K   1278       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    145.0      0.08              0.02         3    0.026       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.011, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9644871f0#2 capacity: 308.00 MB usage: 1.01 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(51,905.22 KB,0.287014%) FilterBlock(7,40.30 KB,0.0127768%) IndexBlock(7,91.45 KB,0.0289967%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 07:43:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:32.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:32 np0005465987 python3.9[131689]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:43:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:32.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:43:33 np0005465987 python3.9[131773]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:43:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:43:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:34.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:43:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:43:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:34.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:43:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:36.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:43:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:36.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:43:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:43:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:38.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:38.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:39 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:39Z|00025|memory|INFO|16384 kB peak resident set size after 30.1 seconds
Oct  2 07:43:39 np0005465987 ovn_controller[129172]: 2025-10-02T11:43:39Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct  2 07:43:39 np0005465987 podman[131823]: 2025-10-02 11:43:39.661688723 +0000 UTC m=+0.107907292 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:43:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:43:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:40.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:43:40 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 07:43:40 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:43:40 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:43:40 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 07:43:40 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct  2 07:43:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:40.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:41 np0005465987 python3.9[132082]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:43:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:42.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:42 np0005465987 python3.9[132235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:43:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:43:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:43:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:43:42 np0005465987 python3.9[132356]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405421.6575856-380-114209890587083/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:43:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:42.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:43:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:43:43 np0005465987 python3.9[132506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:43:43 np0005465987 python3.9[132627]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405422.971329-380-5014823612193/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:43:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:44.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:43:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:43:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:44.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:43:45 np0005465987 python3.9[132777]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:43:45 np0005465987 python3.9[132898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405424.9857016-512-273480856999381/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:46.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:46 np0005465987 python3.9[133048]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:43:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:46.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:46 np0005465987 python3.9[133169]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405426.004489-512-38089999381408/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:43:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:48.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:48 np0005465987 python3.9[133319]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:43:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:49 np0005465987 python3.9[133473]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:49 np0005465987 python3.9[133625]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:43:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:50.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:50 np0005465987 python3.9[133703]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.726586) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405430726654, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1961, "num_deletes": 252, "total_data_size": 4955951, "memory_usage": 5019712, "flush_reason": "Manual Compaction"}
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405430746341, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 3253241, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10342, "largest_seqno": 12298, "table_properties": {"data_size": 3245113, "index_size": 5007, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16060, "raw_average_key_size": 19, "raw_value_size": 3228885, "raw_average_value_size": 3966, "num_data_blocks": 224, "num_entries": 814, "num_filter_entries": 814, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405240, "oldest_key_time": 1759405240, "file_creation_time": 1759405430, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 19781 microseconds, and 6675 cpu microseconds.
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.746378) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 3253241 bytes OK
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.746392) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.747725) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.747736) EVENT_LOG_v1 {"time_micros": 1759405430747732, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.747751) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4947168, prev total WAL file size 4947168, number of live WAL files 2.
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.748821) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(3176KB)], [21(7326KB)]
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405430748850, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 10755648, "oldest_snapshot_seqno": -1}
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 3950 keys, 8581937 bytes, temperature: kUnknown
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405430785752, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 8581937, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8552430, "index_size": 18517, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 95762, "raw_average_key_size": 24, "raw_value_size": 8477946, "raw_average_value_size": 2146, "num_data_blocks": 802, "num_entries": 3950, "num_filter_entries": 3950, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759405430, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.786048) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8581937 bytes
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.787112) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 290.8 rd, 232.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.2 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(5.9) write-amplify(2.6) OK, records in: 4472, records dropped: 522 output_compression: NoCompression
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.787128) EVENT_LOG_v1 {"time_micros": 1759405430787121, "job": 10, "event": "compaction_finished", "compaction_time_micros": 36990, "compaction_time_cpu_micros": 16812, "output_level": 6, "num_output_files": 1, "total_output_size": 8581937, "num_input_records": 4472, "num_output_records": 3950, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405430787680, "job": 10, "event": "table_file_deletion", "file_number": 23}
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405430788692, "job": 10, "event": "table_file_deletion", "file_number": 21}
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.748758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.788755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.788759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.788761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.788762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:43:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:43:50.788763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:43:50 np0005465987 python3.9[133855]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:43:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:43:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:50.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:43:51 np0005465987 python3.9[133933]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:43:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:52.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:52 np0005465987 python3.9[134135]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:43:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:43:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:43:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:43:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:52.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:53 np0005465987 python3.9[134287]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:43:53 np0005465987 python3.9[134365]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:43:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:54.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:54 np0005465987 python3.9[134517]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:43:54 np0005465987 python3.9[134595]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:43:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:54.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:56 np0005465987 python3.9[134747]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:43:56 np0005465987 systemd[1]: Reloading.
Oct  2 07:43:56 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:43:56 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:43:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:56.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:43:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:56.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:43:57 np0005465987 python3.9[134937]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:43:57 np0005465987 python3.9[135015]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:43:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:43:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:43:58.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:58 np0005465987 python3.9[135167]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:43:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:43:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:43:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:43:58.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:43:59 np0005465987 python3.9[135245]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:43:59 np0005465987 python3.9[135397]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:43:59 np0005465987 systemd[1]: Reloading.
Oct  2 07:44:00 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:44:00 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:44:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:00.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:00 np0005465987 systemd[1]: Starting Create netns directory...
Oct  2 07:44:00 np0005465987 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:44:00 np0005465987 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:44:00 np0005465987 systemd[1]: Finished Create netns directory.
Oct  2 07:44:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:00.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:01 np0005465987 python3.9[135589]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:44:02 np0005465987 python3.9[135741]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:44:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:02.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:02 np0005465987 python3.9[135864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405441.6908197-965-218038193549640/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:44:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:44:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:02.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:03 np0005465987 python3.9[136016]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:44:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:04.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:04 np0005465987 python3.9[136168]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:44:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:44:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:04.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:44:05 np0005465987 python3.9[136291]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405444.116723-1040-22998385104498/.source.json _original_basename=.nw_iu2pk follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:05 np0005465987 python3.9[136443]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:06.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:06.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:44:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:08.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:08 np0005465987 python3.9[136870]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct  2 07:44:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:08.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:09 np0005465987 python3.9[137022]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 07:44:09 np0005465987 podman[137047]: 2025-10-02 11:44:09.911842865 +0000 UTC m=+0.129578472 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:44:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:10.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:10 np0005465987 python3.9[137200]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 07:44:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:10.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:12 np0005465987 python3[137378]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 07:44:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:12.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:44:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:12.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:14.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:44:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:14.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:44:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:16.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:16.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:44:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:18.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:18.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:19 np0005465987 podman[137391]: 2025-10-02 11:44:19.738072273 +0000 UTC m=+7.546921340 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 07:44:19 np0005465987 podman[137509]: 2025-10-02 11:44:19.880789827 +0000 UTC m=+0.049790761 container create 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:44:19 np0005465987 podman[137509]: 2025-10-02 11:44:19.854645675 +0000 UTC m=+0.023646619 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 07:44:19 np0005465987 python3[137378]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 07:44:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:20.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:44:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:20.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:44:21 np0005465987 python3.9[137697]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:44:22 np0005465987 python3.9[137851]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:22.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:22 np0005465987 python3.9[137927]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:44:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:44:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:22.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:23 np0005465987 python3.9[138078]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405462.6997924-1304-190427306686934/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:24 np0005465987 python3.9[138154]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:44:24 np0005465987 systemd[1]: Reloading.
Oct  2 07:44:24 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:44:24 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:44:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:24.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:44:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:25.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:44:25 np0005465987 python3.9[138264]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:44:25 np0005465987 systemd[1]: Reloading.
Oct  2 07:44:25 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:44:25 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:44:25 np0005465987 systemd[1]: Starting ovn_metadata_agent container...
Oct  2 07:44:25 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:44:25 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58e87bba0097e42f374e54409d6b5fc77147b4d7cee2b1f8590526465792dd1e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct  2 07:44:25 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58e87bba0097e42f374e54409d6b5fc77147b4d7cee2b1f8590526465792dd1e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 07:44:26 np0005465987 systemd[1]: Started /usr/bin/podman healthcheck run 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916.
Oct  2 07:44:26 np0005465987 podman[138304]: 2025-10-02 11:44:26.080603299 +0000 UTC m=+0.625204157 container init 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: + sudo -E kolla_set_configs
Oct  2 07:44:26 np0005465987 podman[138304]: 2025-10-02 11:44:26.119574053 +0000 UTC m=+0.664174861 container start 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent)
Oct  2 07:44:26 np0005465987 edpm-start-podman-container[138304]: ovn_metadata_agent
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Validating config file
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Copying service configuration files
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Writing out command to execute
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Setting permission for /var/lib/neutron
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: ++ cat /run_command
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: + CMD=neutron-ovn-metadata-agent
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: + ARGS=
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: + sudo kolla_copy_cacerts
Oct  2 07:44:26 np0005465987 edpm-start-podman-container[138303]: Creating additional drop-in dependency for "ovn_metadata_agent" (04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916)
Oct  2 07:44:26 np0005465987 podman[138327]: 2025-10-02 11:44:26.194745801 +0000 UTC m=+0.063940432 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: Running command: 'neutron-ovn-metadata-agent'
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: + [[ ! -n '' ]]
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: + . kolla_extend_start
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: + umask 0022
Oct  2 07:44:26 np0005465987 ovn_metadata_agent[138320]: + exec neutron-ovn-metadata-agent
Oct  2 07:44:26 np0005465987 systemd[1]: Reloading.
Oct  2 07:44:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:26.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:26 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:44:26 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:44:26 np0005465987 systemd[1]: Started ovn_metadata_agent container.
Oct  2 07:44:26 np0005465987 systemd[1]: session-48.scope: Deactivated successfully.
Oct  2 07:44:26 np0005465987 systemd-logind[794]: Session 48 logged out. Waiting for processes to exit.
Oct  2 07:44:26 np0005465987 systemd[1]: session-48.scope: Consumed 51.658s CPU time.
Oct  2 07:44:26 np0005465987 systemd-logind[794]: Removed session 48.
Oct  2 07:44:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:27.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.870 138325 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.870 138325 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.870 138325 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.870 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.871 138325 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.871 138325 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.871 138325 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.871 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.871 138325 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.871 138325 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.871 138325 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.871 138325 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.871 138325 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.872 138325 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.872 138325 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.872 138325 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.872 138325 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.872 138325 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.872 138325 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.872 138325 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.872 138325 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.872 138325 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.873 138325 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.873 138325 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.873 138325 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.873 138325 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.873 138325 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.873 138325 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.873 138325 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.873 138325 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.873 138325 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.873 138325 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.874 138325 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.874 138325 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.874 138325 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.874 138325 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.874 138325 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.874 138325 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.874 138325 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.875 138325 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.875 138325 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.875 138325 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.875 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.875 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.875 138325 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.875 138325 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.875 138325 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.875 138325 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.875 138325 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.876 138325 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.876 138325 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.876 138325 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.876 138325 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.876 138325 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.876 138325 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.876 138325 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.876 138325 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.876 138325 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.876 138325 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.877 138325 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.877 138325 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.877 138325 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.877 138325 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.877 138325 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.877 138325 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.877 138325 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.878 138325 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.878 138325 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.878 138325 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.878 138325 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.878 138325 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.878 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.878 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.878 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.878 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.879 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.879 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.879 138325 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.879 138325 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.879 138325 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.879 138325 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.879 138325 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.879 138325 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.879 138325 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.880 138325 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.880 138325 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.880 138325 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.880 138325 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.880 138325 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.880 138325 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.880 138325 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.880 138325 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.881 138325 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.881 138325 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.881 138325 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.881 138325 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.881 138325 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.881 138325 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.881 138325 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.881 138325 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.881 138325 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.881 138325 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.881 138325 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.882 138325 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.882 138325 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.882 138325 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.882 138325 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.882 138325 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.882 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.882 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.882 138325 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.882 138325 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.883 138325 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.883 138325 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.883 138325 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.883 138325 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.883 138325 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.883 138325 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.883 138325 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.883 138325 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.883 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.884 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.884 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.884 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.884 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.884 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.884 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.884 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.884 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.884 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.885 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.885 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.885 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.885 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.885 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.885 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.885 138325 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.885 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.885 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.886 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.886 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.886 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.886 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.886 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.886 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.886 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.886 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.886 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.887 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.887 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.887 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.887 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.887 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.887 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.888 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.888 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.888 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.888 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.889 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.889 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.889 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.889 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.889 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.889 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.889 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.889 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.889 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.890 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.890 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.890 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.890 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.890 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.890 138325 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.890 138325 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.890 138325 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.890 138325 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.891 138325 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.891 138325 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.891 138325 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.891 138325 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.891 138325 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.891 138325 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.891 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.891 138325 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.891 138325 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.892 138325 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.892 138325 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.892 138325 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.892 138325 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.892 138325 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.892 138325 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.892 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.892 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.892 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.893 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.893 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.893 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.893 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.893 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.893 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.893 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.894 138325 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.894 138325 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.894 138325 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.894 138325 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.894 138325 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.894 138325 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.894 138325 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.894 138325 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.894 138325 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.894 138325 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.895 138325 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.895 138325 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.895 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.895 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.895 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.895 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.895 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.895 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.895 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.896 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.896 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.896 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.896 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.896 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.896 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.896 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.896 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.896 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.896 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.897 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.897 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.897 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.897 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.897 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.897 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.897 138325 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.897 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.898 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.898 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.898 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.898 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.898 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.898 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.898 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.898 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.898 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.899 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.899 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.899 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.899 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.899 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.899 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.899 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.899 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.899 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.900 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.900 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.900 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.900 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.900 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.900 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.900 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.900 138325 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.900 138325 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.901 138325 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.901 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.901 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.901 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.901 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.901 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.901 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.901 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.902 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.902 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.902 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.902 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.902 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.902 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.902 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.902 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.902 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.903 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.903 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.903 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.903 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.903 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.903 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.903 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.903 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.903 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.903 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.904 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.904 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.904 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.904 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.904 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.904 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.904 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.904 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.904 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.905 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.905 138325 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.905 138325 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.913 138325 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.913 138325 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.914 138325 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.914 138325 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.914 138325 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.926 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name e4c8874a-a81b-4869-98e4-aca2e3f3bf40 (UUID: e4c8874a-a81b-4869-98e4-aca2e3f3bf40) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Oct  2 07:44:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.951 138325 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.951 138325 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.951 138325 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.951 138325 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.954 138325 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.959 138325 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.964 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'e4c8874a-a81b-4869-98e4-aca2e3f3bf40'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], external_ids={}, name=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, nb_cfg_timestamp=1759405397542, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.965 138325 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f5990f78f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.965 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.966 138325 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.966 138325 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.966 138325 INFO oslo_service.service [-] Starting 1 workers#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.970 138325 DEBUG oslo_service.service [-] Started child 138432 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.973 138325 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpgwnijrc_/privsep.sock']#033[00m
Oct  2 07:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:27.974 138432 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-2006754'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Oct  2 07:44:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:28.000 138432 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  2 07:44:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:28.000 138432 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  2 07:44:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:28.000 138432 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 07:44:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:28.005 138432 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  2 07:44:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:28.013 138432 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  2 07:44:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:28.020 138432 INFO eventlet.wsgi.server [-] (138432) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Oct  2 07:44:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:28.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:28 np0005465987 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct  2 07:44:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:28.592 138325 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 07:44:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:28.593 138325 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgwnijrc_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  2 07:44:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:28.492 138437 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 07:44:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:28.498 138437 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 07:44:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:28.502 138437 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Oct  2 07:44:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:28.502 138437 INFO oslo.privsep.daemon [-] privsep daemon running as pid 138437#033[00m
Oct  2 07:44:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:28.596 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[da3559dd-e556-49bf-bbd0-17695fc12f8c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:44:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:29.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.066 138437 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.066 138437 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.066 138437 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.569 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[582c0ee1-1f1c-436f-9230-6e15107902be]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.572 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, column=external_ids, values=({'neutron:ovn-metadata-id': '2af3ae28-e3da-5f17-a799-141f28daf994'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.616 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.721 138325 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.721 138325 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.721 138325 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.721 138325 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.721 138325 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.721 138325 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.721 138325 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.722 138325 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.722 138325 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.722 138325 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.722 138325 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.722 138325 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.722 138325 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.722 138325 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.722 138325 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.723 138325 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.723 138325 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.723 138325 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.723 138325 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.723 138325 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.723 138325 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.723 138325 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.723 138325 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.723 138325 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.724 138325 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.724 138325 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.724 138325 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.724 138325 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.724 138325 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.724 138325 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.724 138325 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.724 138325 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.724 138325 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.725 138325 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.725 138325 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.725 138325 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.725 138325 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.725 138325 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.725 138325 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.725 138325 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.725 138325 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.726 138325 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.726 138325 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.726 138325 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.726 138325 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.726 138325 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.726 138325 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.726 138325 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.726 138325 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.726 138325 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.727 138325 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.727 138325 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.727 138325 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.727 138325 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.727 138325 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.727 138325 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.727 138325 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.727 138325 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.727 138325 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.727 138325 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.728 138325 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.728 138325 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.728 138325 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.728 138325 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.728 138325 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.728 138325 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.728 138325 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.728 138325 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.728 138325 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.729 138325 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.729 138325 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.729 138325 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.729 138325 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.729 138325 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.729 138325 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.729 138325 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.729 138325 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.730 138325 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.730 138325 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.730 138325 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.730 138325 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.730 138325 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.730 138325 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.730 138325 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.730 138325 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.731 138325 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.731 138325 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.731 138325 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.731 138325 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.731 138325 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.731 138325 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.731 138325 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.731 138325 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.731 138325 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.731 138325 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.732 138325 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.732 138325 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.732 138325 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.732 138325 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.732 138325 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.732 138325 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.732 138325 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.732 138325 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.732 138325 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.732 138325 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.733 138325 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.733 138325 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.733 138325 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.733 138325 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.733 138325 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.734 138325 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.734 138325 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.734 138325 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.734 138325 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.734 138325 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.734 138325 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.734 138325 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.734 138325 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.735 138325 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.735 138325 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.735 138325 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.735 138325 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.735 138325 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.735 138325 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.735 138325 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.736 138325 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.736 138325 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.736 138325 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.736 138325 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.736 138325 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.736 138325 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.736 138325 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.736 138325 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.736 138325 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.737 138325 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.737 138325 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.737 138325 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.737 138325 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.737 138325 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.737 138325 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.737 138325 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.737 138325 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.737 138325 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.738 138325 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.738 138325 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.738 138325 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.738 138325 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.738 138325 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.738 138325 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.738 138325 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.738 138325 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.739 138325 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.739 138325 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.739 138325 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.739 138325 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.739 138325 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.739 138325 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.739 138325 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.739 138325 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.739 138325 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.739 138325 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.739 138325 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.740 138325 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.740 138325 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.740 138325 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.740 138325 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.740 138325 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.740 138325 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.740 138325 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.740 138325 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.740 138325 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.740 138325 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.741 138325 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.741 138325 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.741 138325 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.741 138325 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.741 138325 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.741 138325 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.741 138325 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.741 138325 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.741 138325 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.742 138325 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.742 138325 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.742 138325 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.742 138325 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.742 138325 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.742 138325 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.742 138325 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.742 138325 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.742 138325 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.742 138325 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.743 138325 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.743 138325 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.743 138325 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.743 138325 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.743 138325 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.743 138325 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.743 138325 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.743 138325 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.743 138325 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.743 138325 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.744 138325 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.744 138325 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.744 138325 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.744 138325 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.744 138325 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.744 138325 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.744 138325 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.744 138325 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.744 138325 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.744 138325 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.745 138325 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.745 138325 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.745 138325 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.745 138325 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.745 138325 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.745 138325 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.745 138325 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.745 138325 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.745 138325 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.745 138325 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.745 138325 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.746 138325 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.746 138325 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.746 138325 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.746 138325 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.746 138325 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.746 138325 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.746 138325 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.746 138325 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.746 138325 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.747 138325 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.747 138325 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.747 138325 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.747 138325 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.747 138325 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.747 138325 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.747 138325 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.747 138325 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.747 138325 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.747 138325 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.748 138325 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.748 138325 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.748 138325 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.748 138325 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.748 138325 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.748 138325 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.748 138325 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.748 138325 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.748 138325 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.749 138325 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.749 138325 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.749 138325 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.749 138325 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.749 138325 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.749 138325 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.749 138325 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.749 138325 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.749 138325 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.749 138325 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.750 138325 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.750 138325 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.750 138325 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.750 138325 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.750 138325 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.750 138325 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.750 138325 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.750 138325 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.750 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.751 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.751 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.751 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.751 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.751 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.751 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.751 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.751 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.751 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.752 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.752 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.752 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.752 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.752 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.752 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.752 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.752 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.752 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.752 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.753 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.753 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.753 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.753 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.753 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.753 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.753 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.754 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.754 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.754 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.754 138325 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.754 138325 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.755 138325 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.755 138325 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.755 138325 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:44:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:44:29.755 138325 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct  2 07:44:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:30.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:44:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:31.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:44:32 np0005465987 systemd-logind[794]: New session 49 of user zuul.
Oct  2 07:44:32 np0005465987 systemd[1]: Started Session 49 of User zuul.
Oct  2 07:44:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:32.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:44:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:33.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:33 np0005465987 python3.9[138595]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:44:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:34.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:34 np0005465987 python3.9[138751]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:44:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:44:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:35.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:44:35 np0005465987 python3.9[138916]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:44:35 np0005465987 systemd[1]: Reloading.
Oct  2 07:44:35 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:44:35 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:44:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:36.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:36 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct  2 07:44:36 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct  2 07:44:36 np0005465987 python3.9[139101]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:44:36 np0005465987 network[139118]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:44:36 np0005465987 network[139119]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:44:36 np0005465987 network[139120]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:44:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:37.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:37 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct  2 07:44:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:44:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:44:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:38.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:44:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:39.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:40.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:40 np0005465987 podman[139258]: 2025-10-02 11:44:40.867524175 +0000 UTC m=+0.085547756 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 07:44:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:44:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:41.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:44:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:42.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:44:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:44:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:43.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:44:43 np0005465987 python3.9[139412]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:44:43 np0005465987 python3.9[139565]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:44:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:44.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:44 np0005465987 python3.9[139718]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:44:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:45.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:45 np0005465987 python3.9[139871]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:44:45 np0005465987 python3.9[140024]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:44:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:46.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:46 np0005465987 python3.9[140177]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:44:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:44:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:47.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:44:47 np0005465987 python3.9[140330]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:44:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:44:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:48.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:49.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:49 np0005465987 python3.9[140483]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:49 np0005465987 python3.9[140635]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:50.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:50 np0005465987 python3.9[140787]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:51 np0005465987 python3.9[140939]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:51.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:51 np0005465987 python3.9[141091]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:52.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:52 np0005465987 python3.9[141243]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:53.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:53 np0005465987 python3.9[141508]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:44:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:54.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:54 np0005465987 python3.9[141679]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:54 np0005465987 python3.9[141831]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:44:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:55.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:44:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 07:44:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:44:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:44:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:44:55 np0005465987 python3.9[141983]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:56 np0005465987 python3.9[142135]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:56.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:56 np0005465987 podman[142259]: 2025-10-02 11:44:56.614431273 +0000 UTC m=+0.060200180 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:44:56 np0005465987 python3.9[142304]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:44:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:57.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:44:57 np0005465987 python3.9[142458]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:57 np0005465987 python3.9[142610]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:44:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:44:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:44:58.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:44:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:44:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:44:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:44:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:44:59.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:44:59 np0005465987 python3.9[142762]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:45:00 np0005465987 python3.9[142914]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 07:45:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:00.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000058s ======
Oct  2 07:45:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:01.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Oct  2 07:45:01 np0005465987 python3.9[143116]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:45:01 np0005465987 systemd[1]: Reloading.
Oct  2 07:45:01 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:45:01 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:45:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:45:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:45:02 np0005465987 python3.9[143302]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:45:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:02.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:02 np0005465987 python3.9[143455]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:45:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:45:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:03.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:45:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:45:03 np0005465987 python3.9[143608]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:45:04 np0005465987 python3.9[143761]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:45:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:04.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:04 np0005465987 python3.9[143914]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:45:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:05.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:05 np0005465987 python3.9[144067]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:45:06 np0005465987 python3.9[144220]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:45:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:06.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:07.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:07 np0005465987 python3.9[144373]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct  2 07:45:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:08.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:45:08 np0005465987 python3.9[144526]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:45:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:09.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:09 np0005465987 python3.9[144684]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 07:45:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:45:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:10.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:45:10 np0005465987 python3.9[144844]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:45:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:11.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:11 np0005465987 podman[144900]: 2025-10-02 11:45:11.69649947 +0000 UTC m=+0.120174542 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:45:11 np0005465987 python3.9[144949]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:45:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:12.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:13.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:45:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:14.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:45:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:15.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:45:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:16.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:45:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:17.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:45:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:18.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:45:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:19.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:45:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:20.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:45:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:21.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:22.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:23.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:45:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:45:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:24.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:45:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:25.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:26.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:26 np0005465987 podman[145145]: 2025-10-02 11:45:26.82635998 +0000 UTC m=+0.048207921 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:45:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:27.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:45:27.907 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:45:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:45:27.907 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:45:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:45:27.907 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:45:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:28.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:45:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:29.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:45:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:30.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:45:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:31.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:32.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:33.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:45:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:34.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000029s ======
Oct  2 07:45:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:35.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct  2 07:45:35 np0005465987 kernel: SELinux:  Converting 2766 SID table entries...
Oct  2 07:45:35 np0005465987 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:45:35 np0005465987 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:45:35 np0005465987 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:45:35 np0005465987 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:45:35 np0005465987 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:45:35 np0005465987 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:45:35 np0005465987 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:45:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:36.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:37.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:38.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:45:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:39.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:40.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:41.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:41 np0005465987 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct  2 07:45:41 np0005465987 podman[145175]: 2025-10-02 11:45:41.908551591 +0000 UTC m=+0.120992355 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 07:45:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:42.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:43.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:45:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:44.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:44 np0005465987 kernel: SELinux:  Converting 2766 SID table entries...
Oct  2 07:45:45 np0005465987 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:45:45 np0005465987 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:45:45 np0005465987 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:45:45 np0005465987 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:45:45 np0005465987 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:45:45 np0005465987 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:45:45 np0005465987 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:45:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:45.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:46.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:47.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:48.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:45:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:49.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:50.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:45:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:51.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:45:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:52.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:53.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:45:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:54.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:45:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:55.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:45:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:56.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:45:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:57.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:45:57 np0005465987 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct  2 07:45:57 np0005465987 podman[145849]: 2025-10-02 11:45:57.859142255 +0000 UTC m=+0.059784589 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 07:45:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:45:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:45:58.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:45:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:45:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:45:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:45:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:45:59.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:46:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:00.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:01.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:46:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:46:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:46:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:02.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:03.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:46:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:04.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:46:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:05.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:06.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:07.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:08.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:09.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:10.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:11.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:46:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:46:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:12.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:12 np0005465987 podman[156173]: 2025-10-02 11:46:12.892118588 +0000 UTC m=+0.112421626 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 07:46:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:13.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:46:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:14.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:46:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:15.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:16.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:17.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.016000440s ======
Oct  2 07:46:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:18.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.016000440s
Oct  2 07:46:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:19.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:20.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:21.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:22.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:23.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:24.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:25.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:26.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:27.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:46:27.908 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:46:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:46:27.909 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:46:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:46:27.909 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:46:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:28.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:28 np0005465987 podman[162188]: 2025-10-02 11:46:28.840961243 +0000 UTC m=+0.057989369 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 07:46:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:29.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:30.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:31.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:32.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:33.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:34.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:35.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:35 np0005465987 kernel: SELinux:  Converting 2767 SID table entries...
Oct  2 07:46:35 np0005465987 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:46:35 np0005465987 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:46:35 np0005465987 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:46:35 np0005465987 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:46:35 np0005465987 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:46:35 np0005465987 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:46:35 np0005465987 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:46:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:36.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:36 np0005465987 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct  2 07:46:36 np0005465987 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct  2 07:46:36 np0005465987 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct  2 07:46:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:37.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:38.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:39.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:40.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:41.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:42.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:43.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:43 np0005465987 podman[162459]: 2025-10-02 11:46:43.258503432 +0000 UTC m=+0.083491587 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 07:46:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:44 np0005465987 systemd[1]: Stopping OpenSSH server daemon...
Oct  2 07:46:44 np0005465987 systemd[1]: sshd.service: Deactivated successfully.
Oct  2 07:46:44 np0005465987 systemd[1]: Stopped OpenSSH server daemon.
Oct  2 07:46:44 np0005465987 systemd[1]: sshd.service: Consumed 1.955s CPU time, read 0B from disk, written 12.0K to disk.
Oct  2 07:46:44 np0005465987 systemd[1]: Stopped target sshd-keygen.target.
Oct  2 07:46:44 np0005465987 systemd[1]: Stopping sshd-keygen.target...
Oct  2 07:46:44 np0005465987 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 07:46:44 np0005465987 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 07:46:44 np0005465987 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 07:46:44 np0005465987 systemd[1]: Reached target sshd-keygen.target.
Oct  2 07:46:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:44.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:44 np0005465987 systemd[1]: Starting OpenSSH server daemon...
Oct  2 07:46:44 np0005465987 systemd[1]: Started OpenSSH server daemon.
Oct  2 07:46:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:45.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:46 np0005465987 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:46:46 np0005465987 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:46:46 np0005465987 systemd[1]: Reloading.
Oct  2 07:46:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:46.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:46 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:46 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:46 np0005465987 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:46:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:47.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:48.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:46:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:49.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:46:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:50.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:50 np0005465987 systemd[1]: Starting PackageKit Daemon...
Oct  2 07:46:50 np0005465987 systemd[1]: Started PackageKit Daemon.
Oct  2 07:46:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:51.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:52.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:53.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:54 np0005465987 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:46:54 np0005465987 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:46:54 np0005465987 systemd[1]: man-db-cache-update.service: Consumed 9.678s CPU time.
Oct  2 07:46:54 np0005465987 systemd[1]: run-r427ee189b009499c9b1c0c4dcb693cee.service: Deactivated successfully.
Oct  2 07:46:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:54.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:55.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:46:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:56.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:46:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:57.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:46:58.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:46:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:46:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:46:59.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:46:59 np0005465987 podman[171615]: 2025-10-02 11:46:59.844113743 +0000 UTC m=+0.058342075 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 07:47:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:00.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:01.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:02.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:03.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:04.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:05.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:06 np0005465987 python3.9[171762]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:47:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:06.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:07.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:07 np0005465987 systemd[1]: Reloading.
Oct  2 07:47:07 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:07 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:08 np0005465987 python3.9[171952]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:47:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:08.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:09.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:09 np0005465987 systemd[1]: Reloading.
Oct  2 07:47:09 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:09 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:10 np0005465987 python3.9[172142]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:47:10 np0005465987 systemd[1]: Reloading.
Oct  2 07:47:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:10.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:10 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:10 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:11.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:11 np0005465987 podman[172500]: 2025-10-02 11:47:11.415938186 +0000 UTC m=+0.062994822 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 07:47:11 np0005465987 python3.9[172467]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:47:11 np0005465987 podman[172500]: 2025-10-02 11:47:11.518210181 +0000 UTC m=+0.165266787 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:47:11 np0005465987 systemd[1]: Reloading.
Oct  2 07:47:11 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:11 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:12.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:12 np0005465987 python3.9[172810]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:12 np0005465987 systemd[1]: Reloading.
Oct  2 07:47:12 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:12 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:13.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:47:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:47:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:47:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:47:13 np0005465987 podman[173102]: 2025-10-02 11:47:13.61371322 +0000 UTC m=+0.096649055 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 07:47:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:13 np0005465987 python3.9[173148]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:13 np0005465987 systemd[1]: Reloading.
Oct  2 07:47:14 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:14 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:14 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:47:14 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:47:14 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:47:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:14.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:15 np0005465987 python3.9[173344]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:15 np0005465987 systemd[1]: Reloading.
Oct  2 07:47:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:47:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:15.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.302395) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405635302432, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2032, "num_deletes": 250, "total_data_size": 5239624, "memory_usage": 5310160, "flush_reason": "Manual Compaction"}
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405635319630, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 3431016, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12304, "largest_seqno": 14330, "table_properties": {"data_size": 3422590, "index_size": 5241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 15298, "raw_average_key_size": 18, "raw_value_size": 3406123, "raw_average_value_size": 4108, "num_data_blocks": 235, "num_entries": 829, "num_filter_entries": 829, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405431, "oldest_key_time": 1759405431, "file_creation_time": 1759405635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 17283 microseconds, and 7030 cpu microseconds.
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.319676) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 3431016 bytes OK
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.319695) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.322019) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.322033) EVENT_LOG_v1 {"time_micros": 1759405635322028, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.322049) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 5230586, prev total WAL file size 5230586, number of live WAL files 2.
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.323337) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(3350KB)], [24(8380KB)]
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405635323363, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 12012953, "oldest_snapshot_seqno": -1}
Oct  2 07:47:15 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:15 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4264 keys, 11473007 bytes, temperature: kUnknown
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405635399495, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11473007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11438763, "index_size": 22486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 103745, "raw_average_key_size": 24, "raw_value_size": 11356067, "raw_average_value_size": 2663, "num_data_blocks": 960, "num_entries": 4264, "num_filter_entries": 4264, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759405635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.400426) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11473007 bytes
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.402021) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.3 rd, 149.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.2 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(6.8) write-amplify(3.3) OK, records in: 4779, records dropped: 515 output_compression: NoCompression
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.402052) EVENT_LOG_v1 {"time_micros": 1759405635402038, "job": 12, "event": "compaction_finished", "compaction_time_micros": 76866, "compaction_time_cpu_micros": 27507, "output_level": 6, "num_output_files": 1, "total_output_size": 11473007, "num_input_records": 4779, "num_output_records": 4264, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405635403307, "job": 12, "event": "table_file_deletion", "file_number": 26}
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405635406566, "job": 12, "event": "table_file_deletion", "file_number": 24}
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.323249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.406662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.406666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.406668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.406670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:47:15 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:47:15.406671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:47:16 np0005465987 python3.9[173534]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:16.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:17 np0005465987 python3.9[173689]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:17 np0005465987 systemd[1]: Reloading.
Oct  2 07:47:17 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:17 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:17.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:18.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:18 np0005465987 python3.9[173878]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:47:19 np0005465987 systemd[1]: Reloading.
Oct  2 07:47:19 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:19 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:47:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:19.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:47:19 np0005465987 systemd[1]: Listening on libvirt proxy daemon socket.
Oct  2 07:47:19 np0005465987 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct  2 07:47:20 np0005465987 python3.9[174071]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:20.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:21.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:21 np0005465987 python3.9[174276]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:21 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:47:21 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:47:22 np0005465987 python3.9[174431]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:22.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:22 np0005465987 python3.9[174586]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:23.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:23 np0005465987 python3.9[174741]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:24 np0005465987 python3.9[174896]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:24.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:25 np0005465987 python3.9[175051]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:47:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:25.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:47:25 np0005465987 python3.9[175206]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:26.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:26 np0005465987 python3.9[175361]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:27.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:27 np0005465987 python3.9[175516]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:47:27.910 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:47:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:47:27.911 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:47:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:47:27.911 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:47:28 np0005465987 python3.9[175671]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:28.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:29 np0005465987 python3.9[175826]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:47:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:29.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:47:29 np0005465987 python3.9[175981]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:30 np0005465987 podman[175983]: 2025-10-02 11:47:30.092218934 +0000 UTC m=+0.072984472 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct  2 07:47:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:30.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:30 np0005465987 python3.9[176156]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:47:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:31.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:32.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:33.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:33 np0005465987 python3.9[176311]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:47:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:34 np0005465987 python3.9[176463]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:47:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:34.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:34 np0005465987 python3.9[176615]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:47:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:35.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:35 np0005465987 python3.9[176767]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:47:35 np0005465987 python3.9[176919]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:47:36 np0005465987 python3.9[177071]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:47:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:47:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:36.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:47:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:37.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:37 np0005465987 python3.9[177223]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:47:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:38.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:38 np0005465987 python3.9[177348]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405657.2719548-1628-86869938753497/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:39 np0005465987 python3.9[177500]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:47:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:39.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:39 np0005465987 python3.9[177625]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405658.8152695-1628-197531510772622/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:40 np0005465987 python3.9[177777]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:47:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:47:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:40.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:47:41 np0005465987 python3.9[177902]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405660.0096438-1628-130640827344868/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:41.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:41 np0005465987 python3.9[178054]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:47:42 np0005465987 python3.9[178179]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405661.245661-1628-198843334793917/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:42.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:42 np0005465987 python3.9[178331]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:47:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:43.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:43 np0005465987 python3.9[178456]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405662.4150789-1628-153379586793827/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:43 np0005465987 podman[178554]: 2025-10-02 11:47:43.925673994 +0000 UTC m=+0.130263907 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct  2 07:47:44 np0005465987 python3.9[178633]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:47:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:44.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:44 np0005465987 python3.9[178758]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405663.6317196-1628-73452425772384/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:45.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:45 np0005465987 python3.9[178910]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:47:46 np0005465987 python3.9[179033]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405665.048983-1628-253937886815488/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:46.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:47.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:47 np0005465987 python3.9[179185]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:47:48 np0005465987 python3.9[179310]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759405666.618467-1628-265265814258468/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:48.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:49.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:49 np0005465987 python3.9[179462]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct  2 07:47:50 np0005465987 ceph-mds[84089]: mds.beacon.cephfs.compute-1.zfhmgy missed beacon ack from the monitors
Oct  2 07:47:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:50.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).paxos(paxos updating c 754..1505) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 1.059288263s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Oct  2 07:47:51 np0005465987 ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-mon-compute-1[80818]: 2025-10-02T11:47:51.301+0000 7f3c0a9b4640 -1 mon.compute-1@2(peon).paxos(paxos updating c 754..1505) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 1.059288263s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Oct  2 07:47:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:51.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:47:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:52.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:47:53 np0005465987 python3.9[179615]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:53.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:54 np0005465987 python3.9[179767]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:47:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:54.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:47:54 np0005465987 python3.9[179919]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:55.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:55 np0005465987 python3.9[180071]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:56 np0005465987 python3.9[180223]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:47:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:56.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:47:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:57.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:57 np0005465987 python3.9[180375]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:58 np0005465987 python3.9[180527]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:58.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:58 np0005465987 python3.9[180679]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:47:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:47:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:59.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:59 np0005465987 python3.9[180831]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:00 np0005465987 python3.9[180983]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:00 np0005465987 podman[181107]: 2025-10-02 11:48:00.490507811 +0000 UTC m=+0.073634820 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct  2 07:48:00 np0005465987 python3.9[181154]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:00.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:01 np0005465987 python3.9[181306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:01.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:01 np0005465987 python3.9[181458]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:02 np0005465987 python3.9[181612]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:02.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:48:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:03.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:48:04 np0005465987 python3.9[181764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:04 np0005465987 python3.9[181887]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405683.6079214-2291-175951838045951/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:04.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:05 np0005465987 python3.9[182039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:05.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:05 np0005465987 python3.9[182162]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405684.7751143-2291-142208946009998/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:06 np0005465987 python3.9[182314]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:06.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:06 np0005465987 python3.9[182437]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405685.8345525-2291-180692311299118/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:07.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:07 np0005465987 python3.9[182589]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:07 np0005465987 python3.9[182712]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405686.9644747-2291-16505535173975/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:08 np0005465987 python3.9[182864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:08.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:09 np0005465987 python3.9[182987]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405688.0749233-2291-105451395281548/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:09.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:09 np0005465987 python3.9[183139]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:10 np0005465987 python3.9[183262]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405689.241612-2291-188419206895759/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:10.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:10 np0005465987 python3.9[183414]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:11.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:11 np0005465987 python3.9[183537]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405690.4328969-2291-278781229815468/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:12 np0005465987 python3.9[183689]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:12.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:12 np0005465987 python3.9[183812]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405691.733749-2291-65162220402266/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:13 np0005465987 python3.9[183964]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:13.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:13 np0005465987 python3.9[184087]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405692.8874981-2291-155524537886385/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:14 np0005465987 podman[184211]: 2025-10-02 11:48:14.338703391 +0000 UTC m=+0.087321201 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:48:14 np0005465987 python3.9[184258]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:14.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:15 np0005465987 python3.9[184387]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405694.0097797-2291-234066660234790/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:15.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:15 np0005465987 python3.9[184539]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:16 np0005465987 python3.9[184662]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405695.1566951-2291-59402338725718/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:16 np0005465987 python3.9[184814]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:48:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:16.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:48:17 np0005465987 python3.9[184937]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405696.2345846-2291-104176084360252/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:17.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:17 np0005465987 python3.9[185089]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:18 np0005465987 python3.9[185212]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405697.5061226-2291-123859972900759/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:18.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:19 np0005465987 python3.9[185364]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:19.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:19 np0005465987 python3.9[185487]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405698.6966708-2291-112445370354602/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:20.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:21.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:22 np0005465987 python3.9[185768]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:48:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:22.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:48:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:48:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:48:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:48:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:23.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:48:23 np0005465987 python3.9[185923]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct  2 07:48:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:24.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:48:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:25.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:48:25 np0005465987 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct  2 07:48:25 np0005465987 python3.9[186080]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:26 np0005465987 python3.9[186232]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:26.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:26 np0005465987 python3.9[186384]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:27.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:27 np0005465987 python3.9[186536]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:48:27.911 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:48:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:48:27.913 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:48:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:48:27.913 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:48:28 np0005465987 python3.9[186688]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:28.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:29 np0005465987 python3.9[186889]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:48:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:29.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:48:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:48:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:48:29 np0005465987 python3.9[187042]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:30 np0005465987 python3.9[187194]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:30.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:30 np0005465987 podman[187318]: 2025-10-02 11:48:30.789664397 +0000 UTC m=+0.049811413 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:48:30 np0005465987 python3.9[187365]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:31.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:31 np0005465987 python3.9[187517]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:32.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:32 np0005465987 python3.9[187669]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:48:32 np0005465987 systemd[1]: Reloading.
Oct  2 07:48:32 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:48:32 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:48:33 np0005465987 systemd[1]: Starting libvirt logging daemon socket...
Oct  2 07:48:33 np0005465987 systemd[1]: Listening on libvirt logging daemon socket.
Oct  2 07:48:33 np0005465987 systemd[1]: Starting libvirt logging daemon admin socket...
Oct  2 07:48:33 np0005465987 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct  2 07:48:33 np0005465987 systemd[1]: Starting libvirt logging daemon...
Oct  2 07:48:33 np0005465987 systemd[1]: Started libvirt logging daemon.
Oct  2 07:48:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:33.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:34 np0005465987 python3.9[187862]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:48:34 np0005465987 systemd[1]: Reloading.
Oct  2 07:48:34 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:48:34 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:48:34 np0005465987 systemd[1]: Starting libvirt nodedev daemon socket...
Oct  2 07:48:34 np0005465987 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct  2 07:48:34 np0005465987 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct  2 07:48:34 np0005465987 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct  2 07:48:34 np0005465987 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct  2 07:48:34 np0005465987 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct  2 07:48:34 np0005465987 systemd[1]: Starting libvirt nodedev daemon...
Oct  2 07:48:34 np0005465987 systemd[1]: Started libvirt nodedev daemon.
Oct  2 07:48:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:34.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:35 np0005465987 python3.9[188077]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:48:35 np0005465987 systemd[1]: Reloading.
Oct  2 07:48:35 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:48:35 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:48:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:35.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:35 np0005465987 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct  2 07:48:35 np0005465987 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct  2 07:48:35 np0005465987 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct  2 07:48:35 np0005465987 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct  2 07:48:35 np0005465987 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct  2 07:48:35 np0005465987 systemd[1]: Starting libvirt proxy daemon...
Oct  2 07:48:35 np0005465987 systemd[1]: Started libvirt proxy daemon.
Oct  2 07:48:35 np0005465987 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct  2 07:48:35 np0005465987 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct  2 07:48:35 np0005465987 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct  2 07:48:36 np0005465987 python3.9[188293]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:48:36 np0005465987 systemd[1]: Reloading.
Oct  2 07:48:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:36 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:48:36 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:48:36 np0005465987 systemd[1]: Listening on libvirt locking daemon socket.
Oct  2 07:48:36 np0005465987 systemd[1]: Starting libvirt QEMU daemon socket...
Oct  2 07:48:36 np0005465987 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  2 07:48:36 np0005465987 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct  2 07:48:36 np0005465987 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct  2 07:48:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:36.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:36 np0005465987 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct  2 07:48:36 np0005465987 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct  2 07:48:36 np0005465987 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct  2 07:48:36 np0005465987 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct  2 07:48:36 np0005465987 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct  2 07:48:36 np0005465987 systemd[1]: Starting libvirt QEMU daemon...
Oct  2 07:48:36 np0005465987 systemd[1]: Started libvirt QEMU daemon.
Oct  2 07:48:36 np0005465987 setroubleshoot[188113]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l eff6c587-e6de-4f7b-836c-2b52cdcca4a8
Oct  2 07:48:36 np0005465987 setroubleshoot[188113]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  2 07:48:36 np0005465987 setroubleshoot[188113]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l eff6c587-e6de-4f7b-836c-2b52cdcca4a8
Oct  2 07:48:36 np0005465987 setroubleshoot[188113]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  2 07:48:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:37.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:37 np0005465987 python3.9[188509]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:48:37 np0005465987 systemd[1]: Reloading.
Oct  2 07:48:37 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:48:37 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:48:37 np0005465987 systemd[1]: Starting libvirt secret daemon socket...
Oct  2 07:48:37 np0005465987 systemd[1]: Listening on libvirt secret daemon socket.
Oct  2 07:48:37 np0005465987 systemd[1]: Starting libvirt secret daemon admin socket...
Oct  2 07:48:37 np0005465987 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct  2 07:48:37 np0005465987 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct  2 07:48:37 np0005465987 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct  2 07:48:37 np0005465987 systemd[1]: Starting libvirt secret daemon...
Oct  2 07:48:37 np0005465987 systemd[1]: Started libvirt secret daemon.
Oct  2 07:48:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:38.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:39.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:39 np0005465987 python3.9[188718]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:40 np0005465987 python3.9[188870]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 07:48:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:40.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:41 np0005465987 python3.9[189022]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:48:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:41.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:41 np0005465987 python3.9[189176]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 07:48:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:42.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:42 np0005465987 python3.9[189326]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:48:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:43.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:48:43 np0005465987 python3.9[189447]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405722.4337065-3365-145486574010636/.source.xml follow=False _original_basename=secret.xml.j2 checksum=63af5286395175bfc9ebaef2783b75cb37ce0b72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:44 np0005465987 python3.9[189599]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine fd4c5763-22d1-50ea-ad0b-96a3dc3040b2#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:48:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:44.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:44 np0005465987 podman[189612]: 2025-10-02 11:48:44.933830832 +0000 UTC m=+0.146538079 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:48:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:45.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:45 np0005465987 python3.9[189787]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:46.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:46 np0005465987 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct  2 07:48:47 np0005465987 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct  2 07:48:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:47.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:48 np0005465987 python3.9[190250]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:48.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:48 np0005465987 python3.9[190402]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:49.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:49 np0005465987 python3.9[190525]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405728.4198282-3530-97517551774931/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:50 np0005465987 python3.9[190677]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:50.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:51 np0005465987 python3.9[190829]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:51.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:51 np0005465987 python3.9[190907]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:52 np0005465987 python3.9[191059]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:52.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:53 np0005465987 python3.9[191137]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9_w27_kn recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:53.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:53 np0005465987 python3.9[191289]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:53 np0005465987 auditd[707]: Audit daemon rotating log files
Oct  2 07:48:54 np0005465987 python3.9[191367]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:54.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:55 np0005465987 python3.9[191519]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:48:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:48:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:55.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:48:56 np0005465987 python3[191672]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 07:48:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:56 np0005465987 python3.9[191824]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:57 np0005465987 python3.9[191902]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:57.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:58 np0005465987 python3.9[192054]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:58 np0005465987 python3.9[192132]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:48:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:58.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:59 np0005465987 python3.9[192284]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:48:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:48:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:48:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:59.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:48:59 np0005465987 python3.9[192362]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:00.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:00 np0005465987 python3.9[192514]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:01 np0005465987 podman[192564]: 2025-10-02 11:49:01.07245889 +0000 UTC m=+0.048634215 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 07:49:01 np0005465987 python3.9[192611]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:49:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:01.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:49:02 np0005465987 python3.9[192763]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:02 np0005465987 python3.9[192888]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405741.571683-3905-197562701217073/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:02.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:49:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:03.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:49:03 np0005465987 python3.9[193041]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:04 np0005465987 python3.9[193193]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:49:04 np0005465987 ceph-mgr[81180]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3158772141
Oct  2 07:49:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:04.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:05 np0005465987 python3.9[193348]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:05.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:06 np0005465987 python3.9[193500]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:49:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:06.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:06 np0005465987 python3.9[193653]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:49:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:49:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:07.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:49:07 np0005465987 python3.9[193807]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:49:08 np0005465987 python3.9[193962]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:08.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:09 np0005465987 python3.9[194114]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:09.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:09 np0005465987 python3.9[194237]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405748.7214782-4121-245864337021586/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.571807) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405750571876, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1464, "num_deletes": 502, "total_data_size": 2844522, "memory_usage": 2888344, "flush_reason": "Manual Compaction"}
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Oct  2 07:49:10 np0005465987 python3.9[194389]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405750584779, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1118591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14335, "largest_seqno": 15794, "table_properties": {"data_size": 1113801, "index_size": 1738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14678, "raw_average_key_size": 19, "raw_value_size": 1101709, "raw_average_value_size": 1430, "num_data_blocks": 79, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405635, "oldest_key_time": 1759405635, "file_creation_time": 1759405750, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 13028 microseconds, and 6944 cpu microseconds.
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.584842) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1118591 bytes OK
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.584865) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.593170) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.593209) EVENT_LOG_v1 {"time_micros": 1759405750593199, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.593233) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2836712, prev total WAL file size 2836712, number of live WAL files 2.
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.594797) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1092KB)], [27(10MB)]
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405750594845, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 12591598, "oldest_snapshot_seqno": -1}
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4065 keys, 7664182 bytes, temperature: kUnknown
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405750641514, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 7664182, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7635826, "index_size": 17110, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 100815, "raw_average_key_size": 24, "raw_value_size": 7561004, "raw_average_value_size": 1860, "num_data_blocks": 719, "num_entries": 4065, "num_filter_entries": 4065, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759405750, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.641738) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7664182 bytes
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.643010) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 269.5 rd, 164.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.9 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(18.1) write-amplify(6.9) OK, records in: 5034, records dropped: 969 output_compression: NoCompression
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.643025) EVENT_LOG_v1 {"time_micros": 1759405750643017, "job": 14, "event": "compaction_finished", "compaction_time_micros": 46726, "compaction_time_cpu_micros": 21511, "output_level": 6, "num_output_files": 1, "total_output_size": 7664182, "num_input_records": 5034, "num_output_records": 4065, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405750643305, "job": 14, "event": "table_file_deletion", "file_number": 29}
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405750645174, "job": 14, "event": "table_file_deletion", "file_number": 27}
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.594653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.645333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.645337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.645339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.645340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:49:10 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:49:10.645342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:49:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:10.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:11 np0005465987 python3.9[194512]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405750.0559793-4166-189402528588567/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:11.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:11 np0005465987 python3.9[194664]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:12 np0005465987 python3.9[194787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405751.31495-4211-260055152894185/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:12.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:13 np0005465987 python3.9[194939]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:49:13 np0005465987 systemd[1]: Reloading.
Oct  2 07:49:13 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:49:13 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:49:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:13.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:13 np0005465987 systemd[1]: Reached target edpm_libvirt.target.
Oct  2 07:49:14 np0005465987 python3.9[195130]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  2 07:49:14 np0005465987 systemd[1]: Reloading.
Oct  2 07:49:14 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:49:14 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:49:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:14 np0005465987 systemd[1]: Reloading.
Oct  2 07:49:15 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:49:15 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:49:15 np0005465987 podman[195202]: 2025-10-02 11:49:15.319917841 +0000 UTC m=+0.080853959 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  2 07:49:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:15.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:15 np0005465987 systemd[1]: session-49.scope: Deactivated successfully.
Oct  2 07:49:15 np0005465987 systemd[1]: session-49.scope: Consumed 3min 17.920s CPU time.
Oct  2 07:49:15 np0005465987 systemd-logind[794]: Session 49 logged out. Waiting for processes to exit.
Oct  2 07:49:15 np0005465987 systemd-logind[794]: Removed session 49.
Oct  2 07:49:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:16.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:17.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:18.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:19.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:49:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:20.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:49:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:21.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:21 np0005465987 systemd-logind[794]: New session 50 of user zuul.
Oct  2 07:49:21 np0005465987 systemd[1]: Started Session 50 of User zuul.
Oct  2 07:49:22 np0005465987 python3.9[195406]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:49:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:22.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:23.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:24 np0005465987 python3.9[195562]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:49:24 np0005465987 python3.9[195714]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:49:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:49:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:24.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:49:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:25.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:25 np0005465987 python3.9[195866]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:49:26 np0005465987 python3.9[196018]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 07:49:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:49:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:26.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:49:26 np0005465987 python3.9[196170]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:49:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:27.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:27 np0005465987 python3.9[196322]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:49:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:49:27.913 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:49:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:49:27.914 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:49:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:49:27.914 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:49:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:28.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:29 np0005465987 python3.9[196476]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:49:29 np0005465987 systemd[1]: Reloading.
Oct  2 07:49:29 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:49:29 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:49:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:49:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:29.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:49:30 np0005465987 python3.9[196795]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:49:30 np0005465987 network[196812]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:49:30 np0005465987 network[196813]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:49:30 np0005465987 network[196814]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:49:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:30.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:49:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:49:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:49:31 np0005465987 podman[196821]: 2025-10-02 11:49:31.323421732 +0000 UTC m=+0.051239727 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:49:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:31.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:32.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:33.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:34.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:35.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:35 np0005465987 python3.9[197107]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:49:35 np0005465987 systemd[1]: Reloading.
Oct  2 07:49:35 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:49:35 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:49:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:49:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:36.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:49:37 np0005465987 python3.9[197295]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:49:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:37.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:37 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:49:37 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:49:38 np0005465987 python3.9[197497]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  2 07:49:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000054s ======
Oct  2 07:49:38 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:49:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:38.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct  2 07:49:38 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:49:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:39.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:39 np0005465987 podman[197509]: 2025-10-02 11:49:39.495777706 +0000 UTC m=+1.160520335 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  2 07:49:39 np0005465987 podman[197567]: 2025-10-02 11:49:39.617976238 +0000 UTC m=+0.039665759 container create 076dfd0910fea6d20b47970f4c6256645735dcac7a3ae974f6b0ac4a2e170c41 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.6380] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct  2 07:49:39 np0005465987 kernel: podman0: port 1(veth0) entered blocking state
Oct  2 07:49:39 np0005465987 kernel: podman0: port 1(veth0) entered disabled state
Oct  2 07:49:39 np0005465987 kernel: veth0: entered allmulticast mode
Oct  2 07:49:39 np0005465987 kernel: veth0: entered promiscuous mode
Oct  2 07:49:39 np0005465987 kernel: podman0: port 1(veth0) entered blocking state
Oct  2 07:49:39 np0005465987 kernel: podman0: port 1(veth0) entered forwarding state
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.6567] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.6579] device (veth0): carrier: link connected
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.6582] device (podman0): carrier: link connected
Oct  2 07:49:39 np0005465987 systemd-udevd[197593]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:49:39 np0005465987 systemd-udevd[197595]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.6823] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.6832] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.6840] device (podman0): Activation: starting connection 'podman0' (575f456d-3e37-4fb9-8d78-d8946cfde076)
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.6841] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.6845] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.6848] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.6851] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 07:49:39 np0005465987 podman[197567]: 2025-10-02 11:49:39.598538315 +0000 UTC m=+0.020227846 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  2 07:49:39 np0005465987 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 07:49:39 np0005465987 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.7192] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.7195] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 07:49:39 np0005465987 NetworkManager[44910]: <info>  [1759405779.7205] device (podman0): Activation: successful, device activated.
Oct  2 07:49:39 np0005465987 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct  2 07:49:39 np0005465987 systemd[1]: Started libpod-conmon-076dfd0910fea6d20b47970f4c6256645735dcac7a3ae974f6b0ac4a2e170c41.scope.
Oct  2 07:49:39 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:49:39 np0005465987 podman[197567]: 2025-10-02 11:49:39.943458466 +0000 UTC m=+0.365148017 container init 076dfd0910fea6d20b47970f4c6256645735dcac7a3ae974f6b0ac4a2e170c41 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 07:49:39 np0005465987 podman[197567]: 2025-10-02 11:49:39.952783213 +0000 UTC m=+0.374472734 container start 076dfd0910fea6d20b47970f4c6256645735dcac7a3ae974f6b0ac4a2e170c41 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 07:49:39 np0005465987 podman[197567]: 2025-10-02 11:49:39.956257208 +0000 UTC m=+0.377946759 container attach 076dfd0910fea6d20b47970f4c6256645735dcac7a3ae974f6b0ac4a2e170c41 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 07:49:39 np0005465987 iscsid_config[197725]: iqn.1994-05.com.redhat:a46b55b0f33d#015
Oct  2 07:49:39 np0005465987 systemd[1]: libpod-076dfd0910fea6d20b47970f4c6256645735dcac7a3ae974f6b0ac4a2e170c41.scope: Deactivated successfully.
Oct  2 07:49:39 np0005465987 podman[197567]: 2025-10-02 11:49:39.959084345 +0000 UTC m=+0.380773876 container died 076dfd0910fea6d20b47970f4c6256645735dcac7a3ae974f6b0ac4a2e170c41 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:49:39 np0005465987 kernel: podman0: port 1(veth0) entered disabled state
Oct  2 07:49:39 np0005465987 kernel: veth0 (unregistering): left allmulticast mode
Oct  2 07:49:39 np0005465987 kernel: veth0 (unregistering): left promiscuous mode
Oct  2 07:49:39 np0005465987 kernel: podman0: port 1(veth0) entered disabled state
Oct  2 07:49:40 np0005465987 NetworkManager[44910]: <info>  [1759405780.0311] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 07:49:40 np0005465987 systemd[1]: run-netns-netns\x2d81766c5b\x2de4a3\x2d49d8\x2d1b15\x2d0c1c5e1eaf72.mount: Deactivated successfully.
Oct  2 07:49:40 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-076dfd0910fea6d20b47970f4c6256645735dcac7a3ae974f6b0ac4a2e170c41-userdata-shm.mount: Deactivated successfully.
Oct  2 07:49:40 np0005465987 podman[197567]: 2025-10-02 11:49:40.320101638 +0000 UTC m=+0.741791159 container remove 076dfd0910fea6d20b47970f4c6256645735dcac7a3ae974f6b0ac4a2e170c41 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 07:49:40 np0005465987 python3.9[197497]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct  2 07:49:40 np0005465987 systemd[1]: libpod-conmon-076dfd0910fea6d20b47970f4c6256645735dcac7a3ae974f6b0ac4a2e170c41.scope: Deactivated successfully.
Oct  2 07:49:40 np0005465987 python3.9[197497]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct  2 07:49:40 np0005465987 systemd[1]: var-lib-containers-storage-overlay-05bffadbdd605a1228e2a9c007d9e3005870529b5fa1eb4c1477a9ddcafa1e07-merged.mount: Deactivated successfully.
Oct  2 07:49:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:40.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:41 np0005465987 python3.9[197971]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:41.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:42 np0005465987 python3.9[198094]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405780.9090052-323-104425958942510/.source.iscsi _original_basename=.soefrhkr follow=False checksum=b84a3864ae99053eca0472d3110a562bd4d95402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:42.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:43 np0005465987 python3.9[198246]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:43.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:43 np0005465987 python3.9[198396]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:49:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:44.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:44 np0005465987 python3.9[198550]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:45.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:45 np0005465987 podman[198650]: 2025-10-02 11:49:45.865651837 +0000 UTC m=+0.082734380 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:49:46 np0005465987 python3.9[198730]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:49:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:49:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:46.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:49:47 np0005465987 python3.9[198882]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:47.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:47 np0005465987 python3.9[198960]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:49:48 np0005465987 python3.9[199112]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:48 np0005465987 python3.9[199190]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:49:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:48.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:49.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:49 np0005465987 python3.9[199342]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:50 np0005465987 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 07:49:50 np0005465987 python3.9[199494]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:49:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:50.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:49:50 np0005465987 python3.9[199572]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:51.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:51 np0005465987 python3.9[199724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:52 np0005465987 python3.9[199802]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:49:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:52.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:49:52 np0005465987 python3.9[199954]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:49:53 np0005465987 systemd[1]: Reloading.
Oct  2 07:49:53 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:49:53 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:49:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:53.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:54 np0005465987 python3.9[200143]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:54 np0005465987 python3.9[200221]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:54.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:55 np0005465987 python3.9[200373]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:55.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:55 np0005465987 python3.9[200451]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:56 np0005465987 python3.9[200603]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:49:56 np0005465987 systemd[1]: Reloading.
Oct  2 07:49:56 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:49:56 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:49:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:49:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:56.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:49:57 np0005465987 systemd[1]: Starting Create netns directory...
Oct  2 07:49:57 np0005465987 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:49:57 np0005465987 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:49:57 np0005465987 systemd[1]: Finished Create netns directory.
Oct  2 07:49:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:57.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:58 np0005465987 python3.9[200797]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:49:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:49:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:58.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:49:58 np0005465987 python3.9[200949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:59 np0005465987 python3.9[201072]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405798.3998878-785-162739774981006/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:49:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:49:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:59.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:00 np0005465987 python3.9[201224]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:00 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 07:50:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:00.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:01 np0005465987 python3.9[201376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:50:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:01.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:50:01 np0005465987 podman[201471]: 2025-10-02 11:50:01.662653153 +0000 UTC m=+0.058438724 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:50:01 np0005465987 python3.9[201518]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405800.8275702-860-33791250702252/.source.json _original_basename=.bxokt8kj follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:02 np0005465987 python3.9[201670]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:02.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:03.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:04.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:05 np0005465987 python3.9[202097]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct  2 07:50:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:05.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:06 np0005465987 python3.9[202249]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 07:50:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:06.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:07.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:07 np0005465987 python3.9[202401]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 07:50:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:08.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:09.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:09 np0005465987 python3[202579]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 07:50:09 np0005465987 podman[202614]: 2025-10-02 11:50:09.817556179 +0000 UTC m=+0.024454941 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  2 07:50:10 np0005465987 podman[202614]: 2025-10-02 11:50:10.099781421 +0000 UTC m=+0.306680163 container create 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=iscsid)
Oct  2 07:50:10 np0005465987 python3[202579]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  2 07:50:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:50:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:10.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:50:10 np0005465987 python3.9[202804]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:11.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:11 np0005465987 python3.9[202958]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:12 np0005465987 python3.9[203034]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:12.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:13 np0005465987 python3.9[203185]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405812.4354758-1124-50749618931728/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:13.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:13 np0005465987 python3.9[203261]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:50:13 np0005465987 systemd[1]: Reloading.
Oct  2 07:50:13 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:50:13 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:50:14 np0005465987 python3.9[203373]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:50:14 np0005465987 systemd[1]: Reloading.
Oct  2 07:50:14 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:50:14 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:50:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:14.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:14 np0005465987 systemd[1]: Starting iscsid container...
Oct  2 07:50:15 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:50:15 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71908d724acd5a152e434d999ab7d69d0310650d5329a01eea49585d72d61f87/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct  2 07:50:15 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71908d724acd5a152e434d999ab7d69d0310650d5329a01eea49585d72d61f87/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 07:50:15 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71908d724acd5a152e434d999ab7d69d0310650d5329a01eea49585d72d61f87/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 07:50:15 np0005465987 systemd[1]: Started /usr/bin/podman healthcheck run 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610.
Oct  2 07:50:15 np0005465987 podman[203412]: 2025-10-02 11:50:15.1673617 +0000 UTC m=+0.252832007 container init 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 07:50:15 np0005465987 iscsid[203427]: + sudo -E kolla_set_configs
Oct  2 07:50:15 np0005465987 podman[203412]: 2025-10-02 11:50:15.196123388 +0000 UTC m=+0.281593685 container start 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 07:50:15 np0005465987 systemd[1]: Created slice User Slice of UID 0.
Oct  2 07:50:15 np0005465987 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  2 07:50:15 np0005465987 podman[203412]: iscsid
Oct  2 07:50:15 np0005465987 systemd[1]: Started iscsid container.
Oct  2 07:50:15 np0005465987 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  2 07:50:15 np0005465987 systemd[1]: Starting User Manager for UID 0...
Oct  2 07:50:15 np0005465987 podman[203434]: 2025-10-02 11:50:15.2588615 +0000 UTC m=+0.052047779 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 07:50:15 np0005465987 systemd[1]: 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610-d14e923562de585.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 07:50:15 np0005465987 systemd[1]: 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610-d14e923562de585.service: Failed with result 'exit-code'.
Oct  2 07:50:15 np0005465987 systemd[203446]: Queued start job for default target Main User Target.
Oct  2 07:50:15 np0005465987 systemd[203446]: Created slice User Application Slice.
Oct  2 07:50:15 np0005465987 systemd[203446]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  2 07:50:15 np0005465987 systemd[203446]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 07:50:15 np0005465987 systemd[203446]: Reached target Paths.
Oct  2 07:50:15 np0005465987 systemd[203446]: Reached target Timers.
Oct  2 07:50:15 np0005465987 systemd[203446]: Starting D-Bus User Message Bus Socket...
Oct  2 07:50:15 np0005465987 systemd[203446]: Starting Create User's Volatile Files and Directories...
Oct  2 07:50:15 np0005465987 systemd[203446]: Listening on D-Bus User Message Bus Socket.
Oct  2 07:50:15 np0005465987 systemd[203446]: Finished Create User's Volatile Files and Directories.
Oct  2 07:50:15 np0005465987 systemd[203446]: Reached target Sockets.
Oct  2 07:50:15 np0005465987 systemd[203446]: Reached target Basic System.
Oct  2 07:50:15 np0005465987 systemd[203446]: Reached target Main User Target.
Oct  2 07:50:15 np0005465987 systemd[203446]: Startup finished in 154ms.
Oct  2 07:50:15 np0005465987 systemd[1]: Started User Manager for UID 0.
Oct  2 07:50:15 np0005465987 systemd[1]: Started Session c3 of User root.
Oct  2 07:50:15 np0005465987 iscsid[203427]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 07:50:15 np0005465987 iscsid[203427]: INFO:__main__:Validating config file
Oct  2 07:50:15 np0005465987 iscsid[203427]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 07:50:15 np0005465987 iscsid[203427]: INFO:__main__:Writing out command to execute
Oct  2 07:50:15 np0005465987 systemd[1]: session-c3.scope: Deactivated successfully.
Oct  2 07:50:15 np0005465987 iscsid[203427]: ++ cat /run_command
Oct  2 07:50:15 np0005465987 iscsid[203427]: + CMD='/usr/sbin/iscsid -f'
Oct  2 07:50:15 np0005465987 iscsid[203427]: + ARGS=
Oct  2 07:50:15 np0005465987 iscsid[203427]: + sudo kolla_copy_cacerts
Oct  2 07:50:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:15.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:15 np0005465987 systemd[1]: Started Session c4 of User root.
Oct  2 07:50:15 np0005465987 systemd[1]: session-c4.scope: Deactivated successfully.
Oct  2 07:50:15 np0005465987 iscsid[203427]: + [[ ! -n '' ]]
Oct  2 07:50:15 np0005465987 iscsid[203427]: + . kolla_extend_start
Oct  2 07:50:15 np0005465987 iscsid[203427]: Running command: '/usr/sbin/iscsid -f'
Oct  2 07:50:15 np0005465987 iscsid[203427]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct  2 07:50:15 np0005465987 iscsid[203427]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct  2 07:50:15 np0005465987 iscsid[203427]: + umask 0022
Oct  2 07:50:15 np0005465987 iscsid[203427]: + exec /usr/sbin/iscsid -f
Oct  2 07:50:15 np0005465987 kernel: Loading iSCSI transport class v2.0-870.
Oct  2 07:50:16 np0005465987 podman[203608]: 2025-10-02 11:50:16.2388028 +0000 UTC m=+0.086124803 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller)
Oct  2 07:50:16 np0005465987 python3.9[203649]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:50:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:16.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:50:17 np0005465987 python3.9[203812]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:17.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:18 np0005465987 python3.9[203964]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:50:18 np0005465987 network[203981]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:50:18 np0005465987 network[203982]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:50:18 np0005465987 network[203983]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:50:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:18.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:19.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:20.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:21.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:22.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:23 np0005465987 python3.9[204258]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 07:50:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:23.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:24 np0005465987 python3.9[204410]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct  2 07:50:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:24.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:24 np0005465987 python3.9[204566]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:25.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:25 np0005465987 python3.9[204689]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405824.4986343-1346-243079551191634/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:25 np0005465987 systemd[1]: Stopping User Manager for UID 0...
Oct  2 07:50:25 np0005465987 systemd[203446]: Activating special unit Exit the Session...
Oct  2 07:50:25 np0005465987 systemd[203446]: Stopped target Main User Target.
Oct  2 07:50:25 np0005465987 systemd[203446]: Stopped target Basic System.
Oct  2 07:50:25 np0005465987 systemd[203446]: Stopped target Paths.
Oct  2 07:50:25 np0005465987 systemd[203446]: Stopped target Sockets.
Oct  2 07:50:25 np0005465987 systemd[203446]: Stopped target Timers.
Oct  2 07:50:25 np0005465987 systemd[203446]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 07:50:25 np0005465987 systemd[203446]: Closed D-Bus User Message Bus Socket.
Oct  2 07:50:25 np0005465987 systemd[203446]: Stopped Create User's Volatile Files and Directories.
Oct  2 07:50:25 np0005465987 systemd[203446]: Removed slice User Application Slice.
Oct  2 07:50:25 np0005465987 systemd[203446]: Reached target Shutdown.
Oct  2 07:50:25 np0005465987 systemd[203446]: Finished Exit the Session.
Oct  2 07:50:25 np0005465987 systemd[203446]: Reached target Exit the Session.
Oct  2 07:50:25 np0005465987 systemd[1]: user@0.service: Deactivated successfully.
Oct  2 07:50:25 np0005465987 systemd[1]: Stopped User Manager for UID 0.
Oct  2 07:50:25 np0005465987 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  2 07:50:25 np0005465987 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  2 07:50:25 np0005465987 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  2 07:50:25 np0005465987 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  2 07:50:25 np0005465987 systemd[1]: Removed slice User Slice of UID 0.
Oct  2 07:50:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:26 np0005465987 python3.9[204843]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:26.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:27 np0005465987 python3.9[204995]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:50:27 np0005465987 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  2 07:50:27 np0005465987 systemd[1]: Stopped Load Kernel Modules.
Oct  2 07:50:27 np0005465987 systemd[1]: Stopping Load Kernel Modules...
Oct  2 07:50:27 np0005465987 systemd[1]: Starting Load Kernel Modules...
Oct  2 07:50:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:27.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:27 np0005465987 systemd[1]: Finished Load Kernel Modules.
Oct  2 07:50:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:50:27.915 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:50:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:50:27.916 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:50:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:50:27.917 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:50:28 np0005465987 python3.9[205151]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:28.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:29 np0005465987 python3.9[205303]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:29.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:30 np0005465987 python3.9[205455]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:30 np0005465987 python3.9[205607]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:30.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:31 np0005465987 python3.9[205730]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405830.371054-1520-223118994487962/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.004000109s ======
Oct  2 07:50:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:31.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000109s
Oct  2 07:50:31 np0005465987 podman[205807]: 2025-10-02 11:50:31.880880927 +0000 UTC m=+0.090167735 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 07:50:32 np0005465987 python3.9[205903]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:50:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:32.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:33 np0005465987 python3.9[206056]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:33.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:34 np0005465987 python3.9[206208]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:34 np0005465987 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct  2 07:50:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:34.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:35 np0005465987 python3.9[206361]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:35.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:35 np0005465987 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  2 07:50:36 np0005465987 python3.9[206514]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:36 np0005465987 python3.9[206666]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:36.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:37 np0005465987 python3.9[206867]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:37.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:38 np0005465987 python3.9[207100]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:38 np0005465987 python3.9[207252]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:38.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:39.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:39 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:50:39 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:50:39 np0005465987 python3.9[207406]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:40 np0005465987 python3.9[207558]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:40 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:50:40 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:50:40 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:50:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:40.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:41 np0005465987 python3.9[207710]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:41.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:41 np0005465987 python3.9[207788]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:42 np0005465987 python3.9[207940]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:42.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:42 np0005465987 python3.9[208018]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:43.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:43 np0005465987 python3.9[208170]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:44 np0005465987 python3.9[208322]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:44.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:45 np0005465987 python3.9[208400]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:45.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:45 np0005465987 podman[208513]: 2025-10-02 11:50:45.849633782 +0000 UTC m=+0.056314406 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 07:50:46 np0005465987 python3.9[208572]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:46 np0005465987 podman[208670]: 2025-10-02 11:50:46.420644115 +0000 UTC m=+0.098646067 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct  2 07:50:46 np0005465987 python3.9[208721]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:50:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:50:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:46.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:47 np0005465987 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  2 07:50:47 np0005465987 systemd[1]: virtqemud.service: Deactivated successfully.
Oct  2 07:50:47 np0005465987 python3.9[208878]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:50:47 np0005465987 systemd[1]: Reloading.
Oct  2 07:50:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:47.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:47 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:50:47 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:50:48 np0005465987 python3.9[209069]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:48.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:48 np0005465987 python3.9[209147]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:49.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:49 np0005465987 python3.9[209299]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:50 np0005465987 python3.9[209377]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:50.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:51 np0005465987 python3.9[209529]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:50:51 np0005465987 systemd[1]: Reloading.
Oct  2 07:50:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:51.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:51 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:50:51 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:50:51 np0005465987 systemd[1]: Starting Create netns directory...
Oct  2 07:50:51 np0005465987 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:50:51 np0005465987 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:50:51 np0005465987 systemd[1]: Finished Create netns directory.
Oct  2 07:50:52 np0005465987 python3.9[209721]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:52.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:53.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:53 np0005465987 python3.9[209873]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:54 np0005465987 python3.9[209996]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405853.1469834-2141-219908056409424/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:54.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:55 np0005465987 python3.9[210148]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:55.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:56 np0005465987 python3.9[210300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:56 np0005465987 python3.9[210423]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405855.666218-2216-198634011339262/.source.json _original_basename=.dtn3khwt follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:56.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:57.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:57 np0005465987 python3.9[210575]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:58.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:50:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:50:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:59.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:50:59 np0005465987 python3.9[211002]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct  2 07:51:00 np0005465987 python3.9[211154]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 07:51:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:00.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:01.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:01 np0005465987 python3.9[211306]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 07:51:02 np0005465987 podman[211358]: 2025-10-02 11:51:02.836423338 +0000 UTC m=+0.051598026 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 07:51:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:51:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:02.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:51:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:03.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:03 np0005465987 python3[211505]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 07:51:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:04.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:05 np0005465987 podman[211517]: 2025-10-02 11:51:05.495690286 +0000 UTC m=+1.681588818 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  2 07:51:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:51:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:05.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:51:05 np0005465987 podman[211573]: 2025-10-02 11:51:05.605458275 +0000 UTC m=+0.024258057 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  2 07:51:05 np0005465987 podman[211573]: 2025-10-02 11:51:05.949279266 +0000 UTC m=+0.368079018 container create 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 07:51:05 np0005465987 python3[211505]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  2 07:51:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:06 np0005465987 python3.9[211763]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:51:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:06.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:07.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:07 np0005465987 python3.9[211917]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:08 np0005465987 python3.9[211993]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:51:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:51:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:08.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:51:08 np0005465987 python3.9[212144]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405868.30606-2480-138702963137113/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:09 np0005465987 python3.9[212220]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:51:09 np0005465987 systemd[1]: Reloading.
Oct  2 07:51:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:09.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:09 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:51:09 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:51:10 np0005465987 python3.9[212331]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:51:10 np0005465987 systemd[1]: Reloading.
Oct  2 07:51:10 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:51:10 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:51:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:51:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:10.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:51:10 np0005465987 systemd[1]: Starting multipathd container...
Oct  2 07:51:11 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:51:11 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5bbc1ad3657348d01488dbd03fc3cc195c45f9f9e3d3b74222fae7901789baf/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 07:51:11 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5bbc1ad3657348d01488dbd03fc3cc195c45f9f9e3d3b74222fae7901789baf/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 07:51:11 np0005465987 systemd[1]: Started /usr/bin/podman healthcheck run 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff.
Oct  2 07:51:11 np0005465987 podman[212370]: 2025-10-02 11:51:11.150234079 +0000 UTC m=+0.141482328 container init 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  2 07:51:11 np0005465987 multipathd[212385]: + sudo -E kolla_set_configs
Oct  2 07:51:11 np0005465987 podman[212370]: 2025-10-02 11:51:11.174099425 +0000 UTC m=+0.165347744 container start 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:51:11 np0005465987 podman[212370]: multipathd
Oct  2 07:51:11 np0005465987 systemd[1]: Started multipathd container.
Oct  2 07:51:11 np0005465987 multipathd[212385]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 07:51:11 np0005465987 multipathd[212385]: INFO:__main__:Validating config file
Oct  2 07:51:11 np0005465987 multipathd[212385]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 07:51:11 np0005465987 multipathd[212385]: INFO:__main__:Writing out command to execute
Oct  2 07:51:11 np0005465987 multipathd[212385]: ++ cat /run_command
Oct  2 07:51:11 np0005465987 podman[212392]: 2025-10-02 11:51:11.238359813 +0000 UTC m=+0.054087794 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 07:51:11 np0005465987 multipathd[212385]: + CMD='/usr/sbin/multipathd -d'
Oct  2 07:51:11 np0005465987 multipathd[212385]: + ARGS=
Oct  2 07:51:11 np0005465987 multipathd[212385]: + sudo kolla_copy_cacerts
Oct  2 07:51:11 np0005465987 systemd[1]: 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff-5ff720ce0998941a.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 07:51:11 np0005465987 systemd[1]: 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff-5ff720ce0998941a.service: Failed with result 'exit-code'.
Oct  2 07:51:11 np0005465987 multipathd[212385]: + [[ ! -n '' ]]
Oct  2 07:51:11 np0005465987 multipathd[212385]: + . kolla_extend_start
Oct  2 07:51:11 np0005465987 multipathd[212385]: Running command: '/usr/sbin/multipathd -d'
Oct  2 07:51:11 np0005465987 multipathd[212385]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  2 07:51:11 np0005465987 multipathd[212385]: + umask 0022
Oct  2 07:51:11 np0005465987 multipathd[212385]: + exec /usr/sbin/multipathd -d
Oct  2 07:51:11 np0005465987 multipathd[212385]: 3939.912435 | --------start up--------
Oct  2 07:51:11 np0005465987 multipathd[212385]: 3939.912458 | read /etc/multipath.conf
Oct  2 07:51:11 np0005465987 multipathd[212385]: 3939.918655 | path checkers start up
Oct  2 07:51:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:11.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:12 np0005465987 python3.9[212574]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:51:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:12.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:13 np0005465987 python3.9[212728]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:51:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:51:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:13.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:51:14 np0005465987 python3.9[212894]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:51:14 np0005465987 systemd[1]: Stopping multipathd container...
Oct  2 07:51:14 np0005465987 multipathd[212385]: 3942.786429 | exit (signal)
Oct  2 07:51:14 np0005465987 multipathd[212385]: 3942.787289 | --------shut down-------
Oct  2 07:51:14 np0005465987 systemd[1]: libpod-40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff.scope: Deactivated successfully.
Oct  2 07:51:14 np0005465987 podman[212898]: 2025-10-02 11:51:14.183017862 +0000 UTC m=+0.067351984 container died 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 07:51:14 np0005465987 systemd[1]: 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff-5ff720ce0998941a.timer: Deactivated successfully.
Oct  2 07:51:14 np0005465987 systemd[1]: Stopped /usr/bin/podman healthcheck run 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff.
Oct  2 07:51:14 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff-userdata-shm.mount: Deactivated successfully.
Oct  2 07:51:14 np0005465987 systemd[1]: var-lib-containers-storage-overlay-f5bbc1ad3657348d01488dbd03fc3cc195c45f9f9e3d3b74222fae7901789baf-merged.mount: Deactivated successfully.
Oct  2 07:51:14 np0005465987 podman[212898]: 2025-10-02 11:51:14.450494567 +0000 UTC m=+0.334828669 container cleanup 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:51:14 np0005465987 podman[212898]: multipathd
Oct  2 07:51:14 np0005465987 podman[212927]: multipathd
Oct  2 07:51:14 np0005465987 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct  2 07:51:14 np0005465987 systemd[1]: Stopped multipathd container.
Oct  2 07:51:14 np0005465987 systemd[1]: Starting multipathd container...
Oct  2 07:51:14 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:51:14 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5bbc1ad3657348d01488dbd03fc3cc195c45f9f9e3d3b74222fae7901789baf/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 07:51:14 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5bbc1ad3657348d01488dbd03fc3cc195c45f9f9e3d3b74222fae7901789baf/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 07:51:14 np0005465987 systemd[1]: Started /usr/bin/podman healthcheck run 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff.
Oct  2 07:51:14 np0005465987 podman[212940]: 2025-10-02 11:51:14.669887053 +0000 UTC m=+0.122025112 container init 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 07:51:14 np0005465987 multipathd[212955]: + sudo -E kolla_set_configs
Oct  2 07:51:14 np0005465987 podman[212940]: 2025-10-02 11:51:14.698155778 +0000 UTC m=+0.150293817 container start 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct  2 07:51:14 np0005465987 podman[212940]: multipathd
Oct  2 07:51:14 np0005465987 systemd[1]: Started multipathd container.
Oct  2 07:51:14 np0005465987 multipathd[212955]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 07:51:14 np0005465987 multipathd[212955]: INFO:__main__:Validating config file
Oct  2 07:51:14 np0005465987 multipathd[212955]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 07:51:14 np0005465987 multipathd[212955]: INFO:__main__:Writing out command to execute
Oct  2 07:51:14 np0005465987 multipathd[212955]: ++ cat /run_command
Oct  2 07:51:14 np0005465987 multipathd[212955]: + CMD='/usr/sbin/multipathd -d'
Oct  2 07:51:14 np0005465987 multipathd[212955]: + ARGS=
Oct  2 07:51:14 np0005465987 multipathd[212955]: + sudo kolla_copy_cacerts
Oct  2 07:51:14 np0005465987 multipathd[212955]: + [[ ! -n '' ]]
Oct  2 07:51:14 np0005465987 multipathd[212955]: + . kolla_extend_start
Oct  2 07:51:14 np0005465987 multipathd[212955]: Running command: '/usr/sbin/multipathd -d'
Oct  2 07:51:14 np0005465987 multipathd[212955]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  2 07:51:14 np0005465987 multipathd[212955]: + umask 0022
Oct  2 07:51:14 np0005465987 multipathd[212955]: + exec /usr/sbin/multipathd -d
Oct  2 07:51:14 np0005465987 multipathd[212955]: 3943.409172 | --------start up--------
Oct  2 07:51:14 np0005465987 multipathd[212955]: 3943.409193 | read /etc/multipath.conf
Oct  2 07:51:14 np0005465987 multipathd[212955]: 3943.414227 | path checkers start up
Oct  2 07:51:14 np0005465987 podman[212962]: 2025-10-02 11:51:14.794444063 +0000 UTC m=+0.086593043 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Oct  2 07:51:14 np0005465987 systemd[1]: 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff-3033950c4ee03f5e.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 07:51:14 np0005465987 systemd[1]: 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff-3033950c4ee03f5e.service: Failed with result 'exit-code'.
Oct  2 07:51:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:14.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:51:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:15.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:51:15 np0005465987 python3.9[213145]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:16 np0005465987 podman[213270]: 2025-10-02 11:51:16.781270747 +0000 UTC m=+0.091605990 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 07:51:16 np0005465987 podman[213269]: 2025-10-02 11:51:16.797678681 +0000 UTC m=+0.108003324 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 07:51:16 np0005465987 python3.9[213334]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 07:51:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:16.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:17.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:17 np0005465987 python3.9[213493]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct  2 07:51:17 np0005465987 kernel: Key type psk registered
Oct  2 07:51:18 np0005465987 python3.9[213655]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:18.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:19 np0005465987 python3.9[213778]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405878.2211833-2720-106785614328818/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:19.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:20 np0005465987 python3.9[213930]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:51:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:20.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:51:21 np0005465987 python3.9[214082]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:51:21 np0005465987 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  2 07:51:21 np0005465987 systemd[1]: Stopped Load Kernel Modules.
Oct  2 07:51:21 np0005465987 systemd[1]: Stopping Load Kernel Modules...
Oct  2 07:51:21 np0005465987 systemd[1]: Starting Load Kernel Modules...
Oct  2 07:51:21 np0005465987 systemd[1]: Finished Load Kernel Modules.
Oct  2 07:51:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:21.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:22 np0005465987 python3.9[214238]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:51:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:22.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:23 np0005465987 python3.9[214322]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:51:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:23.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:24.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:25.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:26.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:51:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:27.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:51:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:51:27.916 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:51:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:51:27.918 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:51:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:51:27.918 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:51:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:28.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:29.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:30 np0005465987 systemd[1]: Reloading.
Oct  2 07:51:30 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:51:30 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:51:30 np0005465987 systemd[1]: Reloading.
Oct  2 07:51:30 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:51:30 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:51:30 np0005465987 systemd-logind[794]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  2 07:51:30 np0005465987 systemd-logind[794]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  2 07:51:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:30.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:30 np0005465987 lvm[214436]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 07:51:30 np0005465987 lvm[214436]: VG ceph_vg0 finished
Oct  2 07:51:31 np0005465987 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:51:31 np0005465987 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:51:31 np0005465987 systemd[1]: Reloading.
Oct  2 07:51:31 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:51:31 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:51:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:31 np0005465987 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:51:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:31.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:51:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:32.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:51:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:33.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:33 np0005465987 podman[215625]: 2025-10-02 11:51:33.87657606 +0000 UTC m=+0.081998000 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 07:51:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:34.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:35 np0005465987 python3.9[215795]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:35.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:35 np0005465987 python3.9[215945]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:51:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:51:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:36.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:51:37 np0005465987 python3.9[216101]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:37 np0005465987 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:51:37 np0005465987 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:51:37 np0005465987 systemd[1]: man-db-cache-update.service: Consumed 2.215s CPU time.
Oct  2 07:51:37 np0005465987 systemd[1]: run-r312e82f4d4b94d28b3a4fa7a7168707a.service: Deactivated successfully.
Oct  2 07:51:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:51:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:37.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:51:38 np0005465987 python3.9[216254]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:51:38 np0005465987 systemd[1]: Reloading.
Oct  2 07:51:38 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:51:38 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:51:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:38.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:39.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:39 np0005465987 python3.9[216440]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:51:40 np0005465987 network[216457]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:51:40 np0005465987 network[216458]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:51:40 np0005465987 network[216459]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:51:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:51:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:40.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:51:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:41.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:42.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:43.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:51:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:44.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:51:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:45.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:45 np0005465987 podman[216709]: 2025-10-02 11:51:45.809485635 +0000 UTC m=+0.058558466 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 07:51:46 np0005465987 python3.9[216756]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.880825) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405906880889, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1635, "num_deletes": 251, "total_data_size": 4069337, "memory_usage": 4123824, "flush_reason": "Manual Compaction"}
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405906900031, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 2678237, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15799, "largest_seqno": 17429, "table_properties": {"data_size": 2671331, "index_size": 4041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13830, "raw_average_key_size": 19, "raw_value_size": 2657679, "raw_average_value_size": 3785, "num_data_blocks": 182, "num_entries": 702, "num_filter_entries": 702, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405751, "oldest_key_time": 1759405751, "file_creation_time": 1759405906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 19240 microseconds, and 5790 cpu microseconds.
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.900075) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 2678237 bytes OK
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.900093) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.904366) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.904378) EVENT_LOG_v1 {"time_micros": 1759405906904375, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.904394) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 4061877, prev total WAL file size 4061877, number of live WAL files 2.
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.905266) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(2615KB)], [30(7484KB)]
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405906905325, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 10342419, "oldest_snapshot_seqno": -1}
Oct  2 07:51:46 np0005465987 python3.9[216983]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4250 keys, 8313734 bytes, temperature: kUnknown
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405906951347, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 8313734, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8283674, "index_size": 18302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 105075, "raw_average_key_size": 24, "raw_value_size": 8205119, "raw_average_value_size": 1930, "num_data_blocks": 769, "num_entries": 4250, "num_filter_entries": 4250, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759405906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.951610) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8313734 bytes
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.953639) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 224.3 rd, 180.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.3 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(7.0) write-amplify(3.1) OK, records in: 4767, records dropped: 517 output_compression: NoCompression
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.953659) EVENT_LOG_v1 {"time_micros": 1759405906953650, "job": 16, "event": "compaction_finished", "compaction_time_micros": 46109, "compaction_time_cpu_micros": 17491, "output_level": 6, "num_output_files": 1, "total_output_size": 8313734, "num_input_records": 4767, "num_output_records": 4250, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405906954585, "job": 16, "event": "table_file_deletion", "file_number": 32}
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405906956186, "job": 16, "event": "table_file_deletion", "file_number": 30}
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.905171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.956573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.956579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.956581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.956583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:51:46 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:46.956585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:51:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:46.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:47 np0005465987 podman[217028]: 2025-10-02 11:51:47.021447295 +0000 UTC m=+0.056111559 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 07:51:47 np0005465987 podman[217026]: 2025-10-02 11:51:47.0449369 +0000 UTC m=+0.079601114 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct  2 07:51:47 np0005465987 python3.9[217241]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:51:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:47.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:51:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:51:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:51:48 np0005465987 python3.9[217394]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:51:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:51:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:51:49 np0005465987 python3.9[217547]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:51:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:49.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:50 np0005465987 python3.9[217700]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:51:50 np0005465987 python3.9[217853]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:51:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:50.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.094946) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405911094998, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 298, "num_deletes": 254, "total_data_size": 79642, "memory_usage": 86264, "flush_reason": "Manual Compaction"}
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405911247922, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 52315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17434, "largest_seqno": 17727, "table_properties": {"data_size": 50413, "index_size": 130, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4614, "raw_average_key_size": 16, "raw_value_size": 46522, "raw_average_value_size": 166, "num_data_blocks": 6, "num_entries": 280, "num_filter_entries": 280, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405907, "oldest_key_time": 1759405907, "file_creation_time": 1759405911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 153010 microseconds, and 887 cpu microseconds.
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.247964) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 52315 bytes OK
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.247982) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.256667) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.256687) EVENT_LOG_v1 {"time_micros": 1759405911256682, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.256726) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 77434, prev total WAL file size 93407, number of live WAL files 2.
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.257068) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(51KB)], [33(8118KB)]
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405911257142, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 8366049, "oldest_snapshot_seqno": -1}
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 4012 keys, 8018203 bytes, temperature: kUnknown
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405911303397, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 8018203, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7990008, "index_size": 17077, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 101446, "raw_average_key_size": 25, "raw_value_size": 7915695, "raw_average_value_size": 1973, "num_data_blocks": 704, "num_entries": 4012, "num_filter_entries": 4012, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759405911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.304151) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8018203 bytes
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.308499) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.9 rd, 172.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 7.9 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(313.2) write-amplify(153.3) OK, records in: 4530, records dropped: 518 output_compression: NoCompression
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.308541) EVENT_LOG_v1 {"time_micros": 1759405911308523, "job": 18, "event": "compaction_finished", "compaction_time_micros": 46500, "compaction_time_cpu_micros": 18343, "output_level": 6, "num_output_files": 1, "total_output_size": 8018203, "num_input_records": 4530, "num_output_records": 4012, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405911309872, "job": 18, "event": "table_file_deletion", "file_number": 35}
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405911312994, "job": 18, "event": "table_file_deletion", "file_number": 33}
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.256966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.313212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.313218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.313220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.313221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:51:51.313223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:51:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:51.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:51 np0005465987 python3.9[218006]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:51:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:52.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:51:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:53.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:51:53 np0005465987 python3.9[218159]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:54 np0005465987 python3.9[218361]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:54 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:51:54 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:51:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:51:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:51:55 np0005465987 python3.9[218513]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:55.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:55 np0005465987 python3.9[218665]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:56 np0005465987 python3.9[218817]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:51:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:56.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:51:57 np0005465987 python3.9[218969]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:57.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:57 np0005465987 python3.9[219121]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:58 np0005465987 python3.9[219273]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:51:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:59.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:51:59 np0005465987 python3.9[219425]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:51:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:51:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:59.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:52:00 np0005465987 python3.9[219577]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:00 np0005465987 python3.9[219729]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:01 np0005465987 python3.9[219881]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:52:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:01.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:52:02 np0005465987 python3.9[220033]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:02 np0005465987 python3.9[220185]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:52:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:52:03 np0005465987 python3.9[220337]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:03.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:04 np0005465987 podman[220461]: 2025-10-02 11:52:04.163974256 +0000 UTC m=+0.043655802 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:52:04 np0005465987 python3.9[220507]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:05.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:05.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:05 np0005465987 python3.9[220659]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:06 np0005465987 python3.9[220811]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 07:52:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:07.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:07 np0005465987 python3.9[220963]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:52:07 np0005465987 systemd[1]: Reloading.
Oct  2 07:52:08 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:52:08 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:52:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:09.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:09 np0005465987 python3.9[221150]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:09.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:09 np0005465987 python3.9[221303]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:10 np0005465987 python3.9[221456]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:52:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:11.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:52:11 np0005465987 python3.9[221609]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:11.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:12 np0005465987 python3.9[221762]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:12 np0005465987 python3.9[221915]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:13.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:13 np0005465987 python3.9[222068]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:52:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:13.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:52:14 np0005465987 python3.9[222221]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:15.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:15.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:16 np0005465987 podman[222346]: 2025-10-02 11:52:16.277213759 +0000 UTC m=+0.091344322 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:52:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:16 np0005465987 python3.9[222394]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:17.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:17 np0005465987 python3.9[222547]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:17 np0005465987 podman[222549]: 2025-10-02 11:52:17.311869881 +0000 UTC m=+0.058691187 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  2 07:52:17 np0005465987 podman[222548]: 2025-10-02 11:52:17.378624428 +0000 UTC m=+0.127308025 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 07:52:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:52:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:17.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:52:17 np0005465987 python3.9[222743]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:18 np0005465987 python3.9[222895]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:52:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:19.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:52:19 np0005465987 python3.9[223047]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:19.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:20 np0005465987 python3.9[223199]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:21 np0005465987 python3.9[223351]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:52:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:21.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:52:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:21 np0005465987 python3.9[223503]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:21.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:22 np0005465987 python3.9[223655]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:23.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:23 np0005465987 python3.9[223807]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:23.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:24 np0005465987 python3.9[223959]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:24 np0005465987 python3.9[224111]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:25.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:25.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:52:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:27.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:52:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:27.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:52:27.918 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:52:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:52:27.918 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:52:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:52:27.919 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:52:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:29.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:29.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:31.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:31 np0005465987 python3.9[224263]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct  2 07:52:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:31.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:32 np0005465987 python3.9[224416]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:52:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:33.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:33 np0005465987 python3.9[224574]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 07:52:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:33.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:34 np0005465987 systemd-logind[794]: New session 52 of user zuul.
Oct  2 07:52:34 np0005465987 systemd[1]: Started Session 52 of User zuul.
Oct  2 07:52:34 np0005465987 podman[224609]: 2025-10-02 11:52:34.616583491 +0000 UTC m=+0.058268687 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 07:52:34 np0005465987 systemd[1]: session-52.scope: Deactivated successfully.
Oct  2 07:52:34 np0005465987 systemd-logind[794]: Session 52 logged out. Waiting for processes to exit.
Oct  2 07:52:34 np0005465987 systemd-logind[794]: Removed session 52.
Oct  2 07:52:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:35.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:35 np0005465987 python3.9[224778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:52:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:35.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:52:36 np0005465987 python3.9[224899]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405954.9209733-4357-89912010843547/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:36 np0005465987 python3.9[225049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:37.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:37 np0005465987 python3.9[225125]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:37.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:37 np0005465987 python3.9[225275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:38 np0005465987 python3.9[225396]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405957.4419563-4357-103466157580347/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:39.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:39 np0005465987 python3.9[225546]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:39.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:39 np0005465987 python3.9[225667]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405958.8247564-4357-16670880442233/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:40 np0005465987 python3.9[225817]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:41.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:41 np0005465987 python3.9[225938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405960.1187162-4357-161112910297937/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:41.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:42 np0005465987 python3.9[226090]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:43 np0005465987 python3.9[226242]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:52:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:43.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:52:43 np0005465987 python3.9[226394]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:52:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:43.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:44 np0005465987 python3.9[226546]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:45 np0005465987 python3.9[226669]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759405964.0195634-4637-265722922826568/.source _original_basename=.xap3b9fe follow=False checksum=872d3f29167002b8ddd0e7e30a8d69797d0d0d6a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct  2 07:52:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:45.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:46 np0005465987 python3.9[226821]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:52:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:46 np0005465987 podman[226947]: 2025-10-02 11:52:46.716407552 +0000 UTC m=+0.064716342 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 07:52:46 np0005465987 python3.9[226984]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:47.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:47 np0005465987 python3.9[227114]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405966.419535-4714-197055060072486/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:47 np0005465987 podman[227116]: 2025-10-02 11:52:47.514578986 +0000 UTC m=+0.080617212 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 07:52:47 np0005465987 podman[227115]: 2025-10-02 11:52:47.541058963 +0000 UTC m=+0.102751411 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 07:52:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:47.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:48 np0005465987 python3.9[227311]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:48 np0005465987 python3.9[227432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405967.7905958-4759-188236212707244/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:49.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:49.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:50 np0005465987 python3.9[227584]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct  2 07:52:50 np0005465987 python3.9[227736]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 07:52:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:52:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:52:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:51.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:52 np0005465987 python3[227888]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 07:52:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:53.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:53.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:52:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 8322 writes, 33K keys, 8322 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 8322 writes, 1784 syncs, 4.66 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 698 writes, 1065 keys, 698 commit groups, 1.0 writes per commit group, ingest: 0.35 MB, 0.00 MB/s#012Interval WAL: 698 writes, 345 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e08aff610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e08aff610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  2 07:52:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:55.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:57.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:57.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:59.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:52:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:59.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:53:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:01.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:53:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:01.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:02 np0005465987 podman[227902]: 2025-10-02 11:53:02.984881485 +0000 UTC m=+10.785815170 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  2 07:53:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:03.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:03 np0005465987 podman[228156]: 2025-10-02 11:53:03.102742164 +0000 UTC m=+0.023055864 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  2 07:53:03 np0005465987 podman[228156]: 2025-10-02 11:53:03.215498785 +0000 UTC m=+0.135812455 container create ed83303e9abbdf8f6fee69da69f7ffeccd5165da77b6ec99209ae8fa3f8dd41a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=nova_compute_init)
Oct  2 07:53:03 np0005465987 python3[227888]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct  2 07:53:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:53:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:53:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:53:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:53:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:53:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:53:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:53:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:03.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:03 np0005465987 python3.9[228434]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:53:04 np0005465987 podman[228461]: 2025-10-02 11:53:04.886530185 +0000 UTC m=+0.099343398 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 07:53:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:05.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:05 np0005465987 python3.9[228607]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct  2 07:53:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:05.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:06 np0005465987 python3.9[228759]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 07:53:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:53:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:07.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:53:07 np0005465987 python3[228911]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 07:53:07 np0005465987 podman[228948]: 2025-10-02 11:53:07.419506924 +0000 UTC m=+0.020413243 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  2 07:53:07 np0005465987 podman[228948]: 2025-10-02 11:53:07.561574698 +0000 UTC m=+0.162481007 container create d9fa5ae2094c5a56da0e100a11a36dacedb0383ab5535fa68dcf205df49ef9d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 07:53:07 np0005465987 python3[228911]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume 
/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct  2 07:53:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:07.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:08 np0005465987 python3.9[229136]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:53:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:09.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:09 np0005465987 python3.9[229290]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:09.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:10 np0005465987 python3.9[229441]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405989.7288249-5035-104388147155228/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:10 np0005465987 python3.9[229567]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:53:10 np0005465987 systemd[1]: Reloading.
Oct  2 07:53:11 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:53:11 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:53:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:11.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:53:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:53:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:11.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:11 np0005465987 python3.9[229679]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:53:11 np0005465987 systemd[1]: Reloading.
Oct  2 07:53:12 np0005465987 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:53:12 np0005465987 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:53:12 np0005465987 systemd[1]: Starting nova_compute container...
Oct  2 07:53:12 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:53:12 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4dc49a0e0227e665d4c1a8595b86e1ad93f77393357a466269dc049e3f33b07/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  2 07:53:12 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4dc49a0e0227e665d4c1a8595b86e1ad93f77393357a466269dc049e3f33b07/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 07:53:12 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4dc49a0e0227e665d4c1a8595b86e1ad93f77393357a466269dc049e3f33b07/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 07:53:12 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4dc49a0e0227e665d4c1a8595b86e1ad93f77393357a466269dc049e3f33b07/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  2 07:53:12 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4dc49a0e0227e665d4c1a8595b86e1ad93f77393357a466269dc049e3f33b07/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  2 07:53:12 np0005465987 podman[229720]: 2025-10-02 11:53:12.397346837 +0000 UTC m=+0.109195233 container init d9fa5ae2094c5a56da0e100a11a36dacedb0383ab5535fa68dcf205df49ef9d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=nova_compute)
Oct  2 07:53:12 np0005465987 podman[229720]: 2025-10-02 11:53:12.405568274 +0000 UTC m=+0.117416630 container start d9fa5ae2094c5a56da0e100a11a36dacedb0383ab5535fa68dcf205df49ef9d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20251001)
Oct  2 07:53:12 np0005465987 podman[229720]: nova_compute
Oct  2 07:53:12 np0005465987 nova_compute[229736]: + sudo -E kolla_set_configs
Oct  2 07:53:12 np0005465987 systemd[1]: Started nova_compute container.
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Validating config file
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Copying service configuration files
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Deleting /etc/ceph
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Creating directory /etc/ceph
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /etc/ceph
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Writing out command to execute
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 07:53:12 np0005465987 nova_compute[229736]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 07:53:12 np0005465987 nova_compute[229736]: ++ cat /run_command
Oct  2 07:53:12 np0005465987 nova_compute[229736]: + CMD=nova-compute
Oct  2 07:53:12 np0005465987 nova_compute[229736]: + ARGS=
Oct  2 07:53:12 np0005465987 nova_compute[229736]: + sudo kolla_copy_cacerts
Oct  2 07:53:12 np0005465987 nova_compute[229736]: + [[ ! -n '' ]]
Oct  2 07:53:12 np0005465987 nova_compute[229736]: + . kolla_extend_start
Oct  2 07:53:12 np0005465987 nova_compute[229736]: Running command: 'nova-compute'
Oct  2 07:53:12 np0005465987 nova_compute[229736]: + echo 'Running command: '\''nova-compute'\'''
Oct  2 07:53:12 np0005465987 nova_compute[229736]: + umask 0022
Oct  2 07:53:12 np0005465987 nova_compute[229736]: + exec nova-compute
Oct  2 07:53:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:13.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:13.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:14 np0005465987 python3.9[229897]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:53:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:53:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:15.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:53:15 np0005465987 python3.9[230048]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:53:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:15.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:16 np0005465987 python3.9[230198]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:53:16 np0005465987 nova_compute[229736]: 2025-10-02 11:53:16.270 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 07:53:16 np0005465987 nova_compute[229736]: 2025-10-02 11:53:16.271 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 07:53:16 np0005465987 nova_compute[229736]: 2025-10-02 11:53:16.271 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 07:53:16 np0005465987 nova_compute[229736]: 2025-10-02 11:53:16.271 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  2 07:53:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:16 np0005465987 nova_compute[229736]: 2025-10-02 11:53:16.508 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:53:16 np0005465987 nova_compute[229736]: 2025-10-02 11:53:16.531 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:53:16 np0005465987 podman[230302]: 2025-10-02 11:53:16.844386954 +0000 UTC m=+0.060874820 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.041 2 INFO nova.virt.driver [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  2 07:53:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:17.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:17 np0005465987 python3.9[230374]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.259 2 INFO nova.compute.provider_config [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  2 07:53:17 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.280 2 DEBUG oslo_concurrency.lockutils [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.281 2 DEBUG oslo_concurrency.lockutils [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.281 2 DEBUG oslo_concurrency.lockutils [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.281 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.281 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.282 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.282 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.282 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.282 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.282 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.283 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.283 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.283 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.283 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.283 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.284 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.284 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.284 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.284 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.284 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.285 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.285 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.285 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.285 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.285 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.285 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.286 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.286 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.286 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.286 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.286 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.287 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.287 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.287 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.287 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.287 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.288 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.288 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.288 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.288 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.288 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.289 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.289 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.289 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.289 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.289 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.289 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.290 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.290 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.290 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.290 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.290 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.291 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.291 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.291 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.291 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.291 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.292 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.292 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.292 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.292 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.292 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.293 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.293 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.293 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.293 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.293 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.293 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.293 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.294 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.294 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.294 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.294 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.294 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.294 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.294 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.295 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.295 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.295 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.295 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.295 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.295 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.295 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.296 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.296 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.296 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.296 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.296 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.296 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.296 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.296 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.297 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.297 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.297 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.297 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.297 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.297 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.297 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.298 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.298 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.298 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.298 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.298 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.298 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.298 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.298 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.299 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.299 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.299 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.299 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.299 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.299 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.299 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.300 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.300 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.300 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.300 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.300 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.300 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.300 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.301 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.301 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.301 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.301 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.301 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.301 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.301 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.302 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.302 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.302 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.302 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.302 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.302 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.302 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.302 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.303 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.303 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.303 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.303 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.303 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.303 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.303 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.304 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.304 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.304 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.304 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.304 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.304 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.304 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.304 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.305 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.305 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.305 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.305 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.305 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.305 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.306 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.306 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.306 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.306 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.306 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.306 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.306 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.307 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.307 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.307 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.307 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.307 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.307 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.308 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.308 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.308 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.308 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.308 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.308 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.308 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.309 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.309 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.309 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.309 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.309 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.309 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.309 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.310 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.310 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.310 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.310 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.310 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.310 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.310 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.311 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.311 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.311 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.311 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.311 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.311 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.311 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.312 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.312 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.312 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.312 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.312 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.312 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.313 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.313 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.313 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.313 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.313 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.313 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.313 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.314 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.314 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.314 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.314 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.314 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.314 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.314 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.315 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.315 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.315 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.315 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.315 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.315 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.315 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.315 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.316 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.316 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.316 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.316 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.316 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.316 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.316 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.317 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.317 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.317 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.317 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.317 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.317 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.317 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.318 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.318 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.318 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.318 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.318 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.318 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.318 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.319 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.319 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.319 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.319 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.319 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.319 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.319 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.320 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.320 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.320 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.320 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.320 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.320 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.321 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.321 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.321 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.321 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.321 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.321 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.322 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.322 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.322 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.322 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.322 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.322 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.322 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.323 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.323 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.323 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.323 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.323 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.323 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.323 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.324 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.324 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.324 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.324 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.324 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.324 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.324 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.325 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.325 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.325 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.325 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.325 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.325 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.325 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.326 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.326 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.326 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.326 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.326 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.326 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.326 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.327 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.327 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.327 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.327 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.327 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.327 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.328 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.328 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.328 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.328 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.328 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.328 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.328 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.329 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.329 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.329 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.329 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.329 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.329 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.329 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.330 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.330 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.330 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.330 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.330 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.330 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.330 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.331 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.331 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.331 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.331 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.331 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.331 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.331 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.332 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.332 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.332 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.332 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.332 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.332 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.333 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.333 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.333 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.333 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.333 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.333 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.334 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.334 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.334 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.334 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.334 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.334 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.335 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.335 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.335 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.335 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.335 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.335 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.336 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.336 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.336 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.336 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.337 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.337 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.337 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.337 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.337 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.337 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.338 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.338 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.338 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.338 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.338 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.338 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.338 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.339 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.339 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.339 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.339 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.339 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.339 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.340 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.340 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.340 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.340 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.340 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.341 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.341 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.341 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.341 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.341 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.341 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.342 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.342 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.342 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.342 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.342 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.342 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.343 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.343 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.343 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.343 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.343 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.343 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.344 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.344 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.344 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.344 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.344 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.344 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.344 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.345 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.345 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.345 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.345 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.345 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.345 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.345 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.346 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.346 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.346 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.346 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.346 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.347 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.347 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.347 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.347 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.347 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.347 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.348 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.348 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.348 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.348 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.348 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.348 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.349 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.349 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.349 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.349 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.349 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.349 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.349 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.350 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.350 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.350 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.350 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.350 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.350 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.350 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.351 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.351 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.351 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.351 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.351 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.351 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.351 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.352 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.352 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.352 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.352 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.352 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.353 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.353 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.353 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.353 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.353 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.353 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.353 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.354 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.354 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.354 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.354 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.354 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.354 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.355 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.355 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.355 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.355 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.355 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.355 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.356 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.356 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.356 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.356 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.356 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.357 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.357 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.357 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.357 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.357 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.357 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.358 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.358 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.358 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.358 2 WARNING oslo_config.cfg [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  2 07:53:17 np0005465987 nova_compute[229736]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  2 07:53:17 np0005465987 nova_compute[229736]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  2 07:53:17 np0005465987 nova_compute[229736]: and ``live_migration_inbound_addr`` respectively.
Oct  2 07:53:17 np0005465987 nova_compute[229736]: ).  Its value may be silently ignored in the future.#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.358 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.359 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.359 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.359 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.359 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.360 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.360 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.360 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.360 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.360 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.360 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.361 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.361 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.361 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.361 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.361 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.361 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.362 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.362 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.rbd_secret_uuid        = fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.362 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.362 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.362 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.362 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.363 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.363 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.363 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.363 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.363 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.363 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.363 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.364 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.364 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.364 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.364 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.364 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.364 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.365 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.365 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.365 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.365 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.365 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.366 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.366 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.366 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.366 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.366 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.366 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.366 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.367 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.367 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.367 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.367 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.367 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.367 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.368 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.368 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.368 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.368 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.368 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.368 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.369 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.369 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.369 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.369 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.369 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.369 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.369 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.370 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.370 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.370 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.370 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.370 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.371 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.371 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.371 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.371 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.371 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.371 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.372 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.372 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.372 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.372 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.372 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.372 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.373 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.373 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.373 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.373 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.373 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.374 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.374 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.374 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.374 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.374 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.375 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.375 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.375 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.375 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.375 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.376 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.376 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.376 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.376 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.376 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.376 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.376 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.377 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.377 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.377 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.377 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.377 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.377 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.378 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.378 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.378 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.378 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.378 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.379 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.379 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.379 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.379 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.379 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.380 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.380 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.380 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.380 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.380 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.380 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.381 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.381 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.381 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.381 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.381 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.381 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.381 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.382 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.382 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.382 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.382 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.382 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.383 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.383 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.383 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.383 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.383 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.383 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.383 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.384 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.384 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.384 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.384 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.384 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.384 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.385 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.385 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.385 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.385 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.385 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.385 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.386 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.386 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.386 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.386 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.386 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.386 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.386 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.387 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.387 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.387 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.387 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.387 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.387 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.387 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.388 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.388 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.388 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.388 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.388 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.388 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.389 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.389 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.389 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.389 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.389 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.389 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.389 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.390 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.390 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.390 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.390 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.390 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.390 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.390 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.391 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.391 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.391 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.391 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.391 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.391 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.392 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.392 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.392 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.392 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.392 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.392 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.392 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.393 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.393 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.393 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.393 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.393 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.393 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.394 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.394 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.394 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.394 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.394 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.394 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.394 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.395 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.395 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.395 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.395 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.395 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.395 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.395 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.396 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.396 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.396 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.396 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.396 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.396 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.396 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.397 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.397 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.397 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.397 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.397 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.397 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.397 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.397 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.398 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.398 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.398 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.398 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.398 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.398 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.398 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.399 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.399 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.399 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.399 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.399 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.400 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.400 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.400 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.400 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.400 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.400 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.400 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.401 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.401 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.401 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.401 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.401 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.401 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.401 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.402 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.402 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.402 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.402 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.402 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.402 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.403 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.403 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.403 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.403 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.403 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.403 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.404 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.404 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.404 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.404 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.404 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.405 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.405 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.405 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.405 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.405 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.405 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.406 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.406 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.406 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.406 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.407 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.407 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.407 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.407 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.407 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.407 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.407 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.408 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.408 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.408 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.408 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.408 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.408 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.409 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.409 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.409 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.409 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.409 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.409 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.409 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.410 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.410 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.410 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.410 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.410 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.411 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.411 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.411 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.411 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.411 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.411 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.412 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.412 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.412 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.412 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.412 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.412 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.413 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.413 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.413 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.413 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.414 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.414 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.414 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.414 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.414 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.415 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.415 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.415 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.415 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.415 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.415 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.415 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.416 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.416 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.416 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.416 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.416 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.416 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.417 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.417 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.417 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.417 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.417 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.417 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.417 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.418 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.418 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.418 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.418 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.418 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.418 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.418 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.419 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.419 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.419 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.419 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.419 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.419 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.420 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.420 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.420 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.420 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.420 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.420 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.421 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.421 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.421 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.421 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.421 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.421 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.421 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.422 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.422 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.422 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.422 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.422 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.422 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.422 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.423 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.423 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.423 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.423 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.423 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.423 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.423 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.424 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.424 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.424 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.424 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.424 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.424 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.424 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.425 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.425 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.425 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.425 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.425 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.425 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.426 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.426 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.426 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.426 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.426 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.426 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.426 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.426 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.427 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.427 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.427 2 DEBUG oslo_service.service [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.429 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.490 2 DEBUG nova.virt.libvirt.host [None req-169ccc16-23ba-45df-862a-6fd323ec2032 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.491 2 DEBUG nova.virt.libvirt.host [None req-169ccc16-23ba-45df-862a-6fd323ec2032 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.491 2 DEBUG nova.virt.libvirt.host [None req-169ccc16-23ba-45df-862a-6fd323ec2032 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.491 2 DEBUG nova.virt.libvirt.host [None req-169ccc16-23ba-45df-862a-6fd323ec2032 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  2 07:53:17 np0005465987 systemd[1]: Starting libvirt QEMU daemon...
Oct  2 07:53:17 np0005465987 systemd[1]: Started libvirt QEMU daemon.
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.602 2 DEBUG nova.virt.libvirt.host [None req-169ccc16-23ba-45df-862a-6fd323ec2032 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fbb0e4ca340> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.605 2 DEBUG nova.virt.libvirt.host [None req-169ccc16-23ba-45df-862a-6fd323ec2032 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fbb0e4ca340> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.605 2 INFO nova.virt.libvirt.driver [None req-169ccc16-23ba-45df-862a-6fd323ec2032 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.647 2 WARNING nova.virt.libvirt.driver [None req-169ccc16-23ba-45df-862a-6fd323ec2032 - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Oct  2 07:53:17 np0005465987 nova_compute[229736]: 2025-10-02 11:53:17.647 2 DEBUG nova.virt.libvirt.volume.mount [None req-169ccc16-23ba-45df-862a-6fd323ec2032 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  2 07:53:17 np0005465987 podman[230482]: 2025-10-02 11:53:17.670576461 +0000 UTC m=+0.070030492 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  2 07:53:17 np0005465987 podman[230468]: 2025-10-02 11:53:17.757599122 +0000 UTC m=+0.155130061 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller)
Oct  2 07:53:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:17.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:18 np0005465987 python3.9[230643]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:53:18 np0005465987 systemd[1]: Stopping nova_compute container...
Oct  2 07:53:18 np0005465987 nova_compute[229736]: 2025-10-02 11:53:18.278 2 DEBUG oslo_concurrency.lockutils [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:53:18 np0005465987 nova_compute[229736]: 2025-10-02 11:53:18.279 2 DEBUG oslo_concurrency.lockutils [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 07:53:18 np0005465987 nova_compute[229736]: 2025-10-02 11:53:18.279 2 DEBUG oslo_concurrency.lockutils [None req-ea158acb-f18c-4bd1-ae12-dc3fdd346005 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:53:18 np0005465987 virtqemud[230442]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct  2 07:53:18 np0005465987 virtqemud[230442]: hostname: compute-1
Oct  2 07:53:18 np0005465987 virtqemud[230442]: End of file while reading data: Input/output error
Oct  2 07:53:18 np0005465987 systemd[1]: libpod-d9fa5ae2094c5a56da0e100a11a36dacedb0383ab5535fa68dcf205df49ef9d7.scope: Deactivated successfully.
Oct  2 07:53:18 np0005465987 systemd[1]: libpod-d9fa5ae2094c5a56da0e100a11a36dacedb0383ab5535fa68dcf205df49ef9d7.scope: Consumed 3.658s CPU time.
Oct  2 07:53:18 np0005465987 podman[230655]: 2025-10-02 11:53:18.941886847 +0000 UTC m=+0.695210397 container died d9fa5ae2094c5a56da0e100a11a36dacedb0383ab5535fa68dcf205df49ef9d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 07:53:18 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9fa5ae2094c5a56da0e100a11a36dacedb0383ab5535fa68dcf205df49ef9d7-userdata-shm.mount: Deactivated successfully.
Oct  2 07:53:18 np0005465987 systemd[1]: var-lib-containers-storage-overlay-d4dc49a0e0227e665d4c1a8595b86e1ad93f77393357a466269dc049e3f33b07-merged.mount: Deactivated successfully.
Oct  2 07:53:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:53:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:19.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:53:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:19.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:20 np0005465987 podman[230655]: 2025-10-02 11:53:20.388054536 +0000 UTC m=+2.141378096 container cleanup d9fa5ae2094c5a56da0e100a11a36dacedb0383ab5535fa68dcf205df49ef9d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:53:20 np0005465987 podman[230655]: nova_compute
Oct  2 07:53:20 np0005465987 podman[230685]: nova_compute
Oct  2 07:53:20 np0005465987 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct  2 07:53:20 np0005465987 systemd[1]: Stopped nova_compute container.
Oct  2 07:53:20 np0005465987 systemd[1]: Starting nova_compute container...
Oct  2 07:53:20 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:53:20 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4dc49a0e0227e665d4c1a8595b86e1ad93f77393357a466269dc049e3f33b07/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  2 07:53:20 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4dc49a0e0227e665d4c1a8595b86e1ad93f77393357a466269dc049e3f33b07/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 07:53:20 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4dc49a0e0227e665d4c1a8595b86e1ad93f77393357a466269dc049e3f33b07/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 07:53:20 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4dc49a0e0227e665d4c1a8595b86e1ad93f77393357a466269dc049e3f33b07/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  2 07:53:20 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4dc49a0e0227e665d4c1a8595b86e1ad93f77393357a466269dc049e3f33b07/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  2 07:53:20 np0005465987 podman[230698]: 2025-10-02 11:53:20.727234681 +0000 UTC m=+0.194962589 container init d9fa5ae2094c5a56da0e100a11a36dacedb0383ab5535fa68dcf205df49ef9d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm)
Oct  2 07:53:20 np0005465987 podman[230698]: 2025-10-02 11:53:20.733243157 +0000 UTC m=+0.200971035 container start d9fa5ae2094c5a56da0e100a11a36dacedb0383ab5535fa68dcf205df49ef9d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 07:53:20 np0005465987 nova_compute[230713]: + sudo -E kolla_set_configs
Oct  2 07:53:20 np0005465987 podman[230698]: nova_compute
Oct  2 07:53:20 np0005465987 systemd[1]: Started nova_compute container.
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Validating config file
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Copying service configuration files
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Deleting /etc/ceph
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Creating directory /etc/ceph
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /etc/ceph
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Writing out command to execute
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 07:53:20 np0005465987 nova_compute[230713]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 07:53:20 np0005465987 nova_compute[230713]: ++ cat /run_command
Oct  2 07:53:20 np0005465987 nova_compute[230713]: + CMD=nova-compute
Oct  2 07:53:20 np0005465987 nova_compute[230713]: + ARGS=
Oct  2 07:53:20 np0005465987 nova_compute[230713]: + sudo kolla_copy_cacerts
Oct  2 07:53:20 np0005465987 nova_compute[230713]: + [[ ! -n '' ]]
Oct  2 07:53:20 np0005465987 nova_compute[230713]: + . kolla_extend_start
Oct  2 07:53:20 np0005465987 nova_compute[230713]: + echo 'Running command: '\''nova-compute'\'''
Oct  2 07:53:20 np0005465987 nova_compute[230713]: Running command: 'nova-compute'
Oct  2 07:53:20 np0005465987 nova_compute[230713]: + umask 0022
Oct  2 07:53:20 np0005465987 nova_compute[230713]: + exec nova-compute
Oct  2 07:53:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000055s ======
Oct  2 07:53:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:21.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Oct  2 07:53:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:21.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:22 np0005465987 python3.9[230877]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  2 07:53:22 np0005465987 nova_compute[230713]: 2025-10-02 11:53:22.776 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 07:53:22 np0005465987 nova_compute[230713]: 2025-10-02 11:53:22.776 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 07:53:22 np0005465987 nova_compute[230713]: 2025-10-02 11:53:22.777 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 07:53:22 np0005465987 nova_compute[230713]: 2025-10-02 11:53:22.777 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  2 07:53:22 np0005465987 systemd[1]: Started libpod-conmon-ed83303e9abbdf8f6fee69da69f7ffeccd5165da77b6ec99209ae8fa3f8dd41a.scope.
Oct  2 07:53:22 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:53:22 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8054dbb925836782c0d37c005e8242d1c2c54582900b27c9583545d3923047de/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct  2 07:53:22 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8054dbb925836782c0d37c005e8242d1c2c54582900b27c9583545d3923047de/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  2 07:53:22 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8054dbb925836782c0d37c005e8242d1c2c54582900b27c9583545d3923047de/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct  2 07:53:22 np0005465987 podman[230905]: 2025-10-02 11:53:22.888140192 +0000 UTC m=+0.167492141 container init ed83303e9abbdf8f6fee69da69f7ffeccd5165da77b6ec99209ae8fa3f8dd41a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 07:53:22 np0005465987 podman[230905]: 2025-10-02 11:53:22.901511781 +0000 UTC m=+0.180863730 container start ed83303e9abbdf8f6fee69da69f7ffeccd5165da77b6ec99209ae8fa3f8dd41a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=edpm, container_name=nova_compute_init, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 07:53:22 np0005465987 python3.9[230877]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct  2 07:53:22 np0005465987 nova_compute[230713]: 2025-10-02 11:53:22.936 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:53:22 np0005465987 nova_compute[230713]: 2025-10-02 11:53:22.952 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Applying nova statedir ownership
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct  2 07:53:22 np0005465987 nova_compute_init[230924]: INFO:nova_statedir:Nova statedir ownership complete
Oct  2 07:53:22 np0005465987 systemd[1]: libpod-ed83303e9abbdf8f6fee69da69f7ffeccd5165da77b6ec99209ae8fa3f8dd41a.scope: Deactivated successfully.
Oct  2 07:53:23 np0005465987 podman[230941]: 2025-10-02 11:53:23.030241651 +0000 UTC m=+0.047929062 container died ed83303e9abbdf8f6fee69da69f7ffeccd5165da77b6ec99209ae8fa3f8dd41a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251001)
Oct  2 07:53:23 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed83303e9abbdf8f6fee69da69f7ffeccd5165da77b6ec99209ae8fa3f8dd41a-userdata-shm.mount: Deactivated successfully.
Oct  2 07:53:23 np0005465987 systemd[1]: var-lib-containers-storage-overlay-8054dbb925836782c0d37c005e8242d1c2c54582900b27c9583545d3923047de-merged.mount: Deactivated successfully.
Oct  2 07:53:23 np0005465987 podman[230941]: 2025-10-02 11:53:23.101981679 +0000 UTC m=+0.119669100 container cleanup ed83303e9abbdf8f6fee69da69f7ffeccd5165da77b6ec99209ae8fa3f8dd41a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible)
Oct  2 07:53:23 np0005465987 systemd[1]: libpod-conmon-ed83303e9abbdf8f6fee69da69f7ffeccd5165da77b6ec99209ae8fa3f8dd41a.scope: Deactivated successfully.
Oct  2 07:53:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:23.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.640 2 INFO nova.virt.driver [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  2 07:53:23 np0005465987 systemd[1]: session-50.scope: Deactivated successfully.
Oct  2 07:53:23 np0005465987 systemd[1]: session-50.scope: Consumed 2min 43.802s CPU time.
Oct  2 07:53:23 np0005465987 systemd-logind[794]: Session 50 logged out. Waiting for processes to exit.
Oct  2 07:53:23 np0005465987 systemd-logind[794]: Removed session 50.
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.781 2 INFO nova.compute.provider_config [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.792 2 DEBUG oslo_concurrency.lockutils [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.792 2 DEBUG oslo_concurrency.lockutils [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.792 2 DEBUG oslo_concurrency.lockutils [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.793 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.793 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.793 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.793 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.793 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.794 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.794 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.794 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.794 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.794 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.795 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.795 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.795 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.795 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.795 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.796 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.796 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.796 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.796 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.796 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.797 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.797 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.797 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.797 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.797 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.798 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.798 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.798 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.798 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.799 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.799 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.799 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.799 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.799 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.800 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.800 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.800 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.800 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.800 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.801 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.801 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.801 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.801 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.802 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.802 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.802 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.802 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.802 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.803 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.803 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.803 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.803 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.803 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.804 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.804 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.804 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.804 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.804 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.805 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.805 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.805 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.805 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.805 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.805 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.806 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.806 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.806 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.806 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.806 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.807 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.807 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.807 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.807 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.807 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.808 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.808 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.808 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.808 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.808 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.809 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.809 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.809 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.809 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.809 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.810 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.810 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.810 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.810 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.810 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.810 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.811 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.811 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.811 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.811 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.811 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.812 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.812 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.812 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.812 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.812 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.813 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.813 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.813 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.813 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.813 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.813 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.814 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.814 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.814 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.814 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.814 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.815 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.815 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.815 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.815 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.815 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.817 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.818 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.818 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.819 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.819 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.819 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.820 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.820 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.820 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.820 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.821 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.821 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.821 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.821 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.821 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.822 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.822 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.822 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.822 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.823 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.823 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.823 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.823 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.823 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.824 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.824 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.824 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.824 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.824 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.825 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.825 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.825 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.825 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.826 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.826 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.826 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.826 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.827 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.827 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.827 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.827 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.828 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.828 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.828 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.828 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.828 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.829 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.829 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.829 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.829 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.829 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.830 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.830 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.830 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.830 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.830 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.831 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.831 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.831 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.831 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.832 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.832 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.832 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.832 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.832 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.833 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.833 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.833 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.833 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.833 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.834 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.834 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.834 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.834 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.834 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.835 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.835 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.835 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.835 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.836 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.836 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.836 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.836 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.837 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.837 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.837 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.837 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.838 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.838 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.838 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.838 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.838 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.839 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.839 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.839 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.839 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.839 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.840 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.840 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.840 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.840 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.841 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.841 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.841 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.841 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.841 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.842 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.842 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.842 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.842 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.842 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.843 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.843 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.843 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.843 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.843 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.844 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.844 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.844 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.844 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.844 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.845 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.845 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.845 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.845 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.845 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.845 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.845 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.846 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.846 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.846 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.846 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.846 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.846 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.846 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.846 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.847 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.847 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.847 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.847 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.847 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.847 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.848 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.848 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.848 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.848 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.848 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.848 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.848 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.849 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.849 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.849 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.849 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.849 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.849 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.849 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.850 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.850 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.850 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.850 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.850 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.850 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.850 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.851 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.851 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.851 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.851 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.851 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.851 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.851 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.852 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.852 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.852 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.852 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.852 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.852 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.852 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.853 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.853 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.853 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.853 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.853 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.853 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.853 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.853 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.854 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.854 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.854 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.854 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.854 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.854 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.854 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.855 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.855 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.855 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.855 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.855 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.855 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.856 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.856 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.856 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.856 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.856 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.856 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.856 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.857 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.857 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.857 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.857 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.857 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.857 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.857 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.858 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.858 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.858 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.858 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.858 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.858 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.858 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.859 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.859 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.859 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.859 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.859 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.859 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.859 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.859 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.860 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.860 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.860 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.860 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.860 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.860 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.860 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.861 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.861 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.861 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.861 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.861 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.861 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.862 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.862 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.862 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.862 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.862 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.862 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.862 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.863 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.863 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.863 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.863 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.863 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.863 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.863 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.864 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.864 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.864 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.864 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.864 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.864 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.864 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.864 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.865 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.865 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.865 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.865 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.865 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.865 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.865 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.866 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.866 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.866 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.866 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.866 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.866 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.866 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.867 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.867 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.867 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.867 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.867 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.867 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.868 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.868 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.868 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.868 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.868 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.868 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.868 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.869 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.869 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.869 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.869 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.869 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.869 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.870 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.870 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.870 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.870 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.870 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.870 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.870 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.870 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.871 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.871 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.871 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.871 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.871 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.871 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.871 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.872 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.872 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.872 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.872 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.872 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.872 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.873 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.873 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.873 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.873 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.873 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.873 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.874 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.874 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.874 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.874 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.874 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.874 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.874 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.874 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.875 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.875 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.875 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.875 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:53:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:23.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.875 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.876 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.876 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.876 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.876 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.876 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.876 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.877 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.877 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.877 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.877 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.877 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.877 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.877 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.878 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.878 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.878 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.878 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.878 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.878 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.879 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.879 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.879 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.879 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.879 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.879 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.879 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.880 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.880 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.880 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.880 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.880 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.880 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.880 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.881 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.881 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.881 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.881 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.881 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.881 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.881 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.882 2 WARNING oslo_config.cfg [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  2 07:53:23 np0005465987 nova_compute[230713]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  2 07:53:23 np0005465987 nova_compute[230713]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  2 07:53:23 np0005465987 nova_compute[230713]: and ``live_migration_inbound_addr`` respectively.
Oct  2 07:53:23 np0005465987 nova_compute[230713]: ).  Its value may be silently ignored in the future.#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.882 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.882 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.882 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.882 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.882 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.883 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.883 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.883 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.883 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.883 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.883 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.883 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.884 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.884 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.884 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.884 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.884 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.884 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.885 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.rbd_secret_uuid        = fd4c5763-22d1-50ea-ad0b-96a3dc3040b2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.885 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.885 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.885 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.885 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.885 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.885 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.886 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.886 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.886 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.886 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.886 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.887 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.887 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.887 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.887 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.887 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.887 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.888 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.888 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.888 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.888 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.888 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.888 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.888 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.889 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.889 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.889 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.889 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.889 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.889 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.890 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.890 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.890 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.890 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.890 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.890 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.890 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.891 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.891 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.891 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.891 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.891 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.891 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.892 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.892 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.892 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.892 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.892 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.892 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.892 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.893 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.893 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.893 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.893 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.893 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.893 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.893 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.894 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.894 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.894 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.894 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.894 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.894 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.895 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.895 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.895 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.895 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.895 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.895 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.896 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.896 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.896 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.896 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.896 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.896 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.896 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.897 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.897 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.897 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.897 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.897 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.897 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.897 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.897 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.898 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.898 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.898 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.898 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.898 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.898 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.899 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.899 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.899 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.899 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.899 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.899 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.899 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.899 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.900 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.900 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.900 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.900 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.900 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.900 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.901 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.901 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.901 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.901 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.901 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.901 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.901 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.901 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.902 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.902 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.902 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.902 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.902 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.902 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.902 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.903 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.903 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.903 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.903 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.903 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.904 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.904 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.904 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.904 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.904 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.904 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.904 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.905 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.905 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.905 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.905 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.905 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.905 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.905 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.906 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.906 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.906 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.906 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.906 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.906 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.906 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.907 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.907 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.907 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.907 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.907 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.907 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.908 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.908 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.908 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.908 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.908 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.908 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.908 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.909 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.909 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.909 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.909 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.909 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.909 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.910 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.910 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.910 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.910 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.910 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.910 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.910 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.911 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.911 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.911 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.911 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.911 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.911 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.911 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.912 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.912 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.912 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.912 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.912 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.912 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.913 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.913 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.913 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.913 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.913 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.913 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.913 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.913 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.914 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.914 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.914 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.914 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.914 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.914 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.914 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.915 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.915 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.915 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.915 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.915 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.915 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.915 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.916 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.916 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.916 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.916 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.916 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.916 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.916 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.916 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.917 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.917 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.917 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.917 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.917 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.917 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.917 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.918 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.918 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.918 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.918 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.918 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.918 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.918 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.919 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.919 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.919 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.919 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.919 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.919 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.920 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.920 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.920 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.920 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.920 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.920 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.921 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.921 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.921 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.921 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.921 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.921 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.921 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.922 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.922 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.922 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.922 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.922 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.922 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.923 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.923 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.923 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.923 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.923 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.923 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.924 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.924 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.924 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.924 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.924 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.924 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.924 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.925 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.925 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.925 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.925 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.925 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.925 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.925 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.926 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.926 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.926 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.926 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.926 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.926 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.926 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.927 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.927 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.927 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.927 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.927 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.927 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.928 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.928 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.928 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.928 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.928 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.928 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.929 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.929 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.929 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.929 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.929 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.929 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.930 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.930 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.930 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.930 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.930 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.930 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.930 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.931 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.931 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.931 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.931 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.931 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.931 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.931 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.932 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.932 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.932 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.932 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.932 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.932 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.932 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.933 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.933 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.933 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.933 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.933 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.933 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.934 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.934 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.934 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.934 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.934 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.935 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.935 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.935 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.935 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.935 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.935 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.936 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.936 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.936 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.936 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.936 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.936 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.936 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.937 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.937 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.937 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.937 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.937 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.937 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.938 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.938 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.938 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.938 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.938 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.938 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.938 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.939 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.939 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.939 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.939 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.939 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.939 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.940 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.940 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.940 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.940 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.940 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.940 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.940 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.941 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.941 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.941 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.941 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.941 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.941 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.941 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.942 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.942 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.942 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.942 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.942 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.942 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.943 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.943 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.943 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.943 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.943 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.943 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.943 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.944 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.944 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.944 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.944 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.944 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.944 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.944 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.944 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.945 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.945 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.945 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.945 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.945 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.945 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.945 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.946 2 DEBUG oslo_service.service [None req-5386022b-c207-4a00-ad7e-d6fadf10997e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.947 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.965 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.966 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.967 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.967 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.980 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7ff22fb3fb20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.982 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7ff22fb3fb20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.983 2 INFO nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Connection event '1' reason 'None'
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.994 2 INFO nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Libvirt host capabilities <capabilities>
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 
Oct  2 07:53:23 np0005465987 nova_compute[230713]:  <host>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    <uuid>1113ba57-9df9-4fb0-a2ed-0ca31cebd82a</uuid>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    <cpu>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <arch>x86_64</arch>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <model>EPYC-Rome-v4</model>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <vendor>AMD</vendor>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <microcode version='16777317'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <signature family='23' model='49' stepping='0'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='x2apic'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='tsc-deadline'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='osxsave'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='hypervisor'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='tsc_adjust'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='spec-ctrl'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='stibp'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='arch-capabilities'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='ssbd'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='cmp_legacy'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='topoext'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='virt-ssbd'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='lbrv'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='tsc-scale'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='vmcb-clean'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='pause-filter'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='pfthreshold'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='svme-addr-chk'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='rdctl-no'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='skip-l1dfl-vmentry'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='mds-no'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <feature name='pschange-mc-no'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <pages unit='KiB' size='4'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <pages unit='KiB' size='2048'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <pages unit='KiB' size='1048576'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    </cpu>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    <power_management>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <suspend_mem/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    </power_management>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    <iommu support='no'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    <migration_features>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <live/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <uri_transports>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:        <uri_transport>tcp</uri_transport>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:        <uri_transport>rdma</uri_transport>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      </uri_transports>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    </migration_features>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    <topology>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <cells num='1'>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:        <cell id='0'>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:          <memory unit='KiB'>7864096</memory>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:          <pages unit='KiB' size='4'>1966024</pages>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:          <pages unit='KiB' size='2048'>0</pages>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:          <distances>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:            <sibling id='0' value='10'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:          </distances>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:          <cpus num='8'>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:          </cpus>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:        </cell>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      </cells>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    </topology>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    <cache>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    </cache>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    <secmodel>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <model>selinux</model>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <doi>0</doi>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    </secmodel>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    <secmodel>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <model>dac</model>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <doi>0</doi>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    </secmodel>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:  </host>
Oct  2 07:53:23 np0005465987 nova_compute[230713]: 
Oct  2 07:53:23 np0005465987 nova_compute[230713]:  <guest>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    <os_type>hvm</os_type>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:    <arch name='i686'>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <wordsize>32</wordsize>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 07:53:23 np0005465987 nova_compute[230713]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <domain type='qemu'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <domain type='kvm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </arch>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <features>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <pae/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <nonpae/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <acpi default='on' toggle='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <apic default='on' toggle='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <cpuselection/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <deviceboot/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <disksnapshot default='on' toggle='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <externalSnapshot/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </features>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </guest>
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <guest>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <os_type>hvm</os_type>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <arch name='x86_64'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <wordsize>64</wordsize>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <domain type='qemu'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <domain type='kvm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </arch>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <features>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <acpi default='on' toggle='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <apic default='on' toggle='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <cpuselection/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <deviceboot/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <disksnapshot default='on' toggle='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <externalSnapshot/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </features>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </guest>
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 
Oct  2 07:53:24 np0005465987 nova_compute[230713]: </capabilities>
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.998 2 WARNING nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:23.998 2 DEBUG nova.virt.libvirt.volume.mount [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.001 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.023 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  2 07:53:24 np0005465987 nova_compute[230713]: <domainCapabilities>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <domain>kvm</domain>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <arch>i686</arch>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <vcpu max='240'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <iothreads supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <os supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <enum name='firmware'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <loader supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>rom</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>pflash</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='readonly'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>yes</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>no</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='secure'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>no</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </loader>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </os>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <cpu>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='host-passthrough' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='hostPassthroughMigratable'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>on</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>off</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='maximum' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='maximumMigratable'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>on</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>off</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='host-model' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <vendor>AMD</vendor>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='x2apic'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='hypervisor'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='stibp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='ssbd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='overflow-recov'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='succor'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='ibrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='lbrv'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='tsc-scale'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='flushbyasid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='pause-filter'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='pfthreshold'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='rdctl-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='mds-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='gds-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='rfds-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='disable' name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='custom' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cooperlake'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cooperlake-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cooperlake-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Dhyana-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Genoa'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amd-psfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='auto-ibrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='no-nested-data-bp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='null-sel-clr-base'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='stibp-always-on'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amd-psfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='auto-ibrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='no-nested-data-bp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='null-sel-clr-base'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='stibp-always-on'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Milan'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Milan-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Milan-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amd-psfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='no-nested-data-bp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='null-sel-clr-base'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='stibp-always-on'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='GraniteRapids'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='prefetchiti'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='GraniteRapids-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='prefetchiti'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='GraniteRapids-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10-128'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10-256'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10-512'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='prefetchiti'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v6'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v7'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='KnightsMill'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4fmaps'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4vnniw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512er'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512pf'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='KnightsMill-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4fmaps'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4vnniw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512er'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512pf'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G4-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tbm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G5-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tbm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SierraForest'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ne-convert'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cmpccxadd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SierraForest-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ne-convert'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cmpccxadd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='athlon'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='athlon-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='core2duo'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='core2duo-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='coreduo'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='coreduo-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='n270'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='n270-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='phenom'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='phenom-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <memoryBacking supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <enum name='sourceType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>file</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>anonymous</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>memfd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </memoryBacking>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <devices>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <disk supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='diskDevice'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>disk</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>cdrom</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>floppy</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>lun</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='bus'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>ide</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>fdc</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>scsi</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>usb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>sata</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-non-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </disk>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <graphics supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vnc</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>egl-headless</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>dbus</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <video supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='modelType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vga</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>cirrus</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>none</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>bochs</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>ramfb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </video>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <hostdev supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='mode'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>subsystem</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='startupPolicy'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>default</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>mandatory</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>requisite</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>optional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='subsysType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>usb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>pci</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>scsi</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='capsType'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='pciBackend'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </hostdev>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <rng supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-non-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendModel'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>random</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>egd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>builtin</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </rng>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <filesystem supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='driverType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>path</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>handle</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtiofs</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </filesystem>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <tpm supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>tpm-tis</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>tpm-crb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendModel'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>emulator</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>external</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendVersion'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>2.0</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </tpm>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <redirdev supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='bus'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>usb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </redirdev>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <channel supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>pty</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>unix</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </channel>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <crypto supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>qemu</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendModel'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>builtin</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </crypto>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <interface supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>default</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>passt</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </interface>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <panic supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>isa</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>hyperv</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </panic>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </devices>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <features>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <gic supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <vmcoreinfo supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <genid supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <backingStoreInput supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <backup supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <async-teardown supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <ps2 supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <sev supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <sgx supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <hyperv supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='features'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>relaxed</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vapic</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>spinlocks</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vpindex</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>runtime</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>synic</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>stimer</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>reset</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vendor_id</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>frequencies</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>reenlightenment</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>tlbflush</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>ipi</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>avic</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>emsr_bitmap</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>xmm_input</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </hyperv>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <launchSecurity supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </features>
Oct  2 07:53:24 np0005465987 nova_compute[230713]: </domainCapabilities>
Oct  2 07:53:24 np0005465987 nova_compute[230713]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.028 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  2 07:53:24 np0005465987 nova_compute[230713]: <domainCapabilities>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <domain>kvm</domain>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <arch>i686</arch>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <vcpu max='4096'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <iothreads supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <os supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <enum name='firmware'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <loader supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>rom</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>pflash</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='readonly'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>yes</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>no</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='secure'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>no</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </loader>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </os>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <cpu>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='host-passthrough' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='hostPassthroughMigratable'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>on</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>off</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='maximum' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='maximumMigratable'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>on</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>off</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='host-model' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <vendor>AMD</vendor>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='x2apic'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='hypervisor'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='stibp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='ssbd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='overflow-recov'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='succor'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='ibrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='lbrv'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='tsc-scale'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='flushbyasid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='pause-filter'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='pfthreshold'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='rdctl-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='mds-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='gds-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='rfds-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='disable' name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='custom' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cooperlake'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cooperlake-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cooperlake-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Dhyana-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Genoa'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amd-psfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='auto-ibrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='no-nested-data-bp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='null-sel-clr-base'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='stibp-always-on'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amd-psfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='auto-ibrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='no-nested-data-bp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='null-sel-clr-base'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='stibp-always-on'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Milan'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Milan-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Milan-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amd-psfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='no-nested-data-bp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='null-sel-clr-base'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='stibp-always-on'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='GraniteRapids'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='prefetchiti'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='GraniteRapids-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='prefetchiti'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='GraniteRapids-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10-128'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10-256'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10-512'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='prefetchiti'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v6'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v7'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='KnightsMill'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4fmaps'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4vnniw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512er'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512pf'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='KnightsMill-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4fmaps'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4vnniw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512er'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512pf'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G4-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tbm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G5-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tbm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SierraForest'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ne-convert'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cmpccxadd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SierraForest-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ne-convert'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cmpccxadd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='athlon'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='athlon-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='core2duo'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='core2duo-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='coreduo'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='coreduo-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='n270'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='n270-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='phenom'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='phenom-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <memoryBacking supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <enum name='sourceType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>file</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>anonymous</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>memfd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </memoryBacking>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <devices>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <disk supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='diskDevice'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>disk</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>cdrom</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>floppy</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>lun</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='bus'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>fdc</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>scsi</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>usb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>sata</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-non-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </disk>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <graphics supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vnc</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>egl-headless</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>dbus</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <video supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='modelType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vga</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>cirrus</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>none</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>bochs</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>ramfb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </video>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <hostdev supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='mode'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>subsystem</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='startupPolicy'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>default</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>mandatory</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>requisite</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>optional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='subsysType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>usb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>pci</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>scsi</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='capsType'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='pciBackend'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </hostdev>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <rng supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-non-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendModel'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>random</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>egd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>builtin</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </rng>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <filesystem supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='driverType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>path</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>handle</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtiofs</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </filesystem>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <tpm supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>tpm-tis</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>tpm-crb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendModel'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>emulator</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>external</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendVersion'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>2.0</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </tpm>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <redirdev supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='bus'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>usb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </redirdev>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <channel supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>pty</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>unix</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </channel>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <crypto supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>qemu</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendModel'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>builtin</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </crypto>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <interface supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>default</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>passt</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </interface>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <panic supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>isa</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>hyperv</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </panic>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </devices>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <features>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <gic supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <vmcoreinfo supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <genid supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <backingStoreInput supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <backup supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <async-teardown supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <ps2 supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <sev supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <sgx supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <hyperv supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='features'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>relaxed</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vapic</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>spinlocks</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vpindex</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>runtime</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>synic</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>stimer</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>reset</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vendor_id</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>frequencies</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>reenlightenment</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>tlbflush</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>ipi</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>avic</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>emsr_bitmap</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>xmm_input</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </hyperv>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <launchSecurity supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </features>
Oct  2 07:53:24 np0005465987 nova_compute[230713]: </domainCapabilities>
Oct  2 07:53:24 np0005465987 nova_compute[230713]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.059 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.066 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  2 07:53:24 np0005465987 nova_compute[230713]: <domainCapabilities>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <domain>kvm</domain>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <arch>x86_64</arch>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <vcpu max='240'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <iothreads supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <os supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <enum name='firmware'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <loader supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>rom</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>pflash</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='readonly'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>yes</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>no</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='secure'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>no</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </loader>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </os>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <cpu>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='host-passthrough' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='hostPassthroughMigratable'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>on</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>off</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='maximum' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='maximumMigratable'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>on</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>off</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='host-model' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <vendor>AMD</vendor>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='x2apic'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='hypervisor'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='stibp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='ssbd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='overflow-recov'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='succor'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='ibrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='lbrv'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='tsc-scale'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='flushbyasid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='pause-filter'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='pfthreshold'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='rdctl-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='mds-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='gds-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='rfds-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='disable' name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='custom' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cooperlake'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cooperlake-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cooperlake-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Dhyana-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Genoa'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amd-psfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='auto-ibrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='no-nested-data-bp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='null-sel-clr-base'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='stibp-always-on'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amd-psfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='auto-ibrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='no-nested-data-bp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='null-sel-clr-base'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='stibp-always-on'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Milan'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Milan-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Milan-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amd-psfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='no-nested-data-bp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='null-sel-clr-base'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='stibp-always-on'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='GraniteRapids'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='prefetchiti'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='GraniteRapids-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='prefetchiti'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='GraniteRapids-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10-128'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10-256'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10-512'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='prefetchiti'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v6'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v7'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='KnightsMill'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4fmaps'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4vnniw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512er'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512pf'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='KnightsMill-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4fmaps'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4vnniw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512er'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512pf'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G4-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tbm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G5-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tbm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SierraForest'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ne-convert'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cmpccxadd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SierraForest-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ne-convert'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cmpccxadd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='athlon'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='athlon-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='core2duo'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='core2duo-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='coreduo'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='coreduo-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='n270'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='n270-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='phenom'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='phenom-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <memoryBacking supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <enum name='sourceType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>file</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>anonymous</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>memfd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </memoryBacking>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <devices>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <disk supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='diskDevice'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>disk</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>cdrom</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>floppy</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>lun</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='bus'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>ide</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>fdc</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>scsi</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>usb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>sata</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-non-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </disk>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <graphics supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vnc</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>egl-headless</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>dbus</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <video supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='modelType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vga</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>cirrus</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>none</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>bochs</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>ramfb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </video>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <hostdev supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='mode'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>subsystem</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='startupPolicy'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>default</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>mandatory</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>requisite</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>optional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='subsysType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>usb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>pci</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>scsi</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='capsType'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='pciBackend'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </hostdev>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <rng supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-non-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendModel'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>random</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>egd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>builtin</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </rng>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <filesystem supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='driverType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>path</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>handle</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtiofs</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </filesystem>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <tpm supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>tpm-tis</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>tpm-crb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendModel'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>emulator</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>external</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendVersion'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>2.0</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </tpm>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <redirdev supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='bus'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>usb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </redirdev>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <channel supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>pty</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>unix</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </channel>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <crypto supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>qemu</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendModel'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>builtin</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </crypto>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <interface supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>default</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>passt</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </interface>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <panic supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>isa</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>hyperv</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </panic>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </devices>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <features>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <gic supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <vmcoreinfo supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <genid supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <backingStoreInput supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <backup supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <async-teardown supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <ps2 supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <sev supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <sgx supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <hyperv supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='features'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>relaxed</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vapic</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>spinlocks</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vpindex</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>runtime</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>synic</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>stimer</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>reset</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vendor_id</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>frequencies</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>reenlightenment</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>tlbflush</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>ipi</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>avic</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>emsr_bitmap</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>xmm_input</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </hyperv>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <launchSecurity supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </features>
Oct  2 07:53:24 np0005465987 nova_compute[230713]: </domainCapabilities>
Oct  2 07:53:24 np0005465987 nova_compute[230713]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.133 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  2 07:53:24 np0005465987 nova_compute[230713]: <domainCapabilities>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <domain>kvm</domain>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <arch>x86_64</arch>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <vcpu max='4096'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <iothreads supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <os supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <enum name='firmware'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>efi</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <loader supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>rom</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>pflash</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='readonly'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>yes</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>no</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='secure'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>yes</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>no</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </loader>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </os>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <cpu>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='host-passthrough' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='hostPassthroughMigratable'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>on</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>off</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='maximum' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='maximumMigratable'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>on</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>off</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='host-model' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <vendor>AMD</vendor>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='x2apic'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='hypervisor'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='stibp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='ssbd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='overflow-recov'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='succor'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='ibrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='lbrv'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='tsc-scale'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='flushbyasid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='pause-filter'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='pfthreshold'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='rdctl-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='mds-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='gds-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='require' name='rfds-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <feature policy='disable' name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <mode name='custom' supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Broadwell-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cooperlake'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cooperlake-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Cooperlake-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Denverton-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Dhyana-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Genoa'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amd-psfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='auto-ibrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='no-nested-data-bp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='null-sel-clr-base'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='stibp-always-on'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amd-psfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='auto-ibrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='no-nested-data-bp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='null-sel-clr-base'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='stibp-always-on'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Milan'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Milan-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Milan-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amd-psfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='no-nested-data-bp'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='null-sel-clr-base'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='stibp-always-on'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-Rome-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='EPYC-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='GraniteRapids'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='prefetchiti'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='GraniteRapids-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='prefetchiti'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='GraniteRapids-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10-128'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10-256'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx10-512'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='prefetchiti'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Haswell-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v6'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Icelake-Server-v7'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='IvyBridge-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='KnightsMill'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4fmaps'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4vnniw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512er'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512pf'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='KnightsMill-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4fmaps'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-4vnniw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512er'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512pf'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G4-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tbm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Opteron_G5-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fma4'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tbm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xop'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SapphireRapids-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='amx-tile'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-bf16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-fp16'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512-vpopcntdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bitalg'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vbmi2'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrc'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fzrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='la57'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='taa-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='tsx-ldtrk'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xfd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SierraForest'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ne-convert'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cmpccxadd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='SierraForest-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ifma'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-ne-convert'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx-vnni-int8'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='bus-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cmpccxadd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fbsdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='fsrs'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ibrs-all'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mcdt-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pbrsb-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='psdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='sbdr-ssdp-no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='serialize'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vaes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='vpclmulqdq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Client-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='hle'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='rtm'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Skylake-Server-v5'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512bw'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512cd'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512dq'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512f'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='avx512vl'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='invpcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pcid'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='pku'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='mpx'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v2'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v3'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='core-capability'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='split-lock-detect'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='Snowridge-v4'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='cldemote'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='erms'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='gfni'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdir64b'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='movdiri'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='xsaves'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='athlon'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='athlon-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='core2duo'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='core2duo-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='coreduo'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='coreduo-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='n270'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='n270-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='ss'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='phenom'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <blockers model='phenom-v1'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnow'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <feature name='3dnowext'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </blockers>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </mode>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <memoryBacking supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <enum name='sourceType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>file</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>anonymous</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <value>memfd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </memoryBacking>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <devices>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <disk supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='diskDevice'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>disk</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>cdrom</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>floppy</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>lun</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='bus'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>fdc</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>scsi</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>usb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>sata</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-non-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </disk>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <graphics supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vnc</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>egl-headless</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>dbus</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <video supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='modelType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vga</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>cirrus</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>none</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>bochs</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>ramfb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </video>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <hostdev supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='mode'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>subsystem</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='startupPolicy'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>default</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>mandatory</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>requisite</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>optional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='subsysType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>usb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>pci</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>scsi</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='capsType'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='pciBackend'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </hostdev>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <rng supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtio-non-transitional</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendModel'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>random</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>egd</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>builtin</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </rng>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <filesystem supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='driverType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>path</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>handle</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>virtiofs</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </filesystem>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <tpm supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>tpm-tis</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>tpm-crb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendModel'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>emulator</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>external</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendVersion'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>2.0</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </tpm>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <redirdev supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='bus'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>usb</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </redirdev>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <channel supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>pty</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>unix</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </channel>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <crypto supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='type'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>qemu</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendModel'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>builtin</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </crypto>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <interface supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='backendType'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>default</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>passt</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </interface>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <panic supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='model'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>isa</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>hyperv</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </panic>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </devices>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <features>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <gic supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <vmcoreinfo supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <genid supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <backingStoreInput supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <backup supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <async-teardown supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <ps2 supported='yes'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <sev supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <sgx supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <hyperv supported='yes'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      <enum name='features'>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>relaxed</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vapic</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>spinlocks</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vpindex</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>runtime</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>synic</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>stimer</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>reset</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>vendor_id</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>frequencies</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>reenlightenment</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>tlbflush</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>ipi</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>avic</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>emsr_bitmap</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:        <value>xmm_input</value>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:      </enum>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    </hyperv>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:    <launchSecurity supported='no'/>
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  </features>
Oct  2 07:53:24 np0005465987 nova_compute[230713]: </domainCapabilities>
Oct  2 07:53:24 np0005465987 nova_compute[230713]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.200 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.201 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.201 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.201 2 INFO nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Secure Boot support detected
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.205 2 INFO nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.205 2 INFO nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.220 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] cpu compare xml: <cpu match="exact">
Oct  2 07:53:24 np0005465987 nova_compute[230713]:  <model>Nehalem</model>
Oct  2 07:53:24 np0005465987 nova_compute[230713]: </cpu>
Oct  2 07:53:24 np0005465987 nova_compute[230713]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.224 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.248 2 INFO nova.virt.node [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Determined node identity abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from /var/lib/nova/compute_id#033[00m
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.291 2 WARNING nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Compute nodes ['abc4c1e4-e97d-4065-b850-e0065c8f9ab7'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.353 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.475 2 WARNING nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.476 2 DEBUG oslo_concurrency.lockutils [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.476 2 DEBUG oslo_concurrency.lockutils [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.476 2 DEBUG oslo_concurrency.lockutils [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.477 2 DEBUG nova.compute.resource_tracker [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.477 2 DEBUG oslo_concurrency.processutils [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:53:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:53:24 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/842359320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:53:24 np0005465987 nova_compute[230713]: 2025-10-02 11:53:24.891 2 DEBUG oslo_concurrency.processutils [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:53:24 np0005465987 systemd[1]: Starting libvirt nodedev daemon...
Oct  2 07:53:24 np0005465987 systemd[1]: Started libvirt nodedev daemon.
Oct  2 07:53:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:53:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:25.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:53:25 np0005465987 nova_compute[230713]: 2025-10-02 11:53:25.193 2 WARNING nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 07:53:25 np0005465987 nova_compute[230713]: 2025-10-02 11:53:25.194 2 DEBUG nova.compute.resource_tracker [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5266MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 07:53:25 np0005465987 nova_compute[230713]: 2025-10-02 11:53:25.194 2 DEBUG oslo_concurrency.lockutils [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:53:25 np0005465987 nova_compute[230713]: 2025-10-02 11:53:25.194 2 DEBUG oslo_concurrency.lockutils [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:53:25 np0005465987 nova_compute[230713]: 2025-10-02 11:53:25.211 2 WARNING nova.compute.resource_tracker [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] No compute node record for compute-1.ctlplane.example.com:abc4c1e4-e97d-4065-b850-e0065c8f9ab7: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host abc4c1e4-e97d-4065-b850-e0065c8f9ab7 could not be found.#033[00m
Oct  2 07:53:25 np0005465987 nova_compute[230713]: 2025-10-02 11:53:25.229 2 INFO nova.compute.resource_tracker [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: abc4c1e4-e97d-4065-b850-e0065c8f9ab7#033[00m
Oct  2 07:53:25 np0005465987 nova_compute[230713]: 2025-10-02 11:53:25.325 2 DEBUG nova.compute.resource_tracker [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 07:53:25 np0005465987 nova_compute[230713]: 2025-10-02 11:53:25.325 2 DEBUG nova.compute.resource_tracker [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 07:53:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:25.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.250 2 INFO nova.scheduler.client.report [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [req-89cac73e-e41d-4410-8f88-02f7776a44f9] Created resource provider record via placement API for resource provider with UUID abc4c1e4-e97d-4065-b850-e0065c8f9ab7 and name compute-1.ctlplane.example.com.#033[00m
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.312 2 DEBUG oslo_concurrency.processutils [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:53:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:53:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2563342187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.724 2 DEBUG oslo_concurrency.processutils [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.731 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct  2 07:53:26 np0005465987 nova_compute[230713]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.732 2 INFO nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.733 2 DEBUG nova.compute.provider_tree [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.734 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.739 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Libvirt baseline CPU <cpu>
Oct  2 07:53:26 np0005465987 nova_compute[230713]:  <arch>x86_64</arch>
Oct  2 07:53:26 np0005465987 nova_compute[230713]:  <model>Nehalem</model>
Oct  2 07:53:26 np0005465987 nova_compute[230713]:  <vendor>AMD</vendor>
Oct  2 07:53:26 np0005465987 nova_compute[230713]:  <topology sockets="8" cores="1" threads="1"/>
Oct  2 07:53:26 np0005465987 nova_compute[230713]: </cpu>
Oct  2 07:53:26 np0005465987 nova_compute[230713]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537#033[00m
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.813 2 DEBUG nova.scheduler.client.report [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Updated inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.814 2 DEBUG nova.compute.provider_tree [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Updating resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.815 2 DEBUG nova.compute.provider_tree [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.880 2 DEBUG nova.compute.provider_tree [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Updating resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.905 2 DEBUG nova.compute.resource_tracker [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.906 2 DEBUG oslo_concurrency.lockutils [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:53:26 np0005465987 nova_compute[230713]: 2025-10-02 11:53:26.906 2 DEBUG nova.service [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct  2 07:53:27 np0005465987 nova_compute[230713]: 2025-10-02 11:53:27.024 2 DEBUG nova.service [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct  2 07:53:27 np0005465987 nova_compute[230713]: 2025-10-02 11:53:27.025 2 DEBUG nova.servicegroup.drivers.db [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct  2 07:53:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:27.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:27.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:53:27.919 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:53:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:53:27.919 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:53:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:53:27.919 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:53:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:53:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:29.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:53:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:29.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:31.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:53:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3414 writes, 18K keys, 3414 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s#012Cumulative WAL: 3414 writes, 3414 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1341 writes, 6804 keys, 1341 commit groups, 1.0 writes per commit group, ingest: 14.76 MB, 0.02 MB/s#012Interval WAL: 1341 writes, 1341 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     71.0      0.30              0.05         9    0.034       0      0       0.0       0.0#012  L6      1/0    7.65 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.1    194.0    160.3      0.41              0.16         8    0.052     35K   4319       0.0       0.0#012 Sum      1/0    7.65 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.1    111.8    122.5      0.72              0.22        17    0.042     35K   4319       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.2    108.4    109.5      0.48              0.13        10    0.048     23K   3041       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    194.0    160.3      0.41              0.16         8    0.052     35K   4319       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     71.4      0.30              0.05         8    0.038       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.021, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.09 GB write, 0.07 MB/s write, 0.08 GB read, 0.07 MB/s read, 0.7 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9644871f0#2 capacity: 308.00 MB usage: 4.72 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(260,4.41 MB,1.43046%) FilterBlock(17,103.86 KB,0.0329303%) IndexBlock(17,213.12 KB,0.0675746%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 07:53:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:31.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:33.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:33.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:53:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:35.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:53:35 np0005465987 podman[231083]: 2025-10-02 11:53:35.850799093 +0000 UTC m=+0.063389889 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  2 07:53:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:53:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:35.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:53:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:53:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:37.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:53:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:53:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:37.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:53:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:39.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:39.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:41.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:41.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:43.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:43.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:45.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:45.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:46 np0005465987 nova_compute[230713]: 2025-10-02 11:53:46.028 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:53:46 np0005465987 nova_compute[230713]: 2025-10-02 11:53:46.052 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:53:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:53:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:47.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:53:47 np0005465987 podman[231103]: 2025-10-02 11:53:47.861083406 +0000 UTC m=+0.068463329 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 07:53:47 np0005465987 podman[231102]: 2025-10-02 11:53:47.865145928 +0000 UTC m=+0.078736662 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:53:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:53:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:47.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:53:47 np0005465987 podman[231107]: 2025-10-02 11:53:47.920048123 +0000 UTC m=+0.113495282 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 07:53:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:53:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:49.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:53:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:49.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:51.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:51.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:53:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:53.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:53:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 07:53:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3922491892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 07:53:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 07:53:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3922491892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 07:53:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:55.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:55.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:57.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:57.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:59.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:53:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:59.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:01.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:01.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:03.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:03.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:05.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:54:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:05.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:54:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:06 np0005465987 podman[231165]: 2025-10-02 11:54:06.836966909 +0000 UTC m=+0.050624977 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:54:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:07.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:07.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:54:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:09.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:54:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:54:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:09.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:54:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:11.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:54:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:54:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 07:54:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 07:54:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:11.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:54:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:54:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:54:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:13.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:13.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:15.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:15.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:17.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:17.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:18 np0005465987 podman[231318]: 2025-10-02 11:54:18.845495753 +0000 UTC m=+0.062693490 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:54:18 np0005465987 podman[231317]: 2025-10-02 11:54:18.865470394 +0000 UTC m=+0.081302523 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 07:54:18 np0005465987 podman[231316]: 2025-10-02 11:54:18.890627688 +0000 UTC m=+0.111308801 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:54:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:19.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:54:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:54:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:19.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:54:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:21.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:54:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:21.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:22 np0005465987 nova_compute[230713]: 2025-10-02 11:54:22.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:54:22 np0005465987 nova_compute[230713]: 2025-10-02 11:54:22.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:54:22 np0005465987 nova_compute[230713]: 2025-10-02 11:54:22.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 07:54:22 np0005465987 nova_compute[230713]: 2025-10-02 11:54:22.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 07:54:22 np0005465987 nova_compute[230713]: 2025-10-02 11:54:22.999 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 07:54:22 np0005465987 nova_compute[230713]: 2025-10-02 11:54:22.999 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.000 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.000 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.000 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.000 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.001 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.001 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.001 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.053 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.054 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.054 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.054 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.055 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:54:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:54:23 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1722527071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.556 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.723 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.724 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5326MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.724 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.725 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.963 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 07:54:23 np0005465987 nova_compute[230713]: 2025-10-02 11:54:23.964 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 07:54:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:23.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:24 np0005465987 nova_compute[230713]: 2025-10-02 11:54:24.020 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:54:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:54:24 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2984334205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:54:24 np0005465987 nova_compute[230713]: 2025-10-02 11:54:24.506 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:54:24 np0005465987 nova_compute[230713]: 2025-10-02 11:54:24.512 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 07:54:24 np0005465987 nova_compute[230713]: 2025-10-02 11:54:24.648 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 07:54:24 np0005465987 nova_compute[230713]: 2025-10-02 11:54:24.649 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 07:54:24 np0005465987 nova_compute[230713]: 2025-10-02 11:54:24.650 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:54:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:25.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:54:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:25.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:54:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:54:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:27.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:54:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:54:27.920 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:54:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:54:27.921 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:54:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:54:27.921 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:54:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:27.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:54:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:29.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:54:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:31.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:31.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:33.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:33.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:54:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:35.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:54:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:35.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:37.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:37 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  2 07:54:37 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct  2 07:54:37 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct  2 07:54:37 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct  2 07:54:37 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct  2 07:54:37 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct  2 07:54:37 np0005465987 podman[231474]: 2025-10-02 11:54:37.839294991 +0000 UTC m=+0.057819876 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 07:54:37 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct  2 07:54:37 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct  2 07:54:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:37.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:38 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Oct  2 07:54:38 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct  2 07:54:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:39.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:40.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:41.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:42.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:43.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:44.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:45.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:54:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:46.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:54:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:54:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:47.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:54:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:48.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:54:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:49.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:54:49 np0005465987 podman[231494]: 2025-10-02 11:54:49.843465139 +0000 UTC m=+0.061770485 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct  2 07:54:49 np0005465987 podman[231493]: 2025-10-02 11:54:49.855371197 +0000 UTC m=+0.075413171 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 07:54:49 np0005465987 podman[231495]: 2025-10-02 11:54:49.859870501 +0000 UTC m=+0.074391073 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 07:54:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:54:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:50.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:54:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:54:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:51.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:54:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:52.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:53.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:54.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:55.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:56.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:57.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:54:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:58.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:54:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:54:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:59.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:00.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:01.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:02.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:03.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:55:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:04.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:55:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:05.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:06.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:55:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:07.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:55:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:08.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:08 np0005465987 podman[231559]: 2025-10-02 11:55:08.828551258 +0000 UTC m=+0.047270845 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 07:55:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:09.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:10.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:11.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:12.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:13.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:14.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:15.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:16.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:17.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:18.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:19.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:20.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:20 np0005465987 podman[231710]: 2025-10-02 11:55:20.835980078 +0000 UTC m=+0.052968437 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:55:20 np0005465987 podman[231711]: 2025-10-02 11:55:20.83905162 +0000 UTC m=+0.058087353 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct  2 07:55:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 07:55:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:55:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:55:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:55:20 np0005465987 podman[231709]: 2025-10-02 11:55:20.891763759 +0000 UTC m=+0.111675496 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct  2 07:55:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:21.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:55:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:22.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:55:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:55:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:23.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:55:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:24.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.641 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.669 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.669 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.669 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.687 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.687 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.687 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.688 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.688 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.688 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 07:55:24 np0005465987 nova_compute[230713]: 2025-10-02 11:55:24.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:55:25 np0005465987 nova_compute[230713]: 2025-10-02 11:55:25.002 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:55:25 np0005465987 nova_compute[230713]: 2025-10-02 11:55:25.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:55:25 np0005465987 nova_compute[230713]: 2025-10-02 11:55:25.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:55:25 np0005465987 nova_compute[230713]: 2025-10-02 11:55:25.003 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 07:55:25 np0005465987 nova_compute[230713]: 2025-10-02 11:55:25.003 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:55:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:25.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:55:25 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3131646479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:55:25 np0005465987 nova_compute[230713]: 2025-10-02 11:55:25.476 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:55:25 np0005465987 nova_compute[230713]: 2025-10-02 11:55:25.665 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 07:55:25 np0005465987 nova_compute[230713]: 2025-10-02 11:55:25.666 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5338MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 07:55:25 np0005465987 nova_compute[230713]: 2025-10-02 11:55:25.667 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:55:25 np0005465987 nova_compute[230713]: 2025-10-02 11:55:25.667 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:55:25 np0005465987 nova_compute[230713]: 2025-10-02 11:55:25.765 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 07:55:25 np0005465987 nova_compute[230713]: 2025-10-02 11:55:25.766 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 07:55:25 np0005465987 nova_compute[230713]: 2025-10-02 11:55:25.802 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:55:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:26.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:55:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1893648748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:55:26 np0005465987 nova_compute[230713]: 2025-10-02 11:55:26.328 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:55:26 np0005465987 nova_compute[230713]: 2025-10-02 11:55:26.335 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 07:55:26 np0005465987 nova_compute[230713]: 2025-10-02 11:55:26.367 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 07:55:26 np0005465987 nova_compute[230713]: 2025-10-02 11:55:26.369 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 07:55:26 np0005465987 nova_compute[230713]: 2025-10-02 11:55:26.369 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:55:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:26 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:55:26 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:55:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:55:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:27.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:55:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:55:27.921 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:55:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:55:27.921 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:55:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:55:27.921 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:55:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:28.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:55:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:29.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:55:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:30.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:31.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:55:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:32.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:55:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:33.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.430415) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406133430499, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2305, "num_deletes": 251, "total_data_size": 5922919, "memory_usage": 5996624, "flush_reason": "Manual Compaction"}
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406133464040, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3870551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17732, "largest_seqno": 20032, "table_properties": {"data_size": 3861083, "index_size": 6026, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18835, "raw_average_key_size": 20, "raw_value_size": 3842313, "raw_average_value_size": 4096, "num_data_blocks": 269, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405911, "oldest_key_time": 1759405911, "file_creation_time": 1759406133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 33718 microseconds, and 15018 cpu microseconds.
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.464142) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3870551 bytes OK
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.464168) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.465640) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.465660) EVENT_LOG_v1 {"time_micros": 1759406133465653, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.465680) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 5912701, prev total WAL file size 5912701, number of live WAL files 2.
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.467433) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3779KB)], [36(7830KB)]
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406133467516, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 11888754, "oldest_snapshot_seqno": -1}
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4431 keys, 9816802 bytes, temperature: kUnknown
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406133566179, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 9816802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9784368, "index_size": 20228, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 110603, "raw_average_key_size": 24, "raw_value_size": 9701275, "raw_average_value_size": 2189, "num_data_blocks": 841, "num_entries": 4431, "num_filter_entries": 4431, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759406133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.566435) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9816802 bytes
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.567972) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.4 rd, 99.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(5.6) write-amplify(2.5) OK, records in: 4950, records dropped: 519 output_compression: NoCompression
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.567988) EVENT_LOG_v1 {"time_micros": 1759406133567981, "job": 20, "event": "compaction_finished", "compaction_time_micros": 98733, "compaction_time_cpu_micros": 44501, "output_level": 6, "num_output_files": 1, "total_output_size": 9816802, "num_input_records": 4950, "num_output_records": 4431, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406133568667, "job": 20, "event": "table_file_deletion", "file_number": 38}
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406133569858, "job": 20, "event": "table_file_deletion", "file_number": 36}
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.467270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.569953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.569961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.569964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.569967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:55:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:55:33.569970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:55:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:34.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:35.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:36.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:55:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:37.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:55:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:38.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:55:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:39.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:55:39 np0005465987 podman[231866]: 2025-10-02 11:55:39.827014051 +0000 UTC m=+0.050866771 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 07:55:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:40.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:41.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:55:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:42.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:55:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:43.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:44.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:45.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:46.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:47.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:48.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:49.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:50.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:51.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:51 np0005465987 podman[231886]: 2025-10-02 11:55:51.866259303 +0000 UTC m=+0.073939946 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 07:55:51 np0005465987 podman[231887]: 2025-10-02 11:55:51.866253863 +0000 UTC m=+0.073267599 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 07:55:51 np0005465987 podman[231885]: 2025-10-02 11:55:51.922284241 +0000 UTC m=+0.129388179 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 07:55:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:55:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:52.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:55:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:55:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:53.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:55:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:54.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:55:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:55.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:55:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:56.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:57.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:58.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:55:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:59.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:00.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:01.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:56:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:02.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:56:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:03.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:04.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:05.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:06.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:56:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:07.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:56:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:08.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:56:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:09.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:56:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:56:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:10.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:56:10 np0005465987 podman[231948]: 2025-10-02 11:56:10.822918497 +0000 UTC m=+0.047560412 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 07:56:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:11.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:12.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:13.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:14.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:56:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:15.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:56:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:56:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:16.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:56:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:56:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:17.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:56:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:18.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:19.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:20.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:21.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:56:21.441 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 07:56:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:56:21.442 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 07:56:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:56:21.442 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:56:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:56:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:22.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:56:22 np0005465987 podman[231969]: 2025-10-02 11:56:22.859654114 +0000 UTC m=+0.073522586 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible)
Oct  2 07:56:22 np0005465987 podman[231970]: 2025-10-02 11:56:22.859655544 +0000 UTC m=+0.069514419 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 07:56:22 np0005465987 podman[231968]: 2025-10-02 11:56:22.890367304 +0000 UTC m=+0.105664615 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 07:56:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:56:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:23.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:56:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:24.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:24 np0005465987 nova_compute[230713]: 2025-10-02 11:56:24.370 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:56:24 np0005465987 nova_compute[230713]: 2025-10-02 11:56:24.370 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 07:56:24 np0005465987 nova_compute[230713]: 2025-10-02 11:56:24.370 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 07:56:24 np0005465987 nova_compute[230713]: 2025-10-02 11:56:24.399 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 07:56:24 np0005465987 nova_compute[230713]: 2025-10-02 11:56:24.400 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:56:24 np0005465987 nova_compute[230713]: 2025-10-02 11:56:24.400 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:56:24 np0005465987 nova_compute[230713]: 2025-10-02 11:56:24.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:56:24 np0005465987 nova_compute[230713]: 2025-10-02 11:56:24.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:56:24 np0005465987 nova_compute[230713]: 2025-10-02 11:56:24.993 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:56:24 np0005465987 nova_compute[230713]: 2025-10-02 11:56:24.993 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:56:24 np0005465987 nova_compute[230713]: 2025-10-02 11:56:24.994 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:56:24 np0005465987 nova_compute[230713]: 2025-10-02 11:56:24.994 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 07:56:24 np0005465987 nova_compute[230713]: 2025-10-02 11:56:24.994 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:56:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:56:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:25.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:56:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:56:25 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1295016656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:56:25 np0005465987 nova_compute[230713]: 2025-10-02 11:56:25.514 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:56:25 np0005465987 nova_compute[230713]: 2025-10-02 11:56:25.691 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 07:56:25 np0005465987 nova_compute[230713]: 2025-10-02 11:56:25.692 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5323MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 07:56:25 np0005465987 nova_compute[230713]: 2025-10-02 11:56:25.693 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:56:25 np0005465987 nova_compute[230713]: 2025-10-02 11:56:25.693 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:56:25 np0005465987 nova_compute[230713]: 2025-10-02 11:56:25.830 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 07:56:25 np0005465987 nova_compute[230713]: 2025-10-02 11:56:25.830 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 07:56:25 np0005465987 nova_compute[230713]: 2025-10-02 11:56:25.852 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:56:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:26.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:56:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3620511028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:56:26 np0005465987 nova_compute[230713]: 2025-10-02 11:56:26.284 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:56:26 np0005465987 nova_compute[230713]: 2025-10-02 11:56:26.289 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 07:56:26 np0005465987 nova_compute[230713]: 2025-10-02 11:56:26.401 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 07:56:26 np0005465987 nova_compute[230713]: 2025-10-02 11:56:26.403 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 07:56:26 np0005465987 nova_compute[230713]: 2025-10-02 11:56:26.404 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:56:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:27 np0005465987 nova_compute[230713]: 2025-10-02 11:56:27.403 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:56:27 np0005465987 nova_compute[230713]: 2025-10-02 11:56:27.404 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:56:27 np0005465987 nova_compute[230713]: 2025-10-02 11:56:27.404 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:56:27 np0005465987 nova_compute[230713]: 2025-10-02 11:56:27.404 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:56:27 np0005465987 nova_compute[230713]: 2025-10-02 11:56:27.404 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 07:56:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:27.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:56:27.922 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:56:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:56:27.922 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:56:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:56:27.923 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:56:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:28.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:56:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:56:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:56:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:29.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:30.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:31.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:32.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:56:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:33.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:56:34 np0005465987 ceph-mds[84089]: mds.beacon.cephfs.compute-1.zfhmgy missed beacon ack from the monitors
Oct  2 07:56:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:34.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:35.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:56:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:36.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:56:36 np0005465987 systemd[1]: packagekit.service: Deactivated successfully.
Oct  2 07:56:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:56:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:37.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:56:38 np0005465987 ceph-mds[84089]: mds.beacon.cephfs.compute-1.zfhmgy missed beacon ack from the monitors
Oct  2 07:56:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:38.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).paxos(paxos updating c 1507..2065) lease_timeout -- calling new election
Oct  2 07:56:38 np0005465987 ceph-mon[80822]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Oct  2 07:56:38 np0005465987 ceph-mon[80822]: paxos.2).electionLogic(14) init, last seen epoch 14
Oct  2 07:56:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:56:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:56:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:39.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:40.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:41.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:41 np0005465987 podman[232206]: 2025-10-02 11:56:41.822845769 +0000 UTC m=+0.046465743 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 07:56:42 np0005465987 ceph-mds[84089]: mds.beacon.cephfs.compute-1.zfhmgy missed beacon ack from the monitors
Oct  2 07:56:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:42.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:56:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:56:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:43.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:56:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:44.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:56:44 np0005465987 ceph-mon[80822]: mon.compute-2 calling monitor election
Oct  2 07:56:44 np0005465987 ceph-mon[80822]: mon.compute-1 calling monitor election
Oct  2 07:56:44 np0005465987 ceph-mon[80822]: mon.compute-0 calling monitor election
Oct  2 07:56:44 np0005465987 ceph-mon[80822]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct  2 07:56:44 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 07:56:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:45.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:56:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:46.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:56:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:47.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:48.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:49.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:50.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:51.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:52.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:53.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:53 np0005465987 podman[232252]: 2025-10-02 11:56:53.591641026 +0000 UTC m=+0.061571307 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 07:56:53 np0005465987 podman[232253]: 2025-10-02 11:56:53.601429208 +0000 UTC m=+0.067000772 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 07:56:53 np0005465987 podman[232251]: 2025-10-02 11:56:53.621473873 +0000 UTC m=+0.092539584 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Oct  2 07:56:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:54.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:54 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:56:54 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:56:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 07:56:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1982814260' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 07:56:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 07:56:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1982814260' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 07:56:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:56:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:55.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:56:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:56.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:57.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:56:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:58.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:56:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:56:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:59.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:00.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:01.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:02.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:57:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:03.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:57:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:57:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:04.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:57:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:05.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:06.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:57:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:07.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:57:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:57:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:08.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:57:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:09.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:57:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:10.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:57:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:11.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:12.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:12 np0005465987 podman[232336]: 2025-10-02 11:57:12.84807692 +0000 UTC m=+0.063769516 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:57:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:13.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:14.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:15.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:16.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:17.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:18.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:57:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:19.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:57:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:20.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:21.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:22.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:22 np0005465987 nova_compute[230713]: 2025-10-02 11:57:22.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:57:22 np0005465987 nova_compute[230713]: 2025-10-02 11:57:22.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 07:57:22 np0005465987 nova_compute[230713]: 2025-10-02 11:57:22.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 07:57:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:23.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:23 np0005465987 podman[232358]: 2025-10-02 11:57:23.840476698 +0000 UTC m=+0.049950576 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct  2 07:57:23 np0005465987 podman[232357]: 2025-10-02 11:57:23.840507619 +0000 UTC m=+0.053446359 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct  2 07:57:23 np0005465987 podman[232356]: 2025-10-02 11:57:23.86712003 +0000 UTC m=+0.083175773 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:57:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:57:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:24.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:57:25 np0005465987 nova_compute[230713]: 2025-10-02 11:57:25.149 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 07:57:25 np0005465987 nova_compute[230713]: 2025-10-02 11:57:25.150 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:57:25 np0005465987 nova_compute[230713]: 2025-10-02 11:57:25.150 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:57:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:25.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:25 np0005465987 nova_compute[230713]: 2025-10-02 11:57:25.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.015 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.016 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.060 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.061 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.061 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.061 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.061 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:57:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:26.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:57:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2032781063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.517 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.701 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.703 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5343MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.703 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.703 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.771 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.772 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 07:57:26 np0005465987 nova_compute[230713]: 2025-10-02 11:57:26.785 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:57:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:57:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3231675116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:57:27 np0005465987 nova_compute[230713]: 2025-10-02 11:57:27.212 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:57:27 np0005465987 nova_compute[230713]: 2025-10-02 11:57:27.216 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 07:57:27 np0005465987 nova_compute[230713]: 2025-10-02 11:57:27.240 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 07:57:27 np0005465987 nova_compute[230713]: 2025-10-02 11:57:27.242 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 07:57:27 np0005465987 nova_compute[230713]: 2025-10-02 11:57:27.242 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:57:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 07:57:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:27.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 07:57:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:57:27.923 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:57:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:57:27.924 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:57:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:57:27.924 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:57:28 np0005465987 nova_compute[230713]: 2025-10-02 11:57:28.192 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:57:28 np0005465987 nova_compute[230713]: 2025-10-02 11:57:28.192 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:57:28 np0005465987 nova_compute[230713]: 2025-10-02 11:57:28.193 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:57:28 np0005465987 nova_compute[230713]: 2025-10-02 11:57:28.193 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:57:28 np0005465987 nova_compute[230713]: 2025-10-02 11:57:28.193 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 07:57:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:28.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:29.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:30.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:31.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:32.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:33.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:57:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:34.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:57:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:35.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:57:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:36.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:57:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:37.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:57:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - - [02/Oct/2025:11:57:38.076 +0000] "GET /swift/info HTTP/1.1" 200 509 - "python-urllib3/1.26.5" - latency=0.001000028s
Oct  2 07:57:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:57:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:38.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:57:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:39.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:40.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:41.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:42.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Oct  2 07:57:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:43.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Oct  2 07:57:43 np0005465987 podman[232465]: 2025-10-02 11:57:43.875037339 +0000 UTC m=+0.091462905 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  2 07:57:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:44.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:45.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Oct  2 07:57:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:46.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:47.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.653632) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406267653669, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1391, "num_deletes": 252, "total_data_size": 3166101, "memory_usage": 3205072, "flush_reason": "Manual Compaction"}
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406267698643, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 1249446, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20037, "largest_seqno": 21423, "table_properties": {"data_size": 1244836, "index_size": 2006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12121, "raw_average_key_size": 20, "raw_value_size": 1234610, "raw_average_value_size": 2092, "num_data_blocks": 91, "num_entries": 590, "num_filter_entries": 590, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406134, "oldest_key_time": 1759406134, "file_creation_time": 1759406267, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 45114 microseconds, and 5864 cpu microseconds.
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.698702) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 1249446 bytes OK
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.698765) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.701578) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.701599) EVENT_LOG_v1 {"time_micros": 1759406267701592, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.701621) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 3159604, prev total WAL file size 3159604, number of live WAL files 2.
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.703284) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(1220KB)], [39(9586KB)]
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406267703345, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 11066248, "oldest_snapshot_seqno": -1}
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 4552 keys, 8029235 bytes, temperature: kUnknown
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406267826258, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 8029235, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7999060, "index_size": 17713, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 113567, "raw_average_key_size": 24, "raw_value_size": 7916761, "raw_average_value_size": 1739, "num_data_blocks": 732, "num_entries": 4552, "num_filter_entries": 4552, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759406267, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.826640) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8029235 bytes
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.828588) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.0 rd, 65.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.4 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(15.3) write-amplify(6.4) OK, records in: 5021, records dropped: 469 output_compression: NoCompression
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.828626) EVENT_LOG_v1 {"time_micros": 1759406267828609, "job": 22, "event": "compaction_finished", "compaction_time_micros": 123017, "compaction_time_cpu_micros": 26486, "output_level": 6, "num_output_files": 1, "total_output_size": 8029235, "num_input_records": 5021, "num_output_records": 4552, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406267829389, "job": 22, "event": "table_file_deletion", "file_number": 41}
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406267833781, "job": 22, "event": "table_file_deletion", "file_number": 39}
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.703136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.833913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.833921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.833924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.833926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:57:47 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:57:47.833929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:57:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:48.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Oct  2 07:57:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:57:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:49.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:57:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:50.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:57:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:51.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:57:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:57:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:52.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:57:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Oct  2 07:57:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:57:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:53.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:57:54 np0005465987 podman[232537]: 2025-10-02 11:57:54.008006633 +0000 UTC m=+0.067975959 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:57:54 np0005465987 podman[232536]: 2025-10-02 11:57:54.010452341 +0000 UTC m=+0.087895307 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 07:57:54 np0005465987 podman[232535]: 2025-10-02 11:57:54.046144041 +0000 UTC m=+0.125251502 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 07:57:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:54.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:54 np0005465987 podman[232722]: 2025-10-02 11:57:54.880410857 +0000 UTC m=+0.340168609 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 07:57:55 np0005465987 podman[232722]: 2025-10-02 11:57:55.032566308 +0000 UTC m=+0.492324060 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 07:57:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:55.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:56.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:57:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:57:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:57:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:57.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:57:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:58.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:58 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:57:58 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:57:58 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:57:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:57:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:57:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:59.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:58:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:58:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:00.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:01.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:02.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:03.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:04.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:05.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:06.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:07.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:07 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:58:07 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:58:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:08.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:58:09.055 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 07:58:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:58:09.056 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 07:58:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:09.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:10.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:11.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:58:12.057 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:58:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:12.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:13.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:14 np0005465987 podman[233029]: 2025-10-02 11:58:14.848656523 +0000 UTC m=+0.067528206 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 07:58:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:15.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:16.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:17.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:18.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:19.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Oct  2 07:58:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:20.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Oct  2 07:58:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:21.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:22.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:22 np0005465987 nova_compute[230713]: 2025-10-02 11:58:22.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:58:22 np0005465987 nova_compute[230713]: 2025-10-02 11:58:22.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 07:58:22 np0005465987 nova_compute[230713]: 2025-10-02 11:58:22.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 07:58:23 np0005465987 nova_compute[230713]: 2025-10-02 11:58:23.029 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 07:58:23 np0005465987 nova_compute[230713]: 2025-10-02 11:58:23.030 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:58:23 np0005465987 nova_compute[230713]: 2025-10-02 11:58:23.030 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 07:58:23 np0005465987 nova_compute[230713]: 2025-10-02 11:58:23.078 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 07:58:23 np0005465987 nova_compute[230713]: 2025-10-02 11:58:23.079 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:58:23 np0005465987 nova_compute[230713]: 2025-10-02 11:58:23.079 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 07:58:23 np0005465987 nova_compute[230713]: 2025-10-02 11:58:23.141 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:58:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:23.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:24.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:24 np0005465987 podman[233050]: 2025-10-02 11:58:24.846509594 +0000 UTC m=+0.057076119 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 07:58:24 np0005465987 podman[233049]: 2025-10-02 11:58:24.875515531 +0000 UTC m=+0.076669297 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  2 07:58:24 np0005465987 podman[233048]: 2025-10-02 11:58:24.950201244 +0000 UTC m=+0.156990415 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:58:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:58:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:25.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:58:26 np0005465987 nova_compute[230713]: 2025-10-02 11:58:26.102 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:58:26 np0005465987 nova_compute[230713]: 2025-10-02 11:58:26.103 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:58:26 np0005465987 nova_compute[230713]: 2025-10-02 11:58:26.103 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:58:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:26.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:26 np0005465987 nova_compute[230713]: 2025-10-02 11:58:26.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.042 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.043 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.043 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.044 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.044 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:58:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:58:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3111796916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.524 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:58:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:27.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.675 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.676 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5333MB free_disk=20.967517852783203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.676 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.677 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.809 2 DEBUG oslo_concurrency.processutils [None req-9d8cf823-0f39-4c50-a429-1e8ab4f981de 3d04499745974bee95a8cf060aa5698e 295fcfaf9f484aa7affa8d2aca044c7a - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.851 2 DEBUG oslo_concurrency.processutils [None req-9d8cf823-0f39-4c50-a429-1e8ab4f981de 3d04499745974bee95a8cf060aa5698e 295fcfaf9f484aa7affa8d2aca044c7a - - default default] CMD "env LANG=C uptime" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:58:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Oct  2 07:58:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:58:27.925 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:58:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:58:27.925 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:58:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:58:27.925 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.960 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 07:58:27 np0005465987 nova_compute[230713]: 2025-10-02 11:58:27.960 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 07:58:28 np0005465987 nova_compute[230713]: 2025-10-02 11:58:28.036 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 07:58:28 np0005465987 nova_compute[230713]: 2025-10-02 11:58:28.235 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 07:58:28 np0005465987 nova_compute[230713]: 2025-10-02 11:58:28.235 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 07:58:28 np0005465987 nova_compute[230713]: 2025-10-02 11:58:28.251 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 07:58:28 np0005465987 nova_compute[230713]: 2025-10-02 11:58:28.276 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 07:58:28 np0005465987 nova_compute[230713]: 2025-10-02 11:58:28.318 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:58:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:28.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:58:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2656278343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:58:28 np0005465987 nova_compute[230713]: 2025-10-02 11:58:28.861 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:58:28 np0005465987 nova_compute[230713]: 2025-10-02 11:58:28.868 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 07:58:29 np0005465987 nova_compute[230713]: 2025-10-02 11:58:29.005 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 07:58:29 np0005465987 nova_compute[230713]: 2025-10-02 11:58:29.007 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 07:58:29 np0005465987 nova_compute[230713]: 2025-10-02 11:58:29.008 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:58:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:29.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:30 np0005465987 nova_compute[230713]: 2025-10-02 11:58:30.001 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:58:30 np0005465987 nova_compute[230713]: 2025-10-02 11:58:30.002 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:58:30 np0005465987 nova_compute[230713]: 2025-10-02 11:58:30.002 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:58:30 np0005465987 nova_compute[230713]: 2025-10-02 11:58:30.003 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:58:30 np0005465987 nova_compute[230713]: 2025-10-02 11:58:30.003 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 07:58:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:30.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:31.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:32 np0005465987 nova_compute[230713]: 2025-10-02 11:58:32.076 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:58:32 np0005465987 nova_compute[230713]: 2025-10-02 11:58:32.077 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:58:32 np0005465987 nova_compute[230713]: 2025-10-02 11:58:32.102 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 07:58:32 np0005465987 nova_compute[230713]: 2025-10-02 11:58:32.189 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:58:32 np0005465987 nova_compute[230713]: 2025-10-02 11:58:32.191 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:58:32 np0005465987 nova_compute[230713]: 2025-10-02 11:58:32.198 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 07:58:32 np0005465987 nova_compute[230713]: 2025-10-02 11:58:32.199 2 INFO nova.compute.claims [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 07:58:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:32.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:32 np0005465987 nova_compute[230713]: 2025-10-02 11:58:32.956 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:58:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:58:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/584807344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:58:33 np0005465987 nova_compute[230713]: 2025-10-02 11:58:33.418 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:58:33 np0005465987 nova_compute[230713]: 2025-10-02 11:58:33.425 2 DEBUG nova.compute.provider_tree [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 07:58:33 np0005465987 nova_compute[230713]: 2025-10-02 11:58:33.469 2 DEBUG nova.scheduler.client.report [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 07:58:33 np0005465987 nova_compute[230713]: 2025-10-02 11:58:33.520 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:58:33 np0005465987 nova_compute[230713]: 2025-10-02 11:58:33.520 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 07:58:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:33.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:33 np0005465987 nova_compute[230713]: 2025-10-02 11:58:33.651 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 07:58:33 np0005465987 nova_compute[230713]: 2025-10-02 11:58:33.652 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 07:58:33 np0005465987 nova_compute[230713]: 2025-10-02 11:58:33.741 2 INFO nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 07:58:33 np0005465987 nova_compute[230713]: 2025-10-02 11:58:33.784 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 07:58:34 np0005465987 nova_compute[230713]: 2025-10-02 11:58:34.158 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 07:58:34 np0005465987 nova_compute[230713]: 2025-10-02 11:58:34.161 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 07:58:34 np0005465987 nova_compute[230713]: 2025-10-02 11:58:34.161 2 INFO nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Creating image(s)#033[00m
Oct  2 07:58:34 np0005465987 nova_compute[230713]: 2025-10-02 11:58:34.211 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:58:34 np0005465987 nova_compute[230713]: 2025-10-02 11:58:34.260 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:58:34 np0005465987 nova_compute[230713]: 2025-10-02 11:58:34.307 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:58:34 np0005465987 nova_compute[230713]: 2025-10-02 11:58:34.314 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:58:34 np0005465987 nova_compute[230713]: 2025-10-02 11:58:34.316 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:58:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:34.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:35 np0005465987 nova_compute[230713]: 2025-10-02 11:58:35.077 2 DEBUG nova.virt.libvirt.imagebackend [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/c2d0c2bc-fe21-4689-86ae-d6728c15874c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/c2d0c2bc-fe21-4689-86ae-d6728c15874c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 07:58:35 np0005465987 nova_compute[230713]: 2025-10-02 11:58:35.136 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Automatically allocating a network for project 8972026d0f3a4bf4b6debd9555f9225c. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Oct  2 07:58:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:35.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:36.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.189 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.243 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.part --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.245 2 DEBUG nova.virt.images [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] c2d0c2bc-fe21-4689-86ae-d6728c15874c was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.246 2 DEBUG nova.privsep.utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.246 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.part /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.432 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.part /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.converted" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.437 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.492 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968.converted --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.494 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.523 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.528 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:58:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:37.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.842 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.905 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] resizing rbd image 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 07:58:37 np0005465987 nova_compute[230713]: 2025-10-02 11:58:37.994 2 DEBUG nova.objects.instance [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lazy-loading 'migration_context' on Instance uuid 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 07:58:38 np0005465987 nova_compute[230713]: 2025-10-02 11:58:38.028 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 07:58:38 np0005465987 nova_compute[230713]: 2025-10-02 11:58:38.029 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Ensure instance console log exists: /var/lib/nova/instances/98e7bcbc-ab7d-47d0-bcfc-afa693d84b35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 07:58:38 np0005465987 nova_compute[230713]: 2025-10-02 11:58:38.030 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:58:38 np0005465987 nova_compute[230713]: 2025-10-02 11:58:38.030 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:58:38 np0005465987 nova_compute[230713]: 2025-10-02 11:58:38.030 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:58:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:38.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:39.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:40.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:41.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:42.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Oct  2 07:58:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:43.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Oct  2 07:58:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:44.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:45.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:45 np0005465987 podman[233361]: 2025-10-02 11:58:45.843651875 +0000 UTC m=+0.059193008 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 07:58:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:46.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:58:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:47.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:48.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:49.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:50.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:51.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:58:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:52.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Oct  2 07:58:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:53.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:54 np0005465987 nova_compute[230713]: 2025-10-02 11:58:54.175 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Automatically allocated network: {'id': '48e5c857-28d2-421a-9519-d32a13037daa', 'name': 'auto_allocated_network', 'tenant_id': '8972026d0f3a4bf4b6debd9555f9225c', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['16e0798e-6426-4773-8f30-55d7d7bbe4dc', 'ad194778-c6b1-4bb8-a50b-11049b061235'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-10-02T11:58:36Z', 'updated_at': '2025-10-02T11:58:47Z', 'revision_number': 4, 'project_id': '8972026d0f3a4bf4b6debd9555f9225c'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478#033[00m
Oct  2 07:58:54 np0005465987 nova_compute[230713]: 2025-10-02 11:58:54.196 2 WARNING oslo_policy.policy [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  2 07:58:54 np0005465987 nova_compute[230713]: 2025-10-02 11:58:54.197 2 WARNING oslo_policy.policy [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  2 07:58:54 np0005465987 nova_compute[230713]: 2025-10-02 11:58:54.202 2 DEBUG nova.policy [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '17903cd0333c407b96f0aede6dd3b16c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8972026d0f3a4bf4b6debd9555f9225c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 07:58:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:54.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:55.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:55 np0005465987 nova_compute[230713]: 2025-10-02 11:58:55.644 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Successfully created port: 0293cd1e-bcfa-4860-9834-b7d85322f2b7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 07:58:55 np0005465987 podman[233382]: 2025-10-02 11:58:55.883007506 +0000 UTC m=+0.075514577 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 07:58:55 np0005465987 podman[233381]: 2025-10-02 11:58:55.916418614 +0000 UTC m=+0.109529001 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid)
Oct  2 07:58:55 np0005465987 podman[233380]: 2025-10-02 11:58:55.92613018 +0000 UTC m=+0.126875177 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 07:58:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:56.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:56 np0005465987 nova_compute[230713]: 2025-10-02 11:58:56.713 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Acquiring lock "05ef51ee-c5d0-405e-a30a-9915a4117983" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:58:56 np0005465987 nova_compute[230713]: 2025-10-02 11:58:56.713 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:58:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:58:56 np0005465987 nova_compute[230713]: 2025-10-02 11:58:56.847 2 DEBUG nova.compute.manager [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.015 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.015 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.023 2 DEBUG nova.virt.hardware [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.023 2 INFO nova.compute.claims [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.374 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.591 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Successfully updated port: 0293cd1e-bcfa-4860-9834-b7d85322f2b7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.608 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "refresh_cache-98e7bcbc-ab7d-47d0-bcfc-afa693d84b35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.609 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquired lock "refresh_cache-98e7bcbc-ab7d-47d0-bcfc-afa693d84b35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.609 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 07:58:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:57.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:58:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3339636446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.841 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.848 2 DEBUG nova.compute.provider_tree [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.884 2 ERROR nova.scheduler.client.report [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [req-bef14c69-ffdc-4f12-8410-20a3d205fb2d] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID abc4c1e4-e97d-4065-b850-e0065c8f9ab7.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-bef14c69-ffdc-4f12-8410-20a3d205fb2d"}]}#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.912 2 DEBUG nova.scheduler.client.report [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.929 2 DEBUG nova.scheduler.client.report [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.929 2 DEBUG nova.compute.provider_tree [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.949 2 DEBUG nova.scheduler.client.report [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 07:58:57 np0005465987 nova_compute[230713]: 2025-10-02 11:58:57.983 2 DEBUG nova.scheduler.client.report [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.035 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.170 2 DEBUG nova.compute.manager [req-7b1583f8-af65-4bb5-b4d4-1a65cbc222ff req-8f2a5d7b-7c16-4b55-8749-57a09a8dcd7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Received event network-changed-0293cd1e-bcfa-4860-9834-b7d85322f2b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.171 2 DEBUG nova.compute.manager [req-7b1583f8-af65-4bb5-b4d4-1a65cbc222ff req-8f2a5d7b-7c16-4b55-8749-57a09a8dcd7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Refreshing instance network info cache due to event network-changed-0293cd1e-bcfa-4860-9834-b7d85322f2b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.172 2 DEBUG oslo_concurrency.lockutils [req-7b1583f8-af65-4bb5-b4d4-1a65cbc222ff req-8f2a5d7b-7c16-4b55-8749-57a09a8dcd7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-98e7bcbc-ab7d-47d0-bcfc-afa693d84b35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.213 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 07:58:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:58:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:58.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:58:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:58:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1453781497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.479 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.484 2 DEBUG nova.compute.provider_tree [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.645 2 DEBUG nova.scheduler.client.report [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Updated inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.646 2 DEBUG nova.compute.provider_tree [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Updating resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.646 2 DEBUG nova.compute.provider_tree [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.679 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.679 2 DEBUG nova.compute.manager [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.721 2 DEBUG nova.compute.manager [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.722 2 DEBUG nova.network.neutron [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.743 2 INFO nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.761 2 DEBUG nova.compute.manager [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.867 2 DEBUG nova.compute.manager [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.868 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.868 2 INFO nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Creating image(s)
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.895 2 DEBUG nova.storage.rbd_utils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] rbd image 05ef51ee-c5d0-405e-a30a-9915a4117983_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.918 2 DEBUG nova.storage.rbd_utils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] rbd image 05ef51ee-c5d0-405e-a30a-9915a4117983_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.946 2 DEBUG nova.storage.rbd_utils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] rbd image 05ef51ee-c5d0-405e-a30a-9915a4117983_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 07:58:58 np0005465987 nova_compute[230713]: 2025-10-02 11:58:58.949 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.034 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.035 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.037 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.037 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.072 2 DEBUG nova.storage.rbd_utils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] rbd image 05ef51ee-c5d0-405e-a30a-9915a4117983_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.076 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 05ef51ee-c5d0-405e-a30a-9915a4117983_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.171 2 DEBUG nova.policy [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '39f72bf379684b959071ccdf03f5f52f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6807e9c97bd646799d4adfa12cdbd47e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 07:58:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:58:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:58:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:59.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.696 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 05ef51ee-c5d0-405e-a30a-9915a4117983_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.772 2 DEBUG nova.storage.rbd_utils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] resizing rbd image 05ef51ee-c5d0-405e-a30a-9915a4117983_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.898 2 DEBUG nova.objects.instance [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lazy-loading 'migration_context' on Instance uuid 05ef51ee-c5d0-405e-a30a-9915a4117983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.949 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.949 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Ensure instance console log exists: /var/lib/nova/instances/05ef51ee-c5d0-405e-a30a-9915a4117983/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.950 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.950 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 07:58:59 np0005465987 nova_compute[230713]: 2025-10-02 11:58:59.950 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.159 2 DEBUG nova.network.neutron [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Successfully created port: 28a876da-5d62-4032-8917-ce1c597913a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.400 2 DEBUG nova.network.neutron [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Updating instance_info_cache with network_info: [{"id": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "address": "fa:16:3e:48:0a:45", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::ef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0293cd1e-bc", "ovs_interfaceid": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 07:59:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:00.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.534 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Releasing lock "refresh_cache-98e7bcbc-ab7d-47d0-bcfc-afa693d84b35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.535 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Instance network_info: |[{"id": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "address": "fa:16:3e:48:0a:45", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::ef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0293cd1e-bc", "ovs_interfaceid": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.535 2 DEBUG oslo_concurrency.lockutils [req-7b1583f8-af65-4bb5-b4d4-1a65cbc222ff req-8f2a5d7b-7c16-4b55-8749-57a09a8dcd7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-98e7bcbc-ab7d-47d0-bcfc-afa693d84b35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.535 2 DEBUG nova.network.neutron [req-7b1583f8-af65-4bb5-b4d4-1a65cbc222ff req-8f2a5d7b-7c16-4b55-8749-57a09a8dcd7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Refreshing network info cache for port 0293cd1e-bcfa-4860-9834-b7d85322f2b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.538 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Start _get_guest_xml network_info=[{"id": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "address": "fa:16:3e:48:0a:45", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::ef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0293cd1e-bc", "ovs_interfaceid": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.541 2 WARNING nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.546 2 DEBUG nova.virt.libvirt.host [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.546 2 DEBUG nova.virt.libvirt.host [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.550 2 DEBUG nova.virt.libvirt.host [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.550 2 DEBUG nova.virt.libvirt.host [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.551 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.552 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.552 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.552 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.552 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.552 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.553 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.553 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.553 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.553 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.553 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.553 2 DEBUG nova.virt.hardware [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.556 2 DEBUG nova.privsep.utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  2 07:59:00 np0005465987 nova_compute[230713]: 2025-10-02 11:59:00.557 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 07:59:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 07:59:01 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3537301381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.018 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.052 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.056 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 07:59:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 07:59:01 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1430692618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.489 2 DEBUG nova.network.neutron [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Successfully updated port: 28a876da-5d62-4032-8917-ce1c597913a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.492 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.495 2 DEBUG nova.virt.libvirt.vif [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T11:58:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-524116669-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-524116669-1',id=2,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8972026d0f3a4bf4b6debd9555f9225c',ramdisk_id='',reservation_id='r-5dqt1vqy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-379631237',owner_user_name='tempest-AutoAllocateNetworkTest-379631237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T11:58:33Z,user_data=None,user_id='17903cd0333c407b96f0aede6dd3b16c',uuid=98e7bcbc-ab7d-47d0-bcfc-afa693d84b35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "address": "fa:16:3e:48:0a:45", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::ef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0293cd1e-bc", "ovs_interfaceid": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.496 2 DEBUG nova.network.os_vif_util [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Converting VIF {"id": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "address": "fa:16:3e:48:0a:45", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::ef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0293cd1e-bc", "ovs_interfaceid": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.498 2 DEBUG nova.network.os_vif_util [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=0293cd1e-bcfa-4860-9834-b7d85322f2b7,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0293cd1e-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.501 2 DEBUG nova.objects.instance [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lazy-loading 'pci_devices' on Instance uuid 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.507 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Acquiring lock "refresh_cache-05ef51ee-c5d0-405e-a30a-9915a4117983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.508 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Acquired lock "refresh_cache-05ef51ee-c5d0-405e-a30a-9915a4117983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.508 2 DEBUG nova.network.neutron [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.527 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] End _get_guest_xml xml=<domain type="kvm">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  <uuid>98e7bcbc-ab7d-47d0-bcfc-afa693d84b35</uuid>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  <name>instance-00000002</name>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <nova:name>tempest-tempest.common.compute-instance-524116669-1</nova:name>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 11:59:00</nova:creationTime>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <nova:user uuid="17903cd0333c407b96f0aede6dd3b16c">tempest-AutoAllocateNetworkTest-379631237-project-member</nova:user>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <nova:project uuid="8972026d0f3a4bf4b6debd9555f9225c">tempest-AutoAllocateNetworkTest-379631237</nova:project>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <nova:port uuid="0293cd1e-bcfa-4860-9834-b7d85322f2b7">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.1.0.123" ipVersion="4"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="fdfe:381f:8400:1::ef" ipVersion="6"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <system>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <entry name="serial">98e7bcbc-ab7d-47d0-bcfc-afa693d84b35</entry>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <entry name="uuid">98e7bcbc-ab7d-47d0-bcfc-afa693d84b35</entry>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    </system>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  <os>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  </os>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  <features>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  </features>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  </clock>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  <devices>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      </source>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      </auth>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    </disk>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk.config">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      </source>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      </auth>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    </disk>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:48:0a:45"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <target dev="tap0293cd1e-bc"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    </interface>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/98e7bcbc-ab7d-47d0-bcfc-afa693d84b35/console.log" append="off"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    </serial>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <video>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    </video>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    </rng>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 07:59:01 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 07:59:01 np0005465987 nova_compute[230713]:  </devices>
Oct  2 07:59:01 np0005465987 nova_compute[230713]: </domain>
Oct  2 07:59:01 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.528 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Preparing to wait for external event network-vif-plugged-0293cd1e-bcfa-4860-9834-b7d85322f2b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.528 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.529 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.529 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.531 2 DEBUG nova.virt.libvirt.vif [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T11:58:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-524116669-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-524116669-1',id=2,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8972026d0f3a4bf4b6debd9555f9225c',ramdisk_id='',reservation_id='r-5dqt1vqy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-379631237',owner_user_name='tempest-AutoAllocateNetworkTest-379631237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T11:58:33Z,user_data=None,user_id='17903cd0333c407b96f0aede6dd3b16c',uuid=98e7bcbc-ab7d-47d0-bcfc-afa693d84b35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "address": "fa:16:3e:48:0a:45", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::ef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0293cd1e-bc", "ovs_interfaceid": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.531 2 DEBUG nova.network.os_vif_util [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Converting VIF {"id": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "address": "fa:16:3e:48:0a:45", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::ef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0293cd1e-bc", "ovs_interfaceid": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.532 2 DEBUG nova.network.os_vif_util [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=0293cd1e-bcfa-4860-9834-b7d85322f2b7,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0293cd1e-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.533 2 DEBUG os_vif [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=0293cd1e-bcfa-4860-9834-b7d85322f2b7,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0293cd1e-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.593 2 DEBUG ovsdbapp.backend.ovs_idl [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.593 2 DEBUG ovsdbapp.backend.ovs_idl [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.593 2 DEBUG ovsdbapp.backend.ovs_idl [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.613 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.614 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 07:59:01 np0005465987 nova_compute[230713]: 2025-10-02 11:59:01.615 2 INFO oslo.privsep.daemon [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpnappp11t/privsep.sock']
Oct  2 07:59:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:59:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:01.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:59:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.146 2 DEBUG nova.network.neutron [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.300 2 INFO oslo.privsep.daemon [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.178 778 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.182 778 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.185 778 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.185 778 INFO oslo.privsep.daemon [-] privsep daemon running as pid 778#033[00m
Oct  2 07:59:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:02.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.575 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0293cd1e-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0293cd1e-bc, col_values=(('external_ids', {'iface-id': '0293cd1e-bcfa-4860-9834-b7d85322f2b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:0a:45', 'vm-uuid': '98e7bcbc-ab7d-47d0-bcfc-afa693d84b35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:02 np0005465987 NetworkManager[44910]: <info>  [1759406342.5786] manager: (tap0293cd1e-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.591 2 INFO os_vif [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=0293cd1e-bcfa-4860-9834-b7d85322f2b7,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0293cd1e-bc')#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.644 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.645 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.645 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] No VIF found with MAC fa:16:3e:48:0a:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.645 2 INFO nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Using config drive#033[00m
Oct  2 07:59:02 np0005465987 nova_compute[230713]: 2025-10-02 11:59:02.670 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.030 2 DEBUG nova.compute.manager [req-ae9ab64f-1cf5-4d04-8f50-05cc7e8255e6 req-97c70b1c-3ed0-46d0-9b0f-43cf9a632b4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Received event network-changed-28a876da-5d62-4032-8917-ce1c597913a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.030 2 DEBUG nova.compute.manager [req-ae9ab64f-1cf5-4d04-8f50-05cc7e8255e6 req-97c70b1c-3ed0-46d0-9b0f-43cf9a632b4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Refreshing instance network info cache due to event network-changed-28a876da-5d62-4032-8917-ce1c597913a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.031 2 DEBUG oslo_concurrency.lockutils [req-ae9ab64f-1cf5-4d04-8f50-05cc7e8255e6 req-97c70b1c-3ed0-46d0-9b0f-43cf9a632b4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-05ef51ee-c5d0-405e-a30a-9915a4117983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.571 2 INFO nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Creating config drive at /var/lib/nova/instances/98e7bcbc-ab7d-47d0-bcfc-afa693d84b35/disk.config#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.580 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98e7bcbc-ab7d-47d0-bcfc-afa693d84b35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3b46lki5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:03.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.640 2 DEBUG nova.network.neutron [req-7b1583f8-af65-4bb5-b4d4-1a65cbc222ff req-8f2a5d7b-7c16-4b55-8749-57a09a8dcd7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Updated VIF entry in instance network info cache for port 0293cd1e-bcfa-4860-9834-b7d85322f2b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.642 2 DEBUG nova.network.neutron [req-7b1583f8-af65-4bb5-b4d4-1a65cbc222ff req-8f2a5d7b-7c16-4b55-8749-57a09a8dcd7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Updating instance_info_cache with network_info: [{"id": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "address": "fa:16:3e:48:0a:45", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::ef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0293cd1e-bc", "ovs_interfaceid": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.670 2 DEBUG oslo_concurrency.lockutils [req-7b1583f8-af65-4bb5-b4d4-1a65cbc222ff req-8f2a5d7b-7c16-4b55-8749-57a09a8dcd7d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-98e7bcbc-ab7d-47d0-bcfc-afa693d84b35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.705 2 DEBUG nova.network.neutron [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Updating instance_info_cache with network_info: [{"id": "28a876da-5d62-4032-8917-ce1c597913a9", "address": "fa:16:3e:cb:0e:30", "network": {"id": "50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1820286714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6807e9c97bd646799d4adfa12cdbd47e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28a876da-5d", "ovs_interfaceid": "28a876da-5d62-4032-8917-ce1c597913a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.734 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98e7bcbc-ab7d-47d0-bcfc-afa693d84b35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3b46lki5" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.770 2 DEBUG nova.storage.rbd_utils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] rbd image 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.773 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98e7bcbc-ab7d-47d0-bcfc-afa693d84b35/disk.config 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.788 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Releasing lock "refresh_cache-05ef51ee-c5d0-405e-a30a-9915a4117983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.789 2 DEBUG nova.compute.manager [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Instance network_info: |[{"id": "28a876da-5d62-4032-8917-ce1c597913a9", "address": "fa:16:3e:cb:0e:30", "network": {"id": "50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1820286714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6807e9c97bd646799d4adfa12cdbd47e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28a876da-5d", "ovs_interfaceid": "28a876da-5d62-4032-8917-ce1c597913a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.789 2 DEBUG oslo_concurrency.lockutils [req-ae9ab64f-1cf5-4d04-8f50-05cc7e8255e6 req-97c70b1c-3ed0-46d0-9b0f-43cf9a632b4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-05ef51ee-c5d0-405e-a30a-9915a4117983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.790 2 DEBUG nova.network.neutron [req-ae9ab64f-1cf5-4d04-8f50-05cc7e8255e6 req-97c70b1c-3ed0-46d0-9b0f-43cf9a632b4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Refreshing network info cache for port 28a876da-5d62-4032-8917-ce1c597913a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.793 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Start _get_guest_xml network_info=[{"id": "28a876da-5d62-4032-8917-ce1c597913a9", "address": "fa:16:3e:cb:0e:30", "network": {"id": "50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1820286714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6807e9c97bd646799d4adfa12cdbd47e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28a876da-5d", "ovs_interfaceid": "28a876da-5d62-4032-8917-ce1c597913a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.797 2 WARNING nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.800 2 DEBUG nova.virt.libvirt.host [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.801 2 DEBUG nova.virt.libvirt.host [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.803 2 DEBUG nova.virt.libvirt.host [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.803 2 DEBUG nova.virt.libvirt.host [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.804 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.805 2 DEBUG nova.virt.hardware [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.805 2 DEBUG nova.virt.hardware [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.805 2 DEBUG nova.virt.hardware [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.805 2 DEBUG nova.virt.hardware [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.806 2 DEBUG nova.virt.hardware [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.806 2 DEBUG nova.virt.hardware [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.806 2 DEBUG nova.virt.hardware [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.806 2 DEBUG nova.virt.hardware [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.806 2 DEBUG nova.virt.hardware [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.806 2 DEBUG nova.virt.hardware [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.807 2 DEBUG nova.virt.hardware [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.809 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.935 2 DEBUG oslo_concurrency.processutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98e7bcbc-ab7d-47d0-bcfc-afa693d84b35/disk.config 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:03 np0005465987 nova_compute[230713]: 2025-10-02 11:59:03.936 2 INFO nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Deleting local config drive /var/lib/nova/instances/98e7bcbc-ab7d-47d0-bcfc-afa693d84b35/disk.config because it was imported into RBD.#033[00m
Oct  2 07:59:03 np0005465987 systemd[1]: Starting libvirt secret daemon...
Oct  2 07:59:04 np0005465987 systemd[1]: Started libvirt secret daemon.
Oct  2 07:59:04 np0005465987 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct  2 07:59:04 np0005465987 kernel: tap0293cd1e-bc: entered promiscuous mode
Oct  2 07:59:04 np0005465987 NetworkManager[44910]: <info>  [1759406344.0683] manager: (tap0293cd1e-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct  2 07:59:04 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:04Z|00027|binding|INFO|Claiming lport 0293cd1e-bcfa-4860-9834-b7d85322f2b7 for this chassis.
Oct  2 07:59:04 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:04Z|00028|binding|INFO|0293cd1e-bcfa-4860-9834-b7d85322f2b7: Claiming fa:16:3e:48:0a:45 10.1.0.123 fdfe:381f:8400:1::ef
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:04.102 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:0a:45 10.1.0.123 fdfe:381f:8400:1::ef'], port_security=['fa:16:3e:48:0a:45 10.1.0.123 fdfe:381f:8400:1::ef'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.123/26 fdfe:381f:8400:1::ef/64', 'neutron:device_id': '98e7bcbc-ab7d-47d0-bcfc-afa693d84b35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e5c857-28d2-421a-9519-d32a13037daa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8972026d0f3a4bf4b6debd9555f9225c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c1955bc9-f08c-4e28-af03-54d4a3949aee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c8e46c-a173-4ff5-bd2b-7026f16b2de8, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0293cd1e-bcfa-4860-9834-b7d85322f2b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 07:59:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:04.103 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0293cd1e-bcfa-4860-9834-b7d85322f2b7 in datapath 48e5c857-28d2-421a-9519-d32a13037daa bound to our chassis#033[00m
Oct  2 07:59:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:04.111 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48e5c857-28d2-421a-9519-d32a13037daa#033[00m
Oct  2 07:59:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:04.112 138325 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpdpsp1c4i/privsep.sock']#033[00m
Oct  2 07:59:04 np0005465987 systemd-udevd[233841]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:59:04 np0005465987 NetworkManager[44910]: <info>  [1759406344.1376] device (tap0293cd1e-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:59:04 np0005465987 NetworkManager[44910]: <info>  [1759406344.1385] device (tap0293cd1e-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 07:59:04 np0005465987 systemd-machined[188335]: New machine qemu-1-instance-00000002.
Oct  2 07:59:04 np0005465987 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:04 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:04Z|00029|binding|INFO|Setting lport 0293cd1e-bcfa-4860-9834-b7d85322f2b7 ovn-installed in OVS
Oct  2 07:59:04 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:04Z|00030|binding|INFO|Setting lport 0293cd1e-bcfa-4860-9834-b7d85322f2b7 up in Southbound
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 07:59:04 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/841098749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.256 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.288 2 DEBUG nova.storage.rbd_utils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] rbd image 05ef51ee-c5d0-405e-a30a-9915a4117983_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.295 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:04.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.485 2 DEBUG nova.compute.manager [req-b6c480cf-0f10-42a2-85a4-772eb6cd3a8f req-fd0bac06-7067-4d9a-840c-9a8b93d97a66 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Received event network-vif-plugged-0293cd1e-bcfa-4860-9834-b7d85322f2b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.486 2 DEBUG oslo_concurrency.lockutils [req-b6c480cf-0f10-42a2-85a4-772eb6cd3a8f req-fd0bac06-7067-4d9a-840c-9a8b93d97a66 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.486 2 DEBUG oslo_concurrency.lockutils [req-b6c480cf-0f10-42a2-85a4-772eb6cd3a8f req-fd0bac06-7067-4d9a-840c-9a8b93d97a66 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.487 2 DEBUG oslo_concurrency.lockutils [req-b6c480cf-0f10-42a2-85a4-772eb6cd3a8f req-fd0bac06-7067-4d9a-840c-9a8b93d97a66 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.487 2 DEBUG nova.compute.manager [req-b6c480cf-0f10-42a2-85a4-772eb6cd3a8f req-fd0bac06-7067-4d9a-840c-9a8b93d97a66 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Processing event network-vif-plugged-0293cd1e-bcfa-4860-9834-b7d85322f2b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 07:59:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 07:59:04 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1493660323' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.704 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.706 2 DEBUG nova.virt.libvirt.vif [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T11:58:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-597937981',display_name='tempest-VolumesAssistedSnapshotsTest-server-597937981',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-597937981',id=5,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPKkHzC5UoZKgLNglsajP+t/XvuXgPqJUdk5kjgTplZdz9bBYLeG5uMVbdoqM9TGvz04jV/stznA9kUBQqWcEeEy/AqvTUml3lYXuQ4tJBJPP73o0nMwL9Feaz12WSkrFw==',key_name='tempest-keypair-480338542',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6807e9c97bd646799d4adfa12cdbd47e',ramdisk_id='',reservation_id='r-pgicc5gv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-22685689',owner_user_name='tempest-VolumesAssistedSnapshotsTest-22685689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T11:58:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39f72bf379684b959071ccdf03f5f52f',uuid=05ef51ee-c5d0-405e-a30a-9915a4117983,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28a876da-5d62-4032-8917-ce1c597913a9", "address": "fa:16:3e:cb:0e:30", "network": {"id": "50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1820286714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6807e9c97bd646799d4adfa12cdbd47e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28a876da-5d", "ovs_interfaceid": "28a876da-5d62-4032-8917-ce1c597913a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.707 2 DEBUG nova.network.os_vif_util [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Converting VIF {"id": "28a876da-5d62-4032-8917-ce1c597913a9", "address": "fa:16:3e:cb:0e:30", "network": {"id": "50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1820286714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6807e9c97bd646799d4adfa12cdbd47e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28a876da-5d", "ovs_interfaceid": "28a876da-5d62-4032-8917-ce1c597913a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.708 2 DEBUG nova.network.os_vif_util [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0e:30,bridge_name='br-int',has_traffic_filtering=True,id=28a876da-5d62-4032-8917-ce1c597913a9,network=Network(50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28a876da-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.709 2 DEBUG nova.objects.instance [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lazy-loading 'pci_devices' on Instance uuid 05ef51ee-c5d0-405e-a30a-9915a4117983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.725 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] End _get_guest_xml xml=<domain type="kvm">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  <uuid>05ef51ee-c5d0-405e-a30a-9915a4117983</uuid>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  <name>instance-00000005</name>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <nova:name>tempest-VolumesAssistedSnapshotsTest-server-597937981</nova:name>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 11:59:03</nova:creationTime>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <nova:user uuid="39f72bf379684b959071ccdf03f5f52f">tempest-VolumesAssistedSnapshotsTest-22685689-project-member</nova:user>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <nova:project uuid="6807e9c97bd646799d4adfa12cdbd47e">tempest-VolumesAssistedSnapshotsTest-22685689</nova:project>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <nova:port uuid="28a876da-5d62-4032-8917-ce1c597913a9">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <system>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <entry name="serial">05ef51ee-c5d0-405e-a30a-9915a4117983</entry>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <entry name="uuid">05ef51ee-c5d0-405e-a30a-9915a4117983</entry>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    </system>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  <os>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  </os>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  <features>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  </features>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  </clock>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  <devices>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/05ef51ee-c5d0-405e-a30a-9915a4117983_disk">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      </source>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      </auth>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    </disk>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/05ef51ee-c5d0-405e-a30a-9915a4117983_disk.config">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      </source>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      </auth>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    </disk>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:cb:0e:30"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <target dev="tap28a876da-5d"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    </interface>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/05ef51ee-c5d0-405e-a30a-9915a4117983/console.log" append="off"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    </serial>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <video>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    </video>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    </rng>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 07:59:04 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 07:59:04 np0005465987 nova_compute[230713]:  </devices>
Oct  2 07:59:04 np0005465987 nova_compute[230713]: </domain>
Oct  2 07:59:04 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.733 2 DEBUG nova.compute.manager [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Preparing to wait for external event network-vif-plugged-28a876da-5d62-4032-8917-ce1c597913a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.733 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Acquiring lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.734 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.734 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.735 2 DEBUG nova.virt.libvirt.vif [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T11:58:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-597937981',display_name='tempest-VolumesAssistedSnapshotsTest-server-597937981',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-597937981',id=5,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPKkHzC5UoZKgLNglsajP+t/XvuXgPqJUdk5kjgTplZdz9bBYLeG5uMVbdoqM9TGvz04jV/stznA9kUBQqWcEeEy/AqvTUml3lYXuQ4tJBJPP73o0nMwL9Feaz12WSkrFw==',key_name='tempest-keypair-480338542',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6807e9c97bd646799d4adfa12cdbd47e',ramdisk_id='',reservation_id='r-pgicc5gv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-22685689',owner_user_name='tempest-VolumesAssistedSnapshotsTest-22685689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T11:58:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39f72bf379684b959071ccdf03f5f52f',uuid=05ef51ee-c5d0-405e-a30a-9915a4117983,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28a876da-5d62-4032-8917-ce1c597913a9", "address": "fa:16:3e:cb:0e:30", "network": {"id": "50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1820286714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6807e9c97bd646799d4adfa12cdbd47e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28a876da-5d", "ovs_interfaceid": "28a876da-5d62-4032-8917-ce1c597913a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.736 2 DEBUG nova.network.os_vif_util [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Converting VIF {"id": "28a876da-5d62-4032-8917-ce1c597913a9", "address": "fa:16:3e:cb:0e:30", "network": {"id": "50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1820286714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6807e9c97bd646799d4adfa12cdbd47e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28a876da-5d", "ovs_interfaceid": "28a876da-5d62-4032-8917-ce1c597913a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.737 2 DEBUG nova.network.os_vif_util [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0e:30,bridge_name='br-int',has_traffic_filtering=True,id=28a876da-5d62-4032-8917-ce1c597913a9,network=Network(50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28a876da-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.738 2 DEBUG os_vif [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0e:30,bridge_name='br-int',has_traffic_filtering=True,id=28a876da-5d62-4032-8917-ce1c597913a9,network=Network(50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28a876da-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28a876da-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.744 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28a876da-5d, col_values=(('external_ids', {'iface-id': '28a876da-5d62-4032-8917-ce1c597913a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:0e:30', 'vm-uuid': '05ef51ee-c5d0-405e-a30a-9915a4117983'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:04 np0005465987 NetworkManager[44910]: <info>  [1759406344.7482] manager: (tap28a876da-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.751 2 INFO os_vif [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0e:30,bridge_name='br-int',has_traffic_filtering=True,id=28a876da-5d62-4032-8917-ce1c597913a9,network=Network(50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28a876da-5d')#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.819 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.820 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.820 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] No VIF found with MAC fa:16:3e:cb:0e:30, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.821 2 INFO nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Using config drive#033[00m
Oct  2 07:59:04 np0005465987 nova_compute[230713]: 2025-10-02 11:59:04.844 2 DEBUG nova.storage.rbd_utils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] rbd image 05ef51ee-c5d0-405e-a30a-9915a4117983_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:59:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:04.904 138325 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 07:59:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:04.904 138325 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpdpsp1c4i/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  2 07:59:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:04.757 233943 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 07:59:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:04.761 233943 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 07:59:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:04.763 233943 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Oct  2 07:59:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:04.763 233943 INFO oslo.privsep.daemon [-] privsep daemon running as pid 233943#033[00m
Oct  2 07:59:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:04.907 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b30aa97d-824c-4d56-9b67-ed335fbac3e3]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.033 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406345.0325246, 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.033 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] VM Started (Lifecycle Event)#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.035 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.047 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.050 2 INFO nova.virt.libvirt.driver [-] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Instance spawned successfully.#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.051 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.058 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.061 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.071 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.071 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.071 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.072 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.072 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.072 2 DEBUG nova.virt.libvirt.driver [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.085 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.086 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406345.033594, 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.086 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] VM Paused (Lifecycle Event)#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.135 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.138 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406345.03794, 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.138 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] VM Resumed (Lifecycle Event)#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.158 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.162 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.194 2 INFO nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Took 31.03 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.195 2 DEBUG nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.204 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.257 2 INFO nova.compute.manager [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Took 33.10 seconds to build instance.#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.289 2 DEBUG oslo_concurrency.lockutils [None req-3aa091b9-f09d-4ab0-88c9-b4bd51339a3c 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 33.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.339 2 INFO nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Creating config drive at /var/lib/nova/instances/05ef51ee-c5d0-405e-a30a-9915a4117983/disk.config#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.344 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/05ef51ee-c5d0-405e-a30a-9915a4117983/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3i46bgnq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.488 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/05ef51ee-c5d0-405e-a30a-9915a4117983/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3i46bgnq" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.527 2 DEBUG nova.storage.rbd_utils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] rbd image 05ef51ee-c5d0-405e-a30a-9915a4117983_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.532 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/05ef51ee-c5d0-405e-a30a-9915a4117983/disk.config 05ef51ee-c5d0-405e-a30a-9915a4117983_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.559 2 DEBUG nova.network.neutron [req-ae9ab64f-1cf5-4d04-8f50-05cc7e8255e6 req-97c70b1c-3ed0-46d0-9b0f-43cf9a632b4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Updated VIF entry in instance network info cache for port 28a876da-5d62-4032-8917-ce1c597913a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.559 2 DEBUG nova.network.neutron [req-ae9ab64f-1cf5-4d04-8f50-05cc7e8255e6 req-97c70b1c-3ed0-46d0-9b0f-43cf9a632b4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Updating instance_info_cache with network_info: [{"id": "28a876da-5d62-4032-8917-ce1c597913a9", "address": "fa:16:3e:cb:0e:30", "network": {"id": "50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1820286714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6807e9c97bd646799d4adfa12cdbd47e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28a876da-5d", "ovs_interfaceid": "28a876da-5d62-4032-8917-ce1c597913a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.580 2 DEBUG oslo_concurrency.lockutils [req-ae9ab64f-1cf5-4d04-8f50-05cc7e8255e6 req-97c70b1c-3ed0-46d0-9b0f-43cf9a632b4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-05ef51ee-c5d0-405e-a30a-9915a4117983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:59:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:05.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.719 2 DEBUG oslo_concurrency.processutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/05ef51ee-c5d0-405e-a30a-9915a4117983/disk.config 05ef51ee-c5d0-405e-a30a-9915a4117983_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.720 2 INFO nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Deleting local config drive /var/lib/nova/instances/05ef51ee-c5d0-405e-a30a-9915a4117983/disk.config because it was imported into RBD.#033[00m
Oct  2 07:59:05 np0005465987 virtqemud[230442]: End of file while reading data: Input/output error
Oct  2 07:59:05 np0005465987 virtqemud[230442]: End of file while reading data: Input/output error
Oct  2 07:59:05 np0005465987 NetworkManager[44910]: <info>  [1759406345.7701] manager: (tap28a876da-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Oct  2 07:59:05 np0005465987 kernel: tap28a876da-5d: entered promiscuous mode
Oct  2 07:59:05 np0005465987 systemd-udevd[233839]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:59:05 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:05Z|00031|binding|INFO|Claiming lport 28a876da-5d62-4032-8917-ce1c597913a9 for this chassis.
Oct  2 07:59:05 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:05Z|00032|binding|INFO|28a876da-5d62-4032-8917-ce1c597913a9: Claiming fa:16:3e:cb:0e:30 10.100.0.4
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:05.785 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:0e:30 10.100.0.4'], port_security=['fa:16:3e:cb:0e:30 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '05ef51ee-c5d0-405e-a30a-9915a4117983', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6807e9c97bd646799d4adfa12cdbd47e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '069bb314-33e5-4581-a7d1-f010e907050e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d71b56cb-f932-4ba0-8910-a51018308ba1, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=28a876da-5d62-4032-8917-ce1c597913a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 07:59:05 np0005465987 NetworkManager[44910]: <info>  [1759406345.7921] device (tap28a876da-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:59:05 np0005465987 NetworkManager[44910]: <info>  [1759406345.7929] device (tap28a876da-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 07:59:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:05.810 233943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:05.810 233943 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:05.810 233943 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:05 np0005465987 systemd-machined[188335]: New machine qemu-2-instance-00000005.
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:05 np0005465987 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Oct  2 07:59:05 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:05Z|00033|binding|INFO|Setting lport 28a876da-5d62-4032-8917-ce1c597913a9 ovn-installed in OVS
Oct  2 07:59:05 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:05Z|00034|binding|INFO|Setting lport 28a876da-5d62-4032-8917-ce1c597913a9 up in Southbound
Oct  2 07:59:05 np0005465987 nova_compute[230713]: 2025-10-02 11:59:05.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:06.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.928 2 DEBUG nova.compute.manager [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Received event network-vif-plugged-0293cd1e-bcfa-4860-9834-b7d85322f2b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.929 2 DEBUG oslo_concurrency.lockutils [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.929 2 DEBUG oslo_concurrency.lockutils [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.929 2 DEBUG oslo_concurrency.lockutils [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.929 2 DEBUG nova.compute.manager [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] No waiting events found dispatching network-vif-plugged-0293cd1e-bcfa-4860-9834-b7d85322f2b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.930 2 WARNING nova.compute.manager [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Received unexpected event network-vif-plugged-0293cd1e-bcfa-4860-9834-b7d85322f2b7 for instance with vm_state active and task_state None.#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.930 2 DEBUG nova.compute.manager [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Received event network-vif-plugged-28a876da-5d62-4032-8917-ce1c597913a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.930 2 DEBUG oslo_concurrency.lockutils [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.930 2 DEBUG oslo_concurrency.lockutils [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.931 2 DEBUG oslo_concurrency.lockutils [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.931 2 DEBUG nova.compute.manager [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Processing event network-vif-plugged-28a876da-5d62-4032-8917-ce1c597913a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.931 2 DEBUG nova.compute.manager [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Received event network-vif-plugged-28a876da-5d62-4032-8917-ce1c597913a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.931 2 DEBUG oslo_concurrency.lockutils [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.932 2 DEBUG oslo_concurrency.lockutils [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.932 2 DEBUG oslo_concurrency.lockutils [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.932 2 DEBUG nova.compute.manager [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] No waiting events found dispatching network-vif-plugged-28a876da-5d62-4032-8917-ce1c597913a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.932 2 WARNING nova.compute.manager [req-1b229631-f00f-4f60-8b3d-e8ab263e7b38 req-97cd65ec-8c64-4fa2-b9b0-955972c6c498 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Received unexpected event network-vif-plugged-28a876da-5d62-4032-8917-ce1c597913a9 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.936 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406346.9353719, 05ef51ee-c5d0-405e-a30a-9915a4117983 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.936 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] VM Started (Lifecycle Event)#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.938 2 DEBUG nova.compute.manager [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.949 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.956 2 INFO nova.virt.libvirt.driver [-] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Instance spawned successfully.#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.956 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.971 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.980 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.993 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.994 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.994 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.995 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.995 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 07:59:06 np0005465987 nova_compute[230713]: 2025-10-02 11:59:06.996 2 DEBUG nova.virt.libvirt.driver [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 07:59:07 np0005465987 nova_compute[230713]: 2025-10-02 11:59:07.011 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 07:59:07 np0005465987 nova_compute[230713]: 2025-10-02 11:59:07.011 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406346.935451, 05ef51ee-c5d0-405e-a30a-9915a4117983 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 07:59:07 np0005465987 nova_compute[230713]: 2025-10-02 11:59:07.011 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] VM Paused (Lifecycle Event)#033[00m
Oct  2 07:59:07 np0005465987 nova_compute[230713]: 2025-10-02 11:59:07.102 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 07:59:07 np0005465987 nova_compute[230713]: 2025-10-02 11:59:07.106 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406346.9487174, 05ef51ee-c5d0-405e-a30a-9915a4117983 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 07:59:07 np0005465987 nova_compute[230713]: 2025-10-02 11:59:07.106 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] VM Resumed (Lifecycle Event)#033[00m
Oct  2 07:59:07 np0005465987 nova_compute[230713]: 2025-10-02 11:59:07.215 2 INFO nova.compute.manager [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Took 8.35 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 07:59:07 np0005465987 nova_compute[230713]: 2025-10-02 11:59:07.216 2 DEBUG nova.compute.manager [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 07:59:07 np0005465987 nova_compute[230713]: 2025-10-02 11:59:07.240 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 07:59:07 np0005465987 nova_compute[230713]: 2025-10-02 11:59:07.243 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 07:59:07 np0005465987 nova_compute[230713]: 2025-10-02 11:59:07.302 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 07:59:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:07.330 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff54645-ef16-4f36-8c66-36cb51afb8bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:07.332 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48e5c857-21 in ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 07:59:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:07.334 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48e5c857-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 07:59:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:07.334 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9ceeb358-aff0-4fea-b0e0-dd736a4bd54d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:07.339 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[33503ad9-883f-40ad-b34a-b2d5f769de35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:07 np0005465987 nova_compute[230713]: 2025-10-02 11:59:07.371 2 INFO nova.compute.manager [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Took 10.39 seconds to build instance.#033[00m
Oct  2 07:59:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:07.380 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[f39e6e05-babe-44bd-a0d2-a103cd0ecec0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:07 np0005465987 nova_compute[230713]: 2025-10-02 11:59:07.393 2 DEBUG oslo_concurrency.lockutils [None req-79e96c6f-2725-46c7-8979-f0687b4dfec7 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:07.425 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8b552b48-1a17-4280-a2e2-beb35c7c50ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:07.427 138325 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp4wmt4j9r/privsep.sock']#033[00m
Oct  2 07:59:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:07.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:08.112 138325 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 07:59:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:08.113 138325 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp4wmt4j9r/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  2 07:59:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:07.998 234209 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 07:59:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:08.002 234209 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 07:59:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:08.005 234209 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  2 07:59:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:08.005 234209 INFO oslo.privsep.daemon [-] privsep daemon running as pid 234209#033[00m
Oct  2 07:59:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:08.116 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[55f4f8d7-b422-4d7c-8992-2dd6e6c81b74]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:59:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:08.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.571 2 DEBUG oslo_concurrency.lockutils [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.572 2 DEBUG oslo_concurrency.lockutils [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.572 2 DEBUG oslo_concurrency.lockutils [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.572 2 DEBUG oslo_concurrency.lockutils [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.572 2 DEBUG oslo_concurrency.lockutils [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.573 2 INFO nova.compute.manager [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Terminating instance#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.574 2 DEBUG nova.compute.manager [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 07:59:08 np0005465987 kernel: tap0293cd1e-bc (unregistering): left promiscuous mode
Oct  2 07:59:08 np0005465987 NetworkManager[44910]: <info>  [1759406348.6109] device (tap0293cd1e-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 07:59:08 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:08Z|00035|binding|INFO|Releasing lport 0293cd1e-bcfa-4860-9834-b7d85322f2b7 from this chassis (sb_readonly=0)
Oct  2 07:59:08 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:08Z|00036|binding|INFO|Setting lport 0293cd1e-bcfa-4860-9834-b7d85322f2b7 down in Southbound
Oct  2 07:59:08 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:08Z|00037|binding|INFO|Removing iface tap0293cd1e-bc ovn-installed in OVS
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:08.627 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:0a:45 10.1.0.123 fdfe:381f:8400:1::ef'], port_security=['fa:16:3e:48:0a:45 10.1.0.123 fdfe:381f:8400:1::ef'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.123/26 fdfe:381f:8400:1::ef/64', 'neutron:device_id': '98e7bcbc-ab7d-47d0-bcfc-afa693d84b35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e5c857-28d2-421a-9519-d32a13037daa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8972026d0f3a4bf4b6debd9555f9225c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c1955bc9-f08c-4e28-af03-54d4a3949aee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0c8e46c-a173-4ff5-bd2b-7026f16b2de8, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0293cd1e-bcfa-4860-9834-b7d85322f2b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:08.639 234209 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:08.639 234209 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:08.639 234209 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:08 np0005465987 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct  2 07:59:08 np0005465987 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 4.462s CPU time.
Oct  2 07:59:08 np0005465987 systemd-machined[188335]: Machine qemu-1-instance-00000002 terminated.
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.811 2 INFO nova.virt.libvirt.driver [-] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Instance destroyed successfully.#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.812 2 DEBUG nova.objects.instance [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lazy-loading 'resources' on Instance uuid 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.829 2 DEBUG nova.virt.libvirt.vif [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T11:58:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-524116669-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-524116669-1',id=2,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T11:59:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8972026d0f3a4bf4b6debd9555f9225c',ramdisk_id='',reservation_id='r-5dqt1vqy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-379631237',owner_user_name='tempest-AutoAllocateNetworkTest-379631237-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T11:59:05Z,user_data=None,user_id='17903cd0333c407b96f0aede6dd3b16c',uuid=98e7bcbc-ab7d-47d0-bcfc-afa693d84b35,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "address": "fa:16:3e:48:0a:45", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::ef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0293cd1e-bc", "ovs_interfaceid": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.830 2 DEBUG nova.network.os_vif_util [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Converting VIF {"id": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "address": "fa:16:3e:48:0a:45", "network": {"id": "48e5c857-28d2-421a-9519-d32a13037daa", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::ef", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8972026d0f3a4bf4b6debd9555f9225c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0293cd1e-bc", "ovs_interfaceid": "0293cd1e-bcfa-4860-9834-b7d85322f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.832 2 DEBUG nova.network.os_vif_util [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=0293cd1e-bcfa-4860-9834-b7d85322f2b7,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0293cd1e-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.833 2 DEBUG os_vif [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=0293cd1e-bcfa-4860-9834-b7d85322f2b7,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0293cd1e-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.840 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0293cd1e-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 07:59:08 np0005465987 nova_compute[230713]: 2025-10-02 11:59:08.852 2 INFO os_vif [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=0293cd1e-bcfa-4860-9834-b7d85322f2b7,network=Network(48e5c857-28d2-421a-9519-d32a13037daa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0293cd1e-bc')#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.076 2 DEBUG nova.compute.manager [req-db7da02e-3eb2-44b2-b7f1-1bbc4ab7e4f5 req-dc47790f-8594-43f6-afbd-98e572bca236 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Received event network-vif-unplugged-0293cd1e-bcfa-4860-9834-b7d85322f2b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.077 2 DEBUG oslo_concurrency.lockutils [req-db7da02e-3eb2-44b2-b7f1-1bbc4ab7e4f5 req-dc47790f-8594-43f6-afbd-98e572bca236 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.077 2 DEBUG oslo_concurrency.lockutils [req-db7da02e-3eb2-44b2-b7f1-1bbc4ab7e4f5 req-dc47790f-8594-43f6-afbd-98e572bca236 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.077 2 DEBUG oslo_concurrency.lockutils [req-db7da02e-3eb2-44b2-b7f1-1bbc4ab7e4f5 req-dc47790f-8594-43f6-afbd-98e572bca236 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.078 2 DEBUG nova.compute.manager [req-db7da02e-3eb2-44b2-b7f1-1bbc4ab7e4f5 req-dc47790f-8594-43f6-afbd-98e572bca236 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] No waiting events found dispatching network-vif-unplugged-0293cd1e-bcfa-4860-9834-b7d85322f2b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.078 2 DEBUG nova.compute.manager [req-db7da02e-3eb2-44b2-b7f1-1bbc4ab7e4f5 req-dc47790f-8594-43f6-afbd-98e572bca236 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Received event network-vif-unplugged-0293cd1e-bcfa-4860-9834-b7d85322f2b7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 07:59:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:59:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.204 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[801d94c6-c9af-436a-8976-a261fca6bc37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.211 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d806ad-0f0a-4bf4-9053-1909b82499cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:09 np0005465987 NetworkManager[44910]: <info>  [1759406349.2142] manager: (tap48e5c857-20): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Oct  2 07:59:09 np0005465987 systemd-udevd[234216]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.242 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f5ad04-f9b6-447e-9eaf-f88d869e97e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.245 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[2955d18d-196f-47f4-9918-5a9477ed5926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:09 np0005465987 NetworkManager[44910]: <info>  [1759406349.2675] device (tap48e5c857-20): carrier: link connected
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.272 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[51e0ebda-4c38-4694-b7d3-a539ecbef5e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.292 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[747ccea1-7a11-46ae-b21d-faf1057fe41f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e5c857-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:83:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441784, 'reachable_time': 17088, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234274, 'error': None, 'target': 'ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.298 2 INFO nova.virt.libvirt.driver [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Deleting instance files /var/lib/nova/instances/98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_del#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.299 2 INFO nova.virt.libvirt.driver [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Deletion of /var/lib/nova/instances/98e7bcbc-ab7d-47d0-bcfc-afa693d84b35_del complete#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.310 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cd89380d-68dd-4009-b3d4-895900756192]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:83c0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 441784, 'tstamp': 441784}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234275, 'error': None, 'target': 'ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.328 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0ecaa8-e9a9-44fc-9a9a-940bbf197144]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e5c857-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:83:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441784, 'reachable_time': 17088, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 234277, 'error': None, 'target': 'ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.363 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb82a84-6446-4db9-9cd5-ff70ba0526f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.378 2 DEBUG nova.virt.libvirt.host [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.378 2 INFO nova.virt.libvirt.host [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] UEFI support detected#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.380 2 INFO nova.compute.manager [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.380 2 DEBUG oslo.service.loopingcall [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.380 2 DEBUG nova.compute.manager [-] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.380 2 DEBUG nova.network.neutron [-] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.424 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[95e8b007-28c3-4b39-a4df-1051161f7e0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.426 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e5c857-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.427 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.428 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48e5c857-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:09 np0005465987 kernel: tap48e5c857-20: entered promiscuous mode
Oct  2 07:59:09 np0005465987 NetworkManager[44910]: <info>  [1759406349.4312] manager: (tap48e5c857-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.432 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48e5c857-20, col_values=(('external_ids', {'iface-id': '108050f3-e876-480b-8cdd-c1255d33ae84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:09 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:09Z|00038|binding|INFO|Releasing lport 108050f3-e876-480b-8cdd-c1255d33ae84 from this chassis (sb_readonly=0)
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.450 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48e5c857-28d2-421a-9519-d32a13037daa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48e5c857-28d2-421a-9519-d32a13037daa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.451 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[af214f7a-62f1-43c8-9b74-e4f353ec4f9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:09 np0005465987 nova_compute[230713]: 2025-10-02 11:59:09.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.453 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-48e5c857-28d2-421a-9519-d32a13037daa
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/48e5c857-28d2-421a-9519-d32a13037daa.pid.haproxy
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 48e5c857-28d2-421a-9519-d32a13037daa
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 07:59:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:09.454 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa', 'env', 'PROCESS_TAG=haproxy-48e5c857-28d2-421a-9519-d32a13037daa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48e5c857-28d2-421a-9519-d32a13037daa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 07:59:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:59:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:09.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:59:09 np0005465987 podman[234309]: 2025-10-02 11:59:09.851587891 +0000 UTC m=+0.067098184 container create 382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 07:59:09 np0005465987 podman[234309]: 2025-10-02 11:59:09.813824844 +0000 UTC m=+0.029335127 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 07:59:09 np0005465987 systemd[1]: Started libpod-conmon-382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa.scope.
Oct  2 07:59:09 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:59:09 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68982ea96d5333c3251cf0fa8c2ba8c240c3b7bbe4d624d577c09d1b8347fee6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 07:59:09 np0005465987 podman[234309]: 2025-10-02 11:59:09.983944178 +0000 UTC m=+0.199454511 container init 382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:59:09 np0005465987 podman[234309]: 2025-10-02 11:59:09.994515979 +0000 UTC m=+0.210026292 container start 382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 07:59:10 np0005465987 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[234324]: [NOTICE]   (234328) : New worker (234330) forked
Oct  2 07:59:10 np0005465987 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[234324]: [NOTICE]   (234328) : Loading success.
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.063 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 28a876da-5d62-4032-8917-ce1c597913a9 in datapath 50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4 unbound from our chassis#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.067 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.080 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ac55a952-cfc7-4363-adcd-8dede58badd7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.081 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50bc3bdc-61 in ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.082 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50bc3bdc-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.083 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7807161e-3a84-47c8-93d6-e7db644cedc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.084 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab586ac-232f-47d9-bde8-691346d5ce74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.104 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0378a6-16f6-42e1-8a9d-16e61f8c3682]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.127 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[eb97f6d1-16de-4fe4-95b0-56a7814dcbc6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.160 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[10a3ca86-b9f0-4505-86b9-9f160cd0b9e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 NetworkManager[44910]: <info>  [1759406350.1660] manager: (tap50bc3bdc-60): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Oct  2 07:59:10 np0005465987 systemd-udevd[234266]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.167 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6426e767-22e1-4f0c-b2b0-f92f2e04b0f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.206 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c9f3df-4c4e-41da-ac07-f3297737f9fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.208 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[486f8f1d-e763-4c23-a843-328db1eba0db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:10 np0005465987 NetworkManager[44910]: <info>  [1759406350.2309] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/32)
Oct  2 07:59:10 np0005465987 NetworkManager[44910]: <info>  [1759406350.2313] device (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:59:10 np0005465987 NetworkManager[44910]: <info>  [1759406350.2327] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/33)
Oct  2 07:59:10 np0005465987 NetworkManager[44910]: <info>  [1759406350.2331] device (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:59:10 np0005465987 NetworkManager[44910]: <info>  [1759406350.2344] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Oct  2 07:59:10 np0005465987 NetworkManager[44910]: <info>  [1759406350.2352] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct  2 07:59:10 np0005465987 NetworkManager[44910]: <info>  [1759406350.2360] device (tap50bc3bdc-60): carrier: link connected
Oct  2 07:59:10 np0005465987 NetworkManager[44910]: <info>  [1759406350.2363] device (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  2 07:59:10 np0005465987 NetworkManager[44910]: <info>  [1759406350.2368] device (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.241 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[3d27de34-fa69-42df-8851-850a5992ea4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.262 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[accf8756-4a60-4859-a6a4-b3a1551a03f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50bc3bdc-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:ff:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441881, 'reachable_time': 31751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234350, 'error': None, 'target': 'ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.265 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.282 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e93aff6a-093c-420f-857f-73e8183fb45a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:ff9c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 441881, 'tstamp': 441881}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234351, 'error': None, 'target': 'ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.303 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5675719e-2e64-4404-a0f3-d5c6d3ce5964]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50bc3bdc-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:ff:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441881, 'reachable_time': 31751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 234352, 'error': None, 'target': 'ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.336 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[808f5b3f-fa76-4102-af62-7a0041c71e8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:10 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:10Z|00039|binding|INFO|Releasing lport 108050f3-e876-480b-8cdd-c1255d33ae84 from this chassis (sb_readonly=0)
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.401 2 DEBUG nova.network.neutron [-] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.406 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7c6f45-8461-4e2c-b0e3-163d28035792]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.407 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50bc3bdc-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.408 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.408 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50bc3bdc-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:10 np0005465987 kernel: tap50bc3bdc-60: entered promiscuous mode
Oct  2 07:59:10 np0005465987 NetworkManager[44910]: <info>  [1759406350.4120] manager: (tap50bc3bdc-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.418 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50bc3bdc-60, col_values=(('external_ids', {'iface-id': 'df46d95f-a3d2-40c7-97f8-7bfd21813208'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:10 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:10Z|00040|binding|INFO|Releasing lport df46d95f-a3d2-40c7-97f8-7bfd21813208 from this chassis (sb_readonly=1)
Oct  2 07:59:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:10.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.420 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.421 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[655d5005-364e-418b-9d42-1b12f2328861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.421 2 INFO nova.compute.manager [-] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Took 1.04 seconds to deallocate network for instance.#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.422 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4.pid.haproxy
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.423 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4', 'env', 'PROCESS_TAG=haproxy-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.475 2 DEBUG oslo_concurrency.lockutils [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.476 2 DEBUG oslo_concurrency.lockutils [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.542 2 DEBUG oslo_concurrency.processutils [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.569 2 DEBUG nova.compute.manager [req-e763c115-e4d4-478a-a810-b079b2525212 req-0ad09f85-2010-40cb-9b4c-99397bd9e60d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Received event network-vif-deleted-0293cd1e-bcfa-4860-9834-b7d85322f2b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:10 np0005465987 podman[234403]: 2025-10-02 11:59:10.813804934 +0000 UTC m=+0.060403632 container create 1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.813 2 DEBUG nova.compute.manager [req-9bea7567-019c-41af-a5ad-9856620bfb97 req-36ac6316-a2b6-4644-9ec9-87a94d6f6f46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Received event network-changed-28a876da-5d62-4032-8917-ce1c597913a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.823 2 DEBUG nova.compute.manager [req-9bea7567-019c-41af-a5ad-9856620bfb97 req-36ac6316-a2b6-4644-9ec9-87a94d6f6f46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Refreshing instance network info cache due to event network-changed-28a876da-5d62-4032-8917-ce1c597913a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.824 2 DEBUG oslo_concurrency.lockutils [req-9bea7567-019c-41af-a5ad-9856620bfb97 req-36ac6316-a2b6-4644-9ec9-87a94d6f6f46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-05ef51ee-c5d0-405e-a30a-9915a4117983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.824 2 DEBUG oslo_concurrency.lockutils [req-9bea7567-019c-41af-a5ad-9856620bfb97 req-36ac6316-a2b6-4644-9ec9-87a94d6f6f46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-05ef51ee-c5d0-405e-a30a-9915a4117983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 07:59:10 np0005465987 nova_compute[230713]: 2025-10-02 11:59:10.824 2 DEBUG nova.network.neutron [req-9bea7567-019c-41af-a5ad-9856620bfb97 req-36ac6316-a2b6-4644-9ec9-87a94d6f6f46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Refreshing network info cache for port 28a876da-5d62-4032-8917-ce1c597913a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 07:59:10 np0005465987 systemd[1]: Started libpod-conmon-1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97.scope.
Oct  2 07:59:10 np0005465987 podman[234403]: 2025-10-02 11:59:10.777190578 +0000 UTC m=+0.023789266 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 07:59:10 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:59:10 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800f5f90a7475c1016ab56a7bcd351742db5228e7641805fbec15bbfed881dc1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 07:59:10 np0005465987 podman[234403]: 2025-10-02 11:59:10.920787343 +0000 UTC m=+0.167386021 container init 1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:59:10 np0005465987 podman[234403]: 2025-10-02 11:59:10.931830547 +0000 UTC m=+0.178429215 container start 1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 07:59:10 np0005465987 neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4[234418]: [NOTICE]   (234422) : New worker (234424) forked
Oct  2 07:59:10 np0005465987 neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4[234418]: [NOTICE]   (234422) : Loading success.
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.976 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0293cd1e-bcfa-4860-9834-b7d85322f2b7 in datapath 48e5c857-28d2-421a-9519-d32a13037daa unbound from our chassis#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.978 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48e5c857-28d2-421a-9519-d32a13037daa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.979 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ed1edb-4ee1-435b-8fe1-47cfebc61248]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:10.980 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa namespace which is not needed anymore#033[00m
Oct  2 07:59:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:59:11 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2314377611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.024 2 DEBUG oslo_concurrency.processutils [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.029 2 DEBUG nova.compute.provider_tree [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.051 2 DEBUG nova.scheduler.client.report [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.074 2 DEBUG oslo_concurrency.lockutils [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.100 2 INFO nova.scheduler.client.report [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Deleted allocations for instance 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35#033[00m
Oct  2 07:59:11 np0005465987 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[234324]: [NOTICE]   (234328) : haproxy version is 2.8.14-c23fe91
Oct  2 07:59:11 np0005465987 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[234324]: [NOTICE]   (234328) : path to executable is /usr/sbin/haproxy
Oct  2 07:59:11 np0005465987 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[234324]: [WARNING]  (234328) : Exiting Master process...
Oct  2 07:59:11 np0005465987 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[234324]: [ALERT]    (234328) : Current worker (234330) exited with code 143 (Terminated)
Oct  2 07:59:11 np0005465987 neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa[234324]: [WARNING]  (234328) : All workers exited. Exiting... (0)
Oct  2 07:59:11 np0005465987 systemd[1]: libpod-382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa.scope: Deactivated successfully.
Oct  2 07:59:11 np0005465987 podman[234452]: 2025-10-02 11:59:11.151587205 +0000 UTC m=+0.046643672 container died 382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 07:59:11 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa-userdata-shm.mount: Deactivated successfully.
Oct  2 07:59:11 np0005465987 systemd[1]: var-lib-containers-storage-overlay-68982ea96d5333c3251cf0fa8c2ba8c240c3b7bbe4d624d577c09d1b8347fee6-merged.mount: Deactivated successfully.
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.189 2 DEBUG oslo_concurrency.lockutils [None req-0224e101-3bb6-4556-a15a-b26ba9aede4d 17903cd0333c407b96f0aede6dd3b16c 8972026d0f3a4bf4b6debd9555f9225c - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:11 np0005465987 podman[234452]: 2025-10-02 11:59:11.193843397 +0000 UTC m=+0.088899854 container cleanup 382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 07:59:11 np0005465987 systemd[1]: libpod-conmon-382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa.scope: Deactivated successfully.
Oct  2 07:59:11 np0005465987 podman[234482]: 2025-10-02 11:59:11.256370055 +0000 UTC m=+0.041636715 container remove 382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:59:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:11.262 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[673903f1-bc17-44c8-b12d-6a2899ca86d9]: (4, ('Thu Oct  2 11:59:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa (382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa)\n382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa\nThu Oct  2 11:59:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa (382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa)\n382a10338a96320bfe268a5fe712e76b04208684430873aa47fd77c9745f71aa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:11.263 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[05973119-c8e3-43c3-9e26-3b3d6ce8dafb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:11.264 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e5c857-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:11 np0005465987 kernel: tap48e5c857-20: left promiscuous mode
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.267 2 DEBUG nova.compute.manager [req-ee37f80c-d0ee-4270-b1ea-a18f2b67e38d req-6c0d6238-1bdb-4967-b2e8-d4be5eb49a1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Received event network-vif-plugged-0293cd1e-bcfa-4860-9834-b7d85322f2b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.268 2 DEBUG oslo_concurrency.lockutils [req-ee37f80c-d0ee-4270-b1ea-a18f2b67e38d req-6c0d6238-1bdb-4967-b2e8-d4be5eb49a1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.268 2 DEBUG oslo_concurrency.lockutils [req-ee37f80c-d0ee-4270-b1ea-a18f2b67e38d req-6c0d6238-1bdb-4967-b2e8-d4be5eb49a1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.268 2 DEBUG oslo_concurrency.lockutils [req-ee37f80c-d0ee-4270-b1ea-a18f2b67e38d req-6c0d6238-1bdb-4967-b2e8-d4be5eb49a1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "98e7bcbc-ab7d-47d0-bcfc-afa693d84b35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.269 2 DEBUG nova.compute.manager [req-ee37f80c-d0ee-4270-b1ea-a18f2b67e38d req-6c0d6238-1bdb-4967-b2e8-d4be5eb49a1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] No waiting events found dispatching network-vif-plugged-0293cd1e-bcfa-4860-9834-b7d85322f2b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.269 2 WARNING nova.compute.manager [req-ee37f80c-d0ee-4270-b1ea-a18f2b67e38d req-6c0d6238-1bdb-4967-b2e8-d4be5eb49a1e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Received unexpected event network-vif-plugged-0293cd1e-bcfa-4860-9834-b7d85322f2b7 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:11.270 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b88b0a63-6e13-4e76-b947-c719b918dee2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:11 np0005465987 nova_compute[230713]: 2025-10-02 11:59:11.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:11.300 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[aee50d53-3119-4303-957e-28028a535376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:11.301 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[751f5c46-7585-4966-9d2b-7b3ffd2bf371]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:11.327 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[75b05615-9afb-4d7d-bb14-f14d1491a0d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441777, 'reachable_time': 26049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234497, 'error': None, 'target': 'ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:11 np0005465987 systemd[1]: run-netns-ovnmeta\x2d48e5c857\x2d28d2\x2d421a\x2d9519\x2dd32a13037daa.mount: Deactivated successfully.
Oct  2 07:59:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:11.343 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48e5c857-28d2-421a-9519-d32a13037daa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 07:59:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:11.344 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[1ffef8fa-88b5-4b9d-a8ea-42dee0ed3a0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:11.345 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 07:59:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:11.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:59:12 np0005465987 nova_compute[230713]: 2025-10-02 11:59:12.413 2 DEBUG nova.network.neutron [req-9bea7567-019c-41af-a5ad-9856620bfb97 req-36ac6316-a2b6-4644-9ec9-87a94d6f6f46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Updated VIF entry in instance network info cache for port 28a876da-5d62-4032-8917-ce1c597913a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 07:59:12 np0005465987 nova_compute[230713]: 2025-10-02 11:59:12.415 2 DEBUG nova.network.neutron [req-9bea7567-019c-41af-a5ad-9856620bfb97 req-36ac6316-a2b6-4644-9ec9-87a94d6f6f46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Updating instance_info_cache with network_info: [{"id": "28a876da-5d62-4032-8917-ce1c597913a9", "address": "fa:16:3e:cb:0e:30", "network": {"id": "50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1820286714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6807e9c97bd646799d4adfa12cdbd47e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28a876da-5d", "ovs_interfaceid": "28a876da-5d62-4032-8917-ce1c597913a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 07:59:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:12.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:12 np0005465987 nova_compute[230713]: 2025-10-02 11:59:12.439 2 DEBUG oslo_concurrency.lockutils [req-9bea7567-019c-41af-a5ad-9856620bfb97 req-36ac6316-a2b6-4644-9ec9-87a94d6f6f46 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-05ef51ee-c5d0-405e-a30a-9915a4117983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:59:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:59:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:13.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:59:13 np0005465987 nova_compute[230713]: 2025-10-02 11:59:13.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.209101) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406354209138, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1288, "num_deletes": 251, "total_data_size": 2622021, "memory_usage": 2655504, "flush_reason": "Manual Compaction"}
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406354227347, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1706365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21428, "largest_seqno": 22711, "table_properties": {"data_size": 1700792, "index_size": 2904, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12442, "raw_average_key_size": 20, "raw_value_size": 1689373, "raw_average_value_size": 2738, "num_data_blocks": 129, "num_entries": 617, "num_filter_entries": 617, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406268, "oldest_key_time": 1759406268, "file_creation_time": 1759406354, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 18309 microseconds, and 7189 cpu microseconds.
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.227405) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1706365 bytes OK
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.227433) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.229243) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.229267) EVENT_LOG_v1 {"time_micros": 1759406354229260, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.229289) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2615898, prev total WAL file size 2615898, number of live WAL files 2.
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.230693) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1666KB)], [42(7841KB)]
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406354230769, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 9735600, "oldest_snapshot_seqno": -1}
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 4647 keys, 7690040 bytes, temperature: kUnknown
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406354290948, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 7690040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7659551, "index_size": 17771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 116284, "raw_average_key_size": 25, "raw_value_size": 7575859, "raw_average_value_size": 1630, "num_data_blocks": 730, "num_entries": 4647, "num_filter_entries": 4647, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759406354, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.291257) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7690040 bytes
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.292869) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.4 rd, 127.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.7 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 5169, records dropped: 522 output_compression: NoCompression
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.292899) EVENT_LOG_v1 {"time_micros": 1759406354292887, "job": 24, "event": "compaction_finished", "compaction_time_micros": 60305, "compaction_time_cpu_micros": 31500, "output_level": 6, "num_output_files": 1, "total_output_size": 7690040, "num_input_records": 5169, "num_output_records": 4647, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406354293786, "job": 24, "event": "table_file_deletion", "file_number": 44}
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406354296431, "job": 24, "event": "table_file_deletion", "file_number": 42}
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.230566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.296632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.296637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.296639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.296641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:14.296642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:14.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:59:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 07:59:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:15.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:15 np0005465987 nova_compute[230713]: 2025-10-02 11:59:15.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:16 np0005465987 nova_compute[230713]: 2025-10-02 11:59:16.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:16.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:59:16 np0005465987 podman[234550]: 2025-10-02 11:59:16.871088937 +0000 UTC m=+0.079622658 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 07:59:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:17.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:59:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:18.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:59:18 np0005465987 nova_compute[230713]: 2025-10-02 11:59:18.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:59:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:19.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:59:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:20.347 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:20.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:21 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:21Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:0e:30 10.100.0.4
Oct  2 07:59:21 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:21Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:0e:30 10.100.0.4
Oct  2 07:59:21 np0005465987 nova_compute[230713]: 2025-10-02 11:59:21.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:21.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:59:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:59:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:22.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:59:23 np0005465987 nova_compute[230713]: 2025-10-02 11:59:23.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:23.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:23 np0005465987 nova_compute[230713]: 2025-10-02 11:59:23.809 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406348.806898, 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 07:59:23 np0005465987 nova_compute[230713]: 2025-10-02 11:59:23.809 2 INFO nova.compute.manager [-] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] VM Stopped (Lifecycle Event)#033[00m
Oct  2 07:59:23 np0005465987 nova_compute[230713]: 2025-10-02 11:59:23.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:23 np0005465987 nova_compute[230713]: 2025-10-02 11:59:23.961 2 DEBUG nova.compute.manager [None req-ff7a1a48-f77b-4769-8315-645243e09561 - - - - - -] [instance: 98e7bcbc-ab7d-47d0-bcfc-afa693d84b35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 07:59:23 np0005465987 nova_compute[230713]: 2025-10-02 11:59:23.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:59:23 np0005465987 nova_compute[230713]: 2025-10-02 11:59:23.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 07:59:23 np0005465987 nova_compute[230713]: 2025-10-02 11:59:23.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 07:59:24 np0005465987 nova_compute[230713]: 2025-10-02 11:59:24.183 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-05ef51ee-c5d0-405e-a30a-9915a4117983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:59:24 np0005465987 nova_compute[230713]: 2025-10-02 11:59:24.183 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-05ef51ee-c5d0-405e-a30a-9915a4117983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 07:59:24 np0005465987 nova_compute[230713]: 2025-10-02 11:59:24.184 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 07:59:24 np0005465987 nova_compute[230713]: 2025-10-02 11:59:24.184 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 05ef51ee-c5d0-405e-a30a-9915a4117983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 07:59:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:59:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:24.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:59:25 np0005465987 nova_compute[230713]: 2025-10-02 11:59:25.470 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Updating instance_info_cache with network_info: [{"id": "28a876da-5d62-4032-8917-ce1c597913a9", "address": "fa:16:3e:cb:0e:30", "network": {"id": "50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1820286714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6807e9c97bd646799d4adfa12cdbd47e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28a876da-5d", "ovs_interfaceid": "28a876da-5d62-4032-8917-ce1c597913a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 07:59:25 np0005465987 nova_compute[230713]: 2025-10-02 11:59:25.488 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-05ef51ee-c5d0-405e-a30a-9915a4117983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:59:25 np0005465987 nova_compute[230713]: 2025-10-02 11:59:25.488 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 07:59:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:59:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:25.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:59:25 np0005465987 nova_compute[230713]: 2025-10-02 11:59:25.971 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:59:25 np0005465987 nova_compute[230713]: 2025-10-02 11:59:25.971 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:59:26 np0005465987 nova_compute[230713]: 2025-10-02 11:59:26.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:26.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:59:26 np0005465987 podman[234573]: 2025-10-02 11:59:26.856472833 +0000 UTC m=+0.058621592 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 07:59:26 np0005465987 podman[234572]: 2025-10-02 11:59:26.90363879 +0000 UTC m=+0.102698043 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 07:59:26 np0005465987 podman[234574]: 2025-10-02 11:59:26.936992806 +0000 UTC m=+0.132492181 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 07:59:26 np0005465987 nova_compute[230713]: 2025-10-02 11:59:26.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:59:27 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:27Z|00041|binding|INFO|Releasing lport df46d95f-a3d2-40c7-97f8-7bfd21813208 from this chassis (sb_readonly=0)
Oct  2 07:59:27 np0005465987 nova_compute[230713]: 2025-10-02 11:59:27.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:27.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:27.902381) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406367902462, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 398, "num_deletes": 256, "total_data_size": 372008, "memory_usage": 381336, "flush_reason": "Manual Compaction"}
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406367908150, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 245410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22716, "largest_seqno": 23109, "table_properties": {"data_size": 243113, "index_size": 397, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5343, "raw_average_key_size": 17, "raw_value_size": 238547, "raw_average_value_size": 764, "num_data_blocks": 18, "num_entries": 312, "num_filter_entries": 312, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406355, "oldest_key_time": 1759406355, "file_creation_time": 1759406367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 5808 microseconds, and 1730 cpu microseconds.
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:27.908208) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 245410 bytes OK
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:27.908233) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:27.910981) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:27.911020) EVENT_LOG_v1 {"time_micros": 1759406367911009, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:27.911049) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 369398, prev total WAL file size 369398, number of live WAL files 2.
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:27.911821) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(239KB)], [45(7509KB)]
Oct  2 07:59:27 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406367911875, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 7935450, "oldest_snapshot_seqno": -1}
Oct  2 07:59:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:27.925 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:27.926 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:27.927 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:27 np0005465987 nova_compute[230713]: 2025-10-02 11:59:27.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.002 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.004 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.004 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:28.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1443149474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.466 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.555 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.556 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 4439 keys, 7823091 bytes, temperature: kUnknown
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406368572102, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 7823091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7793230, "index_size": 17655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 113168, "raw_average_key_size": 25, "raw_value_size": 7712461, "raw_average_value_size": 1737, "num_data_blocks": 721, "num_entries": 4439, "num_filter_entries": 4439, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759406367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:28.572387) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7823091 bytes
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:28.605307) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 12.0 rd, 11.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 7.3 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(64.2) write-amplify(31.9) OK, records in: 4959, records dropped: 520 output_compression: NoCompression
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:28.605352) EVENT_LOG_v1 {"time_micros": 1759406368605334, "job": 26, "event": "compaction_finished", "compaction_time_micros": 660257, "compaction_time_cpu_micros": 37948, "output_level": 6, "num_output_files": 1, "total_output_size": 7823091, "num_input_records": 4959, "num_output_records": 4439, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406368605661, "job": 26, "event": "table_file_deletion", "file_number": 47}
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406368608451, "job": 26, "event": "table_file_deletion", "file_number": 45}
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:27.911691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:28.608497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:28.608504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:28.608507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:28.608512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:28 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-11:59:28.608516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.803 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.805 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4837MB free_disk=20.876487731933594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.805 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.806 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.915 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 05ef51ee-c5d0-405e-a30a-9915a4117983 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.916 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.916 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 07:59:28 np0005465987 nova_compute[230713]: 2025-10-02 11:59:28.976 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:59:29 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2460624604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:59:29 np0005465987 nova_compute[230713]: 2025-10-02 11:59:29.537 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:29 np0005465987 nova_compute[230713]: 2025-10-02 11:59:29.543 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 07:59:29 np0005465987 nova_compute[230713]: 2025-10-02 11:59:29.561 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 07:59:29 np0005465987 nova_compute[230713]: 2025-10-02 11:59:29.589 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 07:59:29 np0005465987 nova_compute[230713]: 2025-10-02 11:59:29.590 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:29.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:30.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:30 np0005465987 nova_compute[230713]: 2025-10-02 11:59:30.584 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:59:30 np0005465987 nova_compute[230713]: 2025-10-02 11:59:30.585 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:59:30 np0005465987 nova_compute[230713]: 2025-10-02 11:59:30.601 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:59:30 np0005465987 nova_compute[230713]: 2025-10-02 11:59:30.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:59:31 np0005465987 nova_compute[230713]: 2025-10-02 11:59:31.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:31.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:59:31 np0005465987 nova_compute[230713]: 2025-10-02 11:59:31.899 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:31 np0005465987 nova_compute[230713]: 2025-10-02 11:59:31.899 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:31 np0005465987 nova_compute[230713]: 2025-10-02 11:59:31.913 2 DEBUG nova.compute.manager [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 07:59:31 np0005465987 nova_compute[230713]: 2025-10-02 11:59:31.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 07:59:31 np0005465987 nova_compute[230713]: 2025-10-02 11:59:31.964 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 07:59:31 np0005465987 nova_compute[230713]: 2025-10-02 11:59:31.982 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:31 np0005465987 nova_compute[230713]: 2025-10-02 11:59:31.982 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:31 np0005465987 nova_compute[230713]: 2025-10-02 11:59:31.989 2 DEBUG nova.virt.hardware [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 07:59:31 np0005465987 nova_compute[230713]: 2025-10-02 11:59:31.989 2 INFO nova.compute.claims [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.131 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:32.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:59:32 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3948024583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.532 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.536 2 DEBUG nova.compute.provider_tree [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.560 2 DEBUG nova.scheduler.client.report [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.585 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.586 2 DEBUG nova.compute.manager [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.625 2 DEBUG nova.compute.manager [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.625 2 DEBUG nova.network.neutron [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.650 2 INFO nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.777 2 DEBUG nova.compute.manager [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.889 2 DEBUG nova.compute.manager [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.890 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.890 2 INFO nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Creating image(s)#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.914 2 DEBUG nova.storage.rbd_utils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] rbd image 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.937 2 DEBUG nova.storage.rbd_utils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] rbd image 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.962 2 DEBUG nova.storage.rbd_utils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] rbd image 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:59:32 np0005465987 nova_compute[230713]: 2025-10-02 11:59:32.965 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:33 np0005465987 nova_compute[230713]: 2025-10-02 11:59:33.025 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:33 np0005465987 nova_compute[230713]: 2025-10-02 11:59:33.026 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:33 np0005465987 nova_compute[230713]: 2025-10-02 11:59:33.026 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:33 np0005465987 nova_compute[230713]: 2025-10-02 11:59:33.026 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:33 np0005465987 nova_compute[230713]: 2025-10-02 11:59:33.053 2 DEBUG nova.storage.rbd_utils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] rbd image 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:59:33 np0005465987 nova_compute[230713]: 2025-10-02 11:59:33.056 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:33 np0005465987 nova_compute[230713]: 2025-10-02 11:59:33.423 2 DEBUG nova.policy [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b54b5e15e4c94d1f95a272981e9d9a89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd977ad6a90874946819537242925a8f0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 07:59:33 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  2 07:59:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:33.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:33 np0005465987 nova_compute[230713]: 2025-10-02 11:59:33.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.144 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.231 2 DEBUG nova.storage.rbd_utils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] resizing rbd image 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 07:59:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:59:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:34.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.561 2 DEBUG nova.network.neutron [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Successfully updated port: b0281338-9ff1-492d-b3aa-aeee41f08075 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.580 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.580 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquired lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.580 2 DEBUG nova.network.neutron [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.687 2 DEBUG nova.objects.instance [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lazy-loading 'migration_context' on Instance uuid 530e7ace-2feb-4a9e-9430-5a1bfc678d22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.703 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.703 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Ensure instance console log exists: /var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.704 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.704 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.704 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.715 2 DEBUG nova.compute.manager [req-c68b46d6-f228-4318-940a-6921ababe8b5 req-0013f6fa-60c9-4eb3-81d0-ea7452806f39 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-changed-b0281338-9ff1-492d-b3aa-aeee41f08075 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.715 2 DEBUG nova.compute.manager [req-c68b46d6-f228-4318-940a-6921ababe8b5 req-0013f6fa-60c9-4eb3-81d0-ea7452806f39 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Refreshing instance network info cache due to event network-changed-b0281338-9ff1-492d-b3aa-aeee41f08075. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.716 2 DEBUG oslo_concurrency.lockutils [req-c68b46d6-f228-4318-940a-6921ababe8b5 req-0013f6fa-60c9-4eb3-81d0-ea7452806f39 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:59:34 np0005465987 nova_compute[230713]: 2025-10-02 11:59:34.792 2 DEBUG nova.network.neutron [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.648 2 DEBUG nova.network.neutron [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Updating instance_info_cache with network_info: [{"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 07:59:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:35.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.670 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Releasing lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.670 2 DEBUG nova.compute.manager [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Instance network_info: |[{"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.671 2 DEBUG oslo_concurrency.lockutils [req-c68b46d6-f228-4318-940a-6921ababe8b5 req-0013f6fa-60c9-4eb3-81d0-ea7452806f39 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.671 2 DEBUG nova.network.neutron [req-c68b46d6-f228-4318-940a-6921ababe8b5 req-0013f6fa-60c9-4eb3-81d0-ea7452806f39 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Refreshing network info cache for port b0281338-9ff1-492d-b3aa-aeee41f08075 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.674 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Start _get_guest_xml network_info=[{"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.679 2 WARNING nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.683 2 DEBUG nova.virt.libvirt.host [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.684 2 DEBUG nova.virt.libvirt.host [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.687 2 DEBUG nova.virt.libvirt.host [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.687 2 DEBUG nova.virt.libvirt.host [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.688 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.688 2 DEBUG nova.virt.hardware [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.689 2 DEBUG nova.virt.hardware [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.689 2 DEBUG nova.virt.hardware [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.689 2 DEBUG nova.virt.hardware [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.689 2 DEBUG nova.virt.hardware [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.690 2 DEBUG nova.virt.hardware [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.690 2 DEBUG nova.virt.hardware [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.690 2 DEBUG nova.virt.hardware [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.690 2 DEBUG nova.virt.hardware [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.690 2 DEBUG nova.virt.hardware [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.691 2 DEBUG nova.virt.hardware [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 07:59:35 np0005465987 nova_compute[230713]: 2025-10-02 11:59:35.693 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 07:59:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/563414360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.303 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.333 2 DEBUG nova.storage.rbd_utils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] rbd image 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.338 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:36.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 07:59:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3325323058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.808 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.810 2 DEBUG nova.virt.libvirt.vif [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T11:59:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-492220666',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-492220666',id=7,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-b2u06b3r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T11:59:32Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=530e7ace-2feb-4a9e-9430-5a1bfc678d22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.811 2 DEBUG nova.network.os_vif_util [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Converting VIF {"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.812 2 DEBUG nova.network.os_vif_util [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.814 2 DEBUG nova.objects.instance [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 530e7ace-2feb-4a9e-9430-5a1bfc678d22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 07:59:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.852 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] End _get_guest_xml xml=<domain type="kvm">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  <uuid>530e7ace-2feb-4a9e-9430-5a1bfc678d22</uuid>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  <name>instance-00000007</name>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-492220666</nova:name>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 11:59:35</nova:creationTime>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <nova:user uuid="b54b5e15e4c94d1f95a272981e9d9a89">tempest-LiveAutoBlockMigrationV225Test-959655280-project-member</nova:user>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <nova:project uuid="d977ad6a90874946819537242925a8f0">tempest-LiveAutoBlockMigrationV225Test-959655280</nova:project>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <nova:port uuid="b0281338-9ff1-492d-b3aa-aeee41f08075">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <system>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <entry name="serial">530e7ace-2feb-4a9e-9430-5a1bfc678d22</entry>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <entry name="uuid">530e7ace-2feb-4a9e-9430-5a1bfc678d22</entry>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    </system>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  <os>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  </os>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  <features>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  </features>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  </clock>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  <devices>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      </source>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      </auth>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    </disk>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk.config">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      </source>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      </auth>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    </disk>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:0f:16:f5"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <target dev="tapb0281338-9f"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    </interface>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22/console.log" append="off"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    </serial>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <video>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    </video>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    </rng>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 07:59:36 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 07:59:36 np0005465987 nova_compute[230713]:  </devices>
Oct  2 07:59:36 np0005465987 nova_compute[230713]: </domain>
Oct  2 07:59:36 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.853 2 DEBUG nova.compute.manager [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Preparing to wait for external event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.854 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.854 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.854 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.855 2 DEBUG nova.virt.libvirt.vif [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T11:59:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-492220666',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-492220666',id=7,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-b2u06b3r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-95965
5280-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T11:59:32Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=530e7ace-2feb-4a9e-9430-5a1bfc678d22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.856 2 DEBUG nova.network.os_vif_util [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Converting VIF {"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.857 2 DEBUG nova.network.os_vif_util [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.857 2 DEBUG os_vif [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.859 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.859 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.864 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0281338-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.864 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0281338-9f, col_values=(('external_ids', {'iface-id': 'b0281338-9ff1-492d-b3aa-aeee41f08075', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:16:f5', 'vm-uuid': '530e7ace-2feb-4a9e-9430-5a1bfc678d22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:36 np0005465987 NetworkManager[44910]: <info>  [1759406376.8682] manager: (tapb0281338-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:36 np0005465987 nova_compute[230713]: 2025-10-02 11:59:36.878 2 INFO os_vif [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f')#033[00m
Oct  2 07:59:37 np0005465987 nova_compute[230713]: 2025-10-02 11:59:37.146 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 07:59:37 np0005465987 nova_compute[230713]: 2025-10-02 11:59:37.147 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 07:59:37 np0005465987 nova_compute[230713]: 2025-10-02 11:59:37.147 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] No VIF found with MAC fa:16:3e:0f:16:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 07:59:37 np0005465987 nova_compute[230713]: 2025-10-02 11:59:37.148 2 INFO nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Using config drive#033[00m
Oct  2 07:59:37 np0005465987 nova_compute[230713]: 2025-10-02 11:59:37.241 2 DEBUG nova.storage.rbd_utils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] rbd image 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:59:37 np0005465987 nova_compute[230713]: 2025-10-02 11:59:37.478 2 DEBUG nova.network.neutron [req-c68b46d6-f228-4318-940a-6921ababe8b5 req-0013f6fa-60c9-4eb3-81d0-ea7452806f39 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Updated VIF entry in instance network info cache for port b0281338-9ff1-492d-b3aa-aeee41f08075. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 07:59:37 np0005465987 nova_compute[230713]: 2025-10-02 11:59:37.478 2 DEBUG nova.network.neutron [req-c68b46d6-f228-4318-940a-6921ababe8b5 req-0013f6fa-60c9-4eb3-81d0-ea7452806f39 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Updating instance_info_cache with network_info: [{"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 07:59:37 np0005465987 nova_compute[230713]: 2025-10-02 11:59:37.501 2 DEBUG oslo_concurrency.lockutils [req-c68b46d6-f228-4318-940a-6921ababe8b5 req-0013f6fa-60c9-4eb3-81d0-ea7452806f39 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:59:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:37.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:37 np0005465987 nova_compute[230713]: 2025-10-02 11:59:37.754 2 INFO nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Creating config drive at /var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22/disk.config#033[00m
Oct  2 07:59:37 np0005465987 nova_compute[230713]: 2025-10-02 11:59:37.769 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbj91iakd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:37 np0005465987 nova_compute[230713]: 2025-10-02 11:59:37.893 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbj91iakd" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:37 np0005465987 nova_compute[230713]: 2025-10-02 11:59:37.926 2 DEBUG nova.storage.rbd_utils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] rbd image 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 07:59:37 np0005465987 nova_compute[230713]: 2025-10-02 11:59:37.930 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22/disk.config 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:38.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:38 np0005465987 nova_compute[230713]: 2025-10-02 11:59:38.811 2 DEBUG oslo_concurrency.processutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22/disk.config 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.881s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:38 np0005465987 nova_compute[230713]: 2025-10-02 11:59:38.812 2 INFO nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Deleting local config drive /var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22/disk.config because it was imported into RBD.#033[00m
Oct  2 07:59:38 np0005465987 kernel: tapb0281338-9f: entered promiscuous mode
Oct  2 07:59:38 np0005465987 NetworkManager[44910]: <info>  [1759406378.8694] manager: (tapb0281338-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Oct  2 07:59:38 np0005465987 systemd-udevd[235002]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:59:38 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:38Z|00042|binding|INFO|Claiming lport b0281338-9ff1-492d-b3aa-aeee41f08075 for this chassis.
Oct  2 07:59:38 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:38Z|00043|binding|INFO|b0281338-9ff1-492d-b3aa-aeee41f08075: Claiming fa:16:3e:0f:16:f5 10.100.0.13
Oct  2 07:59:38 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:38Z|00044|binding|INFO|Claiming lport 36ee45f1-8361-4e2d-a0d4-c4907ede4743 for this chassis.
Oct  2 07:59:38 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:38Z|00045|binding|INFO|36ee45f1-8361-4e2d-a0d4-c4907ede4743: Claiming fa:16:3e:1a:38:f6 19.80.0.52
Oct  2 07:59:38 np0005465987 nova_compute[230713]: 2025-10-02 11:59:38.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:38 np0005465987 NetworkManager[44910]: <info>  [1759406378.9387] device (tapb0281338-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:59:38 np0005465987 NetworkManager[44910]: <info>  [1759406378.9395] device (tapb0281338-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 07:59:38 np0005465987 systemd-machined[188335]: New machine qemu-3-instance-00000007.
Oct  2 07:59:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:38.972 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:38:f6 19.80.0.52'], port_security=['fa:16:3e:1a:38:f6 19.80.0.52'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['b0281338-9ff1-492d-b3aa-aeee41f08075'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-589500470', 'neutron:cidrs': '19.80.0.52/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-589500470', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=ec7a7eef-8b3d-415d-8728-897cefc62c0c, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=36ee45f1-8361-4e2d-a0d4-c4907ede4743) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 07:59:38 np0005465987 nova_compute[230713]: 2025-10-02 11:59:38.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:38.976 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:16:f5 10.100.0.13'], port_security=['fa:16:3e:0f:16:f5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1480997899', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '530e7ace-2feb-4a9e-9430-5a1bfc678d22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1480997899', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2c64df1-281c-4dee-b0ae-55c6f99caa2e, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=b0281338-9ff1-492d-b3aa-aeee41f08075) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 07:59:38 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:38Z|00046|binding|INFO|Setting lport b0281338-9ff1-492d-b3aa-aeee41f08075 ovn-installed in OVS
Oct  2 07:59:38 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:38Z|00047|binding|INFO|Setting lport b0281338-9ff1-492d-b3aa-aeee41f08075 up in Southbound
Oct  2 07:59:38 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:38Z|00048|binding|INFO|Setting lport 36ee45f1-8361-4e2d-a0d4-c4907ede4743 up in Southbound
Oct  2 07:59:38 np0005465987 nova_compute[230713]: 2025-10-02 11:59:38.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:38.978 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 36ee45f1-8361-4e2d-a0d4-c4907ede4743 in datapath 4874fd53-2e81-4c7f-81b7-b85c23edc180 bound to our chassis#033[00m
Oct  2 07:59:38 np0005465987 systemd[1]: Started Virtual Machine qemu-3-instance-00000007.
Oct  2 07:59:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:38.980 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4874fd53-2e81-4c7f-81b7-b85c23edc180#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:38.999 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[60a810ff-6cd6-4ae5-8120-f04f25d3462c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.001 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4874fd53-21 in ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.003 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4874fd53-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.003 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f878686b-148b-422e-a31c-997242a512a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.004 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[64ffcc8f-537a-4333-8749-4eb045b03d28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.017 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[7b90f66e-8606-43dc-9729-3bc5af972b41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.038 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0d9a90fe-a0ec-48d6-8741-e397822e42b0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.077 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[bb9d5f77-2e1a-485b-9df3-ce9ddd18ea20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 NetworkManager[44910]: <info>  [1759406379.0947] manager: (tap4874fd53-20): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.095 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[64a78ce6-1ae2-4515-8d9c-71d2e665a9f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 systemd-udevd[235006]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.131 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9517e197-a672-4d21-8363-2fa179fac25f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.136 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[0bae874b-229f-4f91-93ac-2034ee938bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 NetworkManager[44910]: <info>  [1759406379.1675] device (tap4874fd53-20): carrier: link connected
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.176 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[cbcfe075-4307-409d-aa74-459621d5253e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.197 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4cde86-8f2f-4812-ba2f-34817ca270bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4874fd53-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:c0:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444774, 'reachable_time': 27393, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235038, 'error': None, 'target': 'ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.216 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a1810a67-111b-47f9-b715-4bdec977099d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:c040'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444774, 'tstamp': 444774}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235039, 'error': None, 'target': 'ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.233 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d3aaf23f-047f-4de7-8028-512e270fa881]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4874fd53-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:c0:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444774, 'reachable_time': 27393, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235040, 'error': None, 'target': 'ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.271 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe7d744-ca3a-4a39-af7f-e59f28ea7ff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.337 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f71fc96c-2341-411f-b2e6-6af8e41df06b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.341 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4874fd53-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.342 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.342 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4874fd53-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:39 np0005465987 nova_compute[230713]: 2025-10-02 11:59:39.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:39 np0005465987 kernel: tap4874fd53-20: entered promiscuous mode
Oct  2 07:59:39 np0005465987 NetworkManager[44910]: <info>  [1759406379.3464] manager: (tap4874fd53-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.348 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4874fd53-20, col_values=(('external_ids', {'iface-id': 'a3d6bcff-8fb0-4a18-b382-3921f86e4cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:39 np0005465987 nova_compute[230713]: 2025-10-02 11:59:39.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:39 np0005465987 nova_compute[230713]: 2025-10-02 11:59:39.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.352 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4874fd53-2e81-4c7f-81b7-b85c23edc180.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4874fd53-2e81-4c7f-81b7-b85c23edc180.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 07:59:39 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:39Z|00049|binding|INFO|Releasing lport a3d6bcff-8fb0-4a18-b382-3921f86e4cde from this chassis (sb_readonly=0)
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.353 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b6dec162-d915-4e36-9a2e-be4338cdab0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.355 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-4874fd53-2e81-4c7f-81b7-b85c23edc180
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/4874fd53-2e81-4c7f-81b7-b85c23edc180.pid.haproxy
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 4874fd53-2e81-4c7f-81b7-b85c23edc180
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.358 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'env', 'PROCESS_TAG=haproxy-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4874fd53-2e81-4c7f-81b7-b85c23edc180.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 07:59:39 np0005465987 nova_compute[230713]: 2025-10-02 11:59:39.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 07:59:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:39.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 07:59:39 np0005465987 podman[235106]: 2025-10-02 11:59:39.770479081 +0000 UTC m=+0.071226448 container create 506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:59:39 np0005465987 podman[235106]: 2025-10-02 11:59:39.73477349 +0000 UTC m=+0.035520957 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 07:59:39 np0005465987 systemd[1]: Started libpod-conmon-506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452.scope.
Oct  2 07:59:39 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:59:39 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e968753ff735d242b721dede2a3c48178467496859cbfa87e41e7eeedb034f2c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 07:59:39 np0005465987 podman[235106]: 2025-10-02 11:59:39.887585109 +0000 UTC m=+0.188332516 container init 506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 07:59:39 np0005465987 podman[235106]: 2025-10-02 11:59:39.895139137 +0000 UTC m=+0.195886504 container start 506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:59:39 np0005465987 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[235130]: [NOTICE]   (235134) : New worker (235136) forked
Oct  2 07:59:39 np0005465987 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[235130]: [NOTICE]   (235134) : Loading success.
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.951 138325 INFO neutron.agent.ovn.metadata.agent [-] Port b0281338-9ff1-492d-b3aa-aeee41f08075 in datapath 1e991676-99e8-43d9-8575-4a21f50b0ed5 unbound from our chassis#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.954 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e991676-99e8-43d9-8575-4a21f50b0ed5#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.966 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[db8ff33a-f07e-4d56-9a60-979c26bf5a5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.967 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e991676-91 in ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.969 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e991676-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.969 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c3c6bf-9168-4b07-9b72-f84284cc287f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.970 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5749d2e0-8d23-4632-881f-ece84f48cf54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:39.987 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[1d420e27-7291-4b41-876b-f11c679404f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.012 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b16a8deb-5103-4331-a445-aa258094a25e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.054 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[464a09f3-0bb2-4002-a998-725554186a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.060 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7db14cc4-876e-4c77-b03a-e82b4ad00f2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:40 np0005465987 NetworkManager[44910]: <info>  [1759406380.0613] manager: (tap1e991676-90): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct  2 07:59:40 np0005465987 systemd-udevd[235025]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.096 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e22b29b3-aa06-4e36-8327-7bfda7caa5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.098 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[647aa1a4-a505-4205-a512-844a61a04810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:40 np0005465987 NetworkManager[44910]: <info>  [1759406380.1207] device (tap1e991676-90): carrier: link connected
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.126 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0bf1c5-5f69-474d-8abc-c743521ac11f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.145 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[375981db-32a8-463c-a2ce-68add1237a73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e991676-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:0e:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444869, 'reachable_time': 18564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235155, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.158 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e1db2dcf-df63-4ac8-878d-8336d3f11f83]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:ef6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444869, 'tstamp': 444869}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235156, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.171 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[091b8a47-7b5b-4061-ac19-e5e02e326dcd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e991676-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:0e:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444869, 'reachable_time': 18564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235157, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.199 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[00f48cb7-f063-40f1-8ccc-381d1b905d4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.258 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406380.2577543, 530e7ace-2feb-4a9e-9430-5a1bfc678d22 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.259 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] VM Started (Lifecycle Event)#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.279 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[234f62b4-8325-48f7-b748-ee046ffa8085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.281 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e991676-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.281 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.282 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e991676-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:40 np0005465987 NetworkManager[44910]: <info>  [1759406380.2851] manager: (tap1e991676-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct  2 07:59:40 np0005465987 kernel: tap1e991676-90: entered promiscuous mode
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.287 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e991676-90, col_values=(('external_ids', {'iface-id': 'a0e52ac7-beb4-4d4b-863a-9b95be4c9e74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:40 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:40Z|00050|binding|INFO|Releasing lport a0e52ac7-beb4-4d4b-863a-9b95be4c9e74 from this chassis (sb_readonly=0)
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.292 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.299 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406380.2578616, 530e7ace-2feb-4a9e-9430-5a1bfc678d22 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.299 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] VM Paused (Lifecycle Event)#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.309 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e991676-99e8-43d9-8575-4a21f50b0ed5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e991676-99e8-43d9-8575-4a21f50b0ed5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.311 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab7cdf6-af9d-4f6f-a40b-61e11678710b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.312 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-1e991676-99e8-43d9-8575-4a21f50b0ed5
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/1e991676-99e8-43d9-8575-4a21f50b0ed5.pid.haproxy
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 1e991676-99e8-43d9-8575-4a21f50b0ed5
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 07:59:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:40.314 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'env', 'PROCESS_TAG=haproxy-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e991676-99e8-43d9-8575-4a21f50b0ed5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.344 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.349 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.395 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 07:59:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:40.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:40 np0005465987 podman[235189]: 2025-10-02 11:59:40.733218007 +0000 UTC m=+0.070750415 container create 42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.765 2 DEBUG nova.compute.manager [req-f6d3a6dc-a22e-40e8-822a-278c767fe6e9 req-f3093c8a-6da6-4550-a5f8-ea65d56fa2f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.766 2 DEBUG oslo_concurrency.lockutils [req-f6d3a6dc-a22e-40e8-822a-278c767fe6e9 req-f3093c8a-6da6-4550-a5f8-ea65d56fa2f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.767 2 DEBUG oslo_concurrency.lockutils [req-f6d3a6dc-a22e-40e8-822a-278c767fe6e9 req-f3093c8a-6da6-4550-a5f8-ea65d56fa2f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.767 2 DEBUG oslo_concurrency.lockutils [req-f6d3a6dc-a22e-40e8-822a-278c767fe6e9 req-f3093c8a-6da6-4550-a5f8-ea65d56fa2f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.768 2 DEBUG nova.compute.manager [req-f6d3a6dc-a22e-40e8-822a-278c767fe6e9 req-f3093c8a-6da6-4550-a5f8-ea65d56fa2f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Processing event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.769 2 DEBUG nova.compute.manager [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.775 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406380.774181, 530e7ace-2feb-4a9e-9430-5a1bfc678d22 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.776 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] VM Resumed (Lifecycle Event)
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.779 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 07:59:40 np0005465987 systemd[1]: Started libpod-conmon-42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41.scope.
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.784 2 INFO nova.virt.libvirt.driver [-] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Instance spawned successfully.
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.785 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 07:59:40 np0005465987 podman[235189]: 2025-10-02 11:59:40.699998314 +0000 UTC m=+0.037530772 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.796 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.802 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.817 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.817 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.818 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.818 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.818 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.819 2 DEBUG nova.virt.libvirt.driver [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 07:59:40 np0005465987 systemd[1]: Started libcrun container.
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.823 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 07:59:40 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1edafb8af3650df7f63ba35bac9bfd9d659b01f4e2e63967df95f7cc7307e338/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 07:59:40 np0005465987 podman[235189]: 2025-10-02 11:59:40.845252756 +0000 UTC m=+0.182785214 container init 42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 07:59:40 np0005465987 podman[235189]: 2025-10-02 11:59:40.852386521 +0000 UTC m=+0.189918929 container start 42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:59:40 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[235204]: [NOTICE]   (235208) : New worker (235210) forked
Oct  2 07:59:40 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[235204]: [NOTICE]   (235208) : Loading success.
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.886 2 INFO nova.compute.manager [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Took 8.00 seconds to spawn the instance on the hypervisor.
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.887 2 DEBUG nova.compute.manager [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 07:59:40 np0005465987 nova_compute[230713]: 2025-10-02 11:59:40.998 2 INFO nova.compute.manager [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Took 9.04 seconds to build instance.
Oct  2 07:59:41 np0005465987 nova_compute[230713]: 2025-10-02 11:59:41.019 2 DEBUG oslo_concurrency.lockutils [None req-c523168c-655c-4c66-b38b-3cf8e8ccfb77 b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 07:59:41 np0005465987 nova_compute[230713]: 2025-10-02 11:59:41.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 07:59:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:41.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:59:41 np0005465987 nova_compute[230713]: 2025-10-02 11:59:41.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 07:59:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:42.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:42 np0005465987 nova_compute[230713]: 2025-10-02 11:59:42.867 2 DEBUG nova.compute.manager [req-b4fcf79c-0658-48ca-8b6a-65c4acf6e2dd req-3ca4ad04-7b3a-433c-8eaa-e313d2131ab3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 07:59:42 np0005465987 nova_compute[230713]: 2025-10-02 11:59:42.868 2 DEBUG oslo_concurrency.lockutils [req-b4fcf79c-0658-48ca-8b6a-65c4acf6e2dd req-3ca4ad04-7b3a-433c-8eaa-e313d2131ab3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 07:59:42 np0005465987 nova_compute[230713]: 2025-10-02 11:59:42.868 2 DEBUG oslo_concurrency.lockutils [req-b4fcf79c-0658-48ca-8b6a-65c4acf6e2dd req-3ca4ad04-7b3a-433c-8eaa-e313d2131ab3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 07:59:42 np0005465987 nova_compute[230713]: 2025-10-02 11:59:42.868 2 DEBUG oslo_concurrency.lockutils [req-b4fcf79c-0658-48ca-8b6a-65c4acf6e2dd req-3ca4ad04-7b3a-433c-8eaa-e313d2131ab3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 07:59:42 np0005465987 nova_compute[230713]: 2025-10-02 11:59:42.868 2 DEBUG nova.compute.manager [req-b4fcf79c-0658-48ca-8b6a-65c4acf6e2dd req-3ca4ad04-7b3a-433c-8eaa-e313d2131ab3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] No waiting events found dispatching network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 07:59:42 np0005465987 nova_compute[230713]: 2025-10-02 11:59:42.869 2 WARNING nova.compute.manager [req-b4fcf79c-0658-48ca-8b6a-65c4acf6e2dd req-3ca4ad04-7b3a-433c-8eaa-e313d2131ab3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received unexpected event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 for instance with vm_state active and task_state None.
Oct  2 07:59:43 np0005465987 nova_compute[230713]: 2025-10-02 11:59:43.117 2 DEBUG oslo_concurrency.lockutils [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Acquiring lock "05ef51ee-c5d0-405e-a30a-9915a4117983" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 07:59:43 np0005465987 nova_compute[230713]: 2025-10-02 11:59:43.117 2 DEBUG oslo_concurrency.lockutils [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 07:59:43 np0005465987 nova_compute[230713]: 2025-10-02 11:59:43.296 2 DEBUG nova.objects.instance [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Lazy-loading 'flavor' on Instance uuid 05ef51ee-c5d0-405e-a30a-9915a4117983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 07:59:43 np0005465987 nova_compute[230713]: 2025-10-02 11:59:43.480 2 DEBUG oslo_concurrency.lockutils [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 07:59:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:43.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:43 np0005465987 nova_compute[230713]: 2025-10-02 11:59:43.810 2 DEBUG oslo_concurrency.lockutils [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Acquiring lock "05ef51ee-c5d0-405e-a30a-9915a4117983" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 07:59:43 np0005465987 nova_compute[230713]: 2025-10-02 11:59:43.812 2 DEBUG oslo_concurrency.lockutils [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 07:59:43 np0005465987 nova_compute[230713]: 2025-10-02 11:59:43.812 2 INFO nova.compute.manager [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Attaching volume 78cf867c-0608-4b68-9e5e-81bf2bdbef67 to /dev/vdb
Oct  2 07:59:44 np0005465987 nova_compute[230713]: 2025-10-02 11:59:44.178 2 DEBUG os_brick.utils [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct  2 07:59:44 np0005465987 nova_compute[230713]: 2025-10-02 11:59:44.179 2 INFO oslo.privsep.daemon [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp971y98n8/privsep.sock']
Oct  2 07:59:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:44.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:44 np0005465987 nova_compute[230713]: 2025-10-02 11:59:44.883 2 INFO oslo.privsep.daemon [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Spawned new privsep daemon via rootwrap
Oct  2 07:59:44 np0005465987 nova_compute[230713]: 2025-10-02 11:59:44.740 1357 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct  2 07:59:44 np0005465987 nova_compute[230713]: 2025-10-02 11:59:44.745 1357 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct  2 07:59:44 np0005465987 nova_compute[230713]: 2025-10-02 11:59:44.747 1357 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct  2 07:59:44 np0005465987 nova_compute[230713]: 2025-10-02 11:59:44.747 1357 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1357
Oct  2 07:59:44 np0005465987 nova_compute[230713]: 2025-10-02 11:59:44.890 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[d27997a4-f9f7-4bc2-a924-8514ddf08731]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 07:59:44 np0005465987 nova_compute[230713]: 2025-10-02 11:59:44.977 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 07:59:44 np0005465987 nova_compute[230713]: 2025-10-02 11:59:44.998 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:44.998 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[bbff458b-2109-4614-8c4d-d6f3338a5199]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.001 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.012 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.012 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[55d9da28-4a00-46c9-922b-867c3db9dce5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.016 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.026 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.026 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[39e4adee-4865-4317-9b75-cab5f1421521]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.028 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[65a5ab0a-1c0e-4234-a229-673de2d75060]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.029 2 DEBUG oslo_concurrency.processutils [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.061 2 DEBUG oslo_concurrency.processutils [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.066 2 DEBUG os_brick.initiator.connectors.lightos [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.068 2 DEBUG os_brick.initiator.connectors.lightos [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.068 2 DEBUG os_brick.initiator.connectors.lightos [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.069 2 DEBUG os_brick.utils [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] <== get_connector_properties: return (890ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.070 2 DEBUG nova.virt.block_device [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Updating existing volume attachment record: 2cea492b-23bb-4813-965e-cef8abb03743 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct  2 07:59:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:45.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 07:59:45 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1394270312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.976 2 DEBUG oslo_concurrency.lockutils [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.977 2 DEBUG oslo_concurrency.lockutils [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.979 2 DEBUG oslo_concurrency.lockutils [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 07:59:45 np0005465987 nova_compute[230713]: 2025-10-02 11:59:45.989 2 DEBUG nova.objects.instance [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Lazy-loading 'flavor' on Instance uuid 05ef51ee-c5d0-405e-a30a-9915a4117983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 07:59:46 np0005465987 nova_compute[230713]: 2025-10-02 11:59:46.017 2 DEBUG nova.virt.libvirt.driver [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Attempting to attach volume 78cf867c-0608-4b68-9e5e-81bf2bdbef67 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct  2 07:59:46 np0005465987 nova_compute[230713]: 2025-10-02 11:59:46.021 2 DEBUG nova.virt.libvirt.guest [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 07:59:46 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 07:59:46 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-78cf867c-0608-4b68-9e5e-81bf2bdbef67">
Oct  2 07:59:46 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 07:59:46 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 07:59:46 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 07:59:46 np0005465987 nova_compute[230713]:  </source>
Oct  2 07:59:46 np0005465987 nova_compute[230713]:  <auth username="openstack">
Oct  2 07:59:46 np0005465987 nova_compute[230713]:    <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 07:59:46 np0005465987 nova_compute[230713]:  </auth>
Oct  2 07:59:46 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 07:59:46 np0005465987 nova_compute[230713]:  <serial>78cf867c-0608-4b68-9e5e-81bf2bdbef67</serial>
Oct  2 07:59:46 np0005465987 nova_compute[230713]: </disk>
Oct  2 07:59:46 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct  2 07:59:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Oct  2 07:59:46 np0005465987 nova_compute[230713]: 2025-10-02 11:59:46.310 2 DEBUG nova.virt.libvirt.driver [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 07:59:46 np0005465987 nova_compute[230713]: 2025-10-02 11:59:46.311 2 DEBUG nova.virt.libvirt.driver [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 07:59:46 np0005465987 nova_compute[230713]: 2025-10-02 11:59:46.311 2 DEBUG nova.virt.libvirt.driver [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 07:59:46 np0005465987 nova_compute[230713]: 2025-10-02 11:59:46.312 2 DEBUG nova.virt.libvirt.driver [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] No VIF found with MAC fa:16:3e:cb:0e:30, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 07:59:46 np0005465987 nova_compute[230713]: 2025-10-02 11:59:46.384 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Check if temp file /var/lib/nova/instances/tmpvwueeft7 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Oct  2 07:59:46 np0005465987 nova_compute[230713]: 2025-10-02 11:59:46.384 2 DEBUG nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvwueeft7',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='530e7ace-2feb-4a9e-9430-5a1bfc678d22',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Oct  2 07:59:46 np0005465987 nova_compute[230713]: 2025-10-02 11:59:46.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:46.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:46 np0005465987 nova_compute[230713]: 2025-10-02 11:59:46.585 2 DEBUG oslo_concurrency.lockutils [None req-fdf2a931-3901-4dcd-9146-52191210668c 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:59:46 np0005465987 nova_compute[230713]: 2025-10-02 11:59:46.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:47 np0005465987 nova_compute[230713]: 2025-10-02 11:59:47.478 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:59:47 np0005465987 nova_compute[230713]: 2025-10-02 11:59:47.479 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 07:59:47 np0005465987 nova_compute[230713]: 2025-10-02 11:59:47.554 2 INFO nova.compute.rpcapi [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m
Oct  2 07:59:47 np0005465987 nova_compute[230713]: 2025-10-02 11:59:47.555 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:59:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:47.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:47 np0005465987 podman[235252]: 2025-10-02 11:59:47.849990695 +0000 UTC m=+0.069162861 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:59:47 np0005465987 nova_compute[230713]: 2025-10-02 11:59:47.916 2 DEBUG nova.virt.libvirt.driver [None req-f5222d98-1ca4-41d3-b295-3e006fac2f19 9ea6f16f5b154686a9e7eb51a974118c 8b14804ff58240e7b32c0c58b913b595 - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] volume_snapshot_create: create_info: {'snapshot_id': '3467b246-6a4a-4fd7-9c39-bb11075e5f68', 'type': 'qcow2', 'new_file': 'new_file'} volume_snapshot_create /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3572#033[00m
Oct  2 07:59:47 np0005465987 nova_compute[230713]: 2025-10-02 11:59:47.921 2 ERROR nova.virt.libvirt.driver [None req-f5222d98-1ca4-41d3-b295-3e006fac2f19 9ea6f16f5b154686a9e7eb51a974118c 8b14804ff58240e7b32c0c58b913b595 - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Error occurred during volume_snapshot_create, sending error status to Cinder.: nova.exception.InternalError: Found no disk to snapshot.
Oct  2 07:59:47 np0005465987 nova_compute[230713]: 2025-10-02 11:59:47.921 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Traceback (most recent call last):
Oct  2 07:59:47 np0005465987 nova_compute[230713]: 2025-10-02 11:59:47.921 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Oct  2 07:59:47 np0005465987 nova_compute[230713]: 2025-10-02 11:59:47.921 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983]     self._volume_snapshot_create(context, instance, guest,
Oct  2 07:59:47 np0005465987 nova_compute[230713]: 2025-10-02 11:59:47.921 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Oct  2 07:59:47 np0005465987 nova_compute[230713]: 2025-10-02 11:59:47.921 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983]     raise exception.InternalError(msg)
Oct  2 07:59:47 np0005465987 nova_compute[230713]: 2025-10-02 11:59:47.921 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] nova.exception.InternalError: Found no disk to snapshot.
Oct  2 07:59:47 np0005465987 nova_compute[230713]: 2025-10-02 11:59:47.921 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] #033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.037 2 DEBUG nova.virt.libvirt.driver [None req-a775f5d2-ca55-4e70-9840-a0918ed6433b 9ea6f16f5b154686a9e7eb51a974118c 8b14804ff58240e7b32c0c58b913b595 - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] volume_snapshot_delete: delete_info: {'volume_id': '78cf867c-0608-4b68-9e5e-81bf2bdbef67'} _volume_snapshot_delete /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3673#033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.037 2 ERROR nova.virt.libvirt.driver [None req-a775f5d2-ca55-4e70-9840-a0918ed6433b 9ea6f16f5b154686a9e7eb51a974118c 8b14804ff58240e7b32c0c58b913b595 - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Error occurred during volume_snapshot_delete, sending error status to Cinder.: KeyError: 'type'
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.037 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Traceback (most recent call last):
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.037 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.037 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983]     self._volume_snapshot_delete(context, instance, volume_id,
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.037 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.037 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983]     if delete_info['type'] != 'qcow2':
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.037 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] KeyError: 'type'
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.037 2 ERROR nova.virt.libvirt.driver [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] #033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver [None req-f5222d98-1ca4-41d3-b295-3e006fac2f19 9ea6f16f5b154686a9e7eb51a974118c 8b14804ff58240e7b32c0c58b913b595 - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot 3467b246-6a4a-4fd7-9c39-bb11075e5f68 could not be found.
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     self._volume_snapshot_create(context, instance, guest,
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     raise exception.InternalError(msg)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver nova.exception.InternalError: Found no disk to snapshot.
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver 
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver 
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot 3467b246-6a4a-4fd7-9c39-bb11075e5f68 could not be found. (HTTP 404) (Request-ID: req-2f06f007-a354-4c8d-9ea2-c0c3802bcf84)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver 
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver 
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot 3467b246-6a4a-4fd7-9c39-bb11075e5f68 could not be found.
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.059 2 ERROR nova.virt.libvirt.driver #033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server [None req-f5222d98-1ca4-41d3-b295-3e006fac2f19 9ea6f16f5b154686a9e7eb51a974118c 8b14804ff58240e7b32c0c58b913b595 - - default default] Exception during message handling: nova.exception.InternalError: Found no disk to snapshot.
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4410, in volume_snapshot_create
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_create(context, instance, volume_id,
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3597, in volume_snapshot_create
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     self._volume_snapshot_create(context, instance, guest,
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server     raise exception.InternalError(msg)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server nova.exception.InternalError: Found no disk to snapshot.
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.070 2 ERROR oslo_messaging.rpc.server #033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver [None req-a775f5d2-ca55-4e70-9840-a0918ed6433b 9ea6f16f5b154686a9e7eb51a974118c 8b14804ff58240e7b32c0c58b913b595 - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot None could not be found.
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     self._volume_snapshot_delete(context, instance, volume_id,
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     if delete_info['type'] != 'qcow2':
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver KeyError: 'type'
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver 
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver 
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot None could not be found. (HTTP 404) (Request-ID: req-fad4a099-3d13-4e31-816d-34605df2ae3b)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver 
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver 
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.174 2 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot None could not be found.
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server [None req-a775f5d2-ca55-4e70-9840-a0918ed6433b 9ea6f16f5b154686a9e7eb51a974118c 8b14804ff58240e7b32c0c58b913b595 - - default default] Exception during message handling: KeyError: 'type'
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4422, in volume_snapshot_delete
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_delete(context, instance, volume_id,
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3853, in volume_snapshot_delete
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     self._volume_snapshot_delete(context, instance, volume_id,
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server     if delete_info['type'] != 'qcow2':
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.176 2 ERROR oslo_messaging.rpc.server KeyError: 'type'
Oct  2 07:59:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:48.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.631 2 DEBUG oslo_concurrency.lockutils [None req-981e4e2c-1acd-483f-90c7-e25ec93da692 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Acquiring lock "05ef51ee-c5d0-405e-a30a-9915a4117983" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.632 2 DEBUG oslo_concurrency.lockutils [None req-981e4e2c-1acd-483f-90c7-e25ec93da692 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.646 2 INFO nova.compute.manager [None req-981e4e2c-1acd-483f-90c7-e25ec93da692 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Detaching volume 78cf867c-0608-4b68-9e5e-81bf2bdbef67#033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.804 2 INFO nova.virt.block_device [None req-981e4e2c-1acd-483f-90c7-e25ec93da692 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Attempting to driver detach volume 78cf867c-0608-4b68-9e5e-81bf2bdbef67 from mountpoint /dev/vdb#033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.813 2 DEBUG nova.virt.libvirt.driver [None req-981e4e2c-1acd-483f-90c7-e25ec93da692 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Attempting to detach device vdb from instance 05ef51ee-c5d0-405e-a30a-9915a4117983 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.814 2 DEBUG nova.virt.libvirt.guest [None req-981e4e2c-1acd-483f-90c7-e25ec93da692 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 07:59:48 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-78cf867c-0608-4b68-9e5e-81bf2bdbef67">
Oct  2 07:59:48 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:  </source>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:  <serial>78cf867c-0608-4b68-9e5e-81bf2bdbef67</serial>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 07:59:48 np0005465987 nova_compute[230713]: </disk>
Oct  2 07:59:48 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.827 2 INFO nova.virt.libvirt.driver [None req-981e4e2c-1acd-483f-90c7-e25ec93da692 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Successfully detached device vdb from instance 05ef51ee-c5d0-405e-a30a-9915a4117983 from the persistent domain config.#033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.827 2 DEBUG nova.virt.libvirt.driver [None req-981e4e2c-1acd-483f-90c7-e25ec93da692 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 05ef51ee-c5d0-405e-a30a-9915a4117983 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.828 2 DEBUG nova.virt.libvirt.guest [None req-981e4e2c-1acd-483f-90c7-e25ec93da692 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 07:59:48 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-78cf867c-0608-4b68-9e5e-81bf2bdbef67">
Oct  2 07:59:48 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:  </source>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:  <serial>78cf867c-0608-4b68-9e5e-81bf2bdbef67</serial>
Oct  2 07:59:48 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 07:59:48 np0005465987 nova_compute[230713]: </disk>
Oct  2 07:59:48 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.931 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Received event <DeviceRemovedEvent: 1759406388.9310184, 05ef51ee-c5d0-405e-a30a-9915a4117983 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.932 2 DEBUG nova.virt.libvirt.driver [None req-981e4e2c-1acd-483f-90c7-e25ec93da692 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 05ef51ee-c5d0-405e-a30a-9915a4117983 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 07:59:48 np0005465987 nova_compute[230713]: 2025-10-02 11:59:48.972 2 INFO nova.virt.libvirt.driver [None req-981e4e2c-1acd-483f-90c7-e25ec93da692 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Successfully detached device vdb from instance 05ef51ee-c5d0-405e-a30a-9915a4117983 from the live domain config.#033[00m
Oct  2 07:59:49 np0005465987 nova_compute[230713]: 2025-10-02 11:59:49.343 2 DEBUG nova.objects.instance [None req-981e4e2c-1acd-483f-90c7-e25ec93da692 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Lazy-loading 'flavor' on Instance uuid 05ef51ee-c5d0-405e-a30a-9915a4117983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 07:59:49 np0005465987 nova_compute[230713]: 2025-10-02 11:59:49.399 2 DEBUG oslo_concurrency.lockutils [None req-981e4e2c-1acd-483f-90c7-e25ec93da692 11cba1de83a640aabbce96d56cb376fc 9fab310b79144b0baa3a349f8bb4143f - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:49.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:49 np0005465987 nova_compute[230713]: 2025-10-02 11:59:49.870 2 DEBUG nova.compute.manager [req-9a3e4a5c-47ee-48b1-9b1b-248c00ceba6a req-08d12ba7-d2be-4c15-b7f7-b8ea2d1502d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-vif-unplugged-b0281338-9ff1-492d-b3aa-aeee41f08075 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:49 np0005465987 nova_compute[230713]: 2025-10-02 11:59:49.870 2 DEBUG oslo_concurrency.lockutils [req-9a3e4a5c-47ee-48b1-9b1b-248c00ceba6a req-08d12ba7-d2be-4c15-b7f7-b8ea2d1502d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:49 np0005465987 nova_compute[230713]: 2025-10-02 11:59:49.870 2 DEBUG oslo_concurrency.lockutils [req-9a3e4a5c-47ee-48b1-9b1b-248c00ceba6a req-08d12ba7-d2be-4c15-b7f7-b8ea2d1502d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:49 np0005465987 nova_compute[230713]: 2025-10-02 11:59:49.871 2 DEBUG oslo_concurrency.lockutils [req-9a3e4a5c-47ee-48b1-9b1b-248c00ceba6a req-08d12ba7-d2be-4c15-b7f7-b8ea2d1502d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:49 np0005465987 nova_compute[230713]: 2025-10-02 11:59:49.871 2 DEBUG nova.compute.manager [req-9a3e4a5c-47ee-48b1-9b1b-248c00ceba6a req-08d12ba7-d2be-4c15-b7f7-b8ea2d1502d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] No waiting events found dispatching network-vif-unplugged-b0281338-9ff1-492d-b3aa-aeee41f08075 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 07:59:49 np0005465987 nova_compute[230713]: 2025-10-02 11:59:49.871 2 DEBUG nova.compute.manager [req-9a3e4a5c-47ee-48b1-9b1b-248c00ceba6a req-08d12ba7-d2be-4c15-b7f7-b8ea2d1502d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-vif-unplugged-b0281338-9ff1-492d-b3aa-aeee41f08075 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 07:59:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:50.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:50 np0005465987 nova_compute[230713]: 2025-10-02 11:59:50.556 2 INFO nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Took 3.08 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.#033[00m
Oct  2 07:59:50 np0005465987 nova_compute[230713]: 2025-10-02 11:59:50.556 2 DEBUG nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 07:59:50 np0005465987 nova_compute[230713]: 2025-10-02 11:59:50.575 2 DEBUG nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvwueeft7',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='530e7ace-2feb-4a9e-9430-5a1bfc678d22',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(555eda39-03cf-4b31-acfe-68fce4316f8f),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Oct  2 07:59:50 np0005465987 nova_compute[230713]: 2025-10-02 11:59:50.578 2 DEBUG nova.objects.instance [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lazy-loading 'migration_context' on Instance uuid 530e7ace-2feb-4a9e-9430-5a1bfc678d22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 07:59:50 np0005465987 nova_compute[230713]: 2025-10-02 11:59:50.579 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Oct  2 07:59:50 np0005465987 nova_compute[230713]: 2025-10-02 11:59:50.582 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Oct  2 07:59:50 np0005465987 nova_compute[230713]: 2025-10-02 11:59:50.586 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Oct  2 07:59:50 np0005465987 nova_compute[230713]: 2025-10-02 11:59:50.606 2 DEBUG nova.virt.libvirt.vif [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T11:59:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-492220666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-492220666',id=7,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T11:59:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-b2u06b3r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image
_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T11:59:40Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=530e7ace-2feb-4a9e-9430-5a1bfc678d22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 07:59:50 np0005465987 nova_compute[230713]: 2025-10-02 11:59:50.607 2 DEBUG nova.network.os_vif_util [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converting VIF {"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 07:59:50 np0005465987 nova_compute[230713]: 2025-10-02 11:59:50.610 2 DEBUG nova.network.os_vif_util [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 07:59:50 np0005465987 nova_compute[230713]: 2025-10-02 11:59:50.610 2 DEBUG nova.virt.libvirt.migration [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Updating guest XML with vif config: <interface type="ethernet">
Oct  2 07:59:50 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:0f:16:f5"/>
Oct  2 07:59:50 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 07:59:50 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 07:59:50 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 07:59:50 np0005465987 nova_compute[230713]:  <target dev="tapb0281338-9f"/>
Oct  2 07:59:50 np0005465987 nova_compute[230713]: </interface>
Oct  2 07:59:50 np0005465987 nova_compute[230713]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Oct  2 07:59:50 np0005465987 nova_compute[230713]: 2025-10-02 11:59:50.611 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.088 2 DEBUG nova.virt.libvirt.migration [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.089 2 INFO nova.virt.libvirt.migration [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.181 2 INFO nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.684 2 DEBUG nova.virt.libvirt.migration [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.685 2 DEBUG nova.virt.libvirt.migration [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 07:59:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 07:59:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:51.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.788 2 DEBUG oslo_concurrency.lockutils [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Acquiring lock "05ef51ee-c5d0-405e-a30a-9915a4117983" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.788 2 DEBUG oslo_concurrency.lockutils [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.788 2 DEBUG oslo_concurrency.lockutils [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Acquiring lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.789 2 DEBUG oslo_concurrency.lockutils [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.789 2 DEBUG oslo_concurrency.lockutils [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.790 2 INFO nova.compute.manager [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Terminating instance#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.791 2 DEBUG nova.compute.manager [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 07:59:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:59:51 np0005465987 kernel: tap28a876da-5d (unregistering): left promiscuous mode
Oct  2 07:59:51 np0005465987 NetworkManager[44910]: <info>  [1759406391.8558] device (tap28a876da-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 07:59:51 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:51Z|00051|binding|INFO|Releasing lport 28a876da-5d62-4032-8917-ce1c597913a9 from this chassis (sb_readonly=0)
Oct  2 07:59:51 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:51Z|00052|binding|INFO|Setting lport 28a876da-5d62-4032-8917-ce1c597913a9 down in Southbound
Oct  2 07:59:51 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:51Z|00053|binding|INFO|Removing iface tap28a876da-5d ovn-installed in OVS
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:51.878 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:0e:30 10.100.0.4'], port_security=['fa:16:3e:cb:0e:30 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '05ef51ee-c5d0-405e-a30a-9915a4117983', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6807e9c97bd646799d4adfa12cdbd47e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '069bb314-33e5-4581-a7d1-f010e907050e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d71b56cb-f932-4ba0-8910-a51018308ba1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=28a876da-5d62-4032-8917-ce1c597913a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 07:59:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:51.881 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 28a876da-5d62-4032-8917-ce1c597913a9 in datapath 50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4 unbound from our chassis#033[00m
Oct  2 07:59:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:51.884 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 07:59:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:51.886 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfd954a-65f0-4fe3-843b-42aca3353254]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:51.887 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4 namespace which is not needed anymore#033[00m
Oct  2 07:59:51 np0005465987 nova_compute[230713]: 2025-10-02 11:59:51.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:51 np0005465987 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct  2 07:59:51 np0005465987 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 15.651s CPU time.
Oct  2 07:59:51 np0005465987 systemd-machined[188335]: Machine qemu-2-instance-00000005 terminated.
Oct  2 07:59:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.042 2 INFO nova.virt.libvirt.driver [-] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Instance destroyed successfully.#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.042 2 DEBUG nova.objects.instance [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lazy-loading 'resources' on Instance uuid 05ef51ee-c5d0-405e-a30a-9915a4117983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.066 2 DEBUG nova.virt.libvirt.vif [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T11:58:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-597937981',display_name='tempest-VolumesAssistedSnapshotsTest-server-597937981',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-597937981',id=5,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPKkHzC5UoZKgLNglsajP+t/XvuXgPqJUdk5kjgTplZdz9bBYLeG5uMVbdoqM9TGvz04jV/stznA9kUBQqWcEeEy/AqvTUml3lYXuQ4tJBJPP73o0nMwL9Feaz12WSkrFw==',key_name='tempest-keypair-480338542',keypairs=<?>,launch_index=0,launched_at=2025-10-02T11:59:07Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6807e9c97bd646799d4adfa12cdbd47e',ramdisk_id='',reservation_id='r-pgicc5gv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAssistedSnapshotsTest-22685689',owner_user_name='tempest-VolumesAssistedSnapshotsTest-22685689-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T11:59:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39f72bf379684b959071ccdf03f5f52f',uuid=05ef51ee-c5d0-405e-a30a-9915a4117983,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28a876da-5d62-4032-8917-ce1c597913a9", "address": "fa:16:3e:cb:0e:30", "network": {"id": "50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1820286714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6807e9c97bd646799d4adfa12cdbd47e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28a876da-5d", "ovs_interfaceid": "28a876da-5d62-4032-8917-ce1c597913a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.067 2 DEBUG nova.network.os_vif_util [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Converting VIF {"id": "28a876da-5d62-4032-8917-ce1c597913a9", "address": "fa:16:3e:cb:0e:30", "network": {"id": "50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1820286714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6807e9c97bd646799d4adfa12cdbd47e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28a876da-5d", "ovs_interfaceid": "28a876da-5d62-4032-8917-ce1c597913a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.067 2 DEBUG nova.network.os_vif_util [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:0e:30,bridge_name='br-int',has_traffic_filtering=True,id=28a876da-5d62-4032-8917-ce1c597913a9,network=Network(50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28a876da-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.068 2 DEBUG os_vif [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:0e:30,bridge_name='br-int',has_traffic_filtering=True,id=28a876da-5d62-4032-8917-ce1c597913a9,network=Network(50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28a876da-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.070 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28a876da-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.078 2 INFO os_vif [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:0e:30,bridge_name='br-int',has_traffic_filtering=True,id=28a876da-5d62-4032-8917-ce1c597913a9,network=Network(50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28a876da-5d')#033[00m
Oct  2 07:59:52 np0005465987 neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4[234418]: [NOTICE]   (234422) : haproxy version is 2.8.14-c23fe91
Oct  2 07:59:52 np0005465987 neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4[234418]: [NOTICE]   (234422) : path to executable is /usr/sbin/haproxy
Oct  2 07:59:52 np0005465987 neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4[234418]: [WARNING]  (234422) : Exiting Master process...
Oct  2 07:59:52 np0005465987 neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4[234418]: [ALERT]    (234422) : Current worker (234424) exited with code 143 (Terminated)
Oct  2 07:59:52 np0005465987 neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4[234418]: [WARNING]  (234422) : All workers exited. Exiting... (0)
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.095 2 DEBUG nova.compute.manager [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.096 2 DEBUG oslo_concurrency.lockutils [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.096 2 DEBUG oslo_concurrency.lockutils [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.097 2 DEBUG oslo_concurrency.lockutils [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.097 2 DEBUG nova.compute.manager [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] No waiting events found dispatching network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.097 2 WARNING nova.compute.manager [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received unexpected event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 for instance with vm_state active and task_state migrating.#033[00m
Oct  2 07:59:52 np0005465987 systemd[1]: libpod-1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97.scope: Deactivated successfully.
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.098 2 DEBUG nova.compute.manager [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-changed-b0281338-9ff1-492d-b3aa-aeee41f08075 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.098 2 DEBUG nova.compute.manager [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Refreshing instance network info cache due to event network-changed-b0281338-9ff1-492d-b3aa-aeee41f08075. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.098 2 DEBUG oslo_concurrency.lockutils [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.099 2 DEBUG oslo_concurrency.lockutils [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 07:59:52 np0005465987 conmon[234418]: conmon 1af4e85330fb23885cd0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97.scope/container/memory.events
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.099 2 DEBUG nova.network.neutron [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Refreshing network info cache for port b0281338-9ff1-492d-b3aa-aeee41f08075 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 07:59:52 np0005465987 podman[235299]: 2025-10-02 11:59:52.10672172 +0000 UTC m=+0.074166119 container died 1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 07:59:52 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97-userdata-shm.mount: Deactivated successfully.
Oct  2 07:59:52 np0005465987 systemd[1]: var-lib-containers-storage-overlay-800f5f90a7475c1016ab56a7bcd351742db5228e7641805fbec15bbfed881dc1-merged.mount: Deactivated successfully.
Oct  2 07:59:52 np0005465987 podman[235299]: 2025-10-02 11:59:52.148960011 +0000 UTC m=+0.116404450 container cleanup 1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 07:59:52 np0005465987 systemd[1]: libpod-conmon-1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97.scope: Deactivated successfully.
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.188 2 DEBUG nova.virt.libvirt.migration [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.189 2 DEBUG nova.virt.libvirt.migration [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.219 2 DEBUG nova.compute.manager [req-8951a00c-80dd-49f4-abd8-057af7167fcf req-2edf072a-c986-4f44-89f1-04b7130871d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Received event network-vif-unplugged-28a876da-5d62-4032-8917-ce1c597913a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.219 2 DEBUG oslo_concurrency.lockutils [req-8951a00c-80dd-49f4-abd8-057af7167fcf req-2edf072a-c986-4f44-89f1-04b7130871d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.219 2 DEBUG oslo_concurrency.lockutils [req-8951a00c-80dd-49f4-abd8-057af7167fcf req-2edf072a-c986-4f44-89f1-04b7130871d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.220 2 DEBUG oslo_concurrency.lockutils [req-8951a00c-80dd-49f4-abd8-057af7167fcf req-2edf072a-c986-4f44-89f1-04b7130871d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.220 2 DEBUG nova.compute.manager [req-8951a00c-80dd-49f4-abd8-057af7167fcf req-2edf072a-c986-4f44-89f1-04b7130871d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] No waiting events found dispatching network-vif-unplugged-28a876da-5d62-4032-8917-ce1c597913a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.220 2 DEBUG nova.compute.manager [req-8951a00c-80dd-49f4-abd8-057af7167fcf req-2edf072a-c986-4f44-89f1-04b7130871d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Received event network-vif-unplugged-28a876da-5d62-4032-8917-ce1c597913a9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 07:59:52 np0005465987 podman[235356]: 2025-10-02 11:59:52.225095653 +0000 UTC m=+0.051874226 container remove 1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 07:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:52.230 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5a396e50-bc27-458e-be5d-0a3d14717aa9]: (4, ('Thu Oct  2 11:59:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4 (1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97)\n1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97\nThu Oct  2 11:59:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4 (1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97)\n1af4e85330fb23885cd0aa72d28443de65d3d814302be11286b74fe985217f97\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:52.232 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[73b01fd3-4cd1-4fd8-a18a-290ff7281011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:52.233 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50bc3bdc-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:52 np0005465987 kernel: tap50bc3bdc-60: left promiscuous mode
Oct  2 07:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:52.251 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2f9d7a91-c248-436c-bac3-89670a32753c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:52.292 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8721a215-0166-41d8-8981-632a3c28e7f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:52.293 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c83f883b-4a54-4ab2-9692-dcfeba6f9f1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:52.309 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3642f513-03ec-45af-b28b-3b11041d0ec5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441872, 'reachable_time': 26141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235372, 'error': None, 'target': 'ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:52 np0005465987 systemd[1]: run-netns-ovnmeta\x2d50bc3bdc\x2d6903\x2d4eb8\x2da59f\x2dce2ac764c0c4.mount: Deactivated successfully.
Oct  2 07:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:52.315 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50bc3bdc-6903-4eb8-a59f-ce2ac764c0c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 07:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:52.316 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[46d36e22-85ea-489c-9ef1-4ac3ea83499c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:52.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.599 2 INFO nova.virt.libvirt.driver [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Deleting instance files /var/lib/nova/instances/05ef51ee-c5d0-405e-a30a-9915a4117983_del#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.599 2 INFO nova.virt.libvirt.driver [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Deletion of /var/lib/nova/instances/05ef51ee-c5d0-405e-a30a-9915a4117983_del complete#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.644 2 INFO nova.compute.manager [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.645 2 DEBUG oslo.service.loopingcall [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.645 2 DEBUG nova.compute.manager [-] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.645 2 DEBUG nova.network.neutron [-] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.691 2 DEBUG nova.virt.libvirt.migration [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 07:59:52 np0005465987 nova_compute[230713]: 2025-10-02 11:59:52.691 2 DEBUG nova.virt.libvirt.migration [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.195 2 DEBUG nova.virt.libvirt.migration [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.196 2 DEBUG nova.virt.libvirt.migration [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.506 2 DEBUG nova.network.neutron [-] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.559 2 INFO nova.compute.manager [-] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Took 0.91 seconds to deallocate network for instance.#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.581 2 DEBUG nova.network.neutron [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Updated VIF entry in instance network info cache for port b0281338-9ff1-492d-b3aa-aeee41f08075. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.582 2 DEBUG nova.network.neutron [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Updating instance_info_cache with network_info: [{"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.634 2 DEBUG nova.compute.manager [req-49f41cc0-4207-4bea-8a40-3ecfeccaa9a3 req-9aae643c-c691-4393-8868-d15027b392e4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Received event network-vif-deleted-28a876da-5d62-4032-8917-ce1c597913a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.636 2 DEBUG oslo_concurrency.lockutils [req-4a38d6f8-ed79-4fc8-99ff-02ac6e36e335 req-217cc717-0e28-4613-ba8e-e07875db22d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-530e7ace-2feb-4a9e-9430-5a1bfc678d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.646 2 DEBUG oslo_concurrency.lockutils [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.646 2 DEBUG oslo_concurrency.lockutils [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:53.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.702 2 DEBUG nova.virt.libvirt.migration [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.703 2 DEBUG nova.virt.libvirt.migration [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.714 2 DEBUG oslo_concurrency.processutils [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.901 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406393.9004383, 530e7ace-2feb-4a9e-9430-5a1bfc678d22 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.902 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] VM Paused (Lifecycle Event)#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.925 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.930 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 07:59:53 np0005465987 nova_compute[230713]: 2025-10-02 11:59:53.962 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Oct  2 07:59:54 np0005465987 kernel: tapb0281338-9f (unregistering): left promiscuous mode
Oct  2 07:59:54 np0005465987 NetworkManager[44910]: <info>  [1759406394.0702] device (tapb0281338-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:54 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:54Z|00054|binding|INFO|Releasing lport b0281338-9ff1-492d-b3aa-aeee41f08075 from this chassis (sb_readonly=0)
Oct  2 07:59:54 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:54Z|00055|binding|INFO|Setting lport b0281338-9ff1-492d-b3aa-aeee41f08075 down in Southbound
Oct  2 07:59:54 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:54Z|00056|binding|INFO|Releasing lport 36ee45f1-8361-4e2d-a0d4-c4907ede4743 from this chassis (sb_readonly=0)
Oct  2 07:59:54 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:54Z|00057|binding|INFO|Setting lport 36ee45f1-8361-4e2d-a0d4-c4907ede4743 down in Southbound
Oct  2 07:59:54 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:54Z|00058|binding|INFO|Removing iface tapb0281338-9f ovn-installed in OVS
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.093 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:38:f6 19.80.0.52'], port_security=['fa:16:3e:1a:38:f6 19.80.0.52'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['b0281338-9ff1-492d-b3aa-aeee41f08075'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-589500470', 'neutron:cidrs': '19.80.0.52/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-589500470', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '3', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=ec7a7eef-8b3d-415d-8728-897cefc62c0c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=36ee45f1-8361-4e2d-a0d4-c4907ede4743) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 07:59:54 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:54Z|00059|binding|INFO|Releasing lport a3d6bcff-8fb0-4a18-b382-3921f86e4cde from this chassis (sb_readonly=0)
Oct  2 07:59:54 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:54Z|00060|binding|INFO|Releasing lport a0e52ac7-beb4-4d4b-863a-9b95be4c9e74 from this chassis (sb_readonly=0)
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.096 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:16:f5 10.100.0.13'], port_security=['fa:16:3e:0f:16:f5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '6718a9ec-e13c-42f0-978a-6c44c48d0d54'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1480997899', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '530e7ace-2feb-4a9e-9430-5a1bfc678d22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1480997899', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '8', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2c64df1-281c-4dee-b0ae-55c6f99caa2e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=b0281338-9ff1-492d-b3aa-aeee41f08075) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.102 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 36ee45f1-8361-4e2d-a0d4-c4907ede4743 in datapath 4874fd53-2e81-4c7f-81b7-b85c23edc180 unbound from our chassis#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.105 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4874fd53-2e81-4c7f-81b7-b85c23edc180, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.111 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b2bdfb78-40f5-46b9-8a27-451617b18807]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.112 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180 namespace which is not needed anymore#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:54 np0005465987 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct  2 07:59:54 np0005465987 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 14.194s CPU time.
Oct  2 07:59:54 np0005465987 systemd-machined[188335]: Machine qemu-3-instance-00000007 terminated.
Oct  2 07:59:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 07:59:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2715093501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 07:59:54 np0005465987 virtqemud[230442]: Unable to get XATTR trusted.libvirt.security.ref_selinux on 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk: No such file or directory
Oct  2 07:59:54 np0005465987 virtqemud[230442]: Unable to get XATTR trusted.libvirt.security.ref_dac on 530e7ace-2feb-4a9e-9430-5a1bfc678d22_disk: No such file or directory
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.230 2 DEBUG oslo_concurrency.processutils [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 07:59:54 np0005465987 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[235130]: [NOTICE]   (235134) : haproxy version is 2.8.14-c23fe91
Oct  2 07:59:54 np0005465987 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[235130]: [NOTICE]   (235134) : path to executable is /usr/sbin/haproxy
Oct  2 07:59:54 np0005465987 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[235130]: [WARNING]  (235134) : Exiting Master process...
Oct  2 07:59:54 np0005465987 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[235130]: [WARNING]  (235134) : Exiting Master process...
Oct  2 07:59:54 np0005465987 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[235130]: [ALERT]    (235134) : Current worker (235136) exited with code 143 (Terminated)
Oct  2 07:59:54 np0005465987 neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180[235130]: [WARNING]  (235134) : All workers exited. Exiting... (0)
Oct  2 07:59:54 np0005465987 NetworkManager[44910]: <info>  [1759406394.2352] manager: (tapb0281338-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Oct  2 07:59:54 np0005465987 systemd[1]: libpod-506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452.scope: Deactivated successfully.
Oct  2 07:59:54 np0005465987 podman[235419]: 2025-10-02 11:59:54.243790517 +0000 UTC m=+0.050370535 container died 506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.246 2 DEBUG nova.compute.provider_tree [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.262 2 DEBUG nova.scheduler.client.report [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.267 2 DEBUG nova.virt.libvirt.guest [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.268 2 INFO nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Migration operation has completed#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.268 2 INFO nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] _post_live_migration() is started..#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.276 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.276 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.276 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Oct  2 07:59:54 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452-userdata-shm.mount: Deactivated successfully.
Oct  2 07:59:54 np0005465987 systemd[1]: var-lib-containers-storage-overlay-e968753ff735d242b721dede2a3c48178467496859cbfa87e41e7eeedb034f2c-merged.mount: Deactivated successfully.
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.292 2 DEBUG oslo_concurrency.lockutils [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:54 np0005465987 podman[235419]: 2025-10-02 11:59:54.296096103 +0000 UTC m=+0.102676101 container cleanup 506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.305 2 DEBUG nova.compute.manager [req-7bb77cc9-91f8-436f-bef9-f7b21bad1792 req-1fc8d960-8612-43cc-a6bf-86608776adfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Received event network-vif-plugged-28a876da-5d62-4032-8917-ce1c597913a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.305 2 DEBUG oslo_concurrency.lockutils [req-7bb77cc9-91f8-436f-bef9-f7b21bad1792 req-1fc8d960-8612-43cc-a6bf-86608776adfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.305 2 DEBUG oslo_concurrency.lockutils [req-7bb77cc9-91f8-436f-bef9-f7b21bad1792 req-1fc8d960-8612-43cc-a6bf-86608776adfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.305 2 DEBUG oslo_concurrency.lockutils [req-7bb77cc9-91f8-436f-bef9-f7b21bad1792 req-1fc8d960-8612-43cc-a6bf-86608776adfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.306 2 DEBUG nova.compute.manager [req-7bb77cc9-91f8-436f-bef9-f7b21bad1792 req-1fc8d960-8612-43cc-a6bf-86608776adfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] No waiting events found dispatching network-vif-plugged-28a876da-5d62-4032-8917-ce1c597913a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.306 2 WARNING nova.compute.manager [req-7bb77cc9-91f8-436f-bef9-f7b21bad1792 req-1fc8d960-8612-43cc-a6bf-86608776adfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Received unexpected event network-vif-plugged-28a876da-5d62-4032-8917-ce1c597913a9 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 07:59:54 np0005465987 systemd[1]: libpod-conmon-506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452.scope: Deactivated successfully.
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.328 2 INFO nova.scheduler.client.report [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Deleted allocations for instance 05ef51ee-c5d0-405e-a30a-9915a4117983#033[00m
Oct  2 07:59:54 np0005465987 podman[235456]: 2025-10-02 11:59:54.360626347 +0000 UTC m=+0.042217271 container remove 506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.367 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4514d90d-f624-4fab-a1c9-1af2d2541e12]: (4, ('Thu Oct  2 11:59:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180 (506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452)\n506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452\nThu Oct  2 11:59:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180 (506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452)\n506f2dfd65721241e96948de728e1855e7ba9e3159e9620f384637d883978452\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.368 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[767b679a-d03a-459e-bc32-e9203361e19e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.369 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4874fd53-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:54 np0005465987 kernel: tap4874fd53-20: left promiscuous mode
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.404 2 DEBUG oslo_concurrency.lockutils [None req-b0713e64-28f0-4ed1-9bfe-8ada5e0bb353 39f72bf379684b959071ccdf03f5f52f 6807e9c97bd646799d4adfa12cdbd47e - - default default] Lock "05ef51ee-c5d0-405e-a30a-9915a4117983" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.412 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[09406dde-e32e-43e6-84ef-6a4b5c9a7a01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 ovn_controller[129172]: 2025-10-02T11:59:54Z|00061|binding|INFO|Releasing lport a0e52ac7-beb4-4d4b-863a-9b95be4c9e74 from this chassis (sb_readonly=0)
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.447 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[652e5aa8-6123-4ec9-be79-57e096e7ce65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.447 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f66080-ef6d-4893-9b6d-31fd913fb6cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.461 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae173ce-36f3-430d-a7e5-ddb1cce7508d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444764, 'reachable_time': 15908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235474, 'error': None, 'target': 'ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 systemd[1]: run-netns-ovnmeta\x2d4874fd53\x2d2e81\x2d4c7f\x2d81b7\x2db85c23edc180.mount: Deactivated successfully.
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.465 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4874fd53-2e81-4c7f-81b7-b85c23edc180 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.465 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[bfea7837-c3a6-4193-90f8-03c1d77b2992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.465 138325 INFO neutron.agent.ovn.metadata.agent [-] Port b0281338-9ff1-492d-b3aa-aeee41f08075 in datapath 1e991676-99e8-43d9-8575-4a21f50b0ed5 unbound from our chassis#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.466 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e991676-99e8-43d9-8575-4a21f50b0ed5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.467 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[75fd1dde-bd41-4f7d-b05e-6dbeec57aefb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.467 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 namespace which is not needed anymore#033[00m
Oct  2 07:59:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:54.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:54 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[235204]: [NOTICE]   (235208) : haproxy version is 2.8.14-c23fe91
Oct  2 07:59:54 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[235204]: [NOTICE]   (235208) : path to executable is /usr/sbin/haproxy
Oct  2 07:59:54 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[235204]: [WARNING]  (235208) : Exiting Master process...
Oct  2 07:59:54 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[235204]: [WARNING]  (235208) : Exiting Master process...
Oct  2 07:59:54 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[235204]: [ALERT]    (235208) : Current worker (235210) exited with code 143 (Terminated)
Oct  2 07:59:54 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[235204]: [WARNING]  (235208) : All workers exited. Exiting... (0)
Oct  2 07:59:54 np0005465987 systemd[1]: libpod-42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41.scope: Deactivated successfully.
Oct  2 07:59:54 np0005465987 podman[235492]: 2025-10-02 11:59:54.616331504 +0000 UTC m=+0.052190065 container died 42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:59:54 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41-userdata-shm.mount: Deactivated successfully.
Oct  2 07:59:54 np0005465987 systemd[1]: var-lib-containers-storage-overlay-1edafb8af3650df7f63ba35bac9bfd9d659b01f4e2e63967df95f7cc7307e338-merged.mount: Deactivated successfully.
Oct  2 07:59:54 np0005465987 podman[235492]: 2025-10-02 11:59:54.649818444 +0000 UTC m=+0.085677035 container cleanup 42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 07:59:54 np0005465987 systemd[1]: libpod-conmon-42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41.scope: Deactivated successfully.
Oct  2 07:59:54 np0005465987 podman[235522]: 2025-10-02 11:59:54.743005815 +0000 UTC m=+0.061673256 container remove 42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.749 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3593f3-ba35-415f-b335-3330f557a4f6]: (4, ('Thu Oct  2 11:59:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 (42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41)\n42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41\nThu Oct  2 11:59:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 (42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41)\n42350a55ae2b00d01e0c4d3f3a649c271f97bb8f963cf92f8162ad780869ad41\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.751 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6f4cb3-6db4-4dde-96db-ec9ad3502ac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.753 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e991676-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:54 np0005465987 kernel: tap1e991676-90: left promiscuous mode
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.766 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[deedbefe-0b7d-49d9-8e51-627c50234cc3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.791 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb75549-cdfb-4fb0-bdd7-7f3ab63fcd17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.792 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a4c197-9573-47c4-b574-809a3eb83b21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.813 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[501178b6-79fd-47a8-b0b9-48a624c86496]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444861, 'reachable_time': 21260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235541, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.815 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 07:59:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 11:59:54.815 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[3e32cb25-a717-4a16-8288-f33c0641e505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.898 2 DEBUG nova.compute.manager [req-8f32f1cb-8bb8-4c9b-bdd2-9997e82e2a23 req-2a65c917-2964-4b54-b78a-7dc9112eefaf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-vif-unplugged-b0281338-9ff1-492d-b3aa-aeee41f08075 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.899 2 DEBUG oslo_concurrency.lockutils [req-8f32f1cb-8bb8-4c9b-bdd2-9997e82e2a23 req-2a65c917-2964-4b54-b78a-7dc9112eefaf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.899 2 DEBUG oslo_concurrency.lockutils [req-8f32f1cb-8bb8-4c9b-bdd2-9997e82e2a23 req-2a65c917-2964-4b54-b78a-7dc9112eefaf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.900 2 DEBUG oslo_concurrency.lockutils [req-8f32f1cb-8bb8-4c9b-bdd2-9997e82e2a23 req-2a65c917-2964-4b54-b78a-7dc9112eefaf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.900 2 DEBUG nova.compute.manager [req-8f32f1cb-8bb8-4c9b-bdd2-9997e82e2a23 req-2a65c917-2964-4b54-b78a-7dc9112eefaf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] No waiting events found dispatching network-vif-unplugged-b0281338-9ff1-492d-b3aa-aeee41f08075 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 07:59:54 np0005465987 nova_compute[230713]: 2025-10-02 11:59:54.900 2 DEBUG nova.compute.manager [req-8f32f1cb-8bb8-4c9b-bdd2-9997e82e2a23 req-2a65c917-2964-4b54-b78a-7dc9112eefaf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-vif-unplugged-b0281338-9ff1-492d-b3aa-aeee41f08075 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 07:59:55 np0005465987 systemd[1]: run-netns-ovnmeta\x2d1e991676\x2d99e8\x2d43d9\x2d8575\x2d4a21f50b0ed5.mount: Deactivated successfully.
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.462 2 DEBUG nova.network.neutron [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Activated binding for port b0281338-9ff1-492d-b3aa-aeee41f08075 and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.463 2 DEBUG nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.463 2 DEBUG nova.virt.libvirt.vif [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T11:59:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-492220666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-492220666',id=7,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T11:59:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-b2u06b3r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image
_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T11:59:45Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=530e7ace-2feb-4a9e-9430-5a1bfc678d22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.464 2 DEBUG nova.network.os_vif_util [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converting VIF {"id": "b0281338-9ff1-492d-b3aa-aeee41f08075", "address": "fa:16:3e:0f:16:f5", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0281338-9f", "ovs_interfaceid": "b0281338-9ff1-492d-b3aa-aeee41f08075", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.464 2 DEBUG nova.network.os_vif_util [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.465 2 DEBUG os_vif [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.466 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0281338-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.471 2 INFO os_vif [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:16:f5,bridge_name='br-int',has_traffic_filtering=True,id=b0281338-9ff1-492d-b3aa-aeee41f08075,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0281338-9f')#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.472 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.472 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.472 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.472 2 DEBUG nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.473 2 INFO nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Deleting instance files /var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22_del#033[00m
Oct  2 07:59:55 np0005465987 nova_compute[230713]: 2025-10-02 11:59:55.473 2 INFO nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Deletion of /var/lib/nova/instances/530e7ace-2feb-4a9e-9430-5a1bfc678d22_del complete#033[00m
Oct  2 07:59:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:55.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:56 np0005465987 nova_compute[230713]: 2025-10-02 11:59:56.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 07:59:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:56.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.059 2 DEBUG nova.compute.manager [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.059 2 DEBUG oslo_concurrency.lockutils [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.060 2 DEBUG oslo_concurrency.lockutils [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.060 2 DEBUG oslo_concurrency.lockutils [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.060 2 DEBUG nova.compute.manager [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] No waiting events found dispatching network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.061 2 WARNING nova.compute.manager [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received unexpected event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 for instance with vm_state active and task_state migrating.#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.061 2 DEBUG nova.compute.manager [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.061 2 DEBUG oslo_concurrency.lockutils [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.062 2 DEBUG oslo_concurrency.lockutils [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.062 2 DEBUG oslo_concurrency.lockutils [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.063 2 DEBUG nova.compute.manager [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] No waiting events found dispatching network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.063 2 WARNING nova.compute.manager [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received unexpected event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 for instance with vm_state active and task_state migrating.#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.063 2 DEBUG nova.compute.manager [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.064 2 DEBUG oslo_concurrency.lockutils [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.064 2 DEBUG oslo_concurrency.lockutils [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.064 2 DEBUG oslo_concurrency.lockutils [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.064 2 DEBUG nova.compute.manager [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] No waiting events found dispatching network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 07:59:57 np0005465987 nova_compute[230713]: 2025-10-02 11:59:57.065 2 WARNING nova.compute.manager [req-cc4c749c-867e-4323-8704-dedf722a8436 req-4c9f360a-1907-40c1-9da2-ec8cece262cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Received unexpected event network-vif-plugged-b0281338-9ff1-492d-b3aa-aeee41f08075 for instance with vm_state active and task_state migrating.#033[00m
Oct  2 07:59:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:57.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:57 np0005465987 podman[235544]: 2025-10-02 11:59:57.86291527 +0000 UTC m=+0.070888220 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct  2 07:59:57 np0005465987 podman[235543]: 2025-10-02 11:59:57.890255391 +0000 UTC m=+0.097718376 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:59:57 np0005465987 podman[235542]: 2025-10-02 11:59:57.890435755 +0000 UTC m=+0.101423757 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:59:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Oct  2 07:59:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:58.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 07:59:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:59.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:00 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 08:00:00 np0005465987 nova_compute[230713]: 2025-10-02 12:00:00.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:00.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:01 np0005465987 nova_compute[230713]: 2025-10-02 12:00:01.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:01.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:00:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:02.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:03 np0005465987 nova_compute[230713]: 2025-10-02 12:00:03.536 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:03 np0005465987 nova_compute[230713]: 2025-10-02 12:00:03.537 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:03 np0005465987 nova_compute[230713]: 2025-10-02 12:00:03.537 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "530e7ace-2feb-4a9e-9430-5a1bfc678d22-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:03.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:03 np0005465987 nova_compute[230713]: 2025-10-02 12:00:03.982 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:03 np0005465987 nova_compute[230713]: 2025-10-02 12:00:03.983 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:03 np0005465987 nova_compute[230713]: 2025-10-02 12:00:03.983 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:03 np0005465987 nova_compute[230713]: 2025-10-02 12:00:03.984 2 DEBUG nova.compute.resource_tracker [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:00:03 np0005465987 nova_compute[230713]: 2025-10-02 12:00:03.984 2 DEBUG oslo_concurrency.processutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:00:04 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/134888817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:00:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:04.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:04 np0005465987 nova_compute[230713]: 2025-10-02 12:00:04.501 2 DEBUG oslo_concurrency.processutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:04 np0005465987 nova_compute[230713]: 2025-10-02 12:00:04.686 2 WARNING nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:00:04 np0005465987 nova_compute[230713]: 2025-10-02 12:00:04.687 2 DEBUG nova.compute.resource_tracker [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4962MB free_disk=20.88165283203125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": 
"0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:00:04 np0005465987 nova_compute[230713]: 2025-10-02 12:00:04.687 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:04 np0005465987 nova_compute[230713]: 2025-10-02 12:00:04.688 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:04 np0005465987 nova_compute[230713]: 2025-10-02 12:00:04.856 2 DEBUG nova.compute.resource_tracker [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Migration for instance 530e7ace-2feb-4a9e-9430-5a1bfc678d22 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:00:04 np0005465987 nova_compute[230713]: 2025-10-02 12:00:04.942 2 DEBUG nova.compute.resource_tracker [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Oct  2 08:00:04 np0005465987 nova_compute[230713]: 2025-10-02 12:00:04.974 2 DEBUG nova.compute.resource_tracker [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Migration 555eda39-03cf-4b31-acfe-68fce4316f8f is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:00:04 np0005465987 nova_compute[230713]: 2025-10-02 12:00:04.975 2 DEBUG nova.compute.resource_tracker [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:00:04 np0005465987 nova_compute[230713]: 2025-10-02 12:00:04.975 2 DEBUG nova.compute.resource_tracker [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:00:05 np0005465987 nova_compute[230713]: 2025-10-02 12:00:05.005 2 DEBUG oslo_concurrency.processutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:00:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1217306547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:00:05 np0005465987 nova_compute[230713]: 2025-10-02 12:00:05.433 2 DEBUG oslo_concurrency.processutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:05 np0005465987 nova_compute[230713]: 2025-10-02 12:00:05.440 2 DEBUG nova.compute.provider_tree [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:00:05 np0005465987 nova_compute[230713]: 2025-10-02 12:00:05.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:05 np0005465987 nova_compute[230713]: 2025-10-02 12:00:05.513 2 DEBUG nova.scheduler.client.report [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:00:05 np0005465987 nova_compute[230713]: 2025-10-02 12:00:05.667 2 DEBUG nova.compute.resource_tracker [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:00:05 np0005465987 nova_compute[230713]: 2025-10-02 12:00:05.667 2 DEBUG oslo_concurrency.lockutils [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:05 np0005465987 nova_compute[230713]: 2025-10-02 12:00:05.674 2 INFO nova.compute.manager [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Oct  2 08:00:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:05.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:05 np0005465987 nova_compute[230713]: 2025-10-02 12:00:05.858 2 INFO nova.scheduler.client.report [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Deleted allocation for migration 555eda39-03cf-4b31-acfe-68fce4316f8f#033[00m
Oct  2 08:00:05 np0005465987 nova_compute[230713]: 2025-10-02 12:00:05.858 2 DEBUG nova.virt.libvirt.driver [None req-28623aee-e339-4af0-a473-622f833a079e 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Oct  2 08:00:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:06.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:06 np0005465987 nova_compute[230713]: 2025-10-02 12:00:06.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:06 np0005465987 nova_compute[230713]: 2025-10-02 12:00:06.792 2 DEBUG nova.compute.manager [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Oct  2 08:00:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:00:06 np0005465987 nova_compute[230713]: 2025-10-02 12:00:06.922 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:06 np0005465987 nova_compute[230713]: 2025-10-02 12:00:06.923 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:06 np0005465987 nova_compute[230713]: 2025-10-02 12:00:06.971 2 DEBUG nova.objects.instance [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'pci_requests' on Instance uuid c5bc8428-581c-4ea6-85d8-6f153a1ee723 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.003 2 DEBUG nova.virt.hardware [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.004 2 INFO nova.compute.claims [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.004 2 DEBUG nova.objects.instance [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'resources' on Instance uuid c5bc8428-581c-4ea6-85d8-6f153a1ee723 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.025 2 DEBUG nova.objects.instance [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5bc8428-581c-4ea6-85d8-6f153a1ee723 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.039 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406392.0381217, 05ef51ee-c5d0-405e-a30a-9915a4117983 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.039 2 INFO nova.compute.manager [-] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.094 2 DEBUG nova.compute.manager [None req-4b93ff83-0383-45a7-855d-e86223c4460e - - - - - -] [instance: 05ef51ee-c5d0-405e-a30a-9915a4117983] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.119 2 INFO nova.compute.resource_tracker [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Updating resource usage from migration a7a9cc47-fbe3-422d-995e-09749ba0ce00#033[00m
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.119 2 DEBUG nova.compute.resource_tracker [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Starting to track incoming migration a7a9cc47-fbe3-422d-995e-09749ba0ce00 with flavor eb3a53f1-304b-4cb0-acc3-abffce0fb181 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.226 2 DEBUG oslo_concurrency.processutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:00:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4210241670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.646 2 DEBUG oslo_concurrency.processutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.652 2 DEBUG nova.compute.provider_tree [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.703 2 DEBUG nova.scheduler.client.report [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:00:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:07.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.810 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:07 np0005465987 nova_compute[230713]: 2025-10-02 12:00:07.811 2 INFO nova.compute.manager [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Migrating#033[00m
Oct  2 08:00:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:08.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:09 np0005465987 systemd[1]: Created slice User Slice of UID 42436.
Oct  2 08:00:09 np0005465987 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct  2 08:00:09 np0005465987 systemd-logind[794]: New session 53 of user nova.
Oct  2 08:00:09 np0005465987 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct  2 08:00:09 np0005465987 systemd[1]: Starting User Manager for UID 42436...
Oct  2 08:00:09 np0005465987 nova_compute[230713]: 2025-10-02 12:00:09.279 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406394.276765, 530e7ace-2feb-4a9e-9430-5a1bfc678d22 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:00:09 np0005465987 nova_compute[230713]: 2025-10-02 12:00:09.280 2 INFO nova.compute.manager [-] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:00:09 np0005465987 systemd[235678]: Queued start job for default target Main User Target.
Oct  2 08:00:09 np0005465987 systemd[235678]: Created slice User Application Slice.
Oct  2 08:00:09 np0005465987 systemd[235678]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:00:09 np0005465987 systemd[235678]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 08:00:09 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:00:09 np0005465987 systemd[235678]: Reached target Paths.
Oct  2 08:00:09 np0005465987 systemd[235678]: Reached target Timers.
Oct  2 08:00:09 np0005465987 systemd[235678]: Starting D-Bus User Message Bus Socket...
Oct  2 08:00:09 np0005465987 systemd[235678]: Starting Create User's Volatile Files and Directories...
Oct  2 08:00:09 np0005465987 systemd[235678]: Listening on D-Bus User Message Bus Socket.
Oct  2 08:00:09 np0005465987 systemd[235678]: Reached target Sockets.
Oct  2 08:00:09 np0005465987 nova_compute[230713]: 2025-10-02 12:00:09.333 2 DEBUG nova.compute.manager [None req-4d1c2e62-5be1-40d8-9c9a-671e378c295a - - - - - -] [instance: 530e7ace-2feb-4a9e-9430-5a1bfc678d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:00:09 np0005465987 systemd[235678]: Finished Create User's Volatile Files and Directories.
Oct  2 08:00:09 np0005465987 systemd[235678]: Reached target Basic System.
Oct  2 08:00:09 np0005465987 systemd[235678]: Reached target Main User Target.
Oct  2 08:00:09 np0005465987 systemd[235678]: Startup finished in 182ms.
Oct  2 08:00:09 np0005465987 systemd[1]: Started User Manager for UID 42436.
Oct  2 08:00:09 np0005465987 systemd[1]: Started Session 53 of User nova.
Oct  2 08:00:09 np0005465987 systemd[1]: session-53.scope: Deactivated successfully.
Oct  2 08:00:09 np0005465987 systemd-logind[794]: Session 53 logged out. Waiting for processes to exit.
Oct  2 08:00:09 np0005465987 systemd-logind[794]: Removed session 53.
Oct  2 08:00:09 np0005465987 systemd-logind[794]: New session 55 of user nova.
Oct  2 08:00:09 np0005465987 systemd[1]: Started Session 55 of User nova.
Oct  2 08:00:09 np0005465987 systemd[1]: session-55.scope: Deactivated successfully.
Oct  2 08:00:09 np0005465987 systemd-logind[794]: Session 55 logged out. Waiting for processes to exit.
Oct  2 08:00:09 np0005465987 systemd-logind[794]: Removed session 55.
Oct  2 08:00:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:09.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:10 np0005465987 nova_compute[230713]: 2025-10-02 12:00:10.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:10.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:11 np0005465987 nova_compute[230713]: 2025-10-02 12:00:11.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:11.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:00:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:12.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:13.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:14.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:15.210 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:00:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:15.212 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:00:15 np0005465987 nova_compute[230713]: 2025-10-02 12:00:15.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:15 np0005465987 nova_compute[230713]: 2025-10-02 12:00:15.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:15.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:00:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:00:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:00:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:16.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:16 np0005465987 nova_compute[230713]: 2025-10-02 12:00:16.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:00:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:17.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:18.214 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:00:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:18.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:18 np0005465987 podman[235832]: 2025-10-02 12:00:18.870735176 +0000 UTC m=+0.069091850 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 08:00:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:19.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:19 np0005465987 systemd[1]: Stopping User Manager for UID 42436...
Oct  2 08:00:19 np0005465987 systemd[235678]: Activating special unit Exit the Session...
Oct  2 08:00:19 np0005465987 systemd[235678]: Stopped target Main User Target.
Oct  2 08:00:19 np0005465987 systemd[235678]: Stopped target Basic System.
Oct  2 08:00:19 np0005465987 systemd[235678]: Stopped target Paths.
Oct  2 08:00:19 np0005465987 systemd[235678]: Stopped target Sockets.
Oct  2 08:00:19 np0005465987 systemd[235678]: Stopped target Timers.
Oct  2 08:00:19 np0005465987 systemd[235678]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:00:19 np0005465987 systemd[235678]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 08:00:19 np0005465987 systemd[235678]: Closed D-Bus User Message Bus Socket.
Oct  2 08:00:19 np0005465987 systemd[235678]: Stopped Create User's Volatile Files and Directories.
Oct  2 08:00:19 np0005465987 systemd[235678]: Removed slice User Application Slice.
Oct  2 08:00:19 np0005465987 systemd[235678]: Reached target Shutdown.
Oct  2 08:00:19 np0005465987 systemd[235678]: Finished Exit the Session.
Oct  2 08:00:19 np0005465987 systemd[235678]: Reached target Exit the Session.
Oct  2 08:00:19 np0005465987 systemd[1]: user@42436.service: Deactivated successfully.
Oct  2 08:00:19 np0005465987 systemd[1]: Stopped User Manager for UID 42436.
Oct  2 08:00:19 np0005465987 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct  2 08:00:19 np0005465987 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct  2 08:00:19 np0005465987 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct  2 08:00:19 np0005465987 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct  2 08:00:19 np0005465987 systemd[1]: Removed slice User Slice of UID 42436.
Oct  2 08:00:20 np0005465987 nova_compute[230713]: 2025-10-02 12:00:20.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:20.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:21 np0005465987 nova_compute[230713]: 2025-10-02 12:00:21.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:21.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:00:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:22.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:23 np0005465987 nova_compute[230713]: 2025-10-02 12:00:23.045 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:00:23 np0005465987 nova_compute[230713]: 2025-10-02 12:00:23.046 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquired lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:00:23 np0005465987 nova_compute[230713]: 2025-10-02 12:00:23.046 2 DEBUG nova.network.neutron [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:00:23 np0005465987 nova_compute[230713]: 2025-10-02 12:00:23.365 2 DEBUG nova.network.neutron [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:00:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:23.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:23 np0005465987 nova_compute[230713]: 2025-10-02 12:00:23.839 2 DEBUG nova.network.neutron [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:00:23 np0005465987 nova_compute[230713]: 2025-10-02 12:00:23.856 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Releasing lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:00:23 np0005465987 nova_compute[230713]: 2025-10-02 12:00:23.980 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct  2 08:00:23 np0005465987 nova_compute[230713]: 2025-10-02 12:00:23.983 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:00:23 np0005465987 nova_compute[230713]: 2025-10-02 12:00:23.984 2 INFO nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Creating image(s)#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.033 2 DEBUG nova.storage.rbd_utils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] creating snapshot(nova-resize) on rbd image(c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:00:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Oct  2 08:00:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:24.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.771 2 DEBUG nova.objects.instance [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c5bc8428-581c-4ea6-85d8-6f153a1ee723 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.877 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.878 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Ensure instance console log exists: /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.878 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.878 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.879 2 DEBUG oslo_concurrency.lockutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.881 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.885 2 WARNING nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.890 2 DEBUG nova.virt.libvirt.host [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.891 2 DEBUG nova.virt.libvirt.host [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.897 2 DEBUG nova.virt.libvirt.host [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.897 2 DEBUG nova.virt.libvirt.host [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.898 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.898 2 DEBUG nova.virt.hardware [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb3a53f1-304b-4cb0-acc3-abffce0fb181',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.899 2 DEBUG nova.virt.hardware [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.899 2 DEBUG nova.virt.hardware [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.899 2 DEBUG nova.virt.hardware [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.899 2 DEBUG nova.virt.hardware [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.899 2 DEBUG nova.virt.hardware [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.900 2 DEBUG nova.virt.hardware [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.900 2 DEBUG nova.virt.hardware [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.900 2 DEBUG nova.virt.hardware [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.900 2 DEBUG nova.virt.hardware [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.900 2 DEBUG nova.virt.hardware [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.901 2 DEBUG nova.objects.instance [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c5bc8428-581c-4ea6-85d8-6f153a1ee723 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:00:24 np0005465987 nova_compute[230713]: 2025-10-02 12:00:24.916 2 DEBUG oslo_concurrency.processutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:00:25 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1469888007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:00:25 np0005465987 nova_compute[230713]: 2025-10-02 12:00:25.302 2 DEBUG oslo_concurrency.processutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:25 np0005465987 nova_compute[230713]: 2025-10-02 12:00:25.337 2 DEBUG oslo_concurrency.processutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:25 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:00:25 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:00:25 np0005465987 nova_compute[230713]: 2025-10-02 12:00:25.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:25.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:00:25 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4219610136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:00:25 np0005465987 nova_compute[230713]: 2025-10-02 12:00:25.935 2 DEBUG oslo_concurrency.processutils [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:25 np0005465987 nova_compute[230713]: 2025-10-02 12:00:25.938 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  <uuid>c5bc8428-581c-4ea6-85d8-6f153a1ee723</uuid>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  <name>instance-00000008</name>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  <memory>196608</memory>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <nova:name>tempest-MigrationsAdminTest-server-1495132323</nova:name>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:00:24</nova:creationTime>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.micro">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <nova:memory>192</nova:memory>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <nova:user uuid="1a06819bf8cc4ff7bccbbb2616ff2d21">tempest-MigrationsAdminTest-819597356-project-member</nova:user>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <nova:project uuid="f1ce36070fb047479c3a083f36733f63">tempest-MigrationsAdminTest-819597356</nova:project>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <entry name="serial">c5bc8428-581c-4ea6-85d8-6f153a1ee723</entry>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <entry name="uuid">c5bc8428-581c-4ea6-85d8-6f153a1ee723</entry>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/c5bc8428-581c-4ea6-85d8-6f153a1ee723_disk.config">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723/console.log" append="off"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:00:25 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:00:25 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:00:25 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:00:25 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:00:25 np0005465987 nova_compute[230713]: 2025-10-02 12:00:25.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:00:25 np0005465987 nova_compute[230713]: 2025-10-02 12:00:25.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:00:25 np0005465987 nova_compute[230713]: 2025-10-02 12:00:25.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.003 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.004 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c5bc8428-581c-4ea6-85d8-6f153a1ee723 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.217 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.221 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.221 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.222 2 INFO nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Using config drive#033[00m
Oct  2 08:00:26 np0005465987 systemd-machined[188335]: New machine qemu-4-instance-00000008.
Oct  2 08:00:26 np0005465987 systemd[1]: Started Virtual Machine qemu-4-instance-00000008.
Oct  2 08:00:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:26.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.704 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.721 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.721 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.721 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:00:26 np0005465987 nova_compute[230713]: 2025-10-02 12:00:26.722 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:00:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.395 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406427.3943036, c5bc8428-581c-4ea6-85d8-6f153a1ee723 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.396 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.398 2 DEBUG nova.compute.manager [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.402 2 INFO nova.virt.libvirt.driver [-] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance running successfully.#033[00m
Oct  2 08:00:27 np0005465987 virtqemud[230442]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.405 2 DEBUG nova.virt.libvirt.guest [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.405 2 DEBUG nova.virt.libvirt.driver [None req-0d3d58cd-a7f5-4484-aa8f-69724d57ff2f 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.479 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.482 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.550 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.550 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406427.3994603, c5bc8428-581c-4ea6-85d8-6f153a1ee723 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.551 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] VM Started (Lifecycle Event)#033[00m
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.590 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.593 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:00:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:27.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:27.927 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:27.927 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:27.927 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:00:27 np0005465987 nova_compute[230713]: 2025-10-02 12:00:27.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.004 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.004 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.005 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.006 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.374 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.375 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.398 2 DEBUG nova.compute.manager [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:00:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:00:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/189751903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.433 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.502 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.503 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.510 2 DEBUG nova.virt.hardware [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.510 2 INFO nova.compute.claims [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:00:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:28.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.534 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.534 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:00:28 np0005465987 podman[236135]: 2025-10-02 12:00:28.546505645 +0000 UTC m=+0.068583166 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:00:28 np0005465987 podman[236136]: 2025-10-02 12:00:28.576941551 +0000 UTC m=+0.083394893 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct  2 08:00:28 np0005465987 podman[236134]: 2025-10-02 12:00:28.589834495 +0000 UTC m=+0.110363063 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.673 2 DEBUG oslo_concurrency.processutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.741 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.742 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4900MB free_disk=20.851749420166016GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:00:28 np0005465987 nova_compute[230713]: 2025-10-02 12:00:28.743 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:00:29 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1987659302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.104 2 DEBUG oslo_concurrency.processutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.110 2 DEBUG nova.compute.provider_tree [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.131 2 DEBUG nova.scheduler.client.report [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
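[editor's note] The inventory dict logged above is what this node reports to placement, and one common way to read it is to compute the effective schedulable capacity per resource class as (total - reserved) * allocation_ratio. A minimal sketch using the exact values from the report.py line above (the int() truncation is an assumption for illustration, not placement's literal SQL):

```python
def effective_capacity(inventory: dict) -> dict:
    """Schedulable capacity per resource class, assuming the usual reading
    of placement inventory: (total - reserved) * allocation_ratio."""
    return {
        rc: int((v["total"] - v["reserved"]) * v["allocation_ratio"])
        for rc, v in inventory.items()
    }

# Values copied from the set_inventory_for_provider line in this log.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
}
```

So despite 8 physical vCPUs, the 4.0 allocation ratio lets placement schedule up to 32 vCPUs on this host, while the 0.9 disk ratio undercommits the 19 GB of usable disk.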
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.166 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.167 2 DEBUG nova.compute.manager [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.171 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.233 2 DEBUG nova.compute.manager [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.234 2 DEBUG nova.network.neutron [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.247 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Applying migration context for instance c5bc8428-581c-4ea6-85d8-6f153a1ee723 as it has an incoming, in-progress migration a7a9cc47-fbe3-422d-995e-09749ba0ce00. Migration status is finished _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.248 2 INFO nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Updating resource usage from migration a7a9cc47-fbe3-422d-995e-09749ba0ce00#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.264 2 INFO nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.295 2 DEBUG nova.compute.manager [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.306 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance c5bc8428-581c-4ea6-85d8-6f153a1ee723 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.306 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance cd9deba5-2505-4edd-ac7c-483615217473 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.307 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.307 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.343 2 INFO nova.virt.block_device [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Booting with volume 134b86f3-5f3c-45b7-9cd8-0ead1590c407 at /dev/vda#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.393 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.612 2 DEBUG nova.policy [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b54b5e15e4c94d1f95a272981e9d9a89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd977ad6a90874946819537242925a8f0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.617 2 DEBUG os_brick.utils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.618 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.629 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.630 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[eb207d30-7b8d-41cb-b2cf-40b6e99dc04f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.631 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.643 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.643 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[bfea3635-01a3-46d4-b056-29f0f54c597f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.645 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.655 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.655 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[823be9f8-f036-4256-a1be-e3d187a2e45f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.657 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca52a86-45c6-4312-9da0-055b86c90cd5]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.657 2 DEBUG oslo_concurrency.processutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.688 2 DEBUG oslo_concurrency.processutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
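[editor's note] Each external command in this log appears as a "Running cmd (subprocess)" line paired with a 'CMD "..." returned: N in Ts' line, which is how oslo.concurrency's processutils reports subprocess timing. A simplified stand-in for that pattern — `execute` here is a hypothetical helper mimicking the log format, not the real processutils API:

```python
import subprocess
import time

def execute(*cmd):
    """Run a command, printing 'Running cmd' / 'returned: N in Ts' lines
    in the style seen in these logs; returns (rc, stdout, elapsed)."""
    joined = " ".join(cmd)
    print(f"Running cmd (subprocess): {joined}")
    start = time.monotonic()
    proc = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.monotonic() - start
    print(f'CMD "{joined}" returned: {proc.returncode} in {elapsed:.3f}s')
    return proc.returncode, proc.stdout, elapsed
```

Comparing the reported durations is useful here: the `ceph df` calls take ~0.4s each while `multipathd show status` returns in ~0.01s, so repeated Ceph polling dominates the resource-tracker update time in this excerpt.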
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.691 2 DEBUG os_brick.initiator.connectors.lightos [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.691 2 DEBUG os_brick.initiator.connectors.lightos [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.692 2 DEBUG os_brick.initiator.connectors.lightos [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.692 2 DEBUG os_brick.utils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] <== get_connector_properties: return (74ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.693 2 DEBUG nova.virt.block_device [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating existing volume attachment record: 829db4f7-64a6-478e-8284-7771fae0f3c1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:00:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:29.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:00:29 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1036707683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.854 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.860 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:00:29 np0005465987 nova_compute[230713]: 2025-10-02 12:00:29.933 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:00:30 np0005465987 nova_compute[230713]: 2025-10-02 12:00:30.000 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:00:30 np0005465987 nova_compute[230713]: 2025-10-02 12:00:30.001 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:30 np0005465987 nova_compute[230713]: 2025-10-02 12:00:30.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:30.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:30 np0005465987 nova_compute[230713]: 2025-10-02 12:00:30.995 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:00:30 np0005465987 nova_compute[230713]: 2025-10-02 12:00:30.998 2 DEBUG nova.network.neutron [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Successfully created port: cc377346-4a61-4e38-b682-f2111cc47ffd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:00:31 np0005465987 nova_compute[230713]: 2025-10-02 12:00:31.001 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:00:31 np0005465987 nova_compute[230713]: 2025-10-02 12:00:31.337 2 DEBUG nova.compute.manager [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:00:31 np0005465987 nova_compute[230713]: 2025-10-02 12:00:31.340 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:00:31 np0005465987 nova_compute[230713]: 2025-10-02 12:00:31.341 2 INFO nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Creating image(s)#033[00m
Oct  2 08:00:31 np0005465987 nova_compute[230713]: 2025-10-02 12:00:31.342 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:00:31 np0005465987 nova_compute[230713]: 2025-10-02 12:00:31.343 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Ensure instance console log exists: /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:00:31 np0005465987 nova_compute[230713]: 2025-10-02 12:00:31.344 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:31 np0005465987 nova_compute[230713]: 2025-10-02 12:00:31.345 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:31 np0005465987 nova_compute[230713]: 2025-10-02 12:00:31.346 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:31 np0005465987 nova_compute[230713]: 2025-10-02 12:00:31.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:31.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:00:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Oct  2 08:00:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:32.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:32 np0005465987 nova_compute[230713]: 2025-10-02 12:00:32.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:00:32 np0005465987 nova_compute[230713]: 2025-10-02 12:00:32.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:00:32 np0005465987 nova_compute[230713]: 2025-10-02 12:00:32.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:00:33 np0005465987 nova_compute[230713]: 2025-10-02 12:00:33.541 2 DEBUG nova.network.neutron [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Successfully updated port: cc377346-4a61-4e38-b682-f2111cc47ffd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:00:33 np0005465987 nova_compute[230713]: 2025-10-02 12:00:33.573 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:00:33 np0005465987 nova_compute[230713]: 2025-10-02 12:00:33.573 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquired lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:00:33 np0005465987 nova_compute[230713]: 2025-10-02 12:00:33.574 2 DEBUG nova.network.neutron [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:00:33 np0005465987 nova_compute[230713]: 2025-10-02 12:00:33.697 2 DEBUG nova.compute.manager [req-0ec7e74c-6e2f-4293-b0cd-4bb605a52c6d req-5a846c04-5945-4133-bf6b-d8def45a2951 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-changed-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:00:33 np0005465987 nova_compute[230713]: 2025-10-02 12:00:33.698 2 DEBUG nova.compute.manager [req-0ec7e74c-6e2f-4293-b0cd-4bb605a52c6d req-5a846c04-5945-4133-bf6b-d8def45a2951 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Refreshing instance network info cache due to event network-changed-cc377346-4a61-4e38-b682-f2111cc47ffd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:00:33 np0005465987 nova_compute[230713]: 2025-10-02 12:00:33.698 2 DEBUG oslo_concurrency.lockutils [req-0ec7e74c-6e2f-4293-b0cd-4bb605a52c6d req-5a846c04-5945-4133-bf6b-d8def45a2951 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:00:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:33.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:33 np0005465987 nova_compute[230713]: 2025-10-02 12:00:33.860 2 DEBUG nova.network.neutron [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:00:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:34.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.313 2 DEBUG nova.network.neutron [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating instance_info_cache with network_info: [{"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.360 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Releasing lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.361 2 DEBUG nova.compute.manager [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Instance network_info: |[{"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.362 2 DEBUG oslo_concurrency.lockutils [req-0ec7e74c-6e2f-4293-b0cd-4bb605a52c6d req-5a846c04-5945-4133-bf6b-d8def45a2951 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.362 2 DEBUG nova.network.neutron [req-0ec7e74c-6e2f-4293-b0cd-4bb605a52c6d req-5a846c04-5945-4133-bf6b-d8def45a2951 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Refreshing network info cache for port cc377346-4a61-4e38-b682-f2111cc47ffd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.367 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Start _get_guest_xml network_info=[{"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '829db4f7-64a6-478e-8284-7771fae0f3c1', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-134b86f3-5f3c-45b7-9cd8-0ead1590c407', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '134b86f3-5f3c-45b7-9cd8-0ead1590c407', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'cd9deba5-2505-4edd-ac7c-483615217473', 'attached_at': '', 'detached_at': '', 'volume_id': '134b86f3-5f3c-45b7-9cd8-0ead1590c407', 'serial': '134b86f3-5f3c-45b7-9cd8-0ead1590c407'}, 'guest_format': None, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.372 2 WARNING nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.388 2 DEBUG nova.virt.libvirt.host [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.390 2 DEBUG nova.virt.libvirt.host [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.393 2 DEBUG nova.virt.libvirt.host [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.394 2 DEBUG nova.virt.libvirt.host [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.396 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.396 2 DEBUG nova.virt.hardware [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.397 2 DEBUG nova.virt.hardware [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.398 2 DEBUG nova.virt.hardware [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.398 2 DEBUG nova.virt.hardware [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.399 2 DEBUG nova.virt.hardware [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.399 2 DEBUG nova.virt.hardware [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.400 2 DEBUG nova.virt.hardware [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.401 2 DEBUG nova.virt.hardware [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.401 2 DEBUG nova.virt.hardware [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.402 2 DEBUG nova.virt.hardware [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.403 2 DEBUG nova.virt.hardware [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.444 2 DEBUG nova.storage.rbd_utils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] rbd image cd9deba5-2505-4edd-ac7c-483615217473_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.450 2 DEBUG oslo_concurrency.processutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:35.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:00:35 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3141163756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.896 2 DEBUG oslo_concurrency.processutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.958 2 DEBUG nova.virt.libvirt.vif [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:00:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1192256805',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1192256805',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-rhco07g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:00:29Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=cd9deba5-2505-4edd-ac7c-483615217473,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.960 2 DEBUG nova.network.os_vif_util [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Converting VIF {"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.962 2 DEBUG nova.network.os_vif_util [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:00:35 np0005465987 nova_compute[230713]: 2025-10-02 12:00:35.966 2 DEBUG nova.objects.instance [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid cd9deba5-2505-4edd-ac7c-483615217473 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.076 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  <uuid>cd9deba5-2505-4edd-ac7c-483615217473</uuid>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  <name>instance-00000009</name>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-1192256805</nova:name>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:00:35</nova:creationTime>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <nova:user uuid="b54b5e15e4c94d1f95a272981e9d9a89">tempest-LiveAutoBlockMigrationV225Test-959655280-project-member</nova:user>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <nova:project uuid="d977ad6a90874946819537242925a8f0">tempest-LiveAutoBlockMigrationV225Test-959655280</nova:project>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <nova:port uuid="cc377346-4a61-4e38-b682-f2111cc47ffd">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <entry name="serial">cd9deba5-2505-4edd-ac7c-483615217473</entry>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <entry name="uuid">cd9deba5-2505-4edd-ac7c-483615217473</entry>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/cd9deba5-2505-4edd-ac7c-483615217473_disk.config">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="volumes/volume-134b86f3-5f3c-45b7-9cd8-0ead1590c407">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <serial>134b86f3-5f3c-45b7-9cd8-0ead1590c407</serial>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:1b:73:25"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <target dev="tapcc377346-4a"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473/console.log" append="off"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:00:36 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:00:36 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:00:36 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:00:36 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.079 2 DEBUG nova.compute.manager [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Preparing to wait for external event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.080 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.081 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.081 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.083 2 DEBUG nova.virt.libvirt.vif [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:00:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1192256805',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1192256805',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-rhco07g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None
,updated_at=2025-10-02T12:00:29Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=cd9deba5-2505-4edd-ac7c-483615217473,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.083 2 DEBUG nova.network.os_vif_util [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Converting VIF {"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.085 2 DEBUG nova.network.os_vif_util [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.085 2 DEBUG os_vif [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.087 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.088 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc377346-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.095 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc377346-4a, col_values=(('external_ids', {'iface-id': 'cc377346-4a61-4e38-b682-f2111cc47ffd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:73:25', 'vm-uuid': 'cd9deba5-2505-4edd-ac7c-483615217473'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:00:36 np0005465987 NetworkManager[44910]: <info>  [1759406436.1379] manager: (tapcc377346-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.146 2 INFO os_vif [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a')#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.214 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.214 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.215 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] No VIF found with MAC fa:16:3e:1b:73:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.215 2 INFO nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Using config drive#033[00m
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.248 2 DEBUG nova.storage.rbd_utils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] rbd image cd9deba5-2505-4edd-ac7c-483615217473_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:00:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:36.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:36 np0005465987 nova_compute[230713]: 2025-10-02 12:00:36.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.011 2 INFO nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Creating config drive at /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473/disk.config#033[00m
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.016 2 DEBUG oslo_concurrency.processutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpotfnaufa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.143 2 DEBUG oslo_concurrency.processutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpotfnaufa" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.184 2 DEBUG nova.storage.rbd_utils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] rbd image cd9deba5-2505-4edd-ac7c-483615217473_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.188 2 DEBUG oslo_concurrency.processutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473/disk.config cd9deba5-2505-4edd-ac7c-483615217473_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.393 2 DEBUG oslo_concurrency.processutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473/disk.config cd9deba5-2505-4edd-ac7c-483615217473_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.396 2 INFO nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Deleting local config drive /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473/disk.config because it was imported into RBD.#033[00m
Oct  2 08:00:37 np0005465987 kernel: tapcc377346-4a: entered promiscuous mode
Oct  2 08:00:37 np0005465987 NetworkManager[44910]: <info>  [1759406437.4637] manager: (tapcc377346-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:00:37Z|00062|binding|INFO|Claiming lport cc377346-4a61-4e38-b682-f2111cc47ffd for this chassis.
Oct  2 08:00:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:00:37Z|00063|binding|INFO|cc377346-4a61-4e38-b682-f2111cc47ffd: Claiming fa:16:3e:1b:73:25 10.100.0.4
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.489 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:73:25 10.100.0.4'], port_security=['fa:16:3e:1b:73:25 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'cd9deba5-2505-4edd-ac7c-483615217473', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2c64df1-281c-4dee-b0ae-55c6f99caa2e, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=cc377346-4a61-4e38-b682-f2111cc47ffd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.492 138325 INFO neutron.agent.ovn.metadata.agent [-] Port cc377346-4a61-4e38-b682-f2111cc47ffd in datapath 1e991676-99e8-43d9-8575-4a21f50b0ed5 bound to our chassis#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.494 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e991676-99e8-43d9-8575-4a21f50b0ed5#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.512 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4bc36e-1ac2-4192-84a2-641aca70044c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.513 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e991676-91 in ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.515 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e991676-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.515 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[aee0f812-e323-4ed8-ac1a-a498a49c1d48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.517 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7325e853-137c-4b03-a5a4-06b007f78772]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 systemd-udevd[236365]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:00:37 np0005465987 systemd-machined[188335]: New machine qemu-5-instance-00000009.
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.526 2 DEBUG nova.network.neutron [req-0ec7e74c-6e2f-4293-b0cd-4bb605a52c6d req-5a846c04-5945-4133-bf6b-d8def45a2951 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updated VIF entry in instance network info cache for port cc377346-4a61-4e38-b682-f2111cc47ffd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.527 2 DEBUG nova.network.neutron [req-0ec7e74c-6e2f-4293-b0cd-4bb605a52c6d req-5a846c04-5945-4133-bf6b-d8def45a2951 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating instance_info_cache with network_info: [{"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.530 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c7f8ba-7a11-454d-8e31-4a39a0794b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 systemd[1]: Started Virtual Machine qemu-5-instance-00000009.
Oct  2 08:00:37 np0005465987 NetworkManager[44910]: <info>  [1759406437.5462] device (tapcc377346-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:00:37 np0005465987 NetworkManager[44910]: <info>  [1759406437.5484] device (tapcc377346-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.550 2 DEBUG oslo_concurrency.lockutils [req-0ec7e74c-6e2f-4293-b0cd-4bb605a52c6d req-5a846c04-5945-4133-bf6b-d8def45a2951 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.569 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c19b12a5-385a-4194-b8fd-fb688cbd4657]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:00:37Z|00064|binding|INFO|Setting lport cc377346-4a61-4e38-b682-f2111cc47ffd ovn-installed in OVS
Oct  2 08:00:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:00:37Z|00065|binding|INFO|Setting lport cc377346-4a61-4e38-b682-f2111cc47ffd up in Southbound
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.618 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a72c0d96-576f-4d22-824e-70f759cec4ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.627 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[144ec5a5-b906-4d8f-9a7c-5929d868f6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 NetworkManager[44910]: <info>  [1759406437.6292] manager: (tap1e991676-90): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.663 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[0a51a9af-2195-4d66-9c01-7cc74a5658b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.666 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6361093d-4efe-49e4-86c3-1282b0efcc30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 NetworkManager[44910]: <info>  [1759406437.6905] device (tap1e991676-90): carrier: link connected
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.696 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[021a1db5-dcf9-4e4e-aefe-41e614bc41f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.714 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3e4a97-6c65-4a57-b3eb-ebaadfd50c3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e991676-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:0e:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450626, 'reachable_time': 44079, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236397, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.735 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1e4a776d-450b-4a98-bf25-804f9e7f00a5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:ef6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 450626, 'tstamp': 450626}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236398, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:37.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.759 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[561dda85-39d3-48ed-b1b4-143ccfe68956]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e991676-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:0e:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450626, 'reachable_time': 44079, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 236399, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.796 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[83de2bd2-53b2-42f9-b4e4-3ae70e627598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.851 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb9ceb7-d89a-46b2-a241-41561b90c7d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.854 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e991676-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.854 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.855 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e991676-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:00:37 np0005465987 kernel: tap1e991676-90: entered promiscuous mode
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:37 np0005465987 NetworkManager[44910]: <info>  [1759406437.8573] manager: (tap1e991676-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.860 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e991676-90, col_values=(('external_ids', {'iface-id': 'a0e52ac7-beb4-4d4b-863a-9b95be4c9e74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:00:37Z|00066|binding|INFO|Releasing lport a0e52ac7-beb4-4d4b-863a-9b95be4c9e74 from this chassis (sb_readonly=0)
Oct  2 08:00:37 np0005465987 nova_compute[230713]: 2025-10-02 12:00:37.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.884 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e991676-99e8-43d9-8575-4a21f50b0ed5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e991676-99e8-43d9-8575-4a21f50b0ed5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.885 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5e449652-3fbc-4a85-88ab-8a5815ec9ff2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.886 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-1e991676-99e8-43d9-8575-4a21f50b0ed5
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/1e991676-99e8-43d9-8575-4a21f50b0ed5.pid.haproxy
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 1e991676-99e8-43d9-8575-4a21f50b0ed5
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:00:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:37.887 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'env', 'PROCESS_TAG=haproxy-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e991676-99e8-43d9-8575-4a21f50b0ed5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:00:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.056 2 DEBUG nova.compute.manager [req-c3b9f725-25cb-433c-9815-9364221d7347 req-eb29596c-2da2-43c9-997b-a7e67fd5fb0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.056 2 DEBUG oslo_concurrency.lockutils [req-c3b9f725-25cb-433c-9815-9364221d7347 req-eb29596c-2da2-43c9-997b-a7e67fd5fb0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.056 2 DEBUG oslo_concurrency.lockutils [req-c3b9f725-25cb-433c-9815-9364221d7347 req-eb29596c-2da2-43c9-997b-a7e67fd5fb0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.056 2 DEBUG oslo_concurrency.lockutils [req-c3b9f725-25cb-433c-9815-9364221d7347 req-eb29596c-2da2-43c9-997b-a7e67fd5fb0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.057 2 DEBUG nova.compute.manager [req-c3b9f725-25cb-433c-9815-9364221d7347 req-eb29596c-2da2-43c9-997b-a7e67fd5fb0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Processing event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:00:38 np0005465987 podman[236473]: 2025-10-02 12:00:38.325308455 +0000 UTC m=+0.063288619 container create e45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 08:00:38 np0005465987 systemd[1]: Started libpod-conmon-e45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007.scope.
Oct  2 08:00:38 np0005465987 podman[236473]: 2025-10-02 12:00:38.289235674 +0000 UTC m=+0.027215878 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.385 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406438.3846753, cd9deba5-2505-4edd-ac7c-483615217473 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.386 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] VM Started (Lifecycle Event)#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.389 2 DEBUG nova.compute.manager [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.393 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:00:38 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.398 2 INFO nova.virt.libvirt.driver [-] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Instance spawned successfully.#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.399 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:00:38 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb79bf9e857125e101d9cc52234e9e3135638b4a7f81ee7bb54107220136be7e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.414 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.419 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:00:38 np0005465987 podman[236473]: 2025-10-02 12:00:38.43100815 +0000 UTC m=+0.168988374 container init e45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.442 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:00:38 np0005465987 podman[236473]: 2025-10-02 12:00:38.443194105 +0000 UTC m=+0.181174299 container start e45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.443 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.444 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.445 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.446 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.447 2 DEBUG nova.virt.libvirt.driver [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.457 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.457 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406438.3848293, cd9deba5-2505-4edd-ac7c-483615217473 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.458 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:00:38 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[236488]: [NOTICE]   (236492) : New worker (236494) forked
Oct  2 08:00:38 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[236488]: [NOTICE]   (236492) : Loading success.
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.493 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.498 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406438.3919103, cd9deba5-2505-4edd-ac7c-483615217473 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.499 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:00:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:38.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.561 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:00:38 np0005465987 nova_compute[230713]: 2025-10-02 12:00:38.565 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:00:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:39.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:40 np0005465987 nova_compute[230713]: 2025-10-02 12:00:40.351 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:00:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:40.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:40 np0005465987 nova_compute[230713]: 2025-10-02 12:00:40.786 2 INFO nova.compute.manager [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Took 9.45 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:00:40 np0005465987 nova_compute[230713]: 2025-10-02 12:00:40.786 2 DEBUG nova.compute.manager [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:00:40 np0005465987 nova_compute[230713]: 2025-10-02 12:00:40.873 2 DEBUG nova.compute.manager [req-b37d75a6-8e47-48d2-8bae-9bc3cbe4100f req-59e216ad-810f-4f85-8be7-99074efde22a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:00:40 np0005465987 nova_compute[230713]: 2025-10-02 12:00:40.873 2 DEBUG oslo_concurrency.lockutils [req-b37d75a6-8e47-48d2-8bae-9bc3cbe4100f req-59e216ad-810f-4f85-8be7-99074efde22a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:40 np0005465987 nova_compute[230713]: 2025-10-02 12:00:40.874 2 DEBUG oslo_concurrency.lockutils [req-b37d75a6-8e47-48d2-8bae-9bc3cbe4100f req-59e216ad-810f-4f85-8be7-99074efde22a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:40 np0005465987 nova_compute[230713]: 2025-10-02 12:00:40.874 2 DEBUG oslo_concurrency.lockutils [req-b37d75a6-8e47-48d2-8bae-9bc3cbe4100f req-59e216ad-810f-4f85-8be7-99074efde22a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:40 np0005465987 nova_compute[230713]: 2025-10-02 12:00:40.874 2 DEBUG nova.compute.manager [req-b37d75a6-8e47-48d2-8bae-9bc3cbe4100f req-59e216ad-810f-4f85-8be7-99074efde22a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:00:40 np0005465987 nova_compute[230713]: 2025-10-02 12:00:40.874 2 WARNING nova.compute.manager [req-b37d75a6-8e47-48d2-8bae-9bc3cbe4100f req-59e216ad-810f-4f85-8be7-99074efde22a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received unexpected event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:00:41 np0005465987 nova_compute[230713]: 2025-10-02 12:00:41.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:41 np0005465987 nova_compute[230713]: 2025-10-02 12:00:41.142 2 INFO nova.compute.manager [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Took 12.68 seconds to build instance.#033[00m
Oct  2 08:00:41 np0005465987 nova_compute[230713]: 2025-10-02 12:00:41.213 2 DEBUG oslo_concurrency.lockutils [None req-a5508bf6-1aa4-408f-b897-53c13342cd8b b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:41 np0005465987 nova_compute[230713]: 2025-10-02 12:00:41.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:41.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:00:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:42.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:43.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:44.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:45.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:46 np0005465987 nova_compute[230713]: 2025-10-02 12:00:46.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:46.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:46 np0005465987 nova_compute[230713]: 2025-10-02 12:00:46.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:00:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:47.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:48 np0005465987 nova_compute[230713]: 2025-10-02 12:00:48.114 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Check if temp file /var/lib/nova/instances/tmpxwrhq_3i exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Oct  2 08:00:48 np0005465987 nova_compute[230713]: 2025-10-02 12:00:48.115 2 DEBUG nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxwrhq_3i',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cd9deba5-2505-4edd-ac7c-483615217473',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Oct  2 08:00:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:48.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:49.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:49 np0005465987 podman[236503]: 2025-10-02 12:00:49.880538962 +0000 UTC m=+0.081864200 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:00:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:50.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:00:51Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:73:25 10.100.0.4
Oct  2 08:00:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:00:51Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:73:25 10.100.0.4
Oct  2 08:00:51 np0005465987 nova_compute[230713]: 2025-10-02 12:00:51.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:51 np0005465987 nova_compute[230713]: 2025-10-02 12:00:51.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:51.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:00:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:00:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:52.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:00:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:53.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:54.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:54 np0005465987 nova_compute[230713]: 2025-10-02 12:00:54.598 2 DEBUG nova.compute.manager [req-dbca0bf3-4d9b-4247-ba8b-0182c54c12f5 req-8aa258cc-0bd6-459d-9a8d-7be522ecc7f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:00:54 np0005465987 nova_compute[230713]: 2025-10-02 12:00:54.598 2 DEBUG oslo_concurrency.lockutils [req-dbca0bf3-4d9b-4247-ba8b-0182c54c12f5 req-8aa258cc-0bd6-459d-9a8d-7be522ecc7f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:54 np0005465987 nova_compute[230713]: 2025-10-02 12:00:54.599 2 DEBUG oslo_concurrency.lockutils [req-dbca0bf3-4d9b-4247-ba8b-0182c54c12f5 req-8aa258cc-0bd6-459d-9a8d-7be522ecc7f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:54 np0005465987 nova_compute[230713]: 2025-10-02 12:00:54.599 2 DEBUG oslo_concurrency.lockutils [req-dbca0bf3-4d9b-4247-ba8b-0182c54c12f5 req-8aa258cc-0bd6-459d-9a8d-7be522ecc7f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:54 np0005465987 nova_compute[230713]: 2025-10-02 12:00:54.599 2 DEBUG nova.compute.manager [req-dbca0bf3-4d9b-4247-ba8b-0182c54c12f5 req-8aa258cc-0bd6-459d-9a8d-7be522ecc7f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:00:54 np0005465987 nova_compute[230713]: 2025-10-02 12:00:54.599 2 DEBUG nova.compute.manager [req-dbca0bf3-4d9b-4247-ba8b-0182c54c12f5 req-8aa258cc-0bd6-459d-9a8d-7be522ecc7f3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.158 2 INFO nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Took 5.99 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.158 2 DEBUG nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.195 2 DEBUG nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxwrhq_3i',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cd9deba5-2505-4edd-ac7c-483615217473',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(1eb19c1d-ea19-41e7-a75d-0bcff027382d),old_vol_attachment_ids={134b86f3-5f3c-45b7-9cd8-0ead1590c407='829db4f7-64a6-478e-8284-7771fae0f3c1'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.197 2 DEBUG nova.objects.instance [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lazy-loading 'migration_context' on Instance uuid cd9deba5-2505-4edd-ac7c-483615217473 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.198 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.200 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.200 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.214 2 DEBUG nova.virt.libvirt.migration [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Find same serial number: pos=1, serial=134b86f3-5f3c-45b7-9cd8-0ead1590c407 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.215 2 DEBUG nova.virt.libvirt.vif [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:00:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1192256805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1192256805',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:00:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-rhco07g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:00:41Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=cd9deba5-2505-4edd-ac7c-483615217473,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.215 2 DEBUG nova.network.os_vif_util [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converting VIF {"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.216 2 DEBUG nova.network.os_vif_util [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.216 2 DEBUG nova.virt.libvirt.migration [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating guest XML with vif config: <interface type="ethernet">
Oct  2 08:00:55 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:1b:73:25"/>
Oct  2 08:00:55 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:00:55 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:00:55 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:00:55 np0005465987 nova_compute[230713]:  <target dev="tapcc377346-4a"/>
Oct  2 08:00:55 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:00:55 np0005465987 nova_compute[230713]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.217 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.701 2 DEBUG nova.virt.libvirt.migration [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.702 2 INFO nova.virt.libvirt.migration [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Oct  2 08:00:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:55.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:55 np0005465987 nova_compute[230713]: 2025-10-02 12:00:55.955 2 INFO nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.458 2 DEBUG nova.virt.libvirt.migration [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.459 2 DEBUG nova.virt.libvirt.migration [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 08:00:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:56.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.672 2 DEBUG nova.compute.manager [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.672 2 DEBUG oslo_concurrency.lockutils [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.672 2 DEBUG oslo_concurrency.lockutils [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.673 2 DEBUG oslo_concurrency.lockutils [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.673 2 DEBUG nova.compute.manager [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.673 2 WARNING nova.compute.manager [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received unexpected event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.673 2 DEBUG nova.compute.manager [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-changed-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.673 2 DEBUG nova.compute.manager [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Refreshing instance network info cache due to event network-changed-cc377346-4a61-4e38-b682-f2111cc47ffd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.674 2 DEBUG oslo_concurrency.lockutils [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.674 2 DEBUG oslo_concurrency.lockutils [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.674 2 DEBUG nova.network.neutron [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Refreshing network info cache for port cc377346-4a61-4e38-b682-f2111cc47ffd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.962 2 DEBUG nova.virt.libvirt.migration [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:00:56 np0005465987 nova_compute[230713]: 2025-10-02 12:00:56.963 2 DEBUG nova.virt.libvirt.migration [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.008 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406457.0064657, cd9deba5-2505-4edd-ac7c-483615217473 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.008 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.035 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.038 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.060 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Oct  2 08:00:57 np0005465987 kernel: tapcc377346-4a (unregistering): left promiscuous mode
Oct  2 08:00:57 np0005465987 NetworkManager[44910]: <info>  [1759406457.1862] device (tapcc377346-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:00:57 np0005465987 ovn_controller[129172]: 2025-10-02T12:00:57Z|00067|binding|INFO|Releasing lport cc377346-4a61-4e38-b682-f2111cc47ffd from this chassis (sb_readonly=0)
Oct  2 08:00:57 np0005465987 ovn_controller[129172]: 2025-10-02T12:00:57Z|00068|binding|INFO|Setting lport cc377346-4a61-4e38-b682-f2111cc47ffd down in Southbound
Oct  2 08:00:57 np0005465987 ovn_controller[129172]: 2025-10-02T12:00:57Z|00069|binding|INFO|Removing iface tapcc377346-4a ovn-installed in OVS
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.246 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:73:25 10.100.0.4'], port_security=['fa:16:3e:1b:73:25 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '6718a9ec-e13c-42f0-978a-6c44c48d0d54'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'cd9deba5-2505-4edd-ac7c-483615217473', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '8', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2c64df1-281c-4dee-b0ae-55c6f99caa2e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=cc377346-4a61-4e38-b682-f2111cc47ffd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.250 138325 INFO neutron.agent.ovn.metadata.agent [-] Port cc377346-4a61-4e38-b682-f2111cc47ffd in datapath 1e991676-99e8-43d9-8575-4a21f50b0ed5 unbound from our chassis#033[00m
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.254 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e991676-99e8-43d9-8575-4a21f50b0ed5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.257 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cc52b746-646b-4433-868d-1436e10b0948]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.258 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 namespace which is not needed anymore#033[00m
Oct  2 08:00:57 np0005465987 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct  2 08:00:57 np0005465987 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Consumed 13.674s CPU time.
Oct  2 08:00:57 np0005465987 systemd-machined[188335]: Machine qemu-5-instance-00000009 terminated.
Oct  2 08:00:57 np0005465987 virtqemud[230442]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volume-134b86f3-5f3c-45b7-9cd8-0ead1590c407: No such file or directory
Oct  2 08:00:57 np0005465987 virtqemud[230442]: Unable to get XATTR trusted.libvirt.security.ref_dac on volume-134b86f3-5f3c-45b7-9cd8-0ead1590c407: No such file or directory
Oct  2 08:00:57 np0005465987 NetworkManager[44910]: <info>  [1759406457.3370] manager: (tapcc377346-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.356 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.356 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.357 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Oct  2 08:00:57 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[236488]: [NOTICE]   (236492) : haproxy version is 2.8.14-c23fe91
Oct  2 08:00:57 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[236488]: [NOTICE]   (236492) : path to executable is /usr/sbin/haproxy
Oct  2 08:00:57 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[236488]: [WARNING]  (236492) : Exiting Master process...
Oct  2 08:00:57 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[236488]: [WARNING]  (236492) : Exiting Master process...
Oct  2 08:00:57 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[236488]: [ALERT]    (236492) : Current worker (236494) exited with code 143 (Terminated)
Oct  2 08:00:57 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[236488]: [WARNING]  (236492) : All workers exited. Exiting... (0)
Oct  2 08:00:57 np0005465987 systemd[1]: libpod-e45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007.scope: Deactivated successfully.
Oct  2 08:00:57 np0005465987 podman[236558]: 2025-10-02 12:00:57.430333312 +0000 UTC m=+0.060854014 container died e45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:00:57 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007-userdata-shm.mount: Deactivated successfully.
Oct  2 08:00:57 np0005465987 systemd[1]: var-lib-containers-storage-overlay-bb79bf9e857125e101d9cc52234e9e3135638b4a7f81ee7bb54107220136be7e-merged.mount: Deactivated successfully.
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.466 2 DEBUG nova.virt.libvirt.guest [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'cd9deba5-2505-4edd-ac7c-483615217473' (instance-00000009) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.466 2 INFO nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migration operation has completed#033[00m
Oct  2 08:00:57 np0005465987 podman[236558]: 2025-10-02 12:00:57.466955298 +0000 UTC m=+0.097475990 container cleanup e45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.466 2 INFO nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] _post_live_migration() is started..#033[00m
Oct  2 08:00:57 np0005465987 systemd[1]: libpod-conmon-e45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007.scope: Deactivated successfully.
Oct  2 08:00:57 np0005465987 podman[236584]: 2025-10-02 12:00:57.535032529 +0000 UTC m=+0.046011946 container remove e45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.541 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[73f301af-a9ca-489a-bf4e-f2089704db31]: (4, ('Thu Oct  2 12:00:57 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 (e45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007)\ne45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007\nThu Oct  2 12:00:57 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 (e45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007)\ne45f4106958436773a05dcdc384a74a7615017c12ffa82c9d806321783a9c007\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.543 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a6764884-2dca-404c-bca8-d78c0ba2fa37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.544 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e991676-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:57 np0005465987 kernel: tap1e991676-90: left promiscuous mode
Oct  2 08:00:57 np0005465987 nova_compute[230713]: 2025-10-02 12:00:57.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.575 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[105054c4-46ee-4cb5-9051-1a336cecec4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.609 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3732f061-83d5-467d-ad02-d9c39155acdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.610 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d19722-b39f-4ab5-b4e9-14b67891cc07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.623 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f056951d-51c2-47c3-a2b8-144e805fcf14]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450618, 'reachable_time': 20954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236603, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:57 np0005465987 systemd[1]: run-netns-ovnmeta\x2d1e991676\x2d99e8\x2d43d9\x2d8575\x2d4a21f50b0ed5.mount: Deactivated successfully.
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.627 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:00:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:00:57.627 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[97976d26-ab15-49fc-910d-37c4caf1a6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:00:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:57.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:00:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:58.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:00:58 np0005465987 nova_compute[230713]: 2025-10-02 12:00:58.594 2 DEBUG nova.network.neutron [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updated VIF entry in instance network info cache for port cc377346-4a61-4e38-b682-f2111cc47ffd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:00:58 np0005465987 nova_compute[230713]: 2025-10-02 12:00:58.595 2 DEBUG nova.network.neutron [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating instance_info_cache with network_info: [{"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:00:58 np0005465987 nova_compute[230713]: 2025-10-02 12:00:58.620 2 DEBUG oslo_concurrency.lockutils [req-7cd55ada-629f-4fff-b608-899c23b495ae req-00a23282-273f-495e-bf86-3ff775d20d59 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:00:58 np0005465987 nova_compute[230713]: 2025-10-02 12:00:58.742 2 DEBUG nova.compute.manager [req-56d44d30-1b97-45c9-871c-1633e78c8ab5 req-9cb5eea4-ebb1-4bf2-a4eb-4fa11751cd8a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:00:58 np0005465987 nova_compute[230713]: 2025-10-02 12:00:58.742 2 DEBUG oslo_concurrency.lockutils [req-56d44d30-1b97-45c9-871c-1633e78c8ab5 req-9cb5eea4-ebb1-4bf2-a4eb-4fa11751cd8a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:58 np0005465987 nova_compute[230713]: 2025-10-02 12:00:58.742 2 DEBUG oslo_concurrency.lockutils [req-56d44d30-1b97-45c9-871c-1633e78c8ab5 req-9cb5eea4-ebb1-4bf2-a4eb-4fa11751cd8a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:58 np0005465987 nova_compute[230713]: 2025-10-02 12:00:58.743 2 DEBUG oslo_concurrency.lockutils [req-56d44d30-1b97-45c9-871c-1633e78c8ab5 req-9cb5eea4-ebb1-4bf2-a4eb-4fa11751cd8a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:58 np0005465987 nova_compute[230713]: 2025-10-02 12:00:58.743 2 DEBUG nova.compute.manager [req-56d44d30-1b97-45c9-871c-1633e78c8ab5 req-9cb5eea4-ebb1-4bf2-a4eb-4fa11751cd8a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:00:58 np0005465987 nova_compute[230713]: 2025-10-02 12:00:58.743 2 DEBUG nova.compute.manager [req-56d44d30-1b97-45c9-871c-1633e78c8ab5 req-9cb5eea4-ebb1-4bf2-a4eb-4fa11751cd8a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:00:58 np0005465987 podman[236605]: 2025-10-02 12:00:58.835633349 +0000 UTC m=+0.050274802 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid)
Oct  2 08:00:58 np0005465987 podman[236606]: 2025-10-02 12:00:58.842698503 +0000 UTC m=+0.052963366 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:00:58 np0005465987 podman[236604]: 2025-10-02 12:00:58.868908184 +0000 UTC m=+0.086132808 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.559 2 DEBUG nova.network.neutron [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Activated binding for port cc377346-4a61-4e38-b682-f2111cc47ffd and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.560 2 DEBUG nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.560 2 DEBUG nova.virt.libvirt.vif [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:00:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1192256805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1192256805',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:00:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-rhco07g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user
_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:00:47Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=cd9deba5-2505-4edd-ac7c-483615217473,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.560 2 DEBUG nova.network.os_vif_util [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converting VIF {"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.561 2 DEBUG nova.network.os_vif_util [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.561 2 DEBUG os_vif [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.563 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc377346-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.569 2 INFO os_vif [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a')#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.569 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.570 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.570 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.570 2 DEBUG nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.570 2 INFO nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Deleting instance files /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473_del#033[00m
Oct  2 08:00:59 np0005465987 nova_compute[230713]: 2025-10-02 12:00:59.571 2 INFO nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Deletion of /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473_del complete#033[00m
Oct  2 08:00:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:00:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:59.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:00.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:01 np0005465987 nova_compute[230713]: 2025-10-02 12:01:01.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:01.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:01:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:02.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:02 np0005465987 nova_compute[230713]: 2025-10-02 12:01:02.656 2 DEBUG nova.compute.manager [req-78d17b29-8bb4-494c-b2db-1e7ac9cd01b7 req-9677f490-b533-454c-aa20-ab7cdd7cce34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:01:02 np0005465987 nova_compute[230713]: 2025-10-02 12:01:02.657 2 DEBUG oslo_concurrency.lockutils [req-78d17b29-8bb4-494c-b2db-1e7ac9cd01b7 req-9677f490-b533-454c-aa20-ab7cdd7cce34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:01:02 np0005465987 nova_compute[230713]: 2025-10-02 12:01:02.657 2 DEBUG oslo_concurrency.lockutils [req-78d17b29-8bb4-494c-b2db-1e7ac9cd01b7 req-9677f490-b533-454c-aa20-ab7cdd7cce34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:01:02 np0005465987 nova_compute[230713]: 2025-10-02 12:01:02.657 2 DEBUG oslo_concurrency.lockutils [req-78d17b29-8bb4-494c-b2db-1e7ac9cd01b7 req-9677f490-b533-454c-aa20-ab7cdd7cce34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:01:02 np0005465987 nova_compute[230713]: 2025-10-02 12:01:02.657 2 DEBUG nova.compute.manager [req-78d17b29-8bb4-494c-b2db-1e7ac9cd01b7 req-9677f490-b533-454c-aa20-ab7cdd7cce34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:01:02 np0005465987 nova_compute[230713]: 2025-10-02 12:01:02.657 2 WARNING nova.compute.manager [req-78d17b29-8bb4-494c-b2db-1e7ac9cd01b7 req-9677f490-b533-454c-aa20-ab7cdd7cce34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received unexpected event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:01:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:03.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:04 np0005465987 nova_compute[230713]: 2025-10-02 12:01:04.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:04.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:04 np0005465987 nova_compute[230713]: 2025-10-02 12:01:04.830 2 DEBUG nova.compute.manager [req-d68105c3-bb28-4b25-bdba-89966166b3d0 req-f45ce17b-c5fb-43ee-b107-2c59de20f2be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:01:04 np0005465987 nova_compute[230713]: 2025-10-02 12:01:04.831 2 DEBUG oslo_concurrency.lockutils [req-d68105c3-bb28-4b25-bdba-89966166b3d0 req-f45ce17b-c5fb-43ee-b107-2c59de20f2be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:01:04 np0005465987 nova_compute[230713]: 2025-10-02 12:01:04.831 2 DEBUG oslo_concurrency.lockutils [req-d68105c3-bb28-4b25-bdba-89966166b3d0 req-f45ce17b-c5fb-43ee-b107-2c59de20f2be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:01:04 np0005465987 nova_compute[230713]: 2025-10-02 12:01:04.831 2 DEBUG oslo_concurrency.lockutils [req-d68105c3-bb28-4b25-bdba-89966166b3d0 req-f45ce17b-c5fb-43ee-b107-2c59de20f2be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:01:04 np0005465987 nova_compute[230713]: 2025-10-02 12:01:04.831 2 DEBUG nova.compute.manager [req-d68105c3-bb28-4b25-bdba-89966166b3d0 req-f45ce17b-c5fb-43ee-b107-2c59de20f2be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:01:04 np0005465987 nova_compute[230713]: 2025-10-02 12:01:04.831 2 WARNING nova.compute.manager [req-d68105c3-bb28-4b25-bdba-89966166b3d0 req-f45ce17b-c5fb-43ee-b107-2c59de20f2be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received unexpected event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:01:04 np0005465987 nova_compute[230713]: 2025-10-02 12:01:04.832 2 DEBUG nova.compute.manager [req-d68105c3-bb28-4b25-bdba-89966166b3d0 req-f45ce17b-c5fb-43ee-b107-2c59de20f2be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:01:04 np0005465987 nova_compute[230713]: 2025-10-02 12:01:04.832 2 DEBUG oslo_concurrency.lockutils [req-d68105c3-bb28-4b25-bdba-89966166b3d0 req-f45ce17b-c5fb-43ee-b107-2c59de20f2be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:01:04 np0005465987 nova_compute[230713]: 2025-10-02 12:01:04.832 2 DEBUG oslo_concurrency.lockutils [req-d68105c3-bb28-4b25-bdba-89966166b3d0 req-f45ce17b-c5fb-43ee-b107-2c59de20f2be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:01:04 np0005465987 nova_compute[230713]: 2025-10-02 12:01:04.832 2 DEBUG oslo_concurrency.lockutils [req-d68105c3-bb28-4b25-bdba-89966166b3d0 req-f45ce17b-c5fb-43ee-b107-2c59de20f2be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:01:04 np0005465987 nova_compute[230713]: 2025-10-02 12:01:04.833 2 DEBUG nova.compute.manager [req-d68105c3-bb28-4b25-bdba-89966166b3d0 req-f45ce17b-c5fb-43ee-b107-2c59de20f2be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:01:04 np0005465987 nova_compute[230713]: 2025-10-02 12:01:04.833 2 WARNING nova.compute.manager [req-d68105c3-bb28-4b25-bdba-89966166b3d0 req-f45ce17b-c5fb-43ee-b107-2c59de20f2be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received unexpected event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:01:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:05.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:06.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:06 np0005465987 nova_compute[230713]: 2025-10-02 12:01:06.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:01:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:07.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:08.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:09 np0005465987 nova_compute[230713]: 2025-10-02 12:01:09.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:09.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:10.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:11 np0005465987 nova_compute[230713]: 2025-10-02 12:01:11.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:11.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:01:12 np0005465987 nova_compute[230713]: 2025-10-02 12:01:12.354 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406457.353569, cd9deba5-2505-4edd-ac7c-483615217473 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:01:12 np0005465987 nova_compute[230713]: 2025-10-02 12:01:12.355 2 INFO nova.compute.manager [-] [instance: cd9deba5-2505-4edd-ac7c-483615217473] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:01:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:12.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:13.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:14 np0005465987 nova_compute[230713]: 2025-10-02 12:01:14.185 2 DEBUG nova.compute.manager [None req-d19eda50-82ff-422b-ace1-dc9c71e3c96a - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:01:14 np0005465987 nova_compute[230713]: 2025-10-02 12:01:14.351 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:01:14 np0005465987 nova_compute[230713]: 2025-10-02 12:01:14.352 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:01:14 np0005465987 nova_compute[230713]: 2025-10-02 12:01:14.352 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:01:14 np0005465987 nova_compute[230713]: 2025-10-02 12:01:14.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:14.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:14 np0005465987 nova_compute[230713]: 2025-10-02 12:01:14.602 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:01:14 np0005465987 nova_compute[230713]: 2025-10-02 12:01:14.602 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:01:14 np0005465987 nova_compute[230713]: 2025-10-02 12:01:14.603 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:01:14 np0005465987 nova_compute[230713]: 2025-10-02 12:01:14.603 2 DEBUG nova.compute.resource_tracker [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:01:14 np0005465987 nova_compute[230713]: 2025-10-02 12:01:14.603 2 DEBUG oslo_concurrency.processutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:01:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:01:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2384123232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:01:15 np0005465987 nova_compute[230713]: 2025-10-02 12:01:15.020 2 DEBUG oslo_concurrency.processutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:01:15 np0005465987 nova_compute[230713]: 2025-10-02 12:01:15.632 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:01:15 np0005465987 nova_compute[230713]: 2025-10-02 12:01:15.633 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:01:15 np0005465987 nova_compute[230713]: 2025-10-02 12:01:15.809 2 WARNING nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:01:15 np0005465987 nova_compute[230713]: 2025-10-02 12:01:15.810 2 DEBUG nova.compute.resource_tracker [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4692MB free_disk=20.785457611083984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:01:15 np0005465987 nova_compute[230713]: 2025-10-02 12:01:15.810 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:01:15 np0005465987 nova_compute[230713]: 2025-10-02 12:01:15.810 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:01:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:15.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:16 np0005465987 nova_compute[230713]: 2025-10-02 12:01:16.504 2 DEBUG nova.compute.resource_tracker [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Migration for instance cd9deba5-2505-4edd-ac7c-483615217473 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:01:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:16.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:16 np0005465987 nova_compute[230713]: 2025-10-02 12:01:16.743 2 DEBUG nova.compute.resource_tracker [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Oct  2 08:01:16 np0005465987 nova_compute[230713]: 2025-10-02 12:01:16.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:01:17 np0005465987 nova_compute[230713]: 2025-10-02 12:01:17.000 2 DEBUG nova.compute.resource_tracker [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Instance c5bc8428-581c-4ea6-85d8-6f153a1ee723 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:01:17 np0005465987 nova_compute[230713]: 2025-10-02 12:01:17.001 2 DEBUG nova.compute.resource_tracker [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Migration 1eb19c1d-ea19-41e7-a75d-0bcff027382d is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:01:17 np0005465987 nova_compute[230713]: 2025-10-02 12:01:17.001 2 DEBUG nova.compute.resource_tracker [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:01:17 np0005465987 nova_compute[230713]: 2025-10-02 12:01:17.002 2 DEBUG nova.compute.resource_tracker [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=704MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:01:17 np0005465987 nova_compute[230713]: 2025-10-02 12:01:17.066 2 DEBUG oslo_concurrency.processutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:01:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:01:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/345505956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:01:17 np0005465987 nova_compute[230713]: 2025-10-02 12:01:17.492 2 DEBUG oslo_concurrency.processutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:01:17 np0005465987 nova_compute[230713]: 2025-10-02 12:01:17.500 2 DEBUG nova.compute.provider_tree [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:01:17 np0005465987 nova_compute[230713]: 2025-10-02 12:01:17.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:17.647 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:01:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:17.648 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:01:17 np0005465987 nova_compute[230713]: 2025-10-02 12:01:17.686 2 DEBUG nova.scheduler.client.report [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:01:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:17.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:18.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:18 np0005465987 nova_compute[230713]: 2025-10-02 12:01:18.669 2 DEBUG nova.compute.resource_tracker [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:01:18 np0005465987 nova_compute[230713]: 2025-10-02 12:01:18.670 2 DEBUG oslo_concurrency.lockutils [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:01:18 np0005465987 nova_compute[230713]: 2025-10-02 12:01:18.680 2 INFO nova.compute.manager [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Oct  2 08:01:19 np0005465987 nova_compute[230713]: 2025-10-02 12:01:19.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:19.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:19 np0005465987 nova_compute[230713]: 2025-10-02 12:01:19.893 2 INFO nova.scheduler.client.report [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Deleted allocation for migration 1eb19c1d-ea19-41e7-a75d-0bcff027382d#033[00m
Oct  2 08:01:19 np0005465987 nova_compute[230713]: 2025-10-02 12:01:19.894 2 DEBUG nova.virt.libvirt.driver [None req-28746ef2-d501-41e1-82cf-0530efc9f82b 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Oct  2 08:01:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Oct  2 08:01:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:20.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:20 np0005465987 podman[236727]: 2025-10-02 12:01:20.855173032 +0000 UTC m=+0.061111700 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:01:20 np0005465987 nova_compute[230713]: 2025-10-02 12:01:20.876 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Creating tmpfile /var/lib/nova/instances/tmp3_47x7bn to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct  2 08:01:21 np0005465987 nova_compute[230713]: 2025-10-02 12:01:21.088 2 DEBUG nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3_47x7bn',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct  2 08:01:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:21.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:21 np0005465987 nova_compute[230713]: 2025-10-02 12:01:21.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:01:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:22.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:22 np0005465987 nova_compute[230713]: 2025-10-02 12:01:22.726 2 DEBUG nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3_47x7bn',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cd9deba5-2505-4edd-ac7c-483615217473',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct  2 08:01:22 np0005465987 nova_compute[230713]: 2025-10-02 12:01:22.790 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:01:22 np0005465987 nova_compute[230713]: 2025-10-02 12:01:22.790 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquired lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:01:22 np0005465987 nova_compute[230713]: 2025-10-02 12:01:22.791 2 DEBUG nova.network.neutron [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:01:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:23.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.520 2 DEBUG nova.network.neutron [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating instance_info_cache with network_info: [{"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.564 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Releasing lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.565 2 DEBUG os_brick.utils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.566 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.576 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.576 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[0b1269ad-bfe1-4af7-ae29-030ff632b9d9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.578 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.591 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.591 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[a106bfaa-f912-49f4-8b68-6c723d03ce74]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.593 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:01:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:24.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.607 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.607 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[418c7fb2-1dda-418f-a850-3c486793be5e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.609 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[21224457-c811-49ad-b895-b15ba04f3cc4]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.609 2 DEBUG oslo_concurrency.processutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.634 2 DEBUG oslo_concurrency.processutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.636 2 DEBUG os_brick.initiator.connectors.lightos [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.636 2 DEBUG os_brick.initiator.connectors.lightos [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.636 2 DEBUG os_brick.initiator.connectors.lightos [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:01:24 np0005465987 nova_compute[230713]: 2025-10-02 12:01:24.637 2 DEBUG os_brick.utils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:01:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:24.650 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.740 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3_47x7bn',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cd9deba5-2505-4edd-ac7c-483615217473',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={134b86f3-5f3c-45b7-9cd8-0ead1590c407='de5c36e5-1c4e-4ee2-9ef9-4f685e9dc836'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.741 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Creating instance directory: /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.741 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Ensure instance console log exists: /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.742 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.745 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.746 2 DEBUG nova.virt.libvirt.vif [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:00:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1192256805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1192256805',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:00:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-rhco07g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:01:04Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=cd9deba5-2505-4edd-ac7c-483615217473,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.746 2 DEBUG nova.network.os_vif_util [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converting VIF {"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.747 2 DEBUG nova.network.os_vif_util [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.747 2 DEBUG os_vif [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.748 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.749 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.751 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc377346-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.752 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc377346-4a, col_values=(('external_ids', {'iface-id': 'cc377346-4a61-4e38-b682-f2111cc47ffd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:73:25', 'vm-uuid': 'cd9deba5-2505-4edd-ac7c-483615217473'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:25 np0005465987 NetworkManager[44910]: <info>  [1759406485.7544] manager: (tapcc377346-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.760 2 INFO os_vif [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a')#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.762 2 DEBUG nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.762 2 DEBUG nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3_47x7bn',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cd9deba5-2505-4edd-ac7c-483615217473',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={134b86f3-5f3c-45b7-9cd8-0ead1590c407='de5c36e5-1c4e-4ee2-9ef9-4f685e9dc836'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct  2 08:01:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:25.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:25 np0005465987 nova_compute[230713]: 2025-10-02 12:01:25.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:01:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:26.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:26 np0005465987 nova_compute[230713]: 2025-10-02 12:01:26.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:01:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:01:26 np0005465987 nova_compute[230713]: 2025-10-02 12:01:26.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:01:26 np0005465987 nova_compute[230713]: 2025-10-02 12:01:26.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:01:26 np0005465987 nova_compute[230713]: 2025-10-02 12:01:26.968 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:01:27 np0005465987 nova_compute[230713]: 2025-10-02 12:01:27.126 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:01:27 np0005465987 nova_compute[230713]: 2025-10-02 12:01:27.126 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:01:27 np0005465987 nova_compute[230713]: 2025-10-02 12:01:27.126 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 08:01:27 np0005465987 nova_compute[230713]: 2025-10-02 12:01:27.127 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c5bc8428-581c-4ea6-85d8-6f153a1ee723 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:01:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:01:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:01:27 np0005465987 nova_compute[230713]: 2025-10-02 12:01:27.343 2 DEBUG nova.network.neutron [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Port cc377346-4a61-4e38-b682-f2111cc47ffd updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Oct  2 08:01:27 np0005465987 nova_compute[230713]: 2025-10-02 12:01:27.347 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:01:27 np0005465987 nova_compute[230713]: 2025-10-02 12:01:27.644 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:01:27 np0005465987 nova_compute[230713]: 2025-10-02 12:01:27.657 2 DEBUG nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3_47x7bn',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cd9deba5-2505-4edd-ac7c-483615217473',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={134b86f3-5f3c-45b7-9cd8-0ead1590c407='de5c36e5-1c4e-4ee2-9ef9-4f685e9dc836'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Oct  2 08:01:27 np0005465987 nova_compute[230713]: 2025-10-02 12:01:27.676 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:01:27 np0005465987 nova_compute[230713]: 2025-10-02 12:01:27.676 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 08:01:27 np0005465987 systemd[1]: Starting libvirt proxy daemon...
Oct  2 08:01:27 np0005465987 systemd[1]: Started libvirt proxy daemon.
Oct  2 08:01:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:27.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:27.927 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:01:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:27.928 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:01:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:27.928 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:01:27 np0005465987 NetworkManager[44910]: <info>  [1759406487.9613] manager: (tapcc377346-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct  2 08:01:27 np0005465987 kernel: tapcc377346-4a: entered promiscuous mode
Oct  2 08:01:27 np0005465987 nova_compute[230713]: 2025-10-02 12:01:27.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:01:27 np0005465987 nova_compute[230713]: 2025-10-02 12:01:27.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:01:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:01:27Z|00070|binding|INFO|Claiming lport cc377346-4a61-4e38-b682-f2111cc47ffd for this additional chassis.
Oct  2 08:01:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:01:27Z|00071|binding|INFO|cc377346-4a61-4e38-b682-f2111cc47ffd: Claiming fa:16:3e:1b:73:25 10.100.0.4
Oct  2 08:01:27 np0005465987 nova_compute[230713]: 2025-10-02 12:01:27.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:01:28 np0005465987 nova_compute[230713]: 2025-10-02 12:01:28.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:01:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:01:28Z|00072|binding|INFO|Setting lport cc377346-4a61-4e38-b682-f2111cc47ffd ovn-installed in OVS
Oct  2 08:01:28 np0005465987 nova_compute[230713]: 2025-10-02 12:01:28.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:01:28 np0005465987 systemd-machined[188335]: New machine qemu-6-instance-00000009.
Oct  2 08:01:28 np0005465987 systemd-udevd[236918]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:01:28 np0005465987 nova_compute[230713]: 2025-10-02 12:01:28.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:01:28 np0005465987 systemd[1]: Started Virtual Machine qemu-6-instance-00000009.
Oct  2 08:01:28 np0005465987 NetworkManager[44910]: <info>  [1759406488.0251] device (tapcc377346-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:01:28 np0005465987 NetworkManager[44910]: <info>  [1759406488.0260] device (tapcc377346-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:01:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:01:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:01:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:01:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:28.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:28 np0005465987 nova_compute[230713]: 2025-10-02 12:01:28.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:01:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Oct  2 08:01:29 np0005465987 nova_compute[230713]: 2025-10-02 12:01:29.501 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406489.5006332, cd9deba5-2505-4edd-ac7c-483615217473 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:01:29 np0005465987 nova_compute[230713]: 2025-10-02 12:01:29.501 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] VM Started (Lifecycle Event)
Oct  2 08:01:29 np0005465987 nova_compute[230713]: 2025-10-02 12:01:29.534 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:01:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:29.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:29 np0005465987 podman[236970]: 2025-10-02 12:01:29.863490769 +0000 UTC m=+0.070792807 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:01:29 np0005465987 podman[236976]: 2025-10-02 12:01:29.887500138 +0000 UTC m=+0.078917839 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:01:29 np0005465987 podman[236969]: 2025-10-02 12:01:29.888789374 +0000 UTC m=+0.101748768 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 08:01:29 np0005465987 nova_compute[230713]: 2025-10-02 12:01:29.926 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406489.9257004, cd9deba5-2505-4edd-ac7c-483615217473 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:01:29 np0005465987 nova_compute[230713]: 2025-10-02 12:01:29.926 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] VM Resumed (Lifecycle Event)
Oct  2 08:01:29 np0005465987 nova_compute[230713]: 2025-10-02 12:01:29.951 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:01:29 np0005465987 nova_compute[230713]: 2025-10-02 12:01:29.955 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:01:29 np0005465987 nova_compute[230713]: 2025-10-02 12:01:29.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:01:29 np0005465987 nova_compute[230713]: 2025-10-02 12:01:29.984 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.004 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.005 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.005 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.005 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.006 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:01:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:01:30 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3086966703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.427 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.525 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.526 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.529 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.530 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:01:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:30.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.706 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.708 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4540MB free_disk=20.785137176513672GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.708 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.708 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.778 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Migration for instance cd9deba5-2505-4edd-ac7c-483615217473 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.831 2 INFO nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating resource usage from migration 8a32ad27-767e-4f4b-aadf-beebd224c85c
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.832 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Starting to track incoming migration 8a32ad27-767e-4f4b-aadf-beebd224c85c with flavor cef129e5-cce4-4465-9674-03d3559e8a14 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.888 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance c5bc8428-581c-4ea6-85d8-6f153a1ee723 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.934 2 WARNING nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance cd9deba5-2505-4edd-ac7c-483615217473 has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}.
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.935 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 08:01:30 np0005465987 nova_compute[230713]: 2025-10-02 12:01:30.935 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 08:01:31 np0005465987 nova_compute[230713]: 2025-10-02 12:01:31.012 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:01:31 np0005465987 ovn_controller[129172]: 2025-10-02T12:01:31Z|00073|binding|INFO|Claiming lport cc377346-4a61-4e38-b682-f2111cc47ffd for this chassis.
Oct  2 08:01:31 np0005465987 ovn_controller[129172]: 2025-10-02T12:01:31Z|00074|binding|INFO|cc377346-4a61-4e38-b682-f2111cc47ffd: Claiming fa:16:3e:1b:73:25 10.100.0.4
Oct  2 08:01:31 np0005465987 ovn_controller[129172]: 2025-10-02T12:01:31Z|00075|binding|INFO|Setting lport cc377346-4a61-4e38-b682-f2111cc47ffd up in Southbound
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.452 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:73:25 10.100.0.4'], port_security=['fa:16:3e:1b:73:25 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'cd9deba5-2505-4edd-ac7c-483615217473', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '19', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2c64df1-281c-4dee-b0ae-55c6f99caa2e, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=cc377346-4a61-4e38-b682-f2111cc47ffd) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.454 138325 INFO neutron.agent.ovn.metadata.agent [-] Port cc377346-4a61-4e38-b682-f2111cc47ffd in datapath 1e991676-99e8-43d9-8575-4a21f50b0ed5 bound to our chassis#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.456 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e991676-99e8-43d9-8575-4a21f50b0ed5#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.467 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7e777d25-fd54-44ee-8fb9-96a0e9b1a679]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.468 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e991676-91 in ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.471 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e991676-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.471 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fa823c46-439f-4367-9e8b-15f23d316916]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.472 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ecb4c0-ad81-4ad6-99bc-7ec97767b1d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.481 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[732f4d13-6884-4fb3-ae6d-099ef7c165b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:01:31 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2450892318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.499 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2308374f-7a68-4e60-ac06-ecf1e38bcbfb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 nova_compute[230713]: 2025-10-02 12:01:31.525 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.529 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a3f859-5c9a-456e-85e6-ab8e6440c8eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 nova_compute[230713]: 2025-10-02 12:01:31.531 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.535 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[611a5942-811d-44e7-be13-1d595df4a056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 NetworkManager[44910]: <info>  [1759406491.5363] manager: (tap1e991676-90): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Oct  2 08:01:31 np0005465987 systemd-udevd[237084]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.567 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[90281f43-1d29-4359-9fda-4dc8ab3c1644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.570 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e01af6d2-0d57-4fb7-95ec-28f9d104e4b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 NetworkManager[44910]: <info>  [1759406491.5946] device (tap1e991676-90): carrier: link connected
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.600 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8560c6-1ff1-4bb2-8670-39ee62648a17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.622 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3adf805e-8072-4fcc-8da7-832b54bc813e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e991676-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:0e:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456016, 'reachable_time': 16718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237103, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.637 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef6869e-379b-4b15-ac20-057b8a1b8e10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:ef6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456016, 'tstamp': 456016}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237104, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.658 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[42c55a6a-b014-4d08-ab23-01bfd823480d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e991676-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:0e:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456016, 'reachable_time': 16718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237105, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.688 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[922bc230-9916-40e9-b240-ef4bb3ca60ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 nova_compute[230713]: 2025-10-02 12:01:31.737 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.746 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fb98d5-0f9f-4d06-bbda-3499b87053c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.747 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e991676-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.747 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.747 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e991676-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:01:31 np0005465987 NetworkManager[44910]: <info>  [1759406491.7497] manager: (tap1e991676-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct  2 08:01:31 np0005465987 nova_compute[230713]: 2025-10-02 12:01:31.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:31 np0005465987 kernel: tap1e991676-90: entered promiscuous mode
Oct  2 08:01:31 np0005465987 nova_compute[230713]: 2025-10-02 12:01:31.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.753 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e991676-90, col_values=(('external_ids', {'iface-id': 'a0e52ac7-beb4-4d4b-863a-9b95be4c9e74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:01:31 np0005465987 ovn_controller[129172]: 2025-10-02T12:01:31Z|00076|binding|INFO|Releasing lport a0e52ac7-beb4-4d4b-863a-9b95be4c9e74 from this chassis (sb_readonly=0)
Oct  2 08:01:31 np0005465987 nova_compute[230713]: 2025-10-02 12:01:31.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:31 np0005465987 nova_compute[230713]: 2025-10-02 12:01:31.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:31 np0005465987 nova_compute[230713]: 2025-10-02 12:01:31.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.777 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e991676-99e8-43d9-8575-4a21f50b0ed5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e991676-99e8-43d9-8575-4a21f50b0ed5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.778 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c103f3-dc42-42bf-b615-f375773c464b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.778 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-1e991676-99e8-43d9-8575-4a21f50b0ed5
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/1e991676-99e8-43d9-8575-4a21f50b0ed5.pid.haproxy
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 1e991676-99e8-43d9-8575-4a21f50b0ed5
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:01:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:31.779 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'env', 'PROCESS_TAG=haproxy-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e991676-99e8-43d9-8575-4a21f50b0ed5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:01:31 np0005465987 nova_compute[230713]: 2025-10-02 12:01:31.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:31.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:31 np0005465987 nova_compute[230713]: 2025-10-02 12:01:31.838 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:01:31 np0005465987 nova_compute[230713]: 2025-10-02 12:01:31.838 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:01:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:01:32 np0005465987 nova_compute[230713]: 2025-10-02 12:01:32.025 2 INFO nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Post operation of migration started#033[00m
Oct  2 08:01:32 np0005465987 podman[237138]: 2025-10-02 12:01:32.173494797 +0000 UTC m=+0.051151076 container create aa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:01:32 np0005465987 systemd[1]: Started libpod-conmon-aa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94.scope.
Oct  2 08:01:32 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:01:32 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e19da93c44387fca05fed86e44d480478981ee3f332508a48fa6a4b6102882/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:01:32 np0005465987 podman[237138]: 2025-10-02 12:01:32.145047276 +0000 UTC m=+0.022703585 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:01:32 np0005465987 podman[237138]: 2025-10-02 12:01:32.247923823 +0000 UTC m=+0.125580132 container init aa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 08:01:32 np0005465987 podman[237138]: 2025-10-02 12:01:32.253414304 +0000 UTC m=+0.131070573 container start aa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:01:32 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[237153]: [NOTICE]   (237157) : New worker (237159) forked
Oct  2 08:01:32 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[237153]: [NOTICE]   (237157) : Loading success.
Oct  2 08:01:32 np0005465987 nova_compute[230713]: 2025-10-02 12:01:32.517 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:01:32 np0005465987 nova_compute[230713]: 2025-10-02 12:01:32.517 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquired lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:01:32 np0005465987 nova_compute[230713]: 2025-10-02 12:01:32.518 2 DEBUG nova.network.neutron [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:01:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:32.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:33.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:34 np0005465987 nova_compute[230713]: 2025-10-02 12:01:34.066 2 DEBUG nova.network.neutron [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating instance_info_cache with network_info: [{"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:01:34 np0005465987 nova_compute[230713]: 2025-10-02 12:01:34.098 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Releasing lock "refresh_cache-cd9deba5-2505-4edd-ac7c-483615217473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:01:34 np0005465987 nova_compute[230713]: 2025-10-02 12:01:34.121 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:01:34 np0005465987 nova_compute[230713]: 2025-10-02 12:01:34.121 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:01:34 np0005465987 nova_compute[230713]: 2025-10-02 12:01:34.122 2 DEBUG oslo_concurrency.lockutils [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:01:34 np0005465987 nova_compute[230713]: 2025-10-02 12:01:34.128 2 INFO nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct  2 08:01:34 np0005465987 virtqemud[230442]: Domain id=6 name='instance-00000009' uuid=cd9deba5-2505-4edd-ac7c-483615217473 is tainted: custom-monitor
Oct  2 08:01:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:34.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:34 np0005465987 nova_compute[230713]: 2025-10-02 12:01:34.832 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:01:34 np0005465987 nova_compute[230713]: 2025-10-02 12:01:34.875 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:01:34 np0005465987 nova_compute[230713]: 2025-10-02 12:01:34.876 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:01:34 np0005465987 nova_compute[230713]: 2025-10-02 12:01:34.876 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:01:34 np0005465987 nova_compute[230713]: 2025-10-02 12:01:34.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:01:35 np0005465987 nova_compute[230713]: 2025-10-02 12:01:35.137 2 INFO nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct  2 08:01:35 np0005465987 nova_compute[230713]: 2025-10-02 12:01:35.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:35.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:01:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:01:36 np0005465987 nova_compute[230713]: 2025-10-02 12:01:36.143 2 INFO nova.virt.libvirt.driver [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct  2 08:01:36 np0005465987 nova_compute[230713]: 2025-10-02 12:01:36.148 2 DEBUG nova.compute.manager [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:01:36 np0005465987 nova_compute[230713]: 2025-10-02 12:01:36.170 2 DEBUG nova.objects.instance [None req-6ecb735b-f6f2-4928-a75a-355507f438df 32f8fed535f24050a59db923cb598449 c62476b2f92d43e69b75e06947d0e894 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:01:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:36.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:36 np0005465987 nova_compute[230713]: 2025-10-02 12:01:36.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:01:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:37.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Oct  2 08:01:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:38.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:39.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:40.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:40 np0005465987 nova_compute[230713]: 2025-10-02 12:01:40.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:41 np0005465987 nova_compute[230713]: 2025-10-02 12:01:41.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:41.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:01:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:42.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:42 np0005465987 nova_compute[230713]: 2025-10-02 12:01:42.698 2 DEBUG oslo_concurrency.lockutils [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:01:42 np0005465987 nova_compute[230713]: 2025-10-02 12:01:42.699 2 DEBUG oslo_concurrency.lockutils [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:01:42 np0005465987 nova_compute[230713]: 2025-10-02 12:01:42.699 2 DEBUG oslo_concurrency.lockutils [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:01:42 np0005465987 nova_compute[230713]: 2025-10-02 12:01:42.699 2 DEBUG oslo_concurrency.lockutils [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:01:42 np0005465987 nova_compute[230713]: 2025-10-02 12:01:42.700 2 DEBUG oslo_concurrency.lockutils [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:01:42 np0005465987 nova_compute[230713]: 2025-10-02 12:01:42.701 2 INFO nova.compute.manager [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Terminating instance#033[00m
Oct  2 08:01:42 np0005465987 nova_compute[230713]: 2025-10-02 12:01:42.702 2 DEBUG nova.compute.manager [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:01:42 np0005465987 kernel: tapcc377346-4a (unregistering): left promiscuous mode
Oct  2 08:01:42 np0005465987 NetworkManager[44910]: <info>  [1759406502.8733] device (tapcc377346-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:01:42 np0005465987 nova_compute[230713]: 2025-10-02 12:01:42.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:01:42Z|00077|binding|INFO|Releasing lport cc377346-4a61-4e38-b682-f2111cc47ffd from this chassis (sb_readonly=0)
Oct  2 08:01:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:01:42Z|00078|binding|INFO|Setting lport cc377346-4a61-4e38-b682-f2111cc47ffd down in Southbound
Oct  2 08:01:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:01:42Z|00079|binding|INFO|Removing iface tapcc377346-4a ovn-installed in OVS
Oct  2 08:01:42 np0005465987 nova_compute[230713]: 2025-10-02 12:01:42.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:42.894 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:73:25 10.100.0.4'], port_security=['fa:16:3e:1b:73:25 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'cd9deba5-2505-4edd-ac7c-483615217473', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd977ad6a90874946819537242925a8f0', 'neutron:revision_number': '21', 'neutron:security_group_ids': '74859182-61ec-4a12-bb02-0c8684ac9234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2c64df1-281c-4dee-b0ae-55c6f99caa2e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=cc377346-4a61-4e38-b682-f2111cc47ffd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:01:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:42.896 138325 INFO neutron.agent.ovn.metadata.agent [-] Port cc377346-4a61-4e38-b682-f2111cc47ffd in datapath 1e991676-99e8-43d9-8575-4a21f50b0ed5 unbound from our chassis#033[00m
Oct  2 08:01:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:42.898 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e991676-99e8-43d9-8575-4a21f50b0ed5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:01:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:42.899 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[be0050e7-d91a-4e75-8c1f-38f5419673da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:01:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:42.901 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 namespace which is not needed anymore#033[00m
Oct  2 08:01:42 np0005465987 nova_compute[230713]: 2025-10-02 12:01:42.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:42 np0005465987 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct  2 08:01:42 np0005465987 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000009.scope: Consumed 2.292s CPU time.
Oct  2 08:01:42 np0005465987 systemd-machined[188335]: Machine qemu-6-instance-00000009 terminated.
Oct  2 08:01:43 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[237153]: [NOTICE]   (237157) : haproxy version is 2.8.14-c23fe91
Oct  2 08:01:43 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[237153]: [NOTICE]   (237157) : path to executable is /usr/sbin/haproxy
Oct  2 08:01:43 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[237153]: [WARNING]  (237157) : Exiting Master process...
Oct  2 08:01:43 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[237153]: [WARNING]  (237157) : Exiting Master process...
Oct  2 08:01:43 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[237153]: [ALERT]    (237157) : Current worker (237159) exited with code 143 (Terminated)
Oct  2 08:01:43 np0005465987 neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5[237153]: [WARNING]  (237157) : All workers exited. Exiting... (0)
Oct  2 08:01:43 np0005465987 systemd[1]: libpod-aa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94.scope: Deactivated successfully.
Oct  2 08:01:43 np0005465987 podman[237242]: 2025-10-02 12:01:43.110917738 +0000 UTC m=+0.101489451 container died aa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:01:43 np0005465987 kernel: tapcc377346-4a: entered promiscuous mode
Oct  2 08:01:43 np0005465987 kernel: tapcc377346-4a (unregistering): left promiscuous mode
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.136 2 INFO nova.virt.libvirt.driver [-] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Instance destroyed successfully.#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.136 2 DEBUG nova.objects.instance [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lazy-loading 'resources' on Instance uuid cd9deba5-2505-4edd-ac7c-483615217473 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.146 2 DEBUG nova.compute.manager [req-80ac3c45-cd20-4859-a2f5-3349563b05bf req-c5ec0cbc-468c-4947-8ee4-2268f41e5ad0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.147 2 DEBUG oslo_concurrency.lockutils [req-80ac3c45-cd20-4859-a2f5-3349563b05bf req-c5ec0cbc-468c-4947-8ee4-2268f41e5ad0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.147 2 DEBUG oslo_concurrency.lockutils [req-80ac3c45-cd20-4859-a2f5-3349563b05bf req-c5ec0cbc-468c-4947-8ee4-2268f41e5ad0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.147 2 DEBUG oslo_concurrency.lockutils [req-80ac3c45-cd20-4859-a2f5-3349563b05bf req-c5ec0cbc-468c-4947-8ee4-2268f41e5ad0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.148 2 DEBUG nova.compute.manager [req-80ac3c45-cd20-4859-a2f5-3349563b05bf req-c5ec0cbc-468c-4947-8ee4-2268f41e5ad0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.148 2 DEBUG nova.compute.manager [req-80ac3c45-cd20-4859-a2f5-3349563b05bf req-c5ec0cbc-468c-4947-8ee4-2268f41e5ad0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-unplugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.150 2 DEBUG nova.virt.libvirt.vif [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:00:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1192256805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1192256805',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:00:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d977ad6a90874946819537242925a8f0',ramdisk_id='',reservation_id='r-rhco07g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-959655280',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-959655280-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:01:36Z,user_data=None,user_id='b54b5e15e4c94d1f95a272981e9d9a89',uuid=cd9deba5-2505-4edd-ac7c-483615217473,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.151 2 DEBUG nova.network.os_vif_util [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Converting VIF {"id": "cc377346-4a61-4e38-b682-f2111cc47ffd", "address": "fa:16:3e:1b:73:25", "network": {"id": "1e991676-99e8-43d9-8575-4a21f50b0ed5", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-398880819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d977ad6a90874946819537242925a8f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc377346-4a", "ovs_interfaceid": "cc377346-4a61-4e38-b682-f2111cc47ffd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.152 2 DEBUG nova.network.os_vif_util [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.153 2 DEBUG os_vif [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.154 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc377346-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.159 2 INFO os_vif [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:73:25,bridge_name='br-int',has_traffic_filtering=True,id=cc377346-4a61-4e38-b682-f2111cc47ffd,network=Network(1e991676-99e8-43d9-8575-4a21f50b0ed5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc377346-4a')#033[00m
Oct  2 08:01:43 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94-userdata-shm.mount: Deactivated successfully.
Oct  2 08:01:43 np0005465987 systemd[1]: var-lib-containers-storage-overlay-f0e19da93c44387fca05fed86e44d480478981ee3f332508a48fa6a4b6102882-merged.mount: Deactivated successfully.
Oct  2 08:01:43 np0005465987 podman[237242]: 2025-10-02 12:01:43.476824163 +0000 UTC m=+0.467395876 container cleanup aa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:01:43 np0005465987 systemd[1]: libpod-conmon-aa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94.scope: Deactivated successfully.
Oct  2 08:01:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:43.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:43 np0005465987 podman[237299]: 2025-10-02 12:01:43.863606242 +0000 UTC m=+0.346738839 container remove aa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:01:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:43.875 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1887fe1c-87ec-49ea-987b-333241c9c7cc]: (4, ('Thu Oct  2 12:01:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 (aa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94)\naa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94\nThu Oct  2 12:01:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 (aa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94)\naa4301d55fc6aee71262201a45fb17a1193ab86a60ac539dde8b6591cb515d94\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:01:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:43.877 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[422924b5-42f9-48cb-bfaf-40c6dc3c2248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:01:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:43.879 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e991676-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:01:43 np0005465987 kernel: tap1e991676-90: left promiscuous mode
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:01:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:43.889 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[99e06da8-96ca-4081-aa89-43af741e04b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:01:43 np0005465987 nova_compute[230713]: 2025-10-02 12:01:43.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:01:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:43.923 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[414258ae-d685-4a2d-80e8-f8ce25a76e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:01:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:43.925 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[eadeb88a-b02e-45a4-98d4-6cc914f4a810]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:01:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:43.947 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ac36e071-fc79-48fc-b348-c23b2202741d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456009, 'reachable_time': 19621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237314, 'error': None, 'target': 'ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:01:43 np0005465987 systemd[1]: run-netns-ovnmeta\x2d1e991676\x2d99e8\x2d43d9\x2d8575\x2d4a21f50b0ed5.mount: Deactivated successfully.
Oct  2 08:01:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:43.951 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1e991676-99e8-43d9-8575-4a21f50b0ed5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  2 08:01:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:01:43.951 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[18eb38c9-0355-4a23-9041-8e238770d8c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:01:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:44.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:45 np0005465987 nova_compute[230713]: 2025-10-02 12:01:45.239 2 DEBUG nova.compute.manager [req-5da9ae17-6fd2-4ba3-9762-93e5d328ca55 req-bbb652cd-a968-44b9-b7de-e7edd47d5e6e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:01:45 np0005465987 nova_compute[230713]: 2025-10-02 12:01:45.240 2 DEBUG oslo_concurrency.lockutils [req-5da9ae17-6fd2-4ba3-9762-93e5d328ca55 req-bbb652cd-a968-44b9-b7de-e7edd47d5e6e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cd9deba5-2505-4edd-ac7c-483615217473-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:01:45 np0005465987 nova_compute[230713]: 2025-10-02 12:01:45.240 2 DEBUG oslo_concurrency.lockutils [req-5da9ae17-6fd2-4ba3-9762-93e5d328ca55 req-bbb652cd-a968-44b9-b7de-e7edd47d5e6e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:01:45 np0005465987 nova_compute[230713]: 2025-10-02 12:01:45.241 2 DEBUG oslo_concurrency.lockutils [req-5da9ae17-6fd2-4ba3-9762-93e5d328ca55 req-bbb652cd-a968-44b9-b7de-e7edd47d5e6e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:01:45 np0005465987 nova_compute[230713]: 2025-10-02 12:01:45.241 2 DEBUG nova.compute.manager [req-5da9ae17-6fd2-4ba3-9762-93e5d328ca55 req-bbb652cd-a968-44b9-b7de-e7edd47d5e6e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] No waiting events found dispatching network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:01:45 np0005465987 nova_compute[230713]: 2025-10-02 12:01:45.241 2 WARNING nova.compute.manager [req-5da9ae17-6fd2-4ba3-9762-93e5d328ca55 req-bbb652cd-a968-44b9-b7de-e7edd47d5e6e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received unexpected event network-vif-plugged-cc377346-4a61-4e38-b682-f2111cc47ffd for instance with vm_state active and task_state deleting.
Oct  2 08:01:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:45.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.297 2 INFO nova.virt.libvirt.driver [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Deleting instance files /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473_del
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.298 2 INFO nova.virt.libvirt.driver [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Deletion of /var/lib/nova/instances/cd9deba5-2505-4edd-ac7c-483615217473_del complete
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.366 2 INFO nova.compute.manager [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Took 3.66 seconds to destroy the instance on the hypervisor.
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.367 2 DEBUG oslo.service.loopingcall [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.367 2 DEBUG nova.compute.manager [-] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.367 2 DEBUG nova.network.neutron [-] [instance: cd9deba5-2505-4edd-ac7c-483615217473] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:01:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:46.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.837 2 DEBUG nova.compute.manager [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:01:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.929 2 DEBUG oslo_concurrency.lockutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.930 2 DEBUG oslo_concurrency.lockutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.967 2 DEBUG nova.objects.instance [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lazy-loading 'pci_requests' on Instance uuid b6a5f3ca-c662-41a0-ac02-78f9fba82bba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.980 2 DEBUG nova.virt.hardware [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.980 2 INFO nova.compute.claims [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:01:46 np0005465987 nova_compute[230713]: 2025-10-02 12:01:46.981 2 DEBUG nova.objects.instance [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lazy-loading 'resources' on Instance uuid b6a5f3ca-c662-41a0-ac02-78f9fba82bba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:01:47 np0005465987 nova_compute[230713]: 2025-10-02 12:01:47.001 2 DEBUG nova.objects.instance [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lazy-loading 'numa_topology' on Instance uuid b6a5f3ca-c662-41a0-ac02-78f9fba82bba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:01:47 np0005465987 nova_compute[230713]: 2025-10-02 12:01:47.125 2 DEBUG nova.objects.instance [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6a5f3ca-c662-41a0-ac02-78f9fba82bba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:01:47 np0005465987 nova_compute[230713]: 2025-10-02 12:01:47.479 2 INFO nova.compute.resource_tracker [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Updating resource usage from migration b6b47536-f923-491d-aa49-8278d9a20c83
Oct  2 08:01:47 np0005465987 nova_compute[230713]: 2025-10-02 12:01:47.479 2 DEBUG nova.compute.resource_tracker [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Starting to track incoming migration b6b47536-f923-491d-aa49-8278d9a20c83 with flavor cef129e5-cce4-4465-9674-03d3559e8a14 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct  2 08:01:47 np0005465987 nova_compute[230713]: 2025-10-02 12:01:47.533 2 DEBUG nova.compute.manager [req-52178f5c-6eb7-4acd-a1df-e8b2dca4a62a req-dcb276af-89a2-4302-b1f4-b29dac3f62d7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Received event network-vif-deleted-cc377346-4a61-4e38-b682-f2111cc47ffd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:01:47 np0005465987 nova_compute[230713]: 2025-10-02 12:01:47.533 2 INFO nova.compute.manager [req-52178f5c-6eb7-4acd-a1df-e8b2dca4a62a req-dcb276af-89a2-4302-b1f4-b29dac3f62d7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Neutron deleted interface cc377346-4a61-4e38-b682-f2111cc47ffd; detaching it from the instance and deleting it from the info cache
Oct  2 08:01:47 np0005465987 nova_compute[230713]: 2025-10-02 12:01:47.533 2 DEBUG nova.network.neutron [req-52178f5c-6eb7-4acd-a1df-e8b2dca4a62a req-dcb276af-89a2-4302-b1f4-b29dac3f62d7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:01:47 np0005465987 nova_compute[230713]: 2025-10-02 12:01:47.535 2 DEBUG nova.network.neutron [-] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:01:47 np0005465987 nova_compute[230713]: 2025-10-02 12:01:47.607 2 DEBUG oslo_concurrency.processutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:01:47 np0005465987 nova_compute[230713]: 2025-10-02 12:01:47.756 2 INFO nova.compute.manager [-] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Took 1.39 seconds to deallocate network for instance.
Oct  2 08:01:47 np0005465987 nova_compute[230713]: 2025-10-02 12:01:47.769 2 DEBUG nova.compute.manager [req-52178f5c-6eb7-4acd-a1df-e8b2dca4a62a req-dcb276af-89a2-4302-b1f4-b29dac3f62d7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Detach interface failed, port_id=cc377346-4a61-4e38-b682-f2111cc47ffd, reason: Instance cd9deba5-2505-4edd-ac7c-483615217473 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct  2 08:01:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:47.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:48 np0005465987 nova_compute[230713]: 2025-10-02 12:01:48.075 2 INFO nova.compute.manager [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Took 0.32 seconds to detach 1 volumes for instance.
Oct  2 08:01:48 np0005465987 nova_compute[230713]: 2025-10-02 12:01:48.078 2 DEBUG nova.compute.manager [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Deleting volume: 134b86f3-5f3c-45b7-9cd8-0ead1590c407 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct  2 08:01:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:01:48 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1140293501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:01:48 np0005465987 nova_compute[230713]: 2025-10-02 12:01:48.120 2 DEBUG oslo_concurrency.processutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:01:48 np0005465987 nova_compute[230713]: 2025-10-02 12:01:48.129 2 DEBUG nova.compute.provider_tree [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:01:48 np0005465987 nova_compute[230713]: 2025-10-02 12:01:48.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:01:48 np0005465987 nova_compute[230713]: 2025-10-02 12:01:48.168 2 DEBUG nova.scheduler.client.report [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:01:48 np0005465987 nova_compute[230713]: 2025-10-02 12:01:48.231 2 DEBUG oslo_concurrency.lockutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:01:48 np0005465987 nova_compute[230713]: 2025-10-02 12:01:48.232 2 INFO nova.compute.manager [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Migrating
Oct  2 08:01:48 np0005465987 nova_compute[230713]: 2025-10-02 12:01:48.318 2 DEBUG oslo_concurrency.lockutils [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:01:48 np0005465987 nova_compute[230713]: 2025-10-02 12:01:48.319 2 DEBUG oslo_concurrency.lockutils [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:01:48 np0005465987 nova_compute[230713]: 2025-10-02 12:01:48.325 2 DEBUG oslo_concurrency.lockutils [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:01:48 np0005465987 nova_compute[230713]: 2025-10-02 12:01:48.373 2 INFO nova.scheduler.client.report [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Deleted allocations for instance cd9deba5-2505-4edd-ac7c-483615217473
Oct  2 08:01:48 np0005465987 nova_compute[230713]: 2025-10-02 12:01:48.471 2 DEBUG oslo_concurrency.lockutils [None req-87e8c5be-52d7-490f-8108-64f11cf27d7f b54b5e15e4c94d1f95a272981e9d9a89 d977ad6a90874946819537242925a8f0 - - default default] Lock "cd9deba5-2505-4edd-ac7c-483615217473" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:01:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:48.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:49 np0005465987 systemd-logind[794]: New session 56 of user nova.
Oct  2 08:01:49 np0005465987 systemd[1]: Created slice User Slice of UID 42436.
Oct  2 08:01:49 np0005465987 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct  2 08:01:49 np0005465987 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct  2 08:01:49 np0005465987 systemd[1]: Starting User Manager for UID 42436...
Oct  2 08:01:49 np0005465987 systemd[237343]: Queued start job for default target Main User Target.
Oct  2 08:01:49 np0005465987 systemd[237343]: Created slice User Application Slice.
Oct  2 08:01:49 np0005465987 systemd[237343]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:01:49 np0005465987 systemd[237343]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 08:01:49 np0005465987 systemd[237343]: Reached target Paths.
Oct  2 08:01:49 np0005465987 systemd[237343]: Reached target Timers.
Oct  2 08:01:49 np0005465987 systemd[237343]: Starting D-Bus User Message Bus Socket...
Oct  2 08:01:49 np0005465987 systemd[237343]: Starting Create User's Volatile Files and Directories...
Oct  2 08:01:49 np0005465987 systemd[237343]: Listening on D-Bus User Message Bus Socket.
Oct  2 08:01:49 np0005465987 systemd[237343]: Reached target Sockets.
Oct  2 08:01:49 np0005465987 systemd[237343]: Finished Create User's Volatile Files and Directories.
Oct  2 08:01:49 np0005465987 systemd[237343]: Reached target Basic System.
Oct  2 08:01:49 np0005465987 systemd[237343]: Reached target Main User Target.
Oct  2 08:01:49 np0005465987 systemd[237343]: Startup finished in 158ms.
Oct  2 08:01:49 np0005465987 systemd[1]: Started User Manager for UID 42436.
Oct  2 08:01:49 np0005465987 systemd[1]: Started Session 56 of User nova.
Oct  2 08:01:49 np0005465987 systemd[1]: session-56.scope: Deactivated successfully.
Oct  2 08:01:49 np0005465987 systemd-logind[794]: Session 56 logged out. Waiting for processes to exit.
Oct  2 08:01:49 np0005465987 systemd-logind[794]: Removed session 56.
Oct  2 08:01:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:01:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1540367777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:01:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:01:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1540367777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:01:49 np0005465987 systemd-logind[794]: New session 58 of user nova.
Oct  2 08:01:49 np0005465987 systemd[1]: Started Session 58 of User nova.
Oct  2 08:01:49 np0005465987 systemd[1]: session-58.scope: Deactivated successfully.
Oct  2 08:01:49 np0005465987 systemd-logind[794]: Session 58 logged out. Waiting for processes to exit.
Oct  2 08:01:49 np0005465987 systemd-logind[794]: Removed session 58.
Oct  2 08:01:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:49.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:01:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:50.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:01:51 np0005465987 nova_compute[230713]: 2025-10-02 12:01:51.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:01:51 np0005465987 podman[237365]: 2025-10-02 12:01:51.860838397 +0000 UTC m=+0.072388010 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:01:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:51.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:01:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:01:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:52.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:53 np0005465987 nova_compute[230713]: 2025-10-02 12:01:53.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:53.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:54.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:55.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:56.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:56 np0005465987 nova_compute[230713]: 2025-10-02 12:01:56.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:01:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:57.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:58 np0005465987 nova_compute[230713]: 2025-10-02 12:01:58.135 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406503.133971, cd9deba5-2505-4edd-ac7c-483615217473 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:01:58 np0005465987 nova_compute[230713]: 2025-10-02 12:01:58.136 2 INFO nova.compute.manager [-] [instance: cd9deba5-2505-4edd-ac7c-483615217473] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:01:58 np0005465987 nova_compute[230713]: 2025-10-02 12:01:58.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:01:58 np0005465987 nova_compute[230713]: 2025-10-02 12:01:58.165 2 DEBUG nova.compute.manager [None req-3f40d9bf-44d5-408a-aebb-8018c30f5b96 - - - - - -] [instance: cd9deba5-2505-4edd-ac7c-483615217473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:01:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:58.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:59 np0005465987 systemd[1]: Stopping User Manager for UID 42436...
Oct  2 08:01:59 np0005465987 systemd[237343]: Activating special unit Exit the Session...
Oct  2 08:01:59 np0005465987 systemd[237343]: Stopped target Main User Target.
Oct  2 08:01:59 np0005465987 systemd[237343]: Stopped target Basic System.
Oct  2 08:01:59 np0005465987 systemd[237343]: Stopped target Paths.
Oct  2 08:01:59 np0005465987 systemd[237343]: Stopped target Sockets.
Oct  2 08:01:59 np0005465987 systemd[237343]: Stopped target Timers.
Oct  2 08:01:59 np0005465987 systemd[237343]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:01:59 np0005465987 systemd[237343]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 08:01:59 np0005465987 systemd[237343]: Closed D-Bus User Message Bus Socket.
Oct  2 08:01:59 np0005465987 systemd[237343]: Stopped Create User's Volatile Files and Directories.
Oct  2 08:01:59 np0005465987 systemd[237343]: Removed slice User Application Slice.
Oct  2 08:01:59 np0005465987 systemd[237343]: Reached target Shutdown.
Oct  2 08:01:59 np0005465987 systemd[237343]: Finished Exit the Session.
Oct  2 08:01:59 np0005465987 systemd[237343]: Reached target Exit the Session.
Oct  2 08:01:59 np0005465987 systemd[1]: user@42436.service: Deactivated successfully.
Oct  2 08:01:59 np0005465987 systemd[1]: Stopped User Manager for UID 42436.
Oct  2 08:01:59 np0005465987 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct  2 08:01:59 np0005465987 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct  2 08:01:59 np0005465987 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct  2 08:01:59 np0005465987 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct  2 08:01:59 np0005465987 systemd[1]: Removed slice User Slice of UID 42436.
Oct  2 08:01:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:01:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:59.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:02:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:00.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:02:00 np0005465987 podman[237389]: 2025-10-02 12:02:00.877907021 +0000 UTC m=+0.075907525 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:02:00 np0005465987 podman[237387]: 2025-10-02 12:02:00.881704722 +0000 UTC m=+0.094163436 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:02:00 np0005465987 podman[237388]: 2025-10-02 12:02:00.883407319 +0000 UTC m=+0.081842195 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct  2 08:02:01 np0005465987 nova_compute[230713]: 2025-10-02 12:02:01.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:01.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:02:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:02.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:03 np0005465987 nova_compute[230713]: 2025-10-02 12:02:03.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:03 np0005465987 nova_compute[230713]: 2025-10-02 12:02:03.314 2 DEBUG oslo_concurrency.lockutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Acquiring lock "refresh_cache-b6a5f3ca-c662-41a0-ac02-78f9fba82bba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:02:03 np0005465987 nova_compute[230713]: 2025-10-02 12:02:03.314 2 DEBUG oslo_concurrency.lockutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Acquired lock "refresh_cache-b6a5f3ca-c662-41a0-ac02-78f9fba82bba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:02:03 np0005465987 nova_compute[230713]: 2025-10-02 12:02:03.314 2 DEBUG nova.network.neutron [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:02:03 np0005465987 nova_compute[230713]: 2025-10-02 12:02:03.855 2 DEBUG nova.network.neutron [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:02:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:03.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.342 2 DEBUG nova.network.neutron [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.380 2 DEBUG oslo_concurrency.lockutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Releasing lock "refresh_cache-b6a5f3ca-c662-41a0-ac02-78f9fba82bba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.488 2 DEBUG nova.virt.libvirt.driver [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.490 2 DEBUG nova.virt.libvirt.driver [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.490 2 INFO nova.virt.libvirt.driver [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Creating image(s)#033[00m
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.544 2 DEBUG nova.storage.rbd_utils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] creating snapshot(nova-resize) on rbd image(b6a5f3ca-c662-41a0-ac02-78f9fba82bba_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:02:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Oct  2 08:02:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:04.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.717 2 DEBUG nova.objects.instance [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b6a5f3ca-c662-41a0-ac02-78f9fba82bba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.881 2 DEBUG nova.virt.libvirt.driver [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.881 2 DEBUG nova.virt.libvirt.driver [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Ensure instance console log exists: /var/lib/nova/instances/b6a5f3ca-c662-41a0-ac02-78f9fba82bba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.882 2 DEBUG oslo_concurrency.lockutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.882 2 DEBUG oslo_concurrency.lockutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.882 2 DEBUG oslo_concurrency.lockutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.883 2 DEBUG nova.virt.libvirt.driver [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.887 2 WARNING nova.virt.libvirt.driver [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.892 2 DEBUG nova.virt.libvirt.host [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.892 2 DEBUG nova.virt.libvirt.host [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.895 2 DEBUG nova.virt.libvirt.host [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.895 2 DEBUG nova.virt.libvirt.host [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.896 2 DEBUG nova.virt.libvirt.driver [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.896 2 DEBUG nova.virt.hardware [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.897 2 DEBUG nova.virt.hardware [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.897 2 DEBUG nova.virt.hardware [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.897 2 DEBUG nova.virt.hardware [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.897 2 DEBUG nova.virt.hardware [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.898 2 DEBUG nova.virt.hardware [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.898 2 DEBUG nova.virt.hardware [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.898 2 DEBUG nova.virt.hardware [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.898 2 DEBUG nova.virt.hardware [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.899 2 DEBUG nova.virt.hardware [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.899 2 DEBUG nova.virt.hardware [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.899 2 DEBUG nova.objects.instance [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b6a5f3ca-c662-41a0-ac02-78f9fba82bba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.936 2 DEBUG oslo_concurrency.processutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:02:04 np0005465987 nova_compute[230713]: 2025-10-02 12:02:04.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:02:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:02:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1882819893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:02:05 np0005465987 nova_compute[230713]: 2025-10-02 12:02:05.382 2 DEBUG oslo_concurrency.processutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:02:05 np0005465987 nova_compute[230713]: 2025-10-02 12:02:05.415 2 DEBUG oslo_concurrency.processutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:02:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:02:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2563263679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:02:05 np0005465987 nova_compute[230713]: 2025-10-02 12:02:05.861 2 DEBUG oslo_concurrency.processutils [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:02:05 np0005465987 nova_compute[230713]: 2025-10-02 12:02:05.867 2 DEBUG nova.virt.libvirt.driver [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  <uuid>b6a5f3ca-c662-41a0-ac02-78f9fba82bba</uuid>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  <name>instance-0000000d</name>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <nova:name>tempest-MigrationsAdminTest-server-1708097480</nova:name>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:02:04</nova:creationTime>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <nova:user uuid="1a06819bf8cc4ff7bccbbb2616ff2d21">tempest-MigrationsAdminTest-819597356-project-member</nova:user>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <nova:project uuid="f1ce36070fb047479c3a083f36733f63">tempest-MigrationsAdminTest-819597356</nova:project>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <entry name="serial">b6a5f3ca-c662-41a0-ac02-78f9fba82bba</entry>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <entry name="uuid">b6a5f3ca-c662-41a0-ac02-78f9fba82bba</entry>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/b6a5f3ca-c662-41a0-ac02-78f9fba82bba_disk">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/b6a5f3ca-c662-41a0-ac02-78f9fba82bba_disk.config">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/b6a5f3ca-c662-41a0-ac02-78f9fba82bba/console.log" append="off"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:02:05 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:02:05 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:02:05 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:02:05 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 08:02:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:05.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:05 np0005465987 nova_compute[230713]: 2025-10-02 12:02:05.981 2 DEBUG nova.virt.libvirt.driver [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:02:05 np0005465987 nova_compute[230713]: 2025-10-02 12:02:05.981 2 DEBUG nova.virt.libvirt.driver [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:02:05 np0005465987 nova_compute[230713]: 2025-10-02 12:02:05.982 2 INFO nova.virt.libvirt.driver [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Using config drive
Oct  2 08:02:06 np0005465987 systemd-machined[188335]: New machine qemu-7-instance-0000000d.
Oct  2 08:02:06 np0005465987 systemd[1]: Started Virtual Machine qemu-7-instance-0000000d.
Oct  2 08:02:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:06.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:06 np0005465987 nova_compute[230713]: 2025-10-02 12:02:06.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:02:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:02:06 np0005465987 nova_compute[230713]: 2025-10-02 12:02:06.916 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406526.9155648, b6a5f3ca-c662-41a0-ac02-78f9fba82bba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:02:06 np0005465987 nova_compute[230713]: 2025-10-02 12:02:06.917 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] VM Resumed (Lifecycle Event)
Oct  2 08:02:06 np0005465987 nova_compute[230713]: 2025-10-02 12:02:06.922 2 DEBUG nova.compute.manager [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:02:06 np0005465987 nova_compute[230713]: 2025-10-02 12:02:06.927 2 INFO nova.virt.libvirt.driver [-] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Instance running successfully.
Oct  2 08:02:06 np0005465987 virtqemud[230442]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:02:06 np0005465987 nova_compute[230713]: 2025-10-02 12:02:06.931 2 DEBUG nova.virt.libvirt.guest [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct  2 08:02:06 np0005465987 nova_compute[230713]: 2025-10-02 12:02:06.932 2 DEBUG nova.virt.libvirt.driver [None req-efbbd1d4-f09b-489a-b37f-706f6984cc69 4b9a52c4eb834c90b054f480d8de237c 453cdd07db3b4f158d23674cf6281d92 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct  2 08:02:06 np0005465987 nova_compute[230713]: 2025-10-02 12:02:06.953 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:02:06 np0005465987 nova_compute[230713]: 2025-10-02 12:02:06.958 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:02:07 np0005465987 nova_compute[230713]: 2025-10-02 12:02:07.084 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct  2 08:02:07 np0005465987 nova_compute[230713]: 2025-10-02 12:02:07.085 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406526.9172618, b6a5f3ca-c662-41a0-ac02-78f9fba82bba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:02:07 np0005465987 nova_compute[230713]: 2025-10-02 12:02:07.085 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] VM Started (Lifecycle Event)
Oct  2 08:02:07 np0005465987 nova_compute[230713]: 2025-10-02 12:02:07.132 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:02:07 np0005465987 nova_compute[230713]: 2025-10-02 12:02:07.137 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:02:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:07.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:08 np0005465987 nova_compute[230713]: 2025-10-02 12:02:08.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:02:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:08.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:08 np0005465987 nova_compute[230713]: 2025-10-02 12:02:08.796 2 DEBUG oslo_concurrency.lockutils [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "refresh_cache-b6a5f3ca-c662-41a0-ac02-78f9fba82bba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:02:08 np0005465987 nova_compute[230713]: 2025-10-02 12:02:08.797 2 DEBUG oslo_concurrency.lockutils [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquired lock "refresh_cache-b6a5f3ca-c662-41a0-ac02-78f9fba82bba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:02:08 np0005465987 nova_compute[230713]: 2025-10-02 12:02:08.797 2 DEBUG nova.network.neutron [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:02:08 np0005465987 nova_compute[230713]: 2025-10-02 12:02:08.968 2 DEBUG nova.network.neutron [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:02:09 np0005465987 nova_compute[230713]: 2025-10-02 12:02:09.293 2 DEBUG nova.network.neutron [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:02:09 np0005465987 nova_compute[230713]: 2025-10-02 12:02:09.310 2 DEBUG oslo_concurrency.lockutils [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Releasing lock "refresh_cache-b6a5f3ca-c662-41a0-ac02-78f9fba82bba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:02:09 np0005465987 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct  2 08:02:09 np0005465987 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000d.scope: Consumed 3.308s CPU time.
Oct  2 08:02:09 np0005465987 systemd-machined[188335]: Machine qemu-7-instance-0000000d terminated.
Oct  2 08:02:09 np0005465987 nova_compute[230713]: 2025-10-02 12:02:09.547 2 INFO nova.virt.libvirt.driver [-] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Instance destroyed successfully.#033[00m
Oct  2 08:02:09 np0005465987 nova_compute[230713]: 2025-10-02 12:02:09.548 2 DEBUG nova.objects.instance [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'resources' on Instance uuid b6a5f3ca-c662-41a0-ac02-78f9fba82bba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:02:09 np0005465987 nova_compute[230713]: 2025-10-02 12:02:09.592 2 DEBUG oslo_concurrency.lockutils [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:09 np0005465987 nova_compute[230713]: 2025-10-02 12:02:09.592 2 DEBUG oslo_concurrency.lockutils [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:09 np0005465987 nova_compute[230713]: 2025-10-02 12:02:09.605 2 DEBUG nova.objects.instance [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'migration_context' on Instance uuid b6a5f3ca-c662-41a0-ac02-78f9fba82bba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:02:09 np0005465987 nova_compute[230713]: 2025-10-02 12:02:09.689 2 DEBUG oslo_concurrency.processutils [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:09.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:02:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/301315600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:02:10 np0005465987 nova_compute[230713]: 2025-10-02 12:02:10.095 2 DEBUG oslo_concurrency.processutils [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:10 np0005465987 nova_compute[230713]: 2025-10-02 12:02:10.101 2 DEBUG nova.compute.provider_tree [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:02:10 np0005465987 nova_compute[230713]: 2025-10-02 12:02:10.120 2 DEBUG nova.scheduler.client.report [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:02:10 np0005465987 nova_compute[230713]: 2025-10-02 12:02:10.169 2 DEBUG oslo_concurrency.lockutils [None req-a91bc5ff-6405-448e-86b6-50c6130b3dbf 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:10.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:11 np0005465987 nova_compute[230713]: 2025-10-02 12:02:11.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:11.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:02:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Oct  2 08:02:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:12.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:13 np0005465987 nova_compute[230713]: 2025-10-02 12:02:13.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:13.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:14.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:15.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:16.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:16 np0005465987 nova_compute[230713]: 2025-10-02 12:02:16.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:02:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:17.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:18 np0005465987 nova_compute[230713]: 2025-10-02 12:02:18.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:02:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:18.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:02:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:19.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:20.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:21 np0005465987 nova_compute[230713]: 2025-10-02 12:02:21.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:02:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:21.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:02:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:02:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:02:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:22.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:02:22 np0005465987 nova_compute[230713]: 2025-10-02 12:02:22.768 2 DEBUG oslo_concurrency.lockutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Acquiring lock "7e6da351-456f-4a2c-9712-605de99b1008" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:22 np0005465987 nova_compute[230713]: 2025-10-02 12:02:22.769 2 DEBUG oslo_concurrency.lockutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "7e6da351-456f-4a2c-9712-605de99b1008" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:22 np0005465987 nova_compute[230713]: 2025-10-02 12:02:22.784 2 DEBUG nova.compute.manager [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:02:22 np0005465987 nova_compute[230713]: 2025-10-02 12:02:22.876 2 DEBUG oslo_concurrency.lockutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:22 np0005465987 nova_compute[230713]: 2025-10-02 12:02:22.878 2 DEBUG oslo_concurrency.lockutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:22 np0005465987 nova_compute[230713]: 2025-10-02 12:02:22.885 2 DEBUG nova.virt.hardware [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:02:22 np0005465987 nova_compute[230713]: 2025-10-02 12:02:22.886 2 INFO nova.compute.claims [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:02:22 np0005465987 podman[237688]: 2025-10-02 12:02:22.896577575 +0000 UTC m=+0.101018371 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:02:22 np0005465987 nova_compute[230713]: 2025-10-02 12:02:22.980 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:22 np0005465987 nova_compute[230713]: 2025-10-02 12:02:22.981 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.000 2 DEBUG nova.compute.manager [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.032602) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406543032672, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2289, "num_deletes": 254, "total_data_size": 4948547, "memory_usage": 5026368, "flush_reason": "Manual Compaction"}
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.039 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406543060245, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 3248815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23115, "largest_seqno": 25398, "table_properties": {"data_size": 3239813, "index_size": 5496, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19953, "raw_average_key_size": 20, "raw_value_size": 3221253, "raw_average_value_size": 3334, "num_data_blocks": 242, "num_entries": 966, "num_filter_entries": 966, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406368, "oldest_key_time": 1759406368, "file_creation_time": 1759406543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 27701 microseconds, and 9056 cpu microseconds.
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.060309) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 3248815 bytes OK
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.060332) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.063135) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.063164) EVENT_LOG_v1 {"time_micros": 1759406543063156, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.063187) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 4938438, prev total WAL file size 4938438, number of live WAL files 2.
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.065021) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(3172KB)], [48(7639KB)]
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406543065065, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 11071906, "oldest_snapshot_seqno": -1}
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.086 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 4880 keys, 9040774 bytes, temperature: kUnknown
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406543144184, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 9040774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9007120, "index_size": 20360, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 123255, "raw_average_key_size": 25, "raw_value_size": 8917858, "raw_average_value_size": 1827, "num_data_blocks": 833, "num_entries": 4880, "num_filter_entries": 4880, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759406543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.144374) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9040774 bytes
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.147542) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.8 rd, 114.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.5 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 5405, records dropped: 525 output_compression: NoCompression
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.147599) EVENT_LOG_v1 {"time_micros": 1759406543147588, "job": 28, "event": "compaction_finished", "compaction_time_micros": 79176, "compaction_time_cpu_micros": 18425, "output_level": 6, "num_output_files": 1, "total_output_size": 9040774, "num_input_records": 5405, "num_output_records": 4880, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406543148440, "job": 28, "event": "table_file_deletion", "file_number": 50}
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406543149603, "job": 28, "event": "table_file_deletion", "file_number": 48}
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.064930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.149677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.149686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.149688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.149690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:02:23.149692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:02:23 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3779598959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.462 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.468 2 DEBUG nova.compute.provider_tree [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.501 2 DEBUG nova.scheduler.client.report [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.587 2 DEBUG oslo_concurrency.lockutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.588 2 DEBUG nova.compute.manager [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.593 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.601 2 DEBUG nova.virt.hardware [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.602 2 INFO nova.compute.claims [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.693 2 DEBUG nova.compute.manager [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.693 2 DEBUG nova.network.neutron [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.747 2 INFO nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.787 2 DEBUG oslo_concurrency.processutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:23 np0005465987 nova_compute[230713]: 2025-10-02 12:02:23.821 2 DEBUG nova.compute.manager [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:02:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:23.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.020 2 DEBUG nova.network.neutron [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.021 2 DEBUG nova.compute.manager [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.040 2 DEBUG nova.compute.manager [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.042 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.042 2 INFO nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Creating image(s)#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.080 2 DEBUG nova.storage.rbd_utils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] rbd image 7e6da351-456f-4a2c-9712-605de99b1008_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.120 2 DEBUG nova.storage.rbd_utils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] rbd image 7e6da351-456f-4a2c-9712-605de99b1008_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.161 2 DEBUG nova.storage.rbd_utils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] rbd image 7e6da351-456f-4a2c-9712-605de99b1008_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.166 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.255 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.257 2 DEBUG oslo_concurrency.lockutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.258 2 DEBUG oslo_concurrency.lockutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.259 2 DEBUG oslo_concurrency.lockutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:02:24 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/93389438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.308 2 DEBUG nova.storage.rbd_utils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] rbd image 7e6da351-456f-4a2c-9712-605de99b1008_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.312 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 7e6da351-456f-4a2c-9712-605de99b1008_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.332 2 DEBUG oslo_concurrency.processutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.339 2 DEBUG nova.compute.provider_tree [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.373 2 DEBUG nova.scheduler.client.report [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.429 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.430 2 DEBUG nova.compute.manager [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.545 2 DEBUG nova.compute.manager [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.545 2 DEBUG nova.network.neutron [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.551 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406529.5440974, b6a5f3ca-c662-41a0-ac02-78f9fba82bba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.551 2 INFO nova.compute.manager [-] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.583 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 7e6da351-456f-4a2c-9712-605de99b1008_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.617 2 DEBUG oslo_concurrency.lockutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.618 2 DEBUG oslo_concurrency.lockutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.618 2 DEBUG oslo_concurrency.lockutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.618 2 DEBUG oslo_concurrency.lockutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.618 2 DEBUG oslo_concurrency.lockutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.619 2 INFO nova.compute.manager [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Terminating instance#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.620 2 DEBUG oslo_concurrency.lockutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.620 2 DEBUG oslo_concurrency.lockutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquired lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.620 2 DEBUG nova.network.neutron [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.655 2 INFO nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.667 2 DEBUG nova.storage.rbd_utils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] resizing rbd image 7e6da351-456f-4a2c-9712-605de99b1008_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:02:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:02:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:24.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.722 2 DEBUG nova.compute.manager [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.783 2 DEBUG nova.compute.manager [None req-a1ae5029-264d-4eb3-8143-3e8a4e728e9e - - - - - -] [instance: b6a5f3ca-c662-41a0-ac02-78f9fba82bba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.789 2 DEBUG nova.objects.instance [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lazy-loading 'migration_context' on Instance uuid 7e6da351-456f-4a2c-9712-605de99b1008 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.800 2 INFO nova.virt.block_device [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Booting with volume b3df5dc9-9a56-4922-8b65-4162deb6be93 at /dev/vda#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.805 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.806 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Ensure instance console log exists: /var/lib/nova/instances/7e6da351-456f-4a2c-9712-605de99b1008/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.806 2 DEBUG oslo_concurrency.lockutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.806 2 DEBUG oslo_concurrency.lockutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.806 2 DEBUG oslo_concurrency.lockutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.807 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.811 2 WARNING nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.819 2 DEBUG nova.virt.libvirt.host [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.820 2 DEBUG nova.virt.libvirt.host [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.823 2 DEBUG nova.virt.libvirt.host [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.823 2 DEBUG nova.virt.libvirt.host [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.824 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.824 2 DEBUG nova.virt.hardware [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.825 2 DEBUG nova.virt.hardware [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.825 2 DEBUG nova.virt.hardware [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.825 2 DEBUG nova.virt.hardware [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.825 2 DEBUG nova.virt.hardware [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.825 2 DEBUG nova.virt.hardware [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.826 2 DEBUG nova.virt.hardware [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.826 2 DEBUG nova.virt.hardware [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.826 2 DEBUG nova.virt.hardware [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.826 2 DEBUG nova.virt.hardware [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.827 2 DEBUG nova.virt.hardware [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.829 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.847 2 DEBUG nova.network.neutron [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:02:24 np0005465987 nova_compute[230713]: 2025-10-02 12:02:24.962 2 DEBUG nova.policy [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'efb31eeadee34403b1ab7a584f3616f7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3f2b3ac7d7504c9c96f0d4a67e0243c9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.029 2 DEBUG os_brick.utils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.030 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.046 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.047 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec0845f-be4c-4056-b114-1e583e57f89b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.048 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.061 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.062 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[720d53b1-c2d0-48e2-bfa8-603b52619ee3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.063 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.072 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.072 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[f5761428-da77-4ff1-9ee0-4d1ef9ea0c1b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.074 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[fb991538-3fb4-441b-913a-37fe73367c73]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.074 2 DEBUG oslo_concurrency.processutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.094 2 DEBUG oslo_concurrency.processutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.097 2 DEBUG os_brick.initiator.connectors.lightos [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.097 2 DEBUG os_brick.initiator.connectors.lightos [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.097 2 DEBUG os_brick.initiator.connectors.lightos [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.098 2 DEBUG os_brick.utils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.098 2 DEBUG nova.virt.block_device [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updating existing volume attachment record: 5c71f9a3-182a-4066-9e3d-88ac9417292f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.160 2 DEBUG nova.network.neutron [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.185 2 DEBUG oslo_concurrency.lockutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Releasing lock "refresh_cache-c5bc8428-581c-4ea6-85d8-6f153a1ee723" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.186 2 DEBUG nova.compute.manager [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:02:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:02:25 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3379174599' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.243 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.269 2 DEBUG nova.storage.rbd_utils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] rbd image 7e6da351-456f-4a2c-9712-605de99b1008_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.273 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:25 np0005465987 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct  2 08:02:25 np0005465987 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Consumed 18.426s CPU time.
Oct  2 08:02:25 np0005465987 systemd-machined[188335]: Machine qemu-4-instance-00000008 terminated.
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.605 2 INFO nova.virt.libvirt.driver [-] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance destroyed successfully.#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.606 2 DEBUG nova.objects.instance [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lazy-loading 'resources' on Instance uuid c5bc8428-581c-4ea6-85d8-6f153a1ee723 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.617 2 DEBUG nova.network.neutron [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Successfully created port: 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:02:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:02:25 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3502400963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.748 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.749 2 DEBUG nova.objects.instance [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e6da351-456f-4a2c-9712-605de99b1008 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.771 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  <uuid>7e6da351-456f-4a2c-9712-605de99b1008</uuid>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  <name>instance-0000000f</name>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <nova:name>tempest-LiveMigrationNegativeTest-server-785287059</nova:name>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:02:24</nova:creationTime>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <nova:user uuid="bea607da06554a67af45c6df851f7c86">tempest-LiveMigrationNegativeTest-766829789-project-member</nova:user>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <nova:project uuid="1b8418ff78264e3292f5cd5b736866f0">tempest-LiveMigrationNegativeTest-766829789</nova:project>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <entry name="serial">7e6da351-456f-4a2c-9712-605de99b1008</entry>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <entry name="uuid">7e6da351-456f-4a2c-9712-605de99b1008</entry>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/7e6da351-456f-4a2c-9712-605de99b1008_disk">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/7e6da351-456f-4a2c-9712-605de99b1008_disk.config">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/7e6da351-456f-4a2c-9712-605de99b1008/console.log" append="off"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:02:25 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:02:25 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:02:25 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:02:25 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.860 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.860 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.861 2 INFO nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Using config drive#033[00m
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.884 2 DEBUG nova.storage.rbd_utils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] rbd image 7e6da351-456f-4a2c-9712-605de99b1008_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:02:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:25.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:25 np0005465987 nova_compute[230713]: 2025-10-02 12:02:25.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.054 2 INFO nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Creating config drive at /var/lib/nova/instances/7e6da351-456f-4a2c-9712-605de99b1008/disk.config#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.059 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e6da351-456f-4a2c-9712-605de99b1008/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp86bw5nli execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.192 2 DEBUG nova.compute.manager [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.194 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.194 2 INFO nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Creating image(s)#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.195 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.195 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Ensure instance console log exists: /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.195 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.196 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.196 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.203 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e6da351-456f-4a2c-9712-605de99b1008/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp86bw5nli" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.236 2 DEBUG nova.storage.rbd_utils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] rbd image 7e6da351-456f-4a2c-9712-605de99b1008_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.240 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e6da351-456f-4a2c-9712-605de99b1008/disk.config 7e6da351-456f-4a2c-9712-605de99b1008_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.264 2 INFO nova.virt.libvirt.driver [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Deleting instance files /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723_del#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.265 2 INFO nova.virt.libvirt.driver [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Deletion of /var/lib/nova/instances/c5bc8428-581c-4ea6-85d8-6f153a1ee723_del complete#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.352 2 INFO nova.compute.manager [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Took 1.17 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.353 2 DEBUG oslo.service.loopingcall [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.353 2 DEBUG nova.compute.manager [-] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.353 2 DEBUG nova.network.neutron [-] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.391 2 DEBUG nova.network.neutron [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Successfully updated port: 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.458 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Acquiring lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.459 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Acquired lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.459 2 DEBUG nova.network.neutron [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.464 2 DEBUG oslo_concurrency.processutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e6da351-456f-4a2c-9712-605de99b1008/disk.config 7e6da351-456f-4a2c-9712-605de99b1008_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.464 2 INFO nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Deleting local config drive /var/lib/nova/instances/7e6da351-456f-4a2c-9712-605de99b1008/disk.config because it was imported into RBD.#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.492 2 DEBUG nova.network.neutron [-] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:02:26 np0005465987 systemd-machined[188335]: New machine qemu-8-instance-0000000f.
Oct  2 08:02:26 np0005465987 systemd[1]: Started Virtual Machine qemu-8-instance-0000000f.
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.527 2 DEBUG nova.compute.manager [req-0a57de23-98ca-4291-be58-51745c90317e req-9f6769e0-1449-4a88-a001-8f0bf62e72ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-changed-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.529 2 DEBUG nova.compute.manager [req-0a57de23-98ca-4291-be58-51745c90317e req-9f6769e0-1449-4a88-a001-8f0bf62e72ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Refreshing instance network info cache due to event network-changed-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.529 2 DEBUG oslo_concurrency.lockutils [req-0a57de23-98ca-4291-be58-51745c90317e req-9f6769e0-1449-4a88-a001-8f0bf62e72ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.531 2 DEBUG nova.network.neutron [-] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.550 2 INFO nova.compute.manager [-] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Took 0.20 seconds to deallocate network for instance.#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.590 2 DEBUG nova.network.neutron [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:02:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:26.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.700 2 DEBUG oslo_concurrency.lockutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.701 2 DEBUG oslo_concurrency.lockutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.770 2 DEBUG oslo_concurrency.processutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:02:26 np0005465987 nova_compute[230713]: 2025-10-02 12:02:26.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:02:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1048423683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.260 2 DEBUG oslo_concurrency.processutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.268 2 DEBUG nova.compute.provider_tree [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.291 2 DEBUG nova.scheduler.client.report [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.320 2 DEBUG oslo_concurrency.lockutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.385 2 INFO nova.scheduler.client.report [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Deleted allocations for instance c5bc8428-581c-4ea6-85d8-6f153a1ee723#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.465 2 DEBUG oslo_concurrency.lockutils [None req-3bd1b094-5ca2-4aaf-801f-8555c50c9482 1a06819bf8cc4ff7bccbbb2616ff2d21 f1ce36070fb047479c3a083f36733f63 - - default default] Lock "c5bc8428-581c-4ea6-85d8-6f153a1ee723" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.501 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406547.5010557, 7e6da351-456f-4a2c-9712-605de99b1008 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.501 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.503 2 DEBUG nova.compute.manager [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.503 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.506 2 INFO nova.virt.libvirt.driver [-] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Instance spawned successfully.#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.506 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.533 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.537 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.538 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.538 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.539 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.539 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.540 2 DEBUG nova.virt.libvirt.driver [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.544 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.580 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.580 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406547.5019586, 7e6da351-456f-4a2c-9712-605de99b1008 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.581 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] VM Started (Lifecycle Event)#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.618 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.623 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.634 2 INFO nova.compute.manager [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Took 3.59 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.635 2 DEBUG nova.compute.manager [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.647 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.696 2 INFO nova.compute.manager [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Took 4.85 seconds to build instance.#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.712 2 DEBUG oslo_concurrency.lockutils [None req-7c6d70d2-34ff-4fba-99b8-067afa39b583 bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "7e6da351-456f-4a2c-9712-605de99b1008" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:02:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:02:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:27.928 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:27.928 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:27.929 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:02:27 np0005465987 nova_compute[230713]: 2025-10-02 12:02:27.985 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.425 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-7e6da351-456f-4a2c-9712-605de99b1008" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.426 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-7e6da351-456f-4a2c-9712-605de99b1008" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.426 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.426 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7e6da351-456f-4a2c-9712-605de99b1008 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.471 2 DEBUG nova.network.neutron [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updating instance_info_cache with network_info: [{"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.492 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Releasing lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.493 2 DEBUG nova.compute.manager [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Instance network_info: |[{"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.493 2 DEBUG oslo_concurrency.lockutils [req-0a57de23-98ca-4291-be58-51745c90317e req-9f6769e0-1449-4a88-a001-8f0bf62e72ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.494 2 DEBUG nova.network.neutron [req-0a57de23-98ca-4291-be58-51745c90317e req-9f6769e0-1449-4a88-a001-8f0bf62e72ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Refreshing network info cache for port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.499 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Start _get_guest_xml network_info=[{"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 
'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '5c71f9a3-182a-4066-9e3d-88ac9417292f', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b3df5dc9-9a56-4922-8b65-4162deb6be93', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b3df5dc9-9a56-4922-8b65-4162deb6be93', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '6febae1b-d70f-43e6-8aba-1e913541fec2', 'attached_at': '', 'detached_at': '', 'volume_id': 'b3df5dc9-9a56-4922-8b65-4162deb6be93', 'serial': 'b3df5dc9-9a56-4922-8b65-4162deb6be93'}, 'guest_format': None, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.503 2 WARNING nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.514 2 DEBUG nova.virt.libvirt.host [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.515 2 DEBUG nova.virt.libvirt.host [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.530 2 DEBUG nova.virt.libvirt.host [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.531 2 DEBUG nova.virt.libvirt.host [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.532 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.532 2 DEBUG nova.virt.hardware [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.532 2 DEBUG nova.virt.hardware [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.533 2 DEBUG nova.virt.hardware [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.533 2 DEBUG nova.virt.hardware [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.533 2 DEBUG nova.virt.hardware [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.533 2 DEBUG nova.virt.hardware [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.533 2 DEBUG nova.virt.hardware [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.534 2 DEBUG nova.virt.hardware [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.534 2 DEBUG nova.virt.hardware [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.534 2 DEBUG nova.virt.hardware [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.534 2 DEBUG nova.virt.hardware [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.563 2 DEBUG nova.storage.rbd_utils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] rbd image 6febae1b-d70f-43e6-8aba-1e913541fec2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.567 2 DEBUG oslo_concurrency.processutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.628 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:02:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:02:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:28.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:02:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:02:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/127373874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:02:28 np0005465987 nova_compute[230713]: 2025-10-02 12:02:28.998 2 DEBUG oslo_concurrency.processutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.057 2 DEBUG nova.virt.libvirt.vif [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:02:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-304906255',display_name='tempest-LiveMigrationTest-server-304906255',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-304906255',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f2b3ac7d7504c9c96f0d4a67e0243c9',ramdisk_id='',reservation_id='r-utis6e3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1876533760',owner_user_name='tempest-LiveMigrationTest-1876533760-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-0
2T12:02:24Z,user_data=None,user_id='efb31eeadee34403b1ab7a584f3616f7',uuid=6febae1b-d70f-43e6-8aba-1e913541fec2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.058 2 DEBUG nova.network.os_vif_util [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Converting VIF {"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.059 2 DEBUG nova.network.os_vif_util [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.060 2 DEBUG nova.objects.instance [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6febae1b-d70f-43e6-8aba-1e913541fec2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.075 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  <uuid>6febae1b-d70f-43e6-8aba-1e913541fec2</uuid>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  <name>instance-00000010</name>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <nova:name>tempest-LiveMigrationTest-server-304906255</nova:name>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:02:28</nova:creationTime>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <nova:user uuid="efb31eeadee34403b1ab7a584f3616f7">tempest-LiveMigrationTest-1876533760-project-member</nova:user>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <nova:project uuid="3f2b3ac7d7504c9c96f0d4a67e0243c9">tempest-LiveMigrationTest-1876533760</nova:project>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <nova:port uuid="4a9c114c-7ca6-4f80-ab5d-d38765e4c15c">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <entry name="serial">6febae1b-d70f-43e6-8aba-1e913541fec2</entry>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <entry name="uuid">6febae1b-d70f-43e6-8aba-1e913541fec2</entry>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6febae1b-d70f-43e6-8aba-1e913541fec2_disk.config">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="volumes/volume-b3df5dc9-9a56-4922-8b65-4162deb6be93">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <serial>b3df5dc9-9a56-4922-8b65-4162deb6be93</serial>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:87:34:a2"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <target dev="tap4a9c114c-7c"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2/console.log" append="off"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:02:29 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:02:29 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:02:29 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:02:29 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.077 2 DEBUG nova.compute.manager [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Preparing to wait for external event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.077 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.077 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.077 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.078 2 DEBUG nova.virt.libvirt.vif [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:02:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-304906255',display_name='tempest-LiveMigrationTest-server-304906255',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-304906255',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f2b3ac7d7504c9c96f0d4a67e0243c9',ramdisk_id='',reservation_id='r-utis6e3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1876533760',owner_user_name='tempest-LiveMigrationTest-1876533760-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at
=2025-10-02T12:02:24Z,user_data=None,user_id='efb31eeadee34403b1ab7a584f3616f7',uuid=6febae1b-d70f-43e6-8aba-1e913541fec2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.078 2 DEBUG nova.network.os_vif_util [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Converting VIF {"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.079 2 DEBUG nova.network.os_vif_util [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.079 2 DEBUG os_vif [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.080 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.080 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.085 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a9c114c-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.085 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a9c114c-7c, col_values=(('external_ids', {'iface-id': '4a9c114c-7ca6-4f80-ab5d-d38765e4c15c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:34:a2', 'vm-uuid': '6febae1b-d70f-43e6-8aba-1e913541fec2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:29 np0005465987 NetworkManager[44910]: <info>  [1759406549.0876] manager: (tap4a9c114c-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.095 2 INFO os_vif [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c')#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.157 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.157 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.157 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] No VIF found with MAC fa:16:3e:87:34:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.158 2 INFO nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Using config drive#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.185 2 DEBUG nova.storage.rbd_utils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] rbd image 6febae1b-d70f-43e6-8aba-1e913541fec2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.325 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.342 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-7e6da351-456f-4a2c-9712-605de99b1008" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.342 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.342 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.343 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.579 2 INFO nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Creating config drive at /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2/disk.config#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.584 2 DEBUG oslo_concurrency.processutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuucem7f6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.720 2 DEBUG oslo_concurrency.processutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuucem7f6" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.751 2 DEBUG nova.storage.rbd_utils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] rbd image 6febae1b-d70f-43e6-8aba-1e913541fec2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.755 2 DEBUG oslo_concurrency.processutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2/disk.config 6febae1b-d70f-43e6-8aba-1e913541fec2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:29.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:29 np0005465987 nova_compute[230713]: 2025-10-02 12:02:29.998 2 DEBUG oslo_concurrency.processutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2/disk.config 6febae1b-d70f-43e6-8aba-1e913541fec2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.000 2 INFO nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Deleting local config drive /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2/disk.config because it was imported into RBD.#033[00m
Oct  2 08:02:30 np0005465987 NetworkManager[44910]: <info>  [1759406550.0406] manager: (tap4a9c114c-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Oct  2 08:02:30 np0005465987 kernel: tap4a9c114c-7c: entered promiscuous mode
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:02:30Z|00080|binding|INFO|Claiming lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for this chassis.
Oct  2 08:02:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:02:30Z|00081|binding|INFO|4a9c114c-7ca6-4f80-ab5d-d38765e4c15c: Claiming fa:16:3e:87:34:a2 10.100.0.11
Oct  2 08:02:30 np0005465987 systemd-machined[188335]: New machine qemu-9-instance-00000010.
Oct  2 08:02:30 np0005465987 systemd[1]: Started Virtual Machine qemu-9-instance-00000010.
Oct  2 08:02:30 np0005465987 systemd-udevd[238260]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:02:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:02:30Z|00082|binding|INFO|Setting lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c ovn-installed in OVS
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:30 np0005465987 NetworkManager[44910]: <info>  [1759406550.1253] device (tap4a9c114c-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:02:30 np0005465987 NetworkManager[44910]: <info>  [1759406550.1263] device (tap4a9c114c-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:02:30Z|00083|binding|INFO|Setting lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c up in Southbound
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.152 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:34:a2 10.100.0.11'], port_security=['fa:16:3e:87:34:a2 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6febae1b-d70f-43e6-8aba-1e913541fec2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f2b3ac7d7504c9c96f0d4a67e0243c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c39f970f-1ead-4030-a775-b7ca9942094a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58eebdcc-6c12-4ff3-b6bc-0fe1fb3af6b6, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.153 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c in datapath 5bd66e63-9399-4ab1-bcda-a761f2c44b1d bound to our chassis#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.154 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bd66e63-9399-4ab1-bcda-a761f2c44b1d#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.168 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[91c49f90-8310-477c-acd8-e0808534d9ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.169 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5bd66e63-91 in ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.171 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5bd66e63-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.171 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e48bcd95-9cbc-4eba-bf10-1b549ed576bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.172 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ee1a2a18-224c-4a7e-ac90-e2e1044c0e88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.182 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[87c9953b-345a-4a3e-97b7-8e0684d8e777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.198 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f66612c6-c114-4a55-a6df-0dc4dfac596b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.228 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc6d51a-00a6-447d-a8b4-e77ace035c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 NetworkManager[44910]: <info>  [1759406550.2345] manager: (tap5bd66e63-90): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.235 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ec9a39-f751-44c3-87f3-a9554f3bbb0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.267 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[132ae6bf-647e-41e3-8bc2-79c8fb8d545b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.269 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[56307653-1563-4cd4-ba84-d513699f8388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 NetworkManager[44910]: <info>  [1759406550.3013] device (tap5bd66e63-90): carrier: link connected
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.308 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9e3d92-1009-49a3-9284-dcd54c933152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.335 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.339 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[adb0dcdf-7e2f-4db6-bc0e-c17408b3e865]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bd66e63-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:10:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461887, 'reachable_time': 34910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238307, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.363 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4e33334b-3317-443d-bc47-a9f995c45dfa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:100f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461887, 'tstamp': 461887}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238311, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.385 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[507c1d4e-4d55-456e-a0aa-07f13c85f2dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bd66e63-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:10:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461887, 'reachable_time': 34910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238319, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.405 2 DEBUG nova.network.neutron [req-0a57de23-98ca-4291-be58-51745c90317e req-9f6769e0-1449-4a88-a001-8f0bf62e72ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updated VIF entry in instance network info cache for port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.406 2 DEBUG nova.network.neutron [req-0a57de23-98ca-4291-be58-51745c90317e req-9f6769e0-1449-4a88-a001-8f0bf62e72ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updating instance_info_cache with network_info: [{"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.419 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d30fe458-6f15-4080-9d95-257cd9d2ec89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.449 2 DEBUG oslo_concurrency.lockutils [req-0a57de23-98ca-4291-be58-51745c90317e req-9f6769e0-1449-4a88-a001-8f0bf62e72ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.514 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e8b80b-43b0-4f83-a544-718f8e978621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.518 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bd66e63-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.518 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.520 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bd66e63-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:02:30 np0005465987 kernel: tap5bd66e63-90: entered promiscuous mode
Oct  2 08:02:30 np0005465987 NetworkManager[44910]: <info>  [1759406550.5269] manager: (tap5bd66e63-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.535 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bd66e63-90, col_values=(('external_ids', {'iface-id': '5a25c40a-77b7-400c-afc3-f6cb920420cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:02:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:02:30Z|00084|binding|INFO|Releasing lport 5a25c40a-77b7-400c-afc3-f6cb920420cb from this chassis (sb_readonly=0)
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.542 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5bd66e63-9399-4ab1-bcda-a761f2c44b1d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5bd66e63-9399-4ab1-bcda-a761f2c44b1d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.543 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1e194f-6f2f-4885-a821-d5bf00027733]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.544 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-5bd66e63-9399-4ab1-bcda-a761f2c44b1d
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/5bd66e63-9399-4ab1-bcda-a761f2c44b1d.pid.haproxy
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 5bd66e63-9399-4ab1-bcda-a761f2c44b1d
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.545 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'env', 'PROCESS_TAG=haproxy-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5bd66e63-9399-4ab1-bcda-a761f2c44b1d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:30.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:30.722 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.780 2 DEBUG nova.compute.manager [req-4dd68bb8-c4b5-4276-b1b1-2f18907a498e req-c1ef325a-fea3-4f58-b2b6-234b67181c2d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.781 2 DEBUG oslo_concurrency.lockutils [req-4dd68bb8-c4b5-4276-b1b1-2f18907a498e req-c1ef325a-fea3-4f58-b2b6-234b67181c2d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.782 2 DEBUG oslo_concurrency.lockutils [req-4dd68bb8-c4b5-4276-b1b1-2f18907a498e req-c1ef325a-fea3-4f58-b2b6-234b67181c2d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.783 2 DEBUG oslo_concurrency.lockutils [req-4dd68bb8-c4b5-4276-b1b1-2f18907a498e req-c1ef325a-fea3-4f58-b2b6-234b67181c2d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:30 np0005465987 nova_compute[230713]: 2025-10-02 12:02:30.783 2 DEBUG nova.compute.manager [req-4dd68bb8-c4b5-4276-b1b1-2f18907a498e req-c1ef325a-fea3-4f58-b2b6-234b67181c2d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Processing event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:02:30 np0005465987 podman[238368]: 2025-10-02 12:02:30.961675114 +0000 UTC m=+0.068701032 container create f536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:02:31 np0005465987 systemd[1]: Started libpod-conmon-f536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a.scope.
Oct  2 08:02:31 np0005465987 podman[238368]: 2025-10-02 12:02:30.918802319 +0000 UTC m=+0.025828247 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:02:31 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:02:31 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6cebea872b3ee88b434d834d3da87d961fadbbe95394b57ac904984c7c08307/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:02:31 np0005465987 podman[238368]: 2025-10-02 12:02:31.049887489 +0000 UTC m=+0.156913437 container init f536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:02:31 np0005465987 podman[238368]: 2025-10-02 12:02:31.05588823 +0000 UTC m=+0.162914138 container start f536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:02:31 np0005465987 podman[238385]: 2025-10-02 12:02:31.071927762 +0000 UTC m=+0.069127352 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:02:31 np0005465987 podman[238384]: 2025-10-02 12:02:31.074989825 +0000 UTC m=+0.074769565 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 08:02:31 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[238401]: [NOTICE]   (238440) : New worker (238448) forked
Oct  2 08:02:31 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[238401]: [NOTICE]   (238440) : Loading success.
Oct  2 08:02:31 np0005465987 podman[238381]: 2025-10-02 12:02:31.097248964 +0000 UTC m=+0.098715209 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:02:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:31.121 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.133 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406551.1332624, 6febae1b-d70f-43e6-8aba-1e913541fec2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.134 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] VM Started (Lifecycle Event)#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.135 2 DEBUG nova.compute.manager [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.138 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.140 2 INFO nova.virt.libvirt.driver [-] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Instance spawned successfully.#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.141 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.189 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.194 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.199 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.200 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.200 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.200 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.201 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.201 2 DEBUG nova.virt.libvirt.driver [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.239 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.240 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406551.1334145, 6febae1b-d70f-43e6-8aba-1e913541fec2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.240 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.310 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.313 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406551.1375253, 6febae1b-d70f-43e6-8aba-1e913541fec2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.314 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.332 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.336 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.340 2 INFO nova.compute.manager [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Took 5.15 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.340 2 DEBUG nova.compute.manager [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.383 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.428 2 INFO nova.compute.manager [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Took 8.36 seconds to build instance.#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.454 2 DEBUG oslo_concurrency.lockutils [None req-69c6eb91-efc2-4b49-9e3a-e821e9e3861c efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:02:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:31.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:31 np0005465987 nova_compute[230713]: 2025-10-02 12:02:31.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.004 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.004 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.005 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.005 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.005 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:02:32 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1488732752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.460 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.540 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.541 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.544 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.545 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.687 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.689 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4676MB free_disk=20.846904754638672GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.689 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.689 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:02:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:32.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.776 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 7e6da351-456f-4a2c-9712-605de99b1008 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.777 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 6febae1b-d70f-43e6-8aba-1e913541fec2 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.777 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.777 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.850 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.949 2 DEBUG nova.compute.manager [req-d0ef833d-ec5a-4935-bd73-8178c1b9a3e0 req-f7c25594-9d5a-41cf-8e37-79efc41b49e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.950 2 DEBUG oslo_concurrency.lockutils [req-d0ef833d-ec5a-4935-bd73-8178c1b9a3e0 req-f7c25594-9d5a-41cf-8e37-79efc41b49e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.950 2 DEBUG oslo_concurrency.lockutils [req-d0ef833d-ec5a-4935-bd73-8178c1b9a3e0 req-f7c25594-9d5a-41cf-8e37-79efc41b49e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.950 2 DEBUG oslo_concurrency.lockutils [req-d0ef833d-ec5a-4935-bd73-8178c1b9a3e0 req-f7c25594-9d5a-41cf-8e37-79efc41b49e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.950 2 DEBUG nova.compute.manager [req-d0ef833d-ec5a-4935-bd73-8178c1b9a3e0 req-f7c25594-9d5a-41cf-8e37-79efc41b49e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:02:32 np0005465987 nova_compute[230713]: 2025-10-02 12:02:32.951 2 WARNING nova.compute.manager [req-d0ef833d-ec5a-4935-bd73-8178c1b9a3e0 req-f7c25594-9d5a-41cf-8e37-79efc41b49e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received unexpected event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with vm_state active and task_state None.#033[00m
Oct  2 08:02:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:02:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3368811766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:02:33 np0005465987 nova_compute[230713]: 2025-10-02 12:02:33.251 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:33 np0005465987 nova_compute[230713]: 2025-10-02 12:02:33.257 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:02:33 np0005465987 nova_compute[230713]: 2025-10-02 12:02:33.286 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:02:33 np0005465987 nova_compute[230713]: 2025-10-02 12:02:33.357 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:02:33 np0005465987 nova_compute[230713]: 2025-10-02 12:02:33.358 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:33.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:34 np0005465987 nova_compute[230713]: 2025-10-02 12:02:34.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:34.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:35 np0005465987 nova_compute[230713]: 2025-10-02 12:02:35.359 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:02:35 np0005465987 nova_compute[230713]: 2025-10-02 12:02:35.359 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:02:35 np0005465987 nova_compute[230713]: 2025-10-02 12:02:35.359 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:02:35 np0005465987 nova_compute[230713]: 2025-10-02 12:02:35.360 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:02:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:35.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:02:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:36.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:02:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:02:37 np0005465987 nova_compute[230713]: 2025-10-02 12:02:37.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:37 np0005465987 nova_compute[230713]: 2025-10-02 12:02:37.007 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Check if temp file /var/lib/nova/instances/tmp69s6w50k exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Oct  2 08:02:37 np0005465987 nova_compute[230713]: 2025-10-02 12:02:37.008 2 DEBUG nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp69s6w50k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6febae1b-d70f-43e6-8aba-1e913541fec2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Oct  2 08:02:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:37.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:02:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:02:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:02:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:02:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:02:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:02:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:02:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:38.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:39 np0005465987 nova_compute[230713]: 2025-10-02 12:02:39.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:39.123 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:02:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:39.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:40 np0005465987 nova_compute[230713]: 2025-10-02 12:02:40.605 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406545.603104, c5bc8428-581c-4ea6-85d8-6f153a1ee723 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:02:40 np0005465987 nova_compute[230713]: 2025-10-02 12:02:40.605 2 INFO nova.compute.manager [-] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:02:40 np0005465987 nova_compute[230713]: 2025-10-02 12:02:40.624 2 DEBUG nova.compute.manager [None req-bc3e0532-56b2-403d-bc5f-a3df5181fab5 - - - - - -] [instance: c5bc8428-581c-4ea6-85d8-6f153a1ee723] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:02:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:40.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:02:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:02:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:41.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:02:42 np0005465987 nova_compute[230713]: 2025-10-02 12:02:42.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:42.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:43 np0005465987 nova_compute[230713]: 2025-10-02 12:02:43.078 2 DEBUG nova.compute.manager [req-dfd8f302-7e58-41ae-b7ad-d85581116273 req-8b4ca192-6ae4-4584-9dcc-6d2d37884b1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:02:43 np0005465987 nova_compute[230713]: 2025-10-02 12:02:43.078 2 DEBUG oslo_concurrency.lockutils [req-dfd8f302-7e58-41ae-b7ad-d85581116273 req-8b4ca192-6ae4-4584-9dcc-6d2d37884b1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:43 np0005465987 nova_compute[230713]: 2025-10-02 12:02:43.079 2 DEBUG oslo_concurrency.lockutils [req-dfd8f302-7e58-41ae-b7ad-d85581116273 req-8b4ca192-6ae4-4584-9dcc-6d2d37884b1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:43 np0005465987 nova_compute[230713]: 2025-10-02 12:02:43.079 2 DEBUG oslo_concurrency.lockutils [req-dfd8f302-7e58-41ae-b7ad-d85581116273 req-8b4ca192-6ae4-4584-9dcc-6d2d37884b1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:43 np0005465987 nova_compute[230713]: 2025-10-02 12:02:43.079 2 DEBUG nova.compute.manager [req-dfd8f302-7e58-41ae-b7ad-d85581116273 req-8b4ca192-6ae4-4584-9dcc-6d2d37884b1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:02:43 np0005465987 nova_compute[230713]: 2025-10-02 12:02:43.079 2 DEBUG nova.compute.manager [req-dfd8f302-7e58-41ae-b7ad-d85581116273 req-8b4ca192-6ae4-4584-9dcc-6d2d37884b1a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:02:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:43.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:44 np0005465987 nova_compute[230713]: 2025-10-02 12:02:44.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:02:44Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:34:a2 10.100.0.11
Oct  2 08:02:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:02:44Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:34:a2 10.100.0.11
Oct  2 08:02:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:44.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.026 2 INFO nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Took 6.28 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.027 2 DEBUG nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.060 2 DEBUG nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp69s6w50k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6febae1b-d70f-43e6-8aba-1e913541fec2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(b057e5f2-6ec0-4868-84e3-0821ef86c9af),old_vol_attachment_ids={b3df5dc9-9a56-4922-8b65-4162deb6be93='5c71f9a3-182a-4066-9e3d-88ac9417292f'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.064 2 DEBUG nova.objects.instance [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lazy-loading 'migration_context' on Instance uuid 6febae1b-d70f-43e6-8aba-1e913541fec2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.066 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.068 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.069 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.092 2 DEBUG nova.virt.libvirt.migration [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Find same serial number: pos=1, serial=b3df5dc9-9a56-4922-8b65-4162deb6be93 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.095 2 DEBUG nova.virt.libvirt.vif [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:02:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-304906255',display_name='tempest-LiveMigrationTest-server-304906255',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-304906255',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:02:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3f2b3ac7d7504c9c96f0d4a67e0243c9',ramdisk_id='',reservation_id='r-utis6e3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1876533760',owner_use
r_name='tempest-LiveMigrationTest-1876533760-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:02:31Z,user_data=None,user_id='efb31eeadee34403b1ab7a584f3616f7',uuid=6febae1b-d70f-43e6-8aba-1e913541fec2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.095 2 DEBUG nova.network.os_vif_util [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converting VIF {"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.097 2 DEBUG nova.network.os_vif_util [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.097 2 DEBUG nova.virt.libvirt.migration [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updating guest XML with vif config: <interface type="ethernet">
Oct  2 08:02:45 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:87:34:a2"/>
Oct  2 08:02:45 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:02:45 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:02:45 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:02:45 np0005465987 nova_compute[230713]:  <target dev="tap4a9c114c-7c"/>
Oct  2 08:02:45 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:02:45 np0005465987 nova_compute[230713]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.099 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.220 2 DEBUG nova.compute.manager [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.221 2 DEBUG oslo_concurrency.lockutils [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.221 2 DEBUG oslo_concurrency.lockutils [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.221 2 DEBUG oslo_concurrency.lockutils [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.221 2 DEBUG nova.compute.manager [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.222 2 WARNING nova.compute.manager [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received unexpected event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.222 2 DEBUG nova.compute.manager [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-changed-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.222 2 DEBUG nova.compute.manager [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Refreshing instance network info cache due to event network-changed-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.222 2 DEBUG oslo_concurrency.lockutils [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.222 2 DEBUG oslo_concurrency.lockutils [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.223 2 DEBUG nova.network.neutron [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Refreshing network info cache for port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:02:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:02:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.572 2 DEBUG nova.virt.libvirt.migration [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.572 2 INFO nova.virt.libvirt.migration [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Oct  2 08:02:45 np0005465987 nova_compute[230713]: 2025-10-02 12:02:45.733 2 INFO nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Oct  2 08:02:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:45.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:46 np0005465987 nova_compute[230713]: 2025-10-02 12:02:46.236 2 DEBUG nova.virt.libvirt.migration [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:02:46 np0005465987 nova_compute[230713]: 2025-10-02 12:02:46.236 2 DEBUG nova.virt.libvirt.migration [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 08:02:46 np0005465987 nova_compute[230713]: 2025-10-02 12:02:46.530 2 DEBUG nova.network.neutron [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updated VIF entry in instance network info cache for port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:02:46 np0005465987 nova_compute[230713]: 2025-10-02 12:02:46.531 2 DEBUG nova.network.neutron [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updating instance_info_cache with network_info: [{"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:02:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:46.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:46 np0005465987 nova_compute[230713]: 2025-10-02 12:02:46.725 2 DEBUG oslo_concurrency.lockutils [req-235a25aa-4427-4f8f-91c4-3d2b1a2c92b4 req-95347c66-82e7-4967-b9a2-d1d40f98c35b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:02:46 np0005465987 nova_compute[230713]: 2025-10-02 12:02:46.738 2 DEBUG nova.virt.libvirt.migration [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:02:46 np0005465987 nova_compute[230713]: 2025-10-02 12:02:46.739 2 DEBUG nova.virt.libvirt.migration [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 08:02:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.242 2 DEBUG nova.virt.libvirt.migration [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.242 2 DEBUG nova.virt.libvirt.migration [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.261 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406567.261419, 6febae1b-d70f-43e6-8aba-1e913541fec2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.262 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.302 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.305 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.391 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Oct  2 08:02:47 np0005465987 kernel: tap4a9c114c-7c (unregistering): left promiscuous mode
Oct  2 08:02:47 np0005465987 NetworkManager[44910]: <info>  [1759406567.6232] device (tap4a9c114c-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:02:47 np0005465987 ovn_controller[129172]: 2025-10-02T12:02:47Z|00085|binding|INFO|Releasing lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c from this chassis (sb_readonly=0)
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:47 np0005465987 ovn_controller[129172]: 2025-10-02T12:02:47Z|00086|binding|INFO|Setting lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c down in Southbound
Oct  2 08:02:47 np0005465987 ovn_controller[129172]: 2025-10-02T12:02:47Z|00087|binding|INFO|Removing iface tap4a9c114c-7c ovn-installed in OVS
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:47.655 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:34:a2 10.100.0.11'], port_security=['fa:16:3e:87:34:a2 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '6718a9ec-e13c-42f0-978a-6c44c48d0d54'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6febae1b-d70f-43e6-8aba-1e913541fec2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f2b3ac7d7504c9c96f0d4a67e0243c9', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c39f970f-1ead-4030-a775-b7ca9942094a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58eebdcc-6c12-4ff3-b6bc-0fe1fb3af6b6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:02:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:47.656 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c in datapath 5bd66e63-9399-4ab1-bcda-a761f2c44b1d unbound from our chassis#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:47.657 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5bd66e63-9399-4ab1-bcda-a761f2c44b1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:02:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:47.658 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[be2e312a-5d6d-4168-b98c-38a88b817c08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:47.659 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d namespace which is not needed anymore#033[00m
Oct  2 08:02:47 np0005465987 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct  2 08:02:47 np0005465987 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000010.scope: Consumed 13.850s CPU time.
Oct  2 08:02:47 np0005465987 systemd-machined[188335]: Machine qemu-9-instance-00000010 terminated.
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.698 2 DEBUG oslo_concurrency.lockutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Acquiring lock "7e6da351-456f-4a2c-9712-605de99b1008" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.699 2 DEBUG oslo_concurrency.lockutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "7e6da351-456f-4a2c-9712-605de99b1008" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.699 2 DEBUG oslo_concurrency.lockutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Acquiring lock "7e6da351-456f-4a2c-9712-605de99b1008-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.699 2 DEBUG oslo_concurrency.lockutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "7e6da351-456f-4a2c-9712-605de99b1008-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.700 2 DEBUG oslo_concurrency.lockutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "7e6da351-456f-4a2c-9712-605de99b1008-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.701 2 INFO nova.compute.manager [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Terminating instance#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.702 2 DEBUG oslo_concurrency.lockutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Acquiring lock "refresh_cache-7e6da351-456f-4a2c-9712-605de99b1008" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.702 2 DEBUG oslo_concurrency.lockutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Acquired lock "refresh_cache-7e6da351-456f-4a2c-9712-605de99b1008" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.702 2 DEBUG nova.network.neutron [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:02:47 np0005465987 virtqemud[230442]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volume-b3df5dc9-9a56-4922-8b65-4162deb6be93: No such file or directory
Oct  2 08:02:47 np0005465987 virtqemud[230442]: Unable to get XATTR trusted.libvirt.security.ref_dac on volume-b3df5dc9-9a56-4922-8b65-4162deb6be93: No such file or directory
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.742 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.742 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.742 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.744 2 DEBUG nova.virt.libvirt.guest [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '6febae1b-d70f-43e6-8aba-1e913541fec2' (instance-00000010) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.745 2 INFO nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migration operation has completed#033[00m
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.745 2 INFO nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] _post_live_migration() is started..#033[00m
Oct  2 08:02:47 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[238401]: [NOTICE]   (238440) : haproxy version is 2.8.14-c23fe91
Oct  2 08:02:47 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[238401]: [NOTICE]   (238440) : path to executable is /usr/sbin/haproxy
Oct  2 08:02:47 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[238401]: [WARNING]  (238440) : Exiting Master process...
Oct  2 08:02:47 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[238401]: [ALERT]    (238440) : Current worker (238448) exited with code 143 (Terminated)
Oct  2 08:02:47 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[238401]: [WARNING]  (238440) : All workers exited. Exiting... (0)
Oct  2 08:02:47 np0005465987 systemd[1]: libpod-f536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a.scope: Deactivated successfully.
Oct  2 08:02:47 np0005465987 podman[238837]: 2025-10-02 12:02:47.802332762 +0000 UTC m=+0.056386949 container died f536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:02:47 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a-userdata-shm.mount: Deactivated successfully.
Oct  2 08:02:47 np0005465987 systemd[1]: var-lib-containers-storage-overlay-b6cebea872b3ee88b434d834d3da87d961fadbbe95394b57ac904984c7c08307-merged.mount: Deactivated successfully.
Oct  2 08:02:47 np0005465987 podman[238837]: 2025-10-02 12:02:47.899333605 +0000 UTC m=+0.153387762 container cleanup f536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 08:02:47 np0005465987 systemd[1]: libpod-conmon-f536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a.scope: Deactivated successfully.
Oct  2 08:02:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:47.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:47 np0005465987 nova_compute[230713]: 2025-10-02 12:02:47.982 2 DEBUG nova.network.neutron [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:02:48 np0005465987 podman[238872]: 2025-10-02 12:02:48.014161817 +0000 UTC m=+0.092117242 container remove f536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:48.019 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[86fcdb11-d88f-401d-a5ce-14d9d435b81e]: (4, ('Thu Oct  2 12:02:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d (f536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a)\nf536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a\nThu Oct  2 12:02:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d (f536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a)\nf536262e40602355d0b558e444d9e739858d67ed9fe5362105a409b4f59f2c2a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:48.020 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d32ace7b-e62a-4460-9cfc-a8fd07a0106e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:48.021 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bd66e63-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:02:48 np0005465987 nova_compute[230713]: 2025-10-02 12:02:48.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:48 np0005465987 kernel: tap5bd66e63-90: left promiscuous mode
Oct  2 08:02:48 np0005465987 nova_compute[230713]: 2025-10-02 12:02:48.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:48 np0005465987 nova_compute[230713]: 2025-10-02 12:02:48.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:48.041 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca5adae-f78e-4e0d-aaf7-cf7872984e9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:48.070 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c4255334-7657-42e3-ae32-0fb1c34fe1c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:48.071 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9bef970c-f83f-4555-b9e5-4bc8db4dfed8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:48.087 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dde79d00-81ca-4c8c-b90b-cef38a95721c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461879, 'reachable_time': 26170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238890, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:48.089 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:02:48.089 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[2d87827e-b5fd-45e5-aa21-35f98ee28358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:02:48 np0005465987 systemd[1]: run-netns-ovnmeta\x2d5bd66e63\x2d9399\x2d4ab1\x2dbcda\x2da761f2c44b1d.mount: Deactivated successfully.
Oct  2 08:02:48 np0005465987 nova_compute[230713]: 2025-10-02 12:02:48.108 2 DEBUG nova.compute.manager [req-c8bd066a-5f89-4920-b480-0c36e0526606 req-72277d28-a98b-418b-bc91-7e8d91d4ef81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:02:48 np0005465987 nova_compute[230713]: 2025-10-02 12:02:48.108 2 DEBUG oslo_concurrency.lockutils [req-c8bd066a-5f89-4920-b480-0c36e0526606 req-72277d28-a98b-418b-bc91-7e8d91d4ef81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:48 np0005465987 nova_compute[230713]: 2025-10-02 12:02:48.109 2 DEBUG oslo_concurrency.lockutils [req-c8bd066a-5f89-4920-b480-0c36e0526606 req-72277d28-a98b-418b-bc91-7e8d91d4ef81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:48 np0005465987 nova_compute[230713]: 2025-10-02 12:02:48.109 2 DEBUG oslo_concurrency.lockutils [req-c8bd066a-5f89-4920-b480-0c36e0526606 req-72277d28-a98b-418b-bc91-7e8d91d4ef81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:48 np0005465987 nova_compute[230713]: 2025-10-02 12:02:48.109 2 DEBUG nova.compute.manager [req-c8bd066a-5f89-4920-b480-0c36e0526606 req-72277d28-a98b-418b-bc91-7e8d91d4ef81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:02:48 np0005465987 nova_compute[230713]: 2025-10-02 12:02:48.109 2 DEBUG nova.compute.manager [req-c8bd066a-5f89-4920-b480-0c36e0526606 req-72277d28-a98b-418b-bc91-7e8d91d4ef81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:02:48 np0005465987 nova_compute[230713]: 2025-10-02 12:02:48.340 2 DEBUG nova.network.neutron [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:02:48 np0005465987 nova_compute[230713]: 2025-10-02 12:02:48.382 2 DEBUG oslo_concurrency.lockutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Releasing lock "refresh_cache-7e6da351-456f-4a2c-9712-605de99b1008" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:02:48 np0005465987 nova_compute[230713]: 2025-10-02 12:02:48.382 2 DEBUG nova.compute.manager [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:02:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:48.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:48 np0005465987 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct  2 08:02:48 np0005465987 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000f.scope: Consumed 13.554s CPU time.
Oct  2 08:02:48 np0005465987 systemd-machined[188335]: Machine qemu-8-instance-0000000f terminated.
Oct  2 08:02:49 np0005465987 nova_compute[230713]: 2025-10-02 12:02:49.008 2 INFO nova.virt.libvirt.driver [-] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Instance destroyed successfully.#033[00m
Oct  2 08:02:49 np0005465987 nova_compute[230713]: 2025-10-02 12:02:49.009 2 DEBUG nova.objects.instance [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lazy-loading 'resources' on Instance uuid 7e6da351-456f-4a2c-9712-605de99b1008 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:02:49 np0005465987 nova_compute[230713]: 2025-10-02 12:02:49.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:49.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:50.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.866 2 DEBUG nova.compute.manager [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.867 2 DEBUG oslo_concurrency.lockutils [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.868 2 DEBUG oslo_concurrency.lockutils [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.869 2 DEBUG oslo_concurrency.lockutils [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.869 2 DEBUG nova.compute.manager [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.870 2 WARNING nova.compute.manager [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received unexpected event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.870 2 DEBUG nova.compute.manager [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.871 2 DEBUG oslo_concurrency.lockutils [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.871 2 DEBUG oslo_concurrency.lockutils [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.872 2 DEBUG oslo_concurrency.lockutils [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.872 2 DEBUG nova.compute.manager [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.873 2 WARNING nova.compute.manager [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received unexpected event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.873 2 DEBUG nova.compute.manager [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.874 2 DEBUG oslo_concurrency.lockutils [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.875 2 DEBUG oslo_concurrency.lockutils [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.875 2 DEBUG oslo_concurrency.lockutils [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.876 2 DEBUG nova.compute.manager [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.876 2 DEBUG nova.compute.manager [req-f8bd02ab-e681-4731-8317-4c024027edb4 req-2bbbea67-2730-46b6-a6a9-3a6914c414b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.923 2 DEBUG nova.network.neutron [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Activated binding for port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.923 2 DEBUG nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.924 2 DEBUG nova.virt.libvirt.vif [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:02:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-304906255',display_name='tempest-LiveMigrationTest-server-304906255',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-304906255',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:02:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3f2b3ac7d7504c9c96f0d4a67e0243c9',ramdisk_id='',reservation_id='r-utis6e3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1876533760',owner_user_name='tempest-LiveMigrationTest-1876533760-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:02:36Z,user_data=None,user_id='efb31eeadee34403b1ab7a584f3616f7',uuid=6febae1b-d70f-43e6-8aba-1e913541fec2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.924 2 DEBUG nova.network.os_vif_util [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converting VIF {"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.925 2 DEBUG nova.network.os_vif_util [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.925 2 DEBUG os_vif [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.927 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a9c114c-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.933 2 INFO os_vif [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c')#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.933 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.934 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.934 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.934 2 DEBUG nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.935 2 INFO nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Deleting instance files /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2_del#033[00m
Oct  2 08:02:50 np0005465987 nova_compute[230713]: 2025-10-02 12:02:50.935 2 INFO nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Deletion of /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2_del complete#033[00m
Oct  2 08:02:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:02:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:51.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:52 np0005465987 nova_compute[230713]: 2025-10-02 12:02:52.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:52.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:52 np0005465987 nova_compute[230713]: 2025-10-02 12:02:52.970 2 DEBUG nova.compute.manager [req-a21f05b3-764f-417f-aca5-7d06b35bb5bd req-9866f38c-fe78-4bae-ba41-e4bab88bed40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:02:52 np0005465987 nova_compute[230713]: 2025-10-02 12:02:52.970 2 DEBUG oslo_concurrency.lockutils [req-a21f05b3-764f-417f-aca5-7d06b35bb5bd req-9866f38c-fe78-4bae-ba41-e4bab88bed40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:52 np0005465987 nova_compute[230713]: 2025-10-02 12:02:52.970 2 DEBUG oslo_concurrency.lockutils [req-a21f05b3-764f-417f-aca5-7d06b35bb5bd req-9866f38c-fe78-4bae-ba41-e4bab88bed40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:52 np0005465987 nova_compute[230713]: 2025-10-02 12:02:52.971 2 DEBUG oslo_concurrency.lockutils [req-a21f05b3-764f-417f-aca5-7d06b35bb5bd req-9866f38c-fe78-4bae-ba41-e4bab88bed40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:52 np0005465987 nova_compute[230713]: 2025-10-02 12:02:52.971 2 DEBUG nova.compute.manager [req-a21f05b3-764f-417f-aca5-7d06b35bb5bd req-9866f38c-fe78-4bae-ba41-e4bab88bed40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:02:52 np0005465987 nova_compute[230713]: 2025-10-02 12:02:52.971 2 WARNING nova.compute.manager [req-a21f05b3-764f-417f-aca5-7d06b35bb5bd req-9866f38c-fe78-4bae-ba41-e4bab88bed40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received unexpected event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:02:52 np0005465987 nova_compute[230713]: 2025-10-02 12:02:52.972 2 DEBUG nova.compute.manager [req-a21f05b3-764f-417f-aca5-7d06b35bb5bd req-9866f38c-fe78-4bae-ba41-e4bab88bed40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:02:52 np0005465987 nova_compute[230713]: 2025-10-02 12:02:52.972 2 DEBUG oslo_concurrency.lockutils [req-a21f05b3-764f-417f-aca5-7d06b35bb5bd req-9866f38c-fe78-4bae-ba41-e4bab88bed40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:52 np0005465987 nova_compute[230713]: 2025-10-02 12:02:52.972 2 DEBUG oslo_concurrency.lockutils [req-a21f05b3-764f-417f-aca5-7d06b35bb5bd req-9866f38c-fe78-4bae-ba41-e4bab88bed40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:52 np0005465987 nova_compute[230713]: 2025-10-02 12:02:52.972 2 DEBUG oslo_concurrency.lockutils [req-a21f05b3-764f-417f-aca5-7d06b35bb5bd req-9866f38c-fe78-4bae-ba41-e4bab88bed40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:52 np0005465987 nova_compute[230713]: 2025-10-02 12:02:52.973 2 DEBUG nova.compute.manager [req-a21f05b3-764f-417f-aca5-7d06b35bb5bd req-9866f38c-fe78-4bae-ba41-e4bab88bed40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:02:52 np0005465987 nova_compute[230713]: 2025-10-02 12:02:52.973 2 WARNING nova.compute.manager [req-a21f05b3-764f-417f-aca5-7d06b35bb5bd req-9866f38c-fe78-4bae-ba41-e4bab88bed40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received unexpected event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:02:53 np0005465987 podman[238911]: 2025-10-02 12:02:53.851580417 +0000 UTC m=+0.064722273 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:02:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:53.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:54.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:54 np0005465987 nova_compute[230713]: 2025-10-02 12:02:54.859 2 INFO nova.virt.libvirt.driver [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Deleting instance files /var/lib/nova/instances/7e6da351-456f-4a2c-9712-605de99b1008_del#033[00m
Oct  2 08:02:54 np0005465987 nova_compute[230713]: 2025-10-02 12:02:54.861 2 INFO nova.virt.libvirt.driver [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Deletion of /var/lib/nova/instances/7e6da351-456f-4a2c-9712-605de99b1008_del complete#033[00m
Oct  2 08:02:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:02:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 47K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3104 syncs, 3.77 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3393 writes, 14K keys, 3393 commit groups, 1.0 writes per commit group, ingest: 12.38 MB, 0.02 MB/s#012Interval WAL: 3393 writes, 1320 syncs, 2.57 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:02:54 np0005465987 nova_compute[230713]: 2025-10-02 12:02:54.925 2 INFO nova.compute.manager [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Took 6.54 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:02:54 np0005465987 nova_compute[230713]: 2025-10-02 12:02:54.926 2 DEBUG oslo.service.loopingcall [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:02:54 np0005465987 nova_compute[230713]: 2025-10-02 12:02:54.927 2 DEBUG nova.compute.manager [-] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:02:54 np0005465987 nova_compute[230713]: 2025-10-02 12:02:54.927 2 DEBUG nova.network.neutron [-] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:02:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:02:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3579900210' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:02:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:02:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3579900210' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:02:55 np0005465987 nova_compute[230713]: 2025-10-02 12:02:55.065 2 DEBUG nova.network.neutron [-] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:02:55 np0005465987 nova_compute[230713]: 2025-10-02 12:02:55.081 2 DEBUG nova.network.neutron [-] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:02:55 np0005465987 nova_compute[230713]: 2025-10-02 12:02:55.095 2 INFO nova.compute.manager [-] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Took 0.17 seconds to deallocate network for instance.#033[00m
Oct  2 08:02:55 np0005465987 nova_compute[230713]: 2025-10-02 12:02:55.141 2 DEBUG oslo_concurrency.lockutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:55 np0005465987 nova_compute[230713]: 2025-10-02 12:02:55.141 2 DEBUG oslo_concurrency.lockutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:55 np0005465987 nova_compute[230713]: 2025-10-02 12:02:55.202 2 DEBUG oslo_concurrency.processutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:02:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/716255890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:02:55 np0005465987 nova_compute[230713]: 2025-10-02 12:02:55.708 2 DEBUG oslo_concurrency.processutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:55 np0005465987 nova_compute[230713]: 2025-10-02 12:02:55.713 2 DEBUG nova.compute.provider_tree [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:02:55 np0005465987 nova_compute[230713]: 2025-10-02 12:02:55.730 2 DEBUG nova.scheduler.client.report [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:02:55 np0005465987 nova_compute[230713]: 2025-10-02 12:02:55.768 2 DEBUG oslo_concurrency.lockutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:55 np0005465987 nova_compute[230713]: 2025-10-02 12:02:55.790 2 INFO nova.scheduler.client.report [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Deleted allocations for instance 7e6da351-456f-4a2c-9712-605de99b1008#033[00m
Oct  2 08:02:55 np0005465987 nova_compute[230713]: 2025-10-02 12:02:55.895 2 DEBUG oslo_concurrency.lockutils [None req-749852ad-fe3f-4b40-91e7-9282edbcbc3d bea607da06554a67af45c6df851f7c86 1b8418ff78264e3292f5cd5b736866f0 - - default default] Lock "7e6da351-456f-4a2c-9712-605de99b1008" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:55 np0005465987 nova_compute[230713]: 2025-10-02 12:02:55.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:55.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:56.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:56 np0005465987 nova_compute[230713]: 2025-10-02 12:02:56.821 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:56 np0005465987 nova_compute[230713]: 2025-10-02 12:02:56.822 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:56 np0005465987 nova_compute[230713]: 2025-10-02 12:02:56.823 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:56 np0005465987 nova_compute[230713]: 2025-10-02 12:02:56.899 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:56 np0005465987 nova_compute[230713]: 2025-10-02 12:02:56.901 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:56 np0005465987 nova_compute[230713]: 2025-10-02 12:02:56.901 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:56 np0005465987 nova_compute[230713]: 2025-10-02 12:02:56.901 2 DEBUG nova.compute.resource_tracker [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:02:56 np0005465987 nova_compute[230713]: 2025-10-02 12:02:56.902 2 DEBUG oslo_concurrency.processutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:02:57 np0005465987 nova_compute[230713]: 2025-10-02 12:02:57.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:02:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:02:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4123010262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:02:57 np0005465987 nova_compute[230713]: 2025-10-02 12:02:57.384 2 DEBUG oslo_concurrency.processutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:57 np0005465987 nova_compute[230713]: 2025-10-02 12:02:57.568 2 WARNING nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:02:57 np0005465987 nova_compute[230713]: 2025-10-02 12:02:57.570 2 DEBUG nova.compute.resource_tracker [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4926MB free_disk=20.942455291748047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:02:57 np0005465987 nova_compute[230713]: 2025-10-02 12:02:57.570 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:57 np0005465987 nova_compute[230713]: 2025-10-02 12:02:57.571 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:57 np0005465987 nova_compute[230713]: 2025-10-02 12:02:57.613 2 DEBUG nova.compute.resource_tracker [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Migration for instance 6febae1b-d70f-43e6-8aba-1e913541fec2 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:02:57 np0005465987 nova_compute[230713]: 2025-10-02 12:02:57.640 2 DEBUG nova.compute.resource_tracker [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Oct  2 08:02:57 np0005465987 nova_compute[230713]: 2025-10-02 12:02:57.657 2 DEBUG nova.compute.resource_tracker [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Migration b057e5f2-6ec0-4868-84e3-0821ef86c9af is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:02:57 np0005465987 nova_compute[230713]: 2025-10-02 12:02:57.657 2 DEBUG nova.compute.resource_tracker [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:02:57 np0005465987 nova_compute[230713]: 2025-10-02 12:02:57.657 2 DEBUG nova.compute.resource_tracker [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:02:57 np0005465987 nova_compute[230713]: 2025-10-02 12:02:57.703 2 DEBUG oslo_concurrency.processutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:02:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:57.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:02:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/258884514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:02:58 np0005465987 nova_compute[230713]: 2025-10-02 12:02:58.133 2 DEBUG oslo_concurrency.processutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:02:58 np0005465987 nova_compute[230713]: 2025-10-02 12:02:58.141 2 DEBUG nova.compute.provider_tree [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:02:58 np0005465987 nova_compute[230713]: 2025-10-02 12:02:58.161 2 DEBUG nova.scheduler.client.report [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:02:58 np0005465987 nova_compute[230713]: 2025-10-02 12:02:58.199 2 DEBUG nova.compute.resource_tracker [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:02:58 np0005465987 nova_compute[230713]: 2025-10-02 12:02:58.200 2 DEBUG oslo_concurrency.lockutils [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:58 np0005465987 nova_compute[230713]: 2025-10-02 12:02:58.206 2 INFO nova.compute.manager [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Oct  2 08:02:58 np0005465987 nova_compute[230713]: 2025-10-02 12:02:58.349 2 INFO nova.scheduler.client.report [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Deleted allocation for migration b057e5f2-6ec0-4868-84e3-0821ef86c9af#033[00m
Oct  2 08:02:58 np0005465987 nova_compute[230713]: 2025-10-02 12:02:58.350 2 DEBUG nova.virt.libvirt.driver [None req-ce9831b7-0a86-4beb-8ad8-70673087e452 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Oct  2 08:02:58 np0005465987 nova_compute[230713]: 2025-10-02 12:02:58.593 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Creating tmpfile /var/lib/nova/instances/tmps5_fkoiu to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct  2 08:02:58 np0005465987 nova_compute[230713]: 2025-10-02 12:02:58.594 2 DEBUG nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmps5_fkoiu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct  2 08:02:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:02:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:58.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:02:59 np0005465987 nova_compute[230713]: 2025-10-02 12:02:59.860 2 DEBUG nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmps5_fkoiu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6febae1b-d70f-43e6-8aba-1e913541fec2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct  2 08:02:59 np0005465987 nova_compute[230713]: 2025-10-02 12:02:59.890 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:02:59 np0005465987 nova_compute[230713]: 2025-10-02 12:02:59.891 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquired lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:02:59 np0005465987 nova_compute[230713]: 2025-10-02 12:02:59.891 2 DEBUG nova.network.neutron [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:02:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:02:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:59.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:00.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:00 np0005465987 nova_compute[230713]: 2025-10-02 12:03:00.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:01 np0005465987 podman[239002]: 2025-10-02 12:03:01.887075339 +0000 UTC m=+0.093482568 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:03:01 np0005465987 podman[239001]: 2025-10-02 12:03:01.903415669 +0000 UTC m=+0.111866853 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 08:03:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:03:01 np0005465987 podman[239003]: 2025-10-02 12:03:01.920787697 +0000 UTC m=+0.113390214 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:03:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:01.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.465 2 DEBUG nova.network.neutron [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updating instance_info_cache with network_info: [{"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.500 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Releasing lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.502 2 DEBUG os_brick.utils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.504 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.518 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.518 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[3bba2135-28ee-4eab-b329-b3712f4ea915]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.520 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.526 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.526 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[18e81088-213c-4dee-bf29-3fa2305c313a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.528 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.537 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.537 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[9d428c01-991f-45fd-96f8-5b37f3ef5a2e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.539 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[7557c3d5-fb55-4541-b98b-b047803a002b]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.540 2 DEBUG oslo_concurrency.processutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.560 2 DEBUG oslo_concurrency.processutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.563 2 DEBUG os_brick.initiator.connectors.lightos [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.563 2 DEBUG os_brick.initiator.connectors.lightos [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.563 2 DEBUG os_brick.initiator.connectors.lightos [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.564 2 DEBUG os_brick.utils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:03:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:02.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.740 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406567.7392368, 6febae1b-d70f-43e6-8aba-1e913541fec2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.741 2 INFO nova.compute.manager [-] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:03:02 np0005465987 nova_compute[230713]: 2025-10-02 12:03:02.795 2 DEBUG nova.compute.manager [None req-79466da5-19b7-4f6a-b0be-0e92359531ea - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:03:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:03:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4290698394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:03:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:03.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.007 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406569.0060413, 7e6da351-456f-4a2c-9712-605de99b1008 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.008 2 INFO nova.compute.manager [-] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.029 2 DEBUG nova.compute.manager [None req-fbe727f0-41d9-4416-bacf-5c61e7c6f3ef - - - - - -] [instance: 7e6da351-456f-4a2c-9712-605de99b1008] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.056 2 DEBUG oslo_concurrency.lockutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Acquiring lock "d0395d7b-b82b-4a36-8cd5-8fa9b00807c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.057 2 DEBUG oslo_concurrency.lockutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "d0395d7b-b82b-4a36-8cd5-8fa9b00807c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.083 2 DEBUG nova.compute.manager [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.198 2 DEBUG oslo_concurrency.lockutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.198 2 DEBUG oslo_concurrency.lockutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.204 2 DEBUG nova.virt.hardware [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.205 2 INFO nova.compute.claims [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.338 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmps5_fkoiu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6febae1b-d70f-43e6-8aba-1e913541fec2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={b3df5dc9-9a56-4922-8b65-4162deb6be93='3d957787-f9d3-4665-bd82-80e6c561747f'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.339 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Creating instance directory: /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.339 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Ensure instance console log exists: /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.339 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.342 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.343 2 DEBUG nova.virt.libvirt.vif [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:02:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-304906255',display_name='tempest-LiveMigrationTest-server-304906255',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-304906255',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:02:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3f2b3ac7d7504c9c96f0d4a67e0243c9',ramdisk_id='',reservation_id='r-utis6e3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1876533760',owner_user_name='tempest-LiveMigrationTest-1876533760-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:02:55Z,user_data=None,user_id='efb31eeadee34403b1ab7a584f3616f7',uuid=6febae1b-d70f-43e6-8aba-1e913541fec2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.343 2 DEBUG nova.network.os_vif_util [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converting VIF {"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.344 2 DEBUG nova.network.os_vif_util [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.344 2 DEBUG os_vif [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a9c114c-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.349 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a9c114c-7c, col_values=(('external_ids', {'iface-id': '4a9c114c-7ca6-4f80-ab5d-d38765e4c15c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:34:a2', 'vm-uuid': '6febae1b-d70f-43e6-8aba-1e913541fec2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:03:04 np0005465987 NetworkManager[44910]: <info>  [1759406584.3518] manager: (tap4a9c114c-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.356 2 INFO os_vif [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c')#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.358 2 DEBUG nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.359 2 DEBUG nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmps5_fkoiu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6febae1b-d70f-43e6-8aba-1e913541fec2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={b3df5dc9-9a56-4922-8b65-4162deb6be93='3d957787-f9d3-4665-bd82-80e6c561747f'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.401 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:03:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:04.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:03:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:03:04 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1161370037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.824 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.832 2 DEBUG nova.compute.provider_tree [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.854 2 DEBUG nova.scheduler.client.report [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.940 2 DEBUG oslo_concurrency.lockutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:04 np0005465987 nova_compute[230713]: 2025-10-02 12:03:04.941 2 DEBUG nova.compute.manager [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.005 2 DEBUG nova.compute.manager [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.006 2 DEBUG nova.network.neutron [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.026 2 INFO nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.047 2 DEBUG nova.compute.manager [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.202 2 DEBUG nova.compute.manager [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.204 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.204 2 INFO nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Creating image(s)#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.248 2 DEBUG nova.storage.rbd_utils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.290 2 DEBUG nova.storage.rbd_utils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.331 2 DEBUG nova.storage.rbd_utils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.336 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.413 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.414 2 DEBUG oslo_concurrency.lockutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.414 2 DEBUG oslo_concurrency.lockutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.415 2 DEBUG oslo_concurrency.lockutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.445 2 DEBUG nova.storage.rbd_utils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.450 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.508 2 DEBUG nova.network.neutron [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.509 2 DEBUG nova.compute.manager [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.728 2 DEBUG nova.network.neutron [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct  2 08:03:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:03:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:05.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:03:05 np0005465987 nova_compute[230713]: 2025-10-02 12:03:05.967 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.076 2 DEBUG nova.storage.rbd_utils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] resizing rbd image d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.141 2 DEBUG nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmps5_fkoiu',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6febae1b-d70f-43e6-8aba-1e913541fec2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={b3df5dc9-9a56-4922-8b65-4162deb6be93='3d957787-f9d3-4665-bd82-80e6c561747f'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.236 2 DEBUG nova.objects.instance [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lazy-loading 'migration_context' on Instance uuid d0395d7b-b82b-4a36-8cd5-8fa9b00807c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.249 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.250 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Ensure instance console log exists: /var/lib/nova/instances/d0395d7b-b82b-4a36-8cd5-8fa9b00807c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.250 2 DEBUG oslo_concurrency.lockutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.251 2 DEBUG oslo_concurrency.lockutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.251 2 DEBUG oslo_concurrency.lockutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.253 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.258 2 WARNING nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.265 2 DEBUG nova.virt.libvirt.host [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.267 2 DEBUG nova.virt.libvirt.host [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.275 2 DEBUG nova.virt.libvirt.host [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.276 2 DEBUG nova.virt.libvirt.host [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.278 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.278 2 DEBUG nova.virt.hardware [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.279 2 DEBUG nova.virt.hardware [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.279 2 DEBUG nova.virt.hardware [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.280 2 DEBUG nova.virt.hardware [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.280 2 DEBUG nova.virt.hardware [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.280 2 DEBUG nova.virt.hardware [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.280 2 DEBUG nova.virt.hardware [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.281 2 DEBUG nova.virt.hardware [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.281 2 DEBUG nova.virt.hardware [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.281 2 DEBUG nova.virt.hardware [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.282 2 DEBUG nova.virt.hardware [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.285 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:06 np0005465987 kernel: tap4a9c114c-7c: entered promiscuous mode
Oct  2 08:03:06 np0005465987 ovn_controller[129172]: 2025-10-02T12:03:06Z|00088|binding|INFO|Claiming lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for this additional chassis.
Oct  2 08:03:06 np0005465987 ovn_controller[129172]: 2025-10-02T12:03:06Z|00089|binding|INFO|4a9c114c-7ca6-4f80-ab5d-d38765e4c15c: Claiming fa:16:3e:87:34:a2 10.100.0.11
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:06 np0005465987 NetworkManager[44910]: <info>  [1759406586.3805] manager: (tap4a9c114c-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Oct  2 08:03:06 np0005465987 ovn_controller[129172]: 2025-10-02T12:03:06Z|00090|binding|INFO|Setting lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c ovn-installed in OVS
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:06 np0005465987 systemd-udevd[239277]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:03:06 np0005465987 systemd-machined[188335]: New machine qemu-10-instance-00000010.
Oct  2 08:03:06 np0005465987 systemd[1]: Started Virtual Machine qemu-10-instance-00000010.
Oct  2 08:03:06 np0005465987 NetworkManager[44910]: <info>  [1759406586.4425] device (tap4a9c114c-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:03:06 np0005465987 NetworkManager[44910]: <info>  [1759406586.4437] device (tap4a9c114c-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:03:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:03:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3046991153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:03:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:06.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.748 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.784 2 DEBUG nova.storage.rbd_utils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:03:06 np0005465987 nova_compute[230713]: 2025-10-02 12:03:06.789 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:03:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1093490568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.241 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.244 2 DEBUG nova.objects.instance [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lazy-loading 'pci_devices' on Instance uuid d0395d7b-b82b-4a36-8cd5-8fa9b00807c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.281 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  <uuid>d0395d7b-b82b-4a36-8cd5-8fa9b00807c8</uuid>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  <name>instance-00000012</name>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-75009323</nova:name>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:03:06</nova:creationTime>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <nova:user uuid="e829ca31274b48b9b656df1ba321832c">tempest-DeleteServersAdminTestJSON-200716909-project-member</nova:user>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <nova:project uuid="612d5f60edd74e79a420378579b416ea">tempest-DeleteServersAdminTestJSON-200716909</nova:project>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <entry name="serial">d0395d7b-b82b-4a36-8cd5-8fa9b00807c8</entry>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <entry name="uuid">d0395d7b-b82b-4a36-8cd5-8fa9b00807c8</entry>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk.config">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/d0395d7b-b82b-4a36-8cd5-8fa9b00807c8/console.log" append="off"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:03:07 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:03:07 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:03:07 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:03:07 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.355 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.356 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.357 2 INFO nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Using config drive#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.391 2 DEBUG nova.storage.rbd_utils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.534 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406587.5343094, 6febae1b-d70f-43e6-8aba-1e913541fec2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.535 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] VM Started (Lifecycle Event)#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.558 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.909 2 INFO nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Creating config drive at /var/lib/nova/instances/d0395d7b-b82b-4a36-8cd5-8fa9b00807c8/disk.config#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.914 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d0395d7b-b82b-4a36-8cd5-8fa9b00807c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpct9t76yg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:07.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.967 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406587.96653, 6febae1b-d70f-43e6-8aba-1e913541fec2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.967 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.991 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:03:07 np0005465987 nova_compute[230713]: 2025-10-02 12:03:07.995 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:03:08 np0005465987 nova_compute[230713]: 2025-10-02 12:03:08.017 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct  2 08:03:08 np0005465987 nova_compute[230713]: 2025-10-02 12:03:08.042 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d0395d7b-b82b-4a36-8cd5-8fa9b00807c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpct9t76yg" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:08 np0005465987 nova_compute[230713]: 2025-10-02 12:03:08.074 2 DEBUG nova.storage.rbd_utils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:03:08 np0005465987 nova_compute[230713]: 2025-10-02 12:03:08.079 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d0395d7b-b82b-4a36-8cd5-8fa9b00807c8/disk.config d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:08 np0005465987 nova_compute[230713]: 2025-10-02 12:03:08.696 2 DEBUG oslo_concurrency.processutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d0395d7b-b82b-4a36-8cd5-8fa9b00807c8/disk.config d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:08 np0005465987 nova_compute[230713]: 2025-10-02 12:03:08.698 2 INFO nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Deleting local config drive /var/lib/nova/instances/d0395d7b-b82b-4a36-8cd5-8fa9b00807c8/disk.config because it was imported into RBD.#033[00m
Oct  2 08:03:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:03:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:08.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:03:08 np0005465987 systemd-machined[188335]: New machine qemu-11-instance-00000012.
Oct  2 08:03:08 np0005465987 systemd[1]: Started Virtual Machine qemu-11-instance-00000012.
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:09 np0005465987 ovn_controller[129172]: 2025-10-02T12:03:09Z|00091|binding|INFO|Claiming lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for this chassis.
Oct  2 08:03:09 np0005465987 ovn_controller[129172]: 2025-10-02T12:03:09Z|00092|binding|INFO|4a9c114c-7ca6-4f80-ab5d-d38765e4c15c: Claiming fa:16:3e:87:34:a2 10.100.0.11
Oct  2 08:03:09 np0005465987 ovn_controller[129172]: 2025-10-02T12:03:09Z|00093|binding|INFO|Setting lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c up in Southbound
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.425 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:34:a2 10.100.0.11'], port_security=['fa:16:3e:87:34:a2 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6febae1b-d70f-43e6-8aba-1e913541fec2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f2b3ac7d7504c9c96f0d4a67e0243c9', 'neutron:revision_number': '21', 'neutron:security_group_ids': 'c39f970f-1ead-4030-a775-b7ca9942094a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58eebdcc-6c12-4ff3-b6bc-0fe1fb3af6b6, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.426 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c in datapath 5bd66e63-9399-4ab1-bcda-a761f2c44b1d bound to our chassis#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.428 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bd66e63-9399-4ab1-bcda-a761f2c44b1d#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.446 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[11c88eae-97b1-4cf1-a948-fc3a00d547a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.447 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5bd66e63-91 in ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.454 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5bd66e63-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.454 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[893ceee2-e045-47c7-86a4-ce8158412fa3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.456 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9526f191-97cb-4946-b8d6-f4579905e7d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.471 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[47e47db0-70f9-4914-85a7-983d62b5aeef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.501 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b7f16d-ee63-4f26-9f9d-03cf14c916f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.539 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[869d7fb4-5f5e-4692-9b30-59033ab966fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 NetworkManager[44910]: <info>  [1759406589.5478] manager: (tap5bd66e63-90): new Veth device (/org/freedesktop/NetworkManager/Devices/59)
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.547 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cf35db10-3797-46c2-a0b2-bb218337a2ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 systemd-udevd[239509]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.587 2 INFO nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Post operation of migration started#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.596 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[b767b48b-2f5b-45d6-8492-389dee979a44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.604 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[c531e017-3405-4274-bdad-70dcfa22552b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 NetworkManager[44910]: <info>  [1759406589.6395] device (tap5bd66e63-90): carrier: link connected
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.648 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd325aa-6d43-41c7-9cea-9271728aedaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.670 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c2230057-d7e3-4b86-8e21-7cc8cfda551a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bd66e63-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:10:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465821, 'reachable_time': 31174, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239528, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.689 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[40266224-3d0f-4714-b2ac-64c66a61a582]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:100f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465821, 'tstamp': 465821}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239529, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.711 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[78191d0e-b672-4ea9-a746-07c5433c8b00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bd66e63-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:10:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465821, 'reachable_time': 31174, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239530, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.748 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[337cca2c-4725-4a08-b44e-fd94e784b2e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.765 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406589.7646806, d0395d7b-b82b-4a36-8cd5-8fa9b00807c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.765 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.767 2 DEBUG nova.compute.manager [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.768 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.772 2 INFO nova.virt.libvirt.driver [-] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Instance spawned successfully.#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.772 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.787 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.792 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.796 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.796 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.797 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.797 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.798 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.798 2 DEBUG nova.virt.libvirt.driver [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.824 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.824 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406589.7654765, d0395d7b-b82b-4a36-8cd5-8fa9b00807c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.824 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[320d8af5-9079-4016-9def-95b5c7e60d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.825 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] VM Started (Lifecycle Event)#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.825 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bd66e63-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.825 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.826 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bd66e63-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:09 np0005465987 NetworkManager[44910]: <info>  [1759406589.8291] manager: (tap5bd66e63-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct  2 08:03:09 np0005465987 kernel: tap5bd66e63-90: entered promiscuous mode
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.831 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bd66e63-90, col_values=(('external_ids', {'iface-id': '5a25c40a-77b7-400c-afc3-f6cb920420cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:09 np0005465987 ovn_controller[129172]: 2025-10-02T12:03:09Z|00094|binding|INFO|Releasing lport 5a25c40a-77b7-400c-afc3-f6cb920420cb from this chassis (sb_readonly=0)
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.834 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5bd66e63-9399-4ab1-bcda-a761f2c44b1d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5bd66e63-9399-4ab1-bcda-a761f2c44b1d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.835 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[84c25caf-32b1-4f7a-801c-8b5754beaceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.836 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-5bd66e63-9399-4ab1-bcda-a761f2c44b1d
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/5bd66e63-9399-4ab1-bcda-a761f2c44b1d.pid.haproxy
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 5bd66e63-9399-4ab1-bcda-a761f2c44b1d
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:03:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:09.836 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'env', 'PROCESS_TAG=haproxy-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5bd66e63-9399-4ab1-bcda-a761f2c44b1d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.861 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.866 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.871 2 INFO nova.compute.manager [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Took 4.67 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.872 2 DEBUG nova.compute.manager [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.896 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.942 2 INFO nova.compute.manager [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Took 5.77 seconds to build instance.#033[00m
Oct  2 08:03:09 np0005465987 nova_compute[230713]: 2025-10-02 12:03:09.960 2 DEBUG oslo_concurrency.lockutils [None req-f9f2dd6f-4565-4630-b9df-cda9325a34fe e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "d0395d7b-b82b-4a36-8cd5-8fa9b00807c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:09.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:10 np0005465987 podman[239563]: 2025-10-02 12:03:10.169590514 +0000 UTC m=+0.022379723 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:03:10 np0005465987 podman[239563]: 2025-10-02 12:03:10.292624167 +0000 UTC m=+0.145413356 container create 27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 08:03:10 np0005465987 systemd[1]: Started libpod-conmon-27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8.scope.
Oct  2 08:03:10 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:03:10 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512d87c96b7758644ead0b64255200f4b1178c429486d9489bd5422f3c783308/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:03:10 np0005465987 podman[239563]: 2025-10-02 12:03:10.414198751 +0000 UTC m=+0.266987960 container init 27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 08:03:10 np0005465987 podman[239563]: 2025-10-02 12:03:10.420700455 +0000 UTC m=+0.273489644 container start 27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:03:10 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[239578]: [NOTICE]   (239582) : New worker (239584) forked
Oct  2 08:03:10 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[239578]: [NOTICE]   (239582) : Loading success.
Oct  2 08:03:10 np0005465987 nova_compute[230713]: 2025-10-02 12:03:10.678 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:03:10 np0005465987 nova_compute[230713]: 2025-10-02 12:03:10.679 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquired lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:03:10 np0005465987 nova_compute[230713]: 2025-10-02 12:03:10.679 2 DEBUG nova.network.neutron [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:03:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:10.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:11 np0005465987 nova_compute[230713]: 2025-10-02 12:03:11.896 2 DEBUG oslo_concurrency.lockutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Acquiring lock "d0395d7b-b82b-4a36-8cd5-8fa9b00807c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:11 np0005465987 nova_compute[230713]: 2025-10-02 12:03:11.897 2 DEBUG oslo_concurrency.lockutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Lock "d0395d7b-b82b-4a36-8cd5-8fa9b00807c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:11 np0005465987 nova_compute[230713]: 2025-10-02 12:03:11.898 2 DEBUG oslo_concurrency.lockutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Acquiring lock "d0395d7b-b82b-4a36-8cd5-8fa9b00807c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:11 np0005465987 nova_compute[230713]: 2025-10-02 12:03:11.898 2 DEBUG oslo_concurrency.lockutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Lock "d0395d7b-b82b-4a36-8cd5-8fa9b00807c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:11 np0005465987 nova_compute[230713]: 2025-10-02 12:03:11.898 2 DEBUG oslo_concurrency.lockutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Lock "d0395d7b-b82b-4a36-8cd5-8fa9b00807c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:11 np0005465987 nova_compute[230713]: 2025-10-02 12:03:11.900 2 INFO nova.compute.manager [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Terminating instance#033[00m
Oct  2 08:03:11 np0005465987 nova_compute[230713]: 2025-10-02 12:03:11.901 2 DEBUG oslo_concurrency.lockutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Acquiring lock "refresh_cache-d0395d7b-b82b-4a36-8cd5-8fa9b00807c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:03:11 np0005465987 nova_compute[230713]: 2025-10-02 12:03:11.901 2 DEBUG oslo_concurrency.lockutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Acquired lock "refresh_cache-d0395d7b-b82b-4a36-8cd5-8fa9b00807c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:03:11 np0005465987 nova_compute[230713]: 2025-10-02 12:03:11.901 2 DEBUG nova.network.neutron [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:03:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:03:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:11.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:12 np0005465987 nova_compute[230713]: 2025-10-02 12:03:12.001 2 DEBUG nova.network.neutron [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updating instance_info_cache with network_info: [{"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:03:12 np0005465987 nova_compute[230713]: 2025-10-02 12:03:12.032 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Releasing lock "refresh_cache-6febae1b-d70f-43e6-8aba-1e913541fec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:03:12 np0005465987 nova_compute[230713]: 2025-10-02 12:03:12.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:12 np0005465987 nova_compute[230713]: 2025-10-02 12:03:12.080 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:12 np0005465987 nova_compute[230713]: 2025-10-02 12:03:12.080 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:12 np0005465987 nova_compute[230713]: 2025-10-02 12:03:12.080 2 DEBUG oslo_concurrency.lockutils [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:12 np0005465987 nova_compute[230713]: 2025-10-02 12:03:12.084 2 INFO nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct  2 08:03:12 np0005465987 virtqemud[230442]: Domain id=10 name='instance-00000010' uuid=6febae1b-d70f-43e6-8aba-1e913541fec2 is tainted: custom-monitor
Oct  2 08:03:12 np0005465987 nova_compute[230713]: 2025-10-02 12:03:12.200 2 DEBUG nova.network.neutron [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:03:12 np0005465987 nova_compute[230713]: 2025-10-02 12:03:12.578 2 DEBUG nova.network.neutron [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:03:12 np0005465987 nova_compute[230713]: 2025-10-02 12:03:12.599 2 DEBUG oslo_concurrency.lockutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Releasing lock "refresh_cache-d0395d7b-b82b-4a36-8cd5-8fa9b00807c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:03:12 np0005465987 nova_compute[230713]: 2025-10-02 12:03:12.599 2 DEBUG nova.compute.manager [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:03:12 np0005465987 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct  2 08:03:12 np0005465987 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000012.scope: Consumed 3.680s CPU time.
Oct  2 08:03:12 np0005465987 systemd-machined[188335]: Machine qemu-11-instance-00000012 terminated.
Oct  2 08:03:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:12.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:12 np0005465987 nova_compute[230713]: 2025-10-02 12:03:12.821 2 INFO nova.virt.libvirt.driver [-] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Instance destroyed successfully.#033[00m
Oct  2 08:03:12 np0005465987 nova_compute[230713]: 2025-10-02 12:03:12.821 2 DEBUG nova.objects.instance [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Lazy-loading 'resources' on Instance uuid d0395d7b-b82b-4a36-8cd5-8fa9b00807c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:03:13 np0005465987 nova_compute[230713]: 2025-10-02 12:03:13.091 2 INFO nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct  2 08:03:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:13.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:14 np0005465987 nova_compute[230713]: 2025-10-02 12:03:14.097 2 INFO nova.virt.libvirt.driver [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct  2 08:03:14 np0005465987 nova_compute[230713]: 2025-10-02 12:03:14.103 2 DEBUG nova.compute.manager [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:03:14 np0005465987 nova_compute[230713]: 2025-10-02 12:03:14.130 2 DEBUG nova.objects.instance [None req-d50220d4-e174-4d1a-8106-0d74755b2601 c087f94b944f499086fc90cd7a8c6ed2 a45f15d82e6e4e95a422d53281506176 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:03:14 np0005465987 nova_compute[230713]: 2025-10-02 12:03:14.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:03:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:14.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:03:14 np0005465987 nova_compute[230713]: 2025-10-02 12:03:14.927 2 INFO nova.virt.libvirt.driver [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Deleting instance files /var/lib/nova/instances/d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_del#033[00m
Oct  2 08:03:14 np0005465987 nova_compute[230713]: 2025-10-02 12:03:14.929 2 INFO nova.virt.libvirt.driver [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Deletion of /var/lib/nova/instances/d0395d7b-b82b-4a36-8cd5-8fa9b00807c8_del complete#033[00m
Oct  2 08:03:14 np0005465987 nova_compute[230713]: 2025-10-02 12:03:14.995 2 INFO nova.compute.manager [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Took 2.40 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:03:14 np0005465987 nova_compute[230713]: 2025-10-02 12:03:14.996 2 DEBUG oslo.service.loopingcall [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:03:14 np0005465987 nova_compute[230713]: 2025-10-02 12:03:14.997 2 DEBUG nova.compute.manager [-] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:03:14 np0005465987 nova_compute[230713]: 2025-10-02 12:03:14.997 2 DEBUG nova.network.neutron [-] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:03:15 np0005465987 nova_compute[230713]: 2025-10-02 12:03:15.146 2 DEBUG nova.network.neutron [-] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:03:15 np0005465987 nova_compute[230713]: 2025-10-02 12:03:15.166 2 DEBUG nova.network.neutron [-] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:03:15 np0005465987 nova_compute[230713]: 2025-10-02 12:03:15.180 2 INFO nova.compute.manager [-] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Took 0.18 seconds to deallocate network for instance.#033[00m
Oct  2 08:03:15 np0005465987 nova_compute[230713]: 2025-10-02 12:03:15.239 2 DEBUG oslo_concurrency.lockutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:15 np0005465987 nova_compute[230713]: 2025-10-02 12:03:15.240 2 DEBUG oslo_concurrency.lockutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:15 np0005465987 nova_compute[230713]: 2025-10-02 12:03:15.336 2 DEBUG oslo_concurrency.processutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:03:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3184905191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:03:15 np0005465987 nova_compute[230713]: 2025-10-02 12:03:15.803 2 DEBUG oslo_concurrency.processutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:15 np0005465987 nova_compute[230713]: 2025-10-02 12:03:15.810 2 DEBUG nova.compute.provider_tree [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:03:15 np0005465987 nova_compute[230713]: 2025-10-02 12:03:15.827 2 DEBUG nova.scheduler.client.report [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:03:15 np0005465987 nova_compute[230713]: 2025-10-02 12:03:15.851 2 DEBUG oslo_concurrency.lockutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:15 np0005465987 nova_compute[230713]: 2025-10-02 12:03:15.879 2 INFO nova.scheduler.client.report [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Deleted allocations for instance d0395d7b-b82b-4a36-8cd5-8fa9b00807c8#033[00m
Oct  2 08:03:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:15.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:15 np0005465987 nova_compute[230713]: 2025-10-02 12:03:15.983 2 DEBUG oslo_concurrency.lockutils [None req-05de879e-6a1f-4453-8fb6-4b8be7c5eea6 f90361c3428c4fbd99169c837f32615c a9b443d03bd547af96ec032ee39540b9 - - default default] Lock "d0395d7b-b82b-4a36-8cd5-8fa9b00807c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:03:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:16.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:03:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:03:17 np0005465987 nova_compute[230713]: 2025-10-02 12:03:17.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:17 np0005465987 nova_compute[230713]: 2025-10-02 12:03:17.558 2 DEBUG oslo_concurrency.lockutils [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:17 np0005465987 nova_compute[230713]: 2025-10-02 12:03:17.559 2 DEBUG oslo_concurrency.lockutils [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:17 np0005465987 nova_compute[230713]: 2025-10-02 12:03:17.559 2 DEBUG oslo_concurrency.lockutils [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:17 np0005465987 nova_compute[230713]: 2025-10-02 12:03:17.560 2 DEBUG oslo_concurrency.lockutils [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:17 np0005465987 nova_compute[230713]: 2025-10-02 12:03:17.561 2 DEBUG oslo_concurrency.lockutils [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:17 np0005465987 nova_compute[230713]: 2025-10-02 12:03:17.563 2 INFO nova.compute.manager [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Terminating instance#033[00m
Oct  2 08:03:17 np0005465987 nova_compute[230713]: 2025-10-02 12:03:17.565 2 DEBUG nova.compute.manager [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:03:17 np0005465987 kernel: tap4a9c114c-7c (unregistering): left promiscuous mode
Oct  2 08:03:17 np0005465987 NetworkManager[44910]: <info>  [1759406597.7114] device (tap4a9c114c-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:03:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:03:17Z|00095|binding|INFO|Releasing lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c from this chassis (sb_readonly=0)
Oct  2 08:03:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:03:17Z|00096|binding|INFO|Setting lport 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c down in Southbound
Oct  2 08:03:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:03:17Z|00097|binding|INFO|Removing iface tap4a9c114c-7c ovn-installed in OVS
Oct  2 08:03:17 np0005465987 nova_compute[230713]: 2025-10-02 12:03:17.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:17.772 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:34:a2 10.100.0.11'], port_security=['fa:16:3e:87:34:a2 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6febae1b-d70f-43e6-8aba-1e913541fec2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f2b3ac7d7504c9c96f0d4a67e0243c9', 'neutron:revision_number': '23', 'neutron:security_group_ids': 'c39f970f-1ead-4030-a775-b7ca9942094a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58eebdcc-6c12-4ff3-b6bc-0fe1fb3af6b6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:03:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:17.774 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 4a9c114c-7ca6-4f80-ab5d-d38765e4c15c in datapath 5bd66e63-9399-4ab1-bcda-a761f2c44b1d unbound from our chassis#033[00m
Oct  2 08:03:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:17.775 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5bd66e63-9399-4ab1-bcda-a761f2c44b1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:03:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:17.776 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ec790222-f122-4406-ba12-2d4f41431105]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:17.777 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d namespace which is not needed anymore#033[00m
Oct  2 08:03:17 np0005465987 nova_compute[230713]: 2025-10-02 12:03:17.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:17 np0005465987 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct  2 08:03:17 np0005465987 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000010.scope: Consumed 1.794s CPU time.
Oct  2 08:03:17 np0005465987 systemd-machined[188335]: Machine qemu-10-instance-00000010 terminated.
Oct  2 08:03:17 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[239578]: [NOTICE]   (239582) : haproxy version is 2.8.14-c23fe91
Oct  2 08:03:17 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[239578]: [NOTICE]   (239582) : path to executable is /usr/sbin/haproxy
Oct  2 08:03:17 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[239578]: [WARNING]  (239582) : Exiting Master process...
Oct  2 08:03:17 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[239578]: [ALERT]    (239582) : Current worker (239584) exited with code 143 (Terminated)
Oct  2 08:03:17 np0005465987 neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d[239578]: [WARNING]  (239582) : All workers exited. Exiting... (0)
Oct  2 08:03:17 np0005465987 systemd[1]: libpod-27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8.scope: Deactivated successfully.
Oct  2 08:03:17 np0005465987 podman[239662]: 2025-10-02 12:03:17.944699573 +0000 UTC m=+0.073101710 container died 27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:03:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:03:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:17.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:03:17 np0005465987 nova_compute[230713]: 2025-10-02 12:03:17.997 2 INFO nova.virt.libvirt.driver [-] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Instance destroyed successfully.#033[00m
Oct  2 08:03:17 np0005465987 nova_compute[230713]: 2025-10-02 12:03:17.998 2 DEBUG nova.objects.instance [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lazy-loading 'resources' on Instance uuid 6febae1b-d70f-43e6-8aba-1e913541fec2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.009 2 DEBUG nova.virt.libvirt.vif [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:02:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-304906255',display_name='tempest-LiveMigrationTest-server-304906255',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-304906255',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:02:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3f2b3ac7d7504c9c96f0d4a67e0243c9',ramdisk_id='',reservation_id='r-utis6e3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1876533760',owner_user_name='tempest-LiveMigrationTest-1876533760-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:03:14Z,user_data=None,user_id='efb31eeadee34403b1ab7a584f3616f7',uuid=6febae1b-d70f-43e6-8aba-1e913541fec2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.010 2 DEBUG nova.network.os_vif_util [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Converting VIF {"id": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "address": "fa:16:3e:87:34:a2", "network": {"id": "5bd66e63-9399-4ab1-bcda-a761f2c44b1d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-205544999-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f2b3ac7d7504c9c96f0d4a67e0243c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a9c114c-7c", "ovs_interfaceid": "4a9c114c-7ca6-4f80-ab5d-d38765e4c15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.011 2 DEBUG nova.network.os_vif_util [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.011 2 DEBUG os_vif [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.013 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a9c114c-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.021 2 INFO os_vif [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=4a9c114c-7ca6-4f80-ab5d-d38765e4c15c,network=Network(5bd66e63-9399-4ab1-bcda-a761f2c44b1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a9c114c-7c')#033[00m
Oct  2 08:03:18 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8-userdata-shm.mount: Deactivated successfully.
Oct  2 08:03:18 np0005465987 systemd[1]: var-lib-containers-storage-overlay-512d87c96b7758644ead0b64255200f4b1178c429486d9489bd5422f3c783308-merged.mount: Deactivated successfully.
Oct  2 08:03:18 np0005465987 podman[239662]: 2025-10-02 12:03:18.048025985 +0000 UTC m=+0.176428122 container cleanup 27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 08:03:18 np0005465987 systemd[1]: libpod-conmon-27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8.scope: Deactivated successfully.
Oct  2 08:03:18 np0005465987 podman[239717]: 2025-10-02 12:03:18.25060597 +0000 UTC m=+0.180323707 container remove 27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:03:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:18.256 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[49d0a694-fc89-41f2-bf61-d97c438aa758]: (4, ('Thu Oct  2 12:03:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d (27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8)\n27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8\nThu Oct  2 12:03:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d (27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8)\n27fdb441ff71fe1f7603476da1914d15baae0d12d9702910ed7dcbc011b4bde8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:18.257 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ee277c8b-97ef-4052-b4a6-e97052b4815e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:18.258 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bd66e63-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:18 np0005465987 kernel: tap5bd66e63-90: left promiscuous mode
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:18.282 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[39a4660c-f695-43af-9fc0-ef8b5cc8d3fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:18.308 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b73cbcc7-244d-49c9-93d1-cd83fadbd10a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:18.309 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[76ce2f7b-2336-4734-bc7e-f051042b5542]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:18.329 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0ce427-04cb-4dea-9726-0b21a7f8ada7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465810, 'reachable_time': 38886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239734, 'error': None, 'target': 'ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:18.332 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5bd66e63-9399-4ab1-bcda-a761f2c44b1d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:03:18 np0005465987 systemd[1]: run-netns-ovnmeta\x2d5bd66e63\x2d9399\x2d4ab1\x2dbcda\x2da761f2c44b1d.mount: Deactivated successfully.
Oct  2 08:03:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:18.332 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[cef8b3ec-619f-40a5-8483-215d1d249186]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.484 2 DEBUG nova.compute.manager [req-9c1e7c37-3056-4aa6-8b21-2861f064bfb9 req-eb7d94d0-d677-4d4e-a9e0-137cbbd0a004 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.484 2 DEBUG oslo_concurrency.lockutils [req-9c1e7c37-3056-4aa6-8b21-2861f064bfb9 req-eb7d94d0-d677-4d4e-a9e0-137cbbd0a004 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.485 2 DEBUG oslo_concurrency.lockutils [req-9c1e7c37-3056-4aa6-8b21-2861f064bfb9 req-eb7d94d0-d677-4d4e-a9e0-137cbbd0a004 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.485 2 DEBUG oslo_concurrency.lockutils [req-9c1e7c37-3056-4aa6-8b21-2861f064bfb9 req-eb7d94d0-d677-4d4e-a9e0-137cbbd0a004 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.486 2 DEBUG nova.compute.manager [req-9c1e7c37-3056-4aa6-8b21-2861f064bfb9 req-eb7d94d0-d677-4d4e-a9e0-137cbbd0a004 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.486 2 DEBUG nova.compute.manager [req-9c1e7c37-3056-4aa6-8b21-2861f064bfb9 req-eb7d94d0-d677-4d4e-a9e0-137cbbd0a004 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-unplugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.597 2 DEBUG oslo_concurrency.lockutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Acquiring lock "30cf48f4-4b75-4aca-894f-5165b1943f3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.597 2 DEBUG oslo_concurrency.lockutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "30cf48f4-4b75-4aca-894f-5165b1943f3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.609 2 DEBUG nova.compute.manager [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.660 2 DEBUG oslo_concurrency.lockutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.660 2 DEBUG oslo_concurrency.lockutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.667 2 DEBUG nova.virt.hardware [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.667 2 INFO nova.compute.claims [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:03:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:18.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:18 np0005465987 nova_compute[230713]: 2025-10-02 12:03:18.796 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:03:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4209692166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.211 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.217 2 DEBUG nova.compute.provider_tree [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.232 2 DEBUG nova.scheduler.client.report [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.250 2 DEBUG oslo_concurrency.lockutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.250 2 DEBUG nova.compute.manager [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.320 2 DEBUG nova.compute.manager [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.321 2 DEBUG nova.network.neutron [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.330 2 INFO nova.virt.libvirt.driver [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Deleting instance files /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2_del#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.331 2 INFO nova.virt.libvirt.driver [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Deletion of /var/lib/nova/instances/6febae1b-d70f-43e6-8aba-1e913541fec2_del complete#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.367 2 INFO nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.395 2 DEBUG nova.compute.manager [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.403 2 INFO nova.compute.manager [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Took 1.84 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.404 2 DEBUG oslo.service.loopingcall [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.404 2 DEBUG nova.compute.manager [-] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.404 2 DEBUG nova.network.neutron [-] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.485 2 DEBUG nova.compute.manager [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.486 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.486 2 INFO nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Creating image(s)#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.510 2 DEBUG nova.storage.rbd_utils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image 30cf48f4-4b75-4aca-894f-5165b1943f3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.540 2 DEBUG nova.storage.rbd_utils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image 30cf48f4-4b75-4aca-894f-5165b1943f3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.572 2 DEBUG nova.storage.rbd_utils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image 30cf48f4-4b75-4aca-894f-5165b1943f3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.577 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.654 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.655 2 DEBUG oslo_concurrency.lockutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.655 2 DEBUG oslo_concurrency.lockutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.656 2 DEBUG oslo_concurrency.lockutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.681 2 DEBUG nova.storage.rbd_utils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image 30cf48f4-4b75-4aca-894f-5165b1943f3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:03:19 np0005465987 nova_compute[230713]: 2025-10-02 12:03:19.684 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 30cf48f4-4b75-4aca-894f-5165b1943f3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:19.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.189 2 DEBUG nova.network.neutron [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.190 2 DEBUG nova.compute.manager [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.331 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 30cf48f4-4b75-4aca-894f-5165b1943f3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.440 2 DEBUG nova.storage.rbd_utils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] resizing rbd image 30cf48f4-4b75-4aca-894f-5165b1943f3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.641 2 DEBUG nova.compute.manager [req-7361e6b1-1a82-4521-83cd-3f326b48c0ce req-eae30968-148e-487b-bf6b-eba9a2815496 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.642 2 DEBUG oslo_concurrency.lockutils [req-7361e6b1-1a82-4521-83cd-3f326b48c0ce req-eae30968-148e-487b-bf6b-eba9a2815496 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.642 2 DEBUG oslo_concurrency.lockutils [req-7361e6b1-1a82-4521-83cd-3f326b48c0ce req-eae30968-148e-487b-bf6b-eba9a2815496 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.643 2 DEBUG oslo_concurrency.lockutils [req-7361e6b1-1a82-4521-83cd-3f326b48c0ce req-eae30968-148e-487b-bf6b-eba9a2815496 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.643 2 DEBUG nova.compute.manager [req-7361e6b1-1a82-4521-83cd-3f326b48c0ce req-eae30968-148e-487b-bf6b-eba9a2815496 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] No waiting events found dispatching network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.643 2 WARNING nova.compute.manager [req-7361e6b1-1a82-4521-83cd-3f326b48c0ce req-eae30968-148e-487b-bf6b-eba9a2815496 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received unexpected event network-vif-plugged-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.650 2 DEBUG nova.objects.instance [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lazy-loading 'migration_context' on Instance uuid 30cf48f4-4b75-4aca-894f-5165b1943f3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.669 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.670 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Ensure instance console log exists: /var/lib/nova/instances/30cf48f4-4b75-4aca-894f-5165b1943f3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.670 2 DEBUG oslo_concurrency.lockutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.671 2 DEBUG oslo_concurrency.lockutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.671 2 DEBUG oslo_concurrency.lockutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.672 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.678 2 WARNING nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.682 2 DEBUG nova.virt.libvirt.host [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.683 2 DEBUG nova.virt.libvirt.host [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.687 2 DEBUG nova.virt.libvirt.host [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.687 2 DEBUG nova.virt.libvirt.host [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.689 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.689 2 DEBUG nova.virt.hardware [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.690 2 DEBUG nova.virt.hardware [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.690 2 DEBUG nova.virt.hardware [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.690 2 DEBUG nova.virt.hardware [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.691 2 DEBUG nova.virt.hardware [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.691 2 DEBUG nova.virt.hardware [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.691 2 DEBUG nova.virt.hardware [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.692 2 DEBUG nova.virt.hardware [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.692 2 DEBUG nova.virt.hardware [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.692 2 DEBUG nova.virt.hardware [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.693 2 DEBUG nova.virt.hardware [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.696 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.718 2 DEBUG nova.network.neutron [-] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:03:20 np0005465987 nova_compute[230713]: 2025-10-02 12:03:20.739 2 INFO nova.compute.manager [-] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Took 1.33 seconds to deallocate network for instance.#033[00m
Oct  2 08:03:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:20.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:03:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2782296448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.097 2 INFO nova.compute.manager [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Took 0.36 seconds to detach 1 volumes for instance.#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.100 2 DEBUG nova.compute.manager [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Deleting volume: b3df5dc9-9a56-4922-8b65-4162deb6be93 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.102 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.128 2 DEBUG nova.storage.rbd_utils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image 30cf48f4-4b75-4aca-894f-5165b1943f3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.132 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.380 2 DEBUG oslo_concurrency.lockutils [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.381 2 DEBUG oslo_concurrency.lockutils [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.385 2 DEBUG oslo_concurrency.lockutils [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.422 2 INFO nova.scheduler.client.report [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Deleted allocations for instance 6febae1b-d70f-43e6-8aba-1e913541fec2#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.512 2 DEBUG oslo_concurrency.lockutils [None req-cf84dd25-6554-44af-aa51-d7cc622c1a81 efb31eeadee34403b1ab7a584f3616f7 3f2b3ac7d7504c9c96f0d4a67e0243c9 - - default default] Lock "6febae1b-d70f-43e6-8aba-1e913541fec2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:03:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/125296754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.541 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.544 2 DEBUG nova.objects.instance [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lazy-loading 'pci_devices' on Instance uuid 30cf48f4-4b75-4aca-894f-5165b1943f3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.561 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  <uuid>30cf48f4-4b75-4aca-894f-5165b1943f3e</uuid>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  <name>instance-00000014</name>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1373326528</nova:name>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:03:20</nova:creationTime>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <nova:user uuid="e829ca31274b48b9b656df1ba321832c">tempest-DeleteServersAdminTestJSON-200716909-project-member</nova:user>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <nova:project uuid="612d5f60edd74e79a420378579b416ea">tempest-DeleteServersAdminTestJSON-200716909</nova:project>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <entry name="serial">30cf48f4-4b75-4aca-894f-5165b1943f3e</entry>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <entry name="uuid">30cf48f4-4b75-4aca-894f-5165b1943f3e</entry>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/30cf48f4-4b75-4aca-894f-5165b1943f3e_disk">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/30cf48f4-4b75-4aca-894f-5165b1943f3e_disk.config">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/30cf48f4-4b75-4aca-894f-5165b1943f3e/console.log" append="off"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:03:21 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:03:21 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:03:21 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:03:21 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.620 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.620 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.621 2 INFO nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Using config drive#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.649 2 DEBUG nova.storage.rbd_utils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image 30cf48f4-4b75-4aca-894f-5165b1943f3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.796 2 INFO nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Creating config drive at /var/lib/nova/instances/30cf48f4-4b75-4aca-894f-5165b1943f3e/disk.config#033[00m
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.802 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30cf48f4-4b75-4aca-894f-5165b1943f3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppd0e2dj5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:03:21 np0005465987 nova_compute[230713]: 2025-10-02 12:03:21.950 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30cf48f4-4b75-4aca-894f-5165b1943f3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppd0e2dj5" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:03:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:21.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:22 np0005465987 nova_compute[230713]: 2025-10-02 12:03:22.000 2 DEBUG nova.storage.rbd_utils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] rbd image 30cf48f4-4b75-4aca-894f-5165b1943f3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:03:22 np0005465987 nova_compute[230713]: 2025-10-02 12:03:22.006 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30cf48f4-4b75-4aca-894f-5165b1943f3e/disk.config 30cf48f4-4b75-4aca-894f-5165b1943f3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:03:22 np0005465987 nova_compute[230713]: 2025-10-02 12:03:22.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:03:22 np0005465987 nova_compute[230713]: 2025-10-02 12:03:22.251 2 DEBUG oslo_concurrency.processutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30cf48f4-4b75-4aca-894f-5165b1943f3e/disk.config 30cf48f4-4b75-4aca-894f-5165b1943f3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:03:22 np0005465987 nova_compute[230713]: 2025-10-02 12:03:22.252 2 INFO nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Deleting local config drive /var/lib/nova/instances/30cf48f4-4b75-4aca-894f-5165b1943f3e/disk.config because it was imported into RBD.
Oct  2 08:03:22 np0005465987 systemd-machined[188335]: New machine qemu-12-instance-00000014.
Oct  2 08:03:22 np0005465987 systemd[1]: Started Virtual Machine qemu-12-instance-00000014.
Oct  2 08:03:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:22 np0005465987 nova_compute[230713]: 2025-10-02 12:03:22.770 2 DEBUG nova.compute.manager [req-5f9743c9-7a99-4149-8086-311d466fbc1c req-2b095bdf-bee2-45fd-9597-c2a702a99f99 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Received event network-vif-deleted-4a9c114c-7ca6-4f80-ab5d-d38765e4c15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:03:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000053s ======
Oct  2 08:03:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:22.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Oct  2 08:03:23 np0005465987 nova_compute[230713]: 2025-10-02 12:03:23.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:03:23 np0005465987 nova_compute[230713]: 2025-10-02 12:03:23.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:03:23 np0005465987 nova_compute[230713]: 2025-10-02 12:03:23.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  2 08:03:23 np0005465987 nova_compute[230713]: 2025-10-02 12:03:23.974 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406603.9742074, 30cf48f4-4b75-4aca-894f-5165b1943f3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:03:23 np0005465987 nova_compute[230713]: 2025-10-02 12:03:23.975 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] VM Resumed (Lifecycle Event)
Oct  2 08:03:23 np0005465987 nova_compute[230713]: 2025-10-02 12:03:23.980 2 DEBUG nova.compute.manager [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:03:23 np0005465987 nova_compute[230713]: 2025-10-02 12:03:23.981 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:03:23 np0005465987 nova_compute[230713]: 2025-10-02 12:03:23.986 2 INFO nova.virt.libvirt.driver [-] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Instance spawned successfully.
Oct  2 08:03:23 np0005465987 nova_compute[230713]: 2025-10-02 12:03:23.987 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:03:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:23.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.179 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.185 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.190 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.191 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.191 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.192 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.193 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.194 2 DEBUG nova.virt.libvirt.driver [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.201 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.368 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.369 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406603.9763584, 30cf48f4-4b75-4aca-894f-5165b1943f3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.369 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] VM Started (Lifecycle Event)
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.421 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.425 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.448 2 INFO nova.compute.manager [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Took 4.96 seconds to spawn the instance on the hypervisor.
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.449 2 DEBUG nova.compute.manager [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.455 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.545 2 INFO nova.compute.manager [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Took 5.90 seconds to build instance.
Oct  2 08:03:24 np0005465987 nova_compute[230713]: 2025-10-02 12:03:24.567 2 DEBUG oslo_concurrency.lockutils [None req-34966a6c-f7c3-4459-a836-71e5cbbab151 e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "30cf48f4-4b75-4aca-894f-5165b1943f3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:03:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:24.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:24 np0005465987 podman[240102]: 2025-10-02 12:03:24.845606611 +0000 UTC m=+0.062431771 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:03:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:25.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:26.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:03:27 np0005465987 nova_compute[230713]: 2025-10-02 12:03:27.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:03:27 np0005465987 nova_compute[230713]: 2025-10-02 12:03:27.820 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406592.819023, d0395d7b-b82b-4a36-8cd5-8fa9b00807c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:03:27 np0005465987 nova_compute[230713]: 2025-10-02 12:03:27.820 2 INFO nova.compute.manager [-] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] VM Stopped (Lifecycle Event)
Oct  2 08:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:27.930 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:27.931 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:27.931 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:03:27 np0005465987 nova_compute[230713]: 2025-10-02 12:03:27.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:03:27 np0005465987 nova_compute[230713]: 2025-10-02 12:03:27.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:03:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:28 np0005465987 nova_compute[230713]: 2025-10-02 12:03:28.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:03:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:28.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:29.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.119 2 DEBUG nova.compute.manager [None req-2406c8dd-eb64-488a-9f13-03d08464f1d2 - - - - - -] [instance: d0395d7b-b82b-4a36-8cd5-8fa9b00807c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.148 2 DEBUG oslo_concurrency.lockutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Acquiring lock "30cf48f4-4b75-4aca-894f-5165b1943f3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.149 2 DEBUG oslo_concurrency.lockutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "30cf48f4-4b75-4aca-894f-5165b1943f3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.149 2 DEBUG oslo_concurrency.lockutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Acquiring lock "30cf48f4-4b75-4aca-894f-5165b1943f3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.149 2 DEBUG oslo_concurrency.lockutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "30cf48f4-4b75-4aca-894f-5165b1943f3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.149 2 DEBUG oslo_concurrency.lockutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "30cf48f4-4b75-4aca-894f-5165b1943f3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.150 2 INFO nova.compute.manager [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Terminating instance
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.151 2 DEBUG oslo_concurrency.lockutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Acquiring lock "refresh_cache-30cf48f4-4b75-4aca-894f-5165b1943f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.151 2 DEBUG oslo_concurrency.lockutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Acquired lock "refresh_cache-30cf48f4-4b75-4aca-894f-5165b1943f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.151 2 DEBUG nova.network.neutron [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.388 2 DEBUG nova.network.neutron [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.710 2 DEBUG nova.network.neutron [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.754 2 DEBUG oslo_concurrency.lockutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Releasing lock "refresh_cache-30cf48f4-4b75-4aca-894f-5165b1943f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.754 2 DEBUG nova.compute.manager [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:03:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:30.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:30 np0005465987 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct  2 08:03:30 np0005465987 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000014.scope: Consumed 8.497s CPU time.
Oct  2 08:03:30 np0005465987 systemd-machined[188335]: Machine qemu-12-instance-00000014 terminated.
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.973 2 INFO nova.virt.libvirt.driver [-] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Instance destroyed successfully.
Oct  2 08:03:30 np0005465987 nova_compute[230713]: 2025-10-02 12:03:30.975 2 DEBUG nova.objects.instance [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lazy-loading 'resources' on Instance uuid 30cf48f4-4b75-4aca-894f-5165b1943f3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:03:31 np0005465987 nova_compute[230713]: 2025-10-02 12:03:31.084 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:03:31 np0005465987 nova_compute[230713]: 2025-10-02 12:03:31.085 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:03:31 np0005465987 nova_compute[230713]: 2025-10-02 12:03:31.085 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:03:31 np0005465987 nova_compute[230713]: 2025-10-02 12:03:31.086 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:03:31 np0005465987 nova_compute[230713]: 2025-10-02 12:03:31.146 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct  2 08:03:31 np0005465987 nova_compute[230713]: 2025-10-02 12:03:31.147 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  2 08:03:31 np0005465987 nova_compute[230713]: 2025-10-02 12:03:31.147 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:03:31 np0005465987 nova_compute[230713]: 2025-10-02 12:03:31.148 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:03:31 np0005465987 nova_compute[230713]: 2025-10-02 12:03:31.148 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:03:31 np0005465987 nova_compute[230713]: 2025-10-02 12:03:31.148 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  2 08:03:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:03:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 4998 writes, 26K keys, 4998 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
Cumulative WAL: 4998 writes, 4998 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1584 writes, 7545 keys, 1584 commit groups, 1.0 writes per commit group, ingest: 15.47 MB, 0.03 MB/s
Interval WAL: 1584 writes, 1584 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     72.3      0.43              0.09        14    0.031       0      0       0.0       0.0
  L6      1/0    8.62 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     90.2     74.3      1.43              0.32        13    0.110     61K   6874       0.0       0.0
 Sum      1/0    8.62 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     69.3     73.9      1.87              0.41        27    0.069     61K   6874       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1     42.8     43.6      1.15              0.20        10    0.115     25K   2555       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     90.2     74.3      1.43              0.32        13    0.110     61K   6874       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     72.6      0.43              0.09        13    0.033       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.031, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.13 GB write, 0.08 MB/s write, 0.13 GB read, 0.07 MB/s read, 1.9 seconds
Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f9644871f0#2 capacity: 304.00 MB usage: 12.42 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000254 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(718,11.93 MB,3.92318%) FilterBlock(27,173.55 KB,0.0557498%) IndexBlock(27,331.31 KB,0.10643%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  2 08:03:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:03:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:31.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:32 np0005465987 nova_compute[230713]: 2025-10-02 12:03:32.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:32 np0005465987 nova_compute[230713]: 2025-10-02 12:03:32.579 2 INFO nova.virt.libvirt.driver [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Deleting instance files /var/lib/nova/instances/30cf48f4-4b75-4aca-894f-5165b1943f3e_del#033[00m
Oct  2 08:03:32 np0005465987 nova_compute[230713]: 2025-10-02 12:03:32.580 2 INFO nova.virt.libvirt.driver [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Deletion of /var/lib/nova/instances/30cf48f4-4b75-4aca-894f-5165b1943f3e_del complete#033[00m
Oct  2 08:03:32 np0005465987 nova_compute[230713]: 2025-10-02 12:03:32.669 2 INFO nova.compute.manager [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Took 1.91 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:03:32 np0005465987 nova_compute[230713]: 2025-10-02 12:03:32.669 2 DEBUG oslo.service.loopingcall [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:03:32 np0005465987 nova_compute[230713]: 2025-10-02 12:03:32.669 2 DEBUG nova.compute.manager [-] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:03:32 np0005465987 nova_compute[230713]: 2025-10-02 12:03:32.670 2 DEBUG nova.network.neutron [-] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:03:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:32.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:32 np0005465987 podman[240144]: 2025-10-02 12:03:32.850322915 +0000 UTC m=+0.063640795 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:03:32 np0005465987 podman[240145]: 2025-10-02 12:03:32.880553959 +0000 UTC m=+0.084322942 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:03:32 np0005465987 podman[240143]: 2025-10-02 12:03:32.903968849 +0000 UTC m=+0.113281281 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:03:32 np0005465987 nova_compute[230713]: 2025-10-02 12:03:32.997 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406597.9941263, 6febae1b-d70f-43e6-8aba-1e913541fec2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:32.999 2 INFO nova.compute.manager [-] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.012 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.088 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.089 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.089 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.089 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.090 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.109 2 DEBUG nova.compute.manager [None req-12204331-a659-4142-b7b8-ecb4052c46dd - - - - - -] [instance: 6febae1b-d70f-43e6-8aba-1e913541fec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.237 2 DEBUG nova.network.neutron [-] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.289 2 DEBUG nova.network.neutron [-] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.358 2 INFO nova.compute.manager [-] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Took 0.69 seconds to deallocate network for instance.#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.452 2 DEBUG oslo_concurrency.lockutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.453 2 DEBUG oslo_concurrency.lockutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:03:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2979139565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.496 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.638 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.639 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4868MB free_disk=20.946483612060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:03:33 np0005465987 nova_compute[230713]: 2025-10-02 12:03:33.639 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:34.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:34 np0005465987 nova_compute[230713]: 2025-10-02 12:03:34.171 2 DEBUG oslo_concurrency.processutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:03:34 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1695241263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:03:34 np0005465987 nova_compute[230713]: 2025-10-02 12:03:34.664 2 DEBUG oslo_concurrency.processutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:34 np0005465987 nova_compute[230713]: 2025-10-02 12:03:34.673 2 DEBUG nova.compute.provider_tree [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:03:34 np0005465987 nova_compute[230713]: 2025-10-02 12:03:34.695 2 DEBUG nova.scheduler.client.report [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:03:34 np0005465987 nova_compute[230713]: 2025-10-02 12:03:34.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:34.734 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:03:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:34.735 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:03:34 np0005465987 nova_compute[230713]: 2025-10-02 12:03:34.737 2 DEBUG oslo_concurrency.lockutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:34 np0005465987 nova_compute[230713]: 2025-10-02 12:03:34.739 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:34 np0005465987 nova_compute[230713]: 2025-10-02 12:03:34.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:34.769 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:03:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:34.770 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:03:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:34.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:34 np0005465987 nova_compute[230713]: 2025-10-02 12:03:34.830 2 INFO nova.scheduler.client.report [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Deleted allocations for instance 30cf48f4-4b75-4aca-894f-5165b1943f3e#033[00m
Oct  2 08:03:34 np0005465987 nova_compute[230713]: 2025-10-02 12:03:34.848 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:03:34 np0005465987 nova_compute[230713]: 2025-10-02 12:03:34.849 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:03:34 np0005465987 nova_compute[230713]: 2025-10-02 12:03:34.871 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:03:34 np0005465987 nova_compute[230713]: 2025-10-02 12:03:34.947 2 DEBUG oslo_concurrency.lockutils [None req-8dc5ced4-cdc2-4934-abaa-9539a92692db e829ca31274b48b9b656df1ba321832c 612d5f60edd74e79a420378579b416ea - - default default] Lock "30cf48f4-4b75-4aca-894f-5165b1943f3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:03:35 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1929740917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:03:35 np0005465987 nova_compute[230713]: 2025-10-02 12:03:35.275 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:03:35 np0005465987 nova_compute[230713]: 2025-10-02 12:03:35.280 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:03:35 np0005465987 nova_compute[230713]: 2025-10-02 12:03:35.311 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:03:35 np0005465987 nova_compute[230713]: 2025-10-02 12:03:35.389 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:03:35 np0005465987 nova_compute[230713]: 2025-10-02 12:03:35.390 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:36.773 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:03:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:36.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:03:37 np0005465987 nova_compute[230713]: 2025-10-02 12:03:37.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:37 np0005465987 nova_compute[230713]: 2025-10-02 12:03:37.343 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:03:37 np0005465987 nova_compute[230713]: 2025-10-02 12:03:37.344 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:03:37 np0005465987 nova_compute[230713]: 2025-10-02 12:03:37.344 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:03:37 np0005465987 nova_compute[230713]: 2025-10-02 12:03:37.345 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:03:37 np0005465987 nova_compute[230713]: 2025-10-02 12:03:37.960 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:03:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:38.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:38 np0005465987 nova_compute[230713]: 2025-10-02 12:03:38.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:38.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:03:39.736 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:03:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:40.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:03:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:40.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:03:40 np0005465987 nova_compute[230713]: 2025-10-02 12:03:40.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:03:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:42.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:42 np0005465987 nova_compute[230713]: 2025-10-02 12:03:42.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:03:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:42.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:03:43 np0005465987 nova_compute[230713]: 2025-10-02 12:03:43.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:44.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:03:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:44.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:03:45 np0005465987 nova_compute[230713]: 2025-10-02 12:03:45.972 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406610.9712298, 30cf48f4-4b75-4aca-894f-5165b1943f3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:03:45 np0005465987 nova_compute[230713]: 2025-10-02 12:03:45.973 2 INFO nova.compute.manager [-] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:03:46 np0005465987 nova_compute[230713]: 2025-10-02 12:03:46.011 2 DEBUG nova.compute.manager [None req-d873f31d-cb0d-4845-a373-c0bea0e1f2cd - - - - - -] [instance: 30cf48f4-4b75-4aca-894f-5165b1943f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:03:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:46.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:46 np0005465987 nova_compute[230713]: 2025-10-02 12:03:46.029 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:03:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:03:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:03:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:03:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:03:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:03:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:46.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:03:47 np0005465987 nova_compute[230713]: 2025-10-02 12:03:47.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:47 np0005465987 podman[240665]: 2025-10-02 12:03:47.069371438 +0000 UTC m=+0.023057443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 08:03:47 np0005465987 podman[240665]: 2025-10-02 12:03:47.266045843 +0000 UTC m=+0.219731858 container create e1b05df8a9672541b4868e7401bbace1b19a2aae01e6f66724e136ebe16c206b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 08:03:47 np0005465987 systemd[1]: Started libpod-conmon-e1b05df8a9672541b4868e7401bbace1b19a2aae01e6f66724e136ebe16c206b.scope.
Oct  2 08:03:47 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:03:47 np0005465987 podman[240665]: 2025-10-02 12:03:47.609752599 +0000 UTC m=+0.563438654 container init e1b05df8a9672541b4868e7401bbace1b19a2aae01e6f66724e136ebe16c206b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wiles, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 08:03:47 np0005465987 podman[240665]: 2025-10-02 12:03:47.623957732 +0000 UTC m=+0.577643767 container start e1b05df8a9672541b4868e7401bbace1b19a2aae01e6f66724e136ebe16c206b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 08:03:47 np0005465987 competent_wiles[240681]: 167 167
Oct  2 08:03:47 np0005465987 systemd[1]: libpod-e1b05df8a9672541b4868e7401bbace1b19a2aae01e6f66724e136ebe16c206b.scope: Deactivated successfully.
Oct  2 08:03:47 np0005465987 podman[240665]: 2025-10-02 12:03:47.823990858 +0000 UTC m=+0.777676903 container attach e1b05df8a9672541b4868e7401bbace1b19a2aae01e6f66724e136ebe16c206b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wiles, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:03:47 np0005465987 podman[240665]: 2025-10-02 12:03:47.824571943 +0000 UTC m=+0.778257988 container died e1b05df8a9672541b4868e7401bbace1b19a2aae01e6f66724e136ebe16c206b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wiles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Oct  2 08:03:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:48.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:48 np0005465987 nova_compute[230713]: 2025-10-02 12:03:48.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:48 np0005465987 systemd[1]: var-lib-containers-storage-overlay-51a46f33902370e2657ee68c5a4b687798128db992c97429d94e1dbc919fc97d-merged.mount: Deactivated successfully.
Oct  2 08:03:48 np0005465987 podman[240665]: 2025-10-02 12:03:48.787917574 +0000 UTC m=+1.741603619 container remove e1b05df8a9672541b4868e7401bbace1b19a2aae01e6f66724e136ebe16c206b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 08:03:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:03:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:48.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:03:48 np0005465987 systemd[1]: libpod-conmon-e1b05df8a9672541b4868e7401bbace1b19a2aae01e6f66724e136ebe16c206b.scope: Deactivated successfully.
Oct  2 08:03:49 np0005465987 podman[240705]: 2025-10-02 12:03:48.967694375 +0000 UTC m=+0.031492679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 08:03:49 np0005465987 podman[240705]: 2025-10-02 12:03:49.491646634 +0000 UTC m=+0.555444898 container create a68b02288985ae66e75341a3219fcc79de4482623d6a09c6609007f5b297d9a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 08:03:49 np0005465987 systemd[1]: Started libpod-conmon-a68b02288985ae66e75341a3219fcc79de4482623d6a09c6609007f5b297d9a1.scope.
Oct  2 08:03:49 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:03:49 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d20f54a3f88b1763d8496d5f9c2ddcff633497921a52fe6077756e5d9bba72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 08:03:49 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d20f54a3f88b1763d8496d5f9c2ddcff633497921a52fe6077756e5d9bba72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 08:03:49 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d20f54a3f88b1763d8496d5f9c2ddcff633497921a52fe6077756e5d9bba72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 08:03:49 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d20f54a3f88b1763d8496d5f9c2ddcff633497921a52fe6077756e5d9bba72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 08:03:49 np0005465987 podman[240705]: 2025-10-02 12:03:49.894114572 +0000 UTC m=+0.957912826 container init a68b02288985ae66e75341a3219fcc79de4482623d6a09c6609007f5b297d9a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaum, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:03:49 np0005465987 podman[240705]: 2025-10-02 12:03:49.902987471 +0000 UTC m=+0.966785715 container start a68b02288985ae66e75341a3219fcc79de4482623d6a09c6609007f5b297d9a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 08:03:50 np0005465987 podman[240705]: 2025-10-02 12:03:50.023088516 +0000 UTC m=+1.086886780 container attach a68b02288985ae66e75341a3219fcc79de4482623d6a09c6609007f5b297d9a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 08:03:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:50.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:50.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]: [
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:    {
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:        "available": false,
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:        "ceph_device": false,
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:        "lsm_data": {},
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:        "lvs": [],
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:        "path": "/dev/sr0",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:        "rejected_reasons": [
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "Insufficient space (<5GB)",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "Has a FileSystem"
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:        ],
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:        "sys_api": {
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "actuators": null,
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "device_nodes": "sr0",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "devname": "sr0",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "human_readable_size": "482.00 KB",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "id_bus": "ata",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "model": "QEMU DVD-ROM",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "nr_requests": "2",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "parent": "/dev/sr0",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "partitions": {},
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "path": "/dev/sr0",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "removable": "1",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "rev": "2.5+",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "ro": "0",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "rotational": "0",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "sas_address": "",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "sas_device_handle": "",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "scheduler_mode": "mq-deadline",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "sectors": 0,
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "sectorsize": "2048",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "size": 493568.0,
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "support_discard": "2048",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "type": "disk",
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:            "vendor": "QEMU"
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:        }
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]:    }
Oct  2 08:03:51 np0005465987 mystifying_chaum[240722]: ]
Oct  2 08:03:51 np0005465987 systemd[1]: libpod-a68b02288985ae66e75341a3219fcc79de4482623d6a09c6609007f5b297d9a1.scope: Deactivated successfully.
Oct  2 08:03:51 np0005465987 podman[240705]: 2025-10-02 12:03:51.193951175 +0000 UTC m=+2.257749439 container died a68b02288985ae66e75341a3219fcc79de4482623d6a09c6609007f5b297d9a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaum, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 08:03:51 np0005465987 systemd[1]: libpod-a68b02288985ae66e75341a3219fcc79de4482623d6a09c6609007f5b297d9a1.scope: Consumed 1.223s CPU time.
Oct  2 08:03:51 np0005465987 systemd[1]: var-lib-containers-storage-overlay-04d20f54a3f88b1763d8496d5f9c2ddcff633497921a52fe6077756e5d9bba72-merged.mount: Deactivated successfully.
Oct  2 08:03:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:03:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:52.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:52 np0005465987 podman[240705]: 2025-10-02 12:03:52.080689963 +0000 UTC m=+3.144488207 container remove a68b02288985ae66e75341a3219fcc79de4482623d6a09c6609007f5b297d9a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 08:03:52 np0005465987 nova_compute[230713]: 2025-10-02 12:03:52.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:52 np0005465987 systemd[1]: libpod-conmon-a68b02288985ae66e75341a3219fcc79de4482623d6a09c6609007f5b297d9a1.scope: Deactivated successfully.
Oct  2 08:03:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:03:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:52.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:03:53 np0005465987 nova_compute[230713]: 2025-10-02 12:03:53.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:54.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:54.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:55 np0005465987 podman[241836]: 2025-10-02 12:03:55.871169015 +0000 UTC m=+0.078107175 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:03:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:03:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:56.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:03:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:56.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:03:57 np0005465987 nova_compute[230713]: 2025-10-02 12:03:57.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:03:58 np0005465987 nova_compute[230713]: 2025-10-02 12:03:58.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:03:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:03:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:58.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:03:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:03:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:58.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:04:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:00.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:00.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:04:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:04:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:02.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:02 np0005465987 nova_compute[230713]: 2025-10-02 12:04:02.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:04:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:04:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:04:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:04:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:02.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:03 np0005465987 nova_compute[230713]: 2025-10-02 12:04:03.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:03 np0005465987 podman[241855]: 2025-10-02 12:04:03.926371747 +0000 UTC m=+0.141695907 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:04:03 np0005465987 podman[241857]: 2025-10-02 12:04:03.929206154 +0000 UTC m=+0.126739084 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  2 08:04:03 np0005465987 podman[241856]: 2025-10-02 12:04:03.93243002 +0000 UTC m=+0.132116688 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:04:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:04:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:04.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:04:04 np0005465987 ceph-mgr[81180]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3158772141
Oct  2 08:04:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:04.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:06.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:06.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:04:07 np0005465987 nova_compute[230713]: 2025-10-02 12:04:07.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:08 np0005465987 nova_compute[230713]: 2025-10-02 12:04:08.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:04:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:08.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:04:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:08.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:10.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:10.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:04:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:04:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:12.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:04:12 np0005465987 nova_compute[230713]: 2025-10-02 12:04:12.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:12.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:13 np0005465987 nova_compute[230713]: 2025-10-02 12:04:13.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:14.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:14.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:16.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:16.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:04:17 np0005465987 nova_compute[230713]: 2025-10-02 12:04:17.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:18.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:18 np0005465987 nova_compute[230713]: 2025-10-02 12:04:18.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:04:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:04:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:18.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:20.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:20.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:04:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:22.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:22 np0005465987 nova_compute[230713]: 2025-10-02 12:04:22.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:04:22Z|00098|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  2 08:04:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:22.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:23 np0005465987 nova_compute[230713]: 2025-10-02 12:04:23.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:24.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:24.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:04:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:26.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:04:26 np0005465987 podman[241969]: 2025-10-02 12:04:26.850459302 +0000 UTC m=+0.069691758 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 08:04:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:26.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:04:27 np0005465987 nova_compute[230713]: 2025-10-02 12:04:27.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:04:27.931 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:04:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:04:27.931 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:04:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:04:27.932 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:04:28 np0005465987 nova_compute[230713]: 2025-10-02 12:04:28.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:28.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:28.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:28 np0005465987 nova_compute[230713]: 2025-10-02 12:04:28.974 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:04:28 np0005465987 nova_compute[230713]: 2025-10-02 12:04:28.975 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:04:28 np0005465987 nova_compute[230713]: 2025-10-02 12:04:28.975 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:04:28 np0005465987 nova_compute[230713]: 2025-10-02 12:04:28.975 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:04:28 np0005465987 nova_compute[230713]: 2025-10-02 12:04:28.991 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:04:28 np0005465987 nova_compute[230713]: 2025-10-02 12:04:28.991 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:04:29 np0005465987 nova_compute[230713]: 2025-10-02 12:04:29.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:04:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:30.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:30.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:04:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:04:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:32.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:04:32 np0005465987 nova_compute[230713]: 2025-10-02 12:04:32.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:32.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:32 np0005465987 nova_compute[230713]: 2025-10-02 12:04:32.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:04:33 np0005465987 nova_compute[230713]: 2025-10-02 12:04:33.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:04:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:34.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:04:34 np0005465987 nova_compute[230713]: 2025-10-02 12:04:34.327 2 DEBUG oslo_concurrency.lockutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:04:34 np0005465987 nova_compute[230713]: 2025-10-02 12:04:34.327 2 DEBUG oslo_concurrency.lockutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:04:34 np0005465987 nova_compute[230713]: 2025-10-02 12:04:34.359 2 DEBUG nova.compute.manager [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:04:34 np0005465987 nova_compute[230713]: 2025-10-02 12:04:34.526 2 DEBUG oslo_concurrency.lockutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:04:34 np0005465987 nova_compute[230713]: 2025-10-02 12:04:34.526 2 DEBUG oslo_concurrency.lockutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:04:34 np0005465987 nova_compute[230713]: 2025-10-02 12:04:34.533 2 DEBUG nova.virt.hardware [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:04:34 np0005465987 nova_compute[230713]: 2025-10-02 12:04:34.533 2 INFO nova.compute.claims [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:04:34 np0005465987 podman[241988]: 2025-10-02 12:04:34.838809294 +0000 UTC m=+0.055943738 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:04:34 np0005465987 podman[241989]: 2025-10-02 12:04:34.84093831 +0000 UTC m=+0.055597277 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001)
Oct  2 08:04:34 np0005465987 podman[241987]: 2025-10-02 12:04:34.870701122 +0000 UTC m=+0.090795806 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct  2 08:04:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:04:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:34.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:04:34 np0005465987 nova_compute[230713]: 2025-10-02 12:04:34.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:04:34 np0005465987 nova_compute[230713]: 2025-10-02 12:04:34.996 2 DEBUG nova.scheduler.client.report [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.042 2 DEBUG nova.scheduler.client.report [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.042 2 DEBUG nova.compute.provider_tree [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.063 2 DEBUG nova.scheduler.client.report [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.095 2 DEBUG nova.scheduler.client.report [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.136 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.167 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:04:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:04:35 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/413277808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.560 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.569 2 DEBUG nova.compute.provider_tree [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.591 2 DEBUG nova.scheduler.client.report [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.617 2 DEBUG oslo_concurrency.lockutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.618 2 DEBUG nova.compute.manager [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.620 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.620 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.620 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.620 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.692 2 DEBUG nova.compute.manager [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.713 2 INFO nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.751 2 DEBUG nova.compute.manager [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.904 2 DEBUG nova.compute.manager [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.905 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.905 2 INFO nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Creating image(s)#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.932 2 DEBUG nova.storage.rbd_utils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.957 2 DEBUG nova.storage.rbd_utils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.981 2 DEBUG nova.storage.rbd_utils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:04:35 np0005465987 nova_compute[230713]: 2025-10-02 12:04:35.985 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:04:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:04:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1996495807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.039 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.040 2 DEBUG oslo_concurrency.lockutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.040 2 DEBUG oslo_concurrency.lockutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.041 2 DEBUG oslo_concurrency.lockutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.062 2 DEBUG nova.storage.rbd_utils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.066 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.084 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:04:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:04:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:36.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.233 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.235 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4915MB free_disk=20.946636199951172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.235 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.236 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.310 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 81cd8274-bb25-4b6c-aa66-89669fd098d5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.310 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.311 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.355 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.470 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.541 2 DEBUG nova.storage.rbd_utils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] resizing rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:04:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:04:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/93842212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.805 2 DEBUG nova.objects.instance [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lazy-loading 'migration_context' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.822 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.823 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.823 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Ensure instance console log exists: /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.824 2 DEBUG oslo_concurrency.lockutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.824 2 DEBUG oslo_concurrency.lockutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.824 2 DEBUG oslo_concurrency.lockutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.825 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.830 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.833 2 WARNING nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.836 2 DEBUG nova.virt.libvirt.host [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.837 2 DEBUG nova.virt.libvirt.host [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.839 2 DEBUG nova.virt.libvirt.host [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.839 2 DEBUG nova.virt.libvirt.host [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.840 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.840 2 DEBUG nova.virt.hardware [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.841 2 DEBUG nova.virt.hardware [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.841 2 DEBUG nova.virt.hardware [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.841 2 DEBUG nova.virt.hardware [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.841 2 DEBUG nova.virt.hardware [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.842 2 DEBUG nova.virt.hardware [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.842 2 DEBUG nova.virt.hardware [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.842 2 DEBUG nova.virt.hardware [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.842 2 DEBUG nova.virt.hardware [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.842 2 DEBUG nova.virt.hardware [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.843 2 DEBUG nova.virt.hardware [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.845 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.867 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.897 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:04:36 np0005465987 nova_compute[230713]: 2025-10-02 12:04:36.897 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:04:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:36.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:04:37 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2321269412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.251 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.279 2 DEBUG nova.storage.rbd_utils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.283 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:04:37 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct  2 08:04:37 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct  2 08:04:37 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct  2 08:04:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:04:37 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/529022672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.718 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.720 2 DEBUG nova.objects.instance [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lazy-loading 'pci_devices' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.738 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  <uuid>81cd8274-bb25-4b6c-aa66-89669fd098d5</uuid>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  <name>instance-00000019</name>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1602823290</nova:name>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:04:36</nova:creationTime>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <nova:user uuid="a10a935aed2d40ff8ee890b3089c7d03">tempest-UnshelveToHostMultiNodesTest-32053853-project-member</nova:user>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <nova:project uuid="39f16ad971c74b0296dabeb9b59464da">tempest-UnshelveToHostMultiNodesTest-32053853</nova:project>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <entry name="serial">81cd8274-bb25-4b6c-aa66-89669fd098d5</entry>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <entry name="uuid">81cd8274-bb25-4b6c-aa66-89669fd098d5</entry>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/console.log" append="off"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:04:37 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:04:37 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:04:37 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:04:37 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.796 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.796 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.797 2 INFO nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Using config drive#033[00m
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.822 2 DEBUG nova.storage.rbd_utils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.898 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.898 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.899 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:04:37 np0005465987 nova_compute[230713]: 2025-10-02 12:04:37.899 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:04:38 np0005465987 nova_compute[230713]: 2025-10-02 12:04:38.103 2 INFO nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Creating config drive at /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config#033[00m
Oct  2 08:04:38 np0005465987 nova_compute[230713]: 2025-10-02 12:04:38.108 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm7ifrx4n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:04:38 np0005465987 nova_compute[230713]: 2025-10-02 12:04:38.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:38.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:38 np0005465987 nova_compute[230713]: 2025-10-02 12:04:38.234 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm7ifrx4n" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:04:38 np0005465987 nova_compute[230713]: 2025-10-02 12:04:38.271 2 DEBUG nova.storage.rbd_utils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:04:38 np0005465987 nova_compute[230713]: 2025-10-02 12:04:38.276 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:04:38 np0005465987 nova_compute[230713]: 2025-10-02 12:04:38.471 2 DEBUG oslo_concurrency.processutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:04:38 np0005465987 nova_compute[230713]: 2025-10-02 12:04:38.472 2 INFO nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Deleting local config drive /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config because it was imported into RBD.#033[00m
Oct  2 08:04:38 np0005465987 systemd-machined[188335]: New machine qemu-13-instance-00000019.
Oct  2 08:04:38 np0005465987 systemd[1]: Started Virtual Machine qemu-13-instance-00000019.
Oct  2 08:04:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000054s ======
Oct  2 08:04:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:38.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct  2 08:04:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:04:39.546 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:04:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:04:39.547 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:04:39 np0005465987 nova_compute[230713]: 2025-10-02 12:04:39.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:04:39.548 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.115 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406680.1152368, 81cd8274-bb25-4b6c-aa66-89669fd098d5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.116 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.118 2 DEBUG nova.compute.manager [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.118 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.121 2 INFO nova.virt.libvirt.driver [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance spawned successfully.#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.121 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:04:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:40.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.143 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.147 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.151 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.151 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.151 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.152 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.152 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.152 2 DEBUG nova.virt.libvirt.driver [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.174 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.175 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406680.1195333, 81cd8274-bb25-4b6c-aa66-89669fd098d5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.175 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] VM Started (Lifecycle Event)#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.198 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.201 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.205 2 INFO nova.compute.manager [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Took 4.30 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.206 2 DEBUG nova.compute.manager [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.215 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.255 2 INFO nova.compute.manager [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Took 5.77 seconds to build instance.#033[00m
Oct  2 08:04:40 np0005465987 nova_compute[230713]: 2025-10-02 12:04:40.286 2 DEBUG oslo_concurrency.lockutils [None req-3eaaec75-64cf-442b-b934-586fd08277e5 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:04:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:04:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:40.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:04:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:04:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:42.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:42 np0005465987 nova_compute[230713]: 2025-10-02 12:04:42.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:42.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:43 np0005465987 nova_compute[230713]: 2025-10-02 12:04:43.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:43 np0005465987 nova_compute[230713]: 2025-10-02 12:04:43.327 2 DEBUG oslo_concurrency.lockutils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:04:43 np0005465987 nova_compute[230713]: 2025-10-02 12:04:43.327 2 DEBUG oslo_concurrency.lockutils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:04:43 np0005465987 nova_compute[230713]: 2025-10-02 12:04:43.328 2 INFO nova.compute.manager [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Shelving#033[00m
Oct  2 08:04:43 np0005465987 nova_compute[230713]: 2025-10-02 12:04:43.362 2 DEBUG nova.virt.libvirt.driver [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:04:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:44.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:44.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:46.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:46.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:04:47 np0005465987 nova_compute[230713]: 2025-10-02 12:04:47.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:48 np0005465987 nova_compute[230713]: 2025-10-02 12:04:48.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:48.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:48.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:50.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:04:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:50.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:04:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:04:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:52.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:52 np0005465987 nova_compute[230713]: 2025-10-02 12:04:52.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:04:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:52.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:04:53 np0005465987 nova_compute[230713]: 2025-10-02 12:04:53.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:53 np0005465987 nova_compute[230713]: 2025-10-02 12:04:53.419 2 DEBUG nova.virt.libvirt.driver [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:04:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:54.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:54.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:04:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1681002428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:04:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:04:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1681002428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:04:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:56.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:04:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:56.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:57 np0005465987 nova_compute[230713]: 2025-10-02 12:04:57.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:57 np0005465987 podman[242462]: 2025-10-02 12:04:57.843032655 +0000 UTC m=+0.056417560 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:04:58 np0005465987 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct  2 08:04:58 np0005465987 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000019.scope: Consumed 15.437s CPU time.
Oct  2 08:04:58 np0005465987 systemd-machined[188335]: Machine qemu-13-instance-00000019 terminated.
Oct  2 08:04:58 np0005465987 nova_compute[230713]: 2025-10-02 12:04:58.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:04:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:58.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:58 np0005465987 nova_compute[230713]: 2025-10-02 12:04:58.444 2 INFO nova.virt.libvirt.driver [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance shutdown successfully after 15 seconds.#033[00m
Oct  2 08:04:58 np0005465987 nova_compute[230713]: 2025-10-02 12:04:58.449 2 INFO nova.virt.libvirt.driver [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance destroyed successfully.#033[00m
Oct  2 08:04:58 np0005465987 nova_compute[230713]: 2025-10-02 12:04:58.450 2 DEBUG nova.objects.instance [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lazy-loading 'numa_topology' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:04:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:04:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:04:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:58.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:04:59 np0005465987 nova_compute[230713]: 2025-10-02 12:04:59.712 2 INFO nova.virt.libvirt.driver [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Beginning cold snapshot process#033[00m
Oct  2 08:04:59 np0005465987 nova_compute[230713]: 2025-10-02 12:04:59.910 2 DEBUG nova.virt.libvirt.imagebackend [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:05:00 np0005465987 nova_compute[230713]: 2025-10-02 12:05:00.155 2 DEBUG nova.storage.rbd_utils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] creating snapshot(bb8060bb51f44a1e99e20c5d27b264c4) on rbd image(81cd8274-bb25-4b6c-aa66-89669fd098d5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:05:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:00.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Oct  2 08:05:00 np0005465987 nova_compute[230713]: 2025-10-02 12:05:00.498 2 DEBUG nova.storage.rbd_utils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] cloning vms/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk@bb8060bb51f44a1e99e20c5d27b264c4 to images/bed55885-843b-4de0-bacc-58bcdc0a44ca clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:05:00 np0005465987 nova_compute[230713]: 2025-10-02 12:05:00.631 2 DEBUG nova.storage.rbd_utils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] flattening images/bed55885-843b-4de0-bacc-58bcdc0a44ca flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:05:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:05:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:00.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:05:01 np0005465987 nova_compute[230713]: 2025-10-02 12:05:01.236 2 DEBUG nova.storage.rbd_utils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] removing snapshot(bb8060bb51f44a1e99e20c5d27b264c4) on rbd image(81cd8274-bb25-4b6c-aa66-89669fd098d5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:05:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Oct  2 08:05:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:05:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:02.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:02 np0005465987 nova_compute[230713]: 2025-10-02 12:05:02.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:05:02 np0005465987 nova_compute[230713]: 2025-10-02 12:05:02.422 2 DEBUG nova.storage.rbd_utils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] creating snapshot(snap) on rbd image(bed55885-843b-4de0-bacc-58bcdc0a44ca) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:05:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e152 e152: 3 total, 3 up, 3 in
Oct  2 08:05:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:02.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:03 np0005465987 nova_compute[230713]: 2025-10-02 12:05:03.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:05:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:05:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:04.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:05:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:04.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:05 np0005465987 podman[242627]: 2025-10-02 12:05:05.8674773 +0000 UTC m=+0.071838245 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:05:05 np0005465987 podman[242626]: 2025-10-02 12:05:05.886572035 +0000 UTC m=+0.097818145 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:05:05 np0005465987 podman[242625]: 2025-10-02 12:05:05.91612846 +0000 UTC m=+0.124800231 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:05:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:06.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:06 np0005465987 nova_compute[230713]: 2025-10-02 12:05:06.216 2 INFO nova.virt.libvirt.driver [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Snapshot image upload complete#033[00m
Oct  2 08:05:06 np0005465987 nova_compute[230713]: 2025-10-02 12:05:06.217 2 DEBUG nova.compute.manager [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:05:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:05:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:06.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:07 np0005465987 nova_compute[230713]: 2025-10-02 12:05:07.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:05:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e153 e153: 3 total, 3 up, 3 in
Oct  2 08:05:08 np0005465987 nova_compute[230713]: 2025-10-02 12:05:08.146 2 INFO nova.compute.manager [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Shelve offloading#033[00m
Oct  2 08:05:08 np0005465987 nova_compute[230713]: 2025-10-02 12:05:08.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:05:08 np0005465987 nova_compute[230713]: 2025-10-02 12:05:08.153 2 INFO nova.virt.libvirt.driver [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance destroyed successfully.#033[00m
Oct  2 08:05:08 np0005465987 nova_compute[230713]: 2025-10-02 12:05:08.154 2 DEBUG nova.compute.manager [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:05:08 np0005465987 nova_compute[230713]: 2025-10-02 12:05:08.157 2 DEBUG oslo_concurrency.lockutils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:05:08 np0005465987 nova_compute[230713]: 2025-10-02 12:05:08.157 2 DEBUG oslo_concurrency.lockutils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquired lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:05:08 np0005465987 nova_compute[230713]: 2025-10-02 12:05:08.157 2 DEBUG nova.network.neutron [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:05:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:08.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:08 np0005465987 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  2 08:05:08 np0005465987 nova_compute[230713]: 2025-10-02 12:05:08.801 2 DEBUG nova.network.neutron [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:05:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:08.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:09 np0005465987 nova_compute[230713]: 2025-10-02 12:05:09.656 2 DEBUG nova.network.neutron [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:05:09 np0005465987 nova_compute[230713]: 2025-10-02 12:05:09.702 2 DEBUG oslo_concurrency.lockutils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Releasing lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:05:09 np0005465987 nova_compute[230713]: 2025-10-02 12:05:09.707 2 INFO nova.virt.libvirt.driver [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance destroyed successfully.#033[00m
Oct  2 08:05:09 np0005465987 nova_compute[230713]: 2025-10-02 12:05:09.708 2 DEBUG nova.objects.instance [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lazy-loading 'resources' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:05:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:10.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:10.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:05:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:12.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:12 np0005465987 nova_compute[230713]: 2025-10-02 12:05:12.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:05:12 np0005465987 nova_compute[230713]: 2025-10-02 12:05:12.386 2 INFO nova.virt.libvirt.driver [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Deleting instance files /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5_del#033[00m
Oct  2 08:05:12 np0005465987 nova_compute[230713]: 2025-10-02 12:05:12.386 2 INFO nova.virt.libvirt.driver [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Deletion of /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5_del complete#033[00m
Oct  2 08:05:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:12.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:13 np0005465987 nova_compute[230713]: 2025-10-02 12:05:13.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:05:13 np0005465987 nova_compute[230713]: 2025-10-02 12:05:13.260 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406698.2596462, 81cd8274-bb25-4b6c-aa66-89669fd098d5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:05:13 np0005465987 nova_compute[230713]: 2025-10-02 12:05:13.261 2 INFO nova.compute.manager [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:05:13 np0005465987 nova_compute[230713]: 2025-10-02 12:05:13.350 2 DEBUG nova.compute.manager [None req-9a3de481-c424-4401-91eb-064d1b8d7e66 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:05:13 np0005465987 nova_compute[230713]: 2025-10-02 12:05:13.354 2 DEBUG nova.compute.manager [None req-9a3de481-c424-4401-91eb-064d1b8d7e66 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:05:13 np0005465987 nova_compute[230713]: 2025-10-02 12:05:13.607 2 INFO nova.scheduler.client.report [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Deleted allocations for instance 81cd8274-bb25-4b6c-aa66-89669fd098d5#033[00m
Oct  2 08:05:13 np0005465987 nova_compute[230713]: 2025-10-02 12:05:13.613 2 INFO nova.compute.manager [None req-9a3de481-c424-4401-91eb-064d1b8d7e66 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Oct  2 08:05:13 np0005465987 nova_compute[230713]: 2025-10-02 12:05:13.687 2 DEBUG oslo_concurrency.lockutils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:13 np0005465987 nova_compute[230713]: 2025-10-02 12:05:13.688 2 DEBUG oslo_concurrency.lockutils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:13 np0005465987 nova_compute[230713]: 2025-10-02 12:05:13.726 2 DEBUG oslo_concurrency.processutils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:05:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2898171107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:05:14 np0005465987 nova_compute[230713]: 2025-10-02 12:05:14.149 2 DEBUG oslo_concurrency.processutils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:05:14 np0005465987 nova_compute[230713]: 2025-10-02 12:05:14.155 2 DEBUG nova.compute.provider_tree [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:05:14 np0005465987 nova_compute[230713]: 2025-10-02 12:05:14.177 2 DEBUG nova.scheduler.client.report [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:05:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:14.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:14 np0005465987 nova_compute[230713]: 2025-10-02 12:05:14.204 2 DEBUG oslo_concurrency.lockutils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:05:14 np0005465987 nova_compute[230713]: 2025-10-02 12:05:14.307 2 DEBUG oslo_concurrency.lockutils [None req-62c10f38-2859-476f-930f-aeddee57650b a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 30.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:05:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:14.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:16.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:05:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:16.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:17 np0005465987 nova_compute[230713]: 2025-10-02 12:05:17.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:05:18 np0005465987 nova_compute[230713]: 2025-10-02 12:05:18.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:05:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:05:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:18.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:05:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:18.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:05:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:05:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:05:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:05:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:20.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:20 np0005465987 nova_compute[230713]: 2025-10-02 12:05:20.750 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Acquiring lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:05:20 np0005465987 nova_compute[230713]: 2025-10-02 12:05:20.750 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:05:20 np0005465987 nova_compute[230713]: 2025-10-02 12:05:20.751 2 INFO nova.compute.manager [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Unshelving
Oct  2 08:05:20 np0005465987 nova_compute[230713]: 2025-10-02 12:05:20.864 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:05:20 np0005465987 nova_compute[230713]: 2025-10-02 12:05:20.865 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:05:20 np0005465987 nova_compute[230713]: 2025-10-02 12:05:20.870 2 DEBUG nova.objects.instance [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'pci_requests' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:05:20 np0005465987 nova_compute[230713]: 2025-10-02 12:05:20.888 2 DEBUG nova.objects.instance [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'numa_topology' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:05:20 np0005465987 nova_compute[230713]: 2025-10-02 12:05:20.912 2 DEBUG nova.virt.hardware [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:05:20 np0005465987 nova_compute[230713]: 2025-10-02 12:05:20.913 2 INFO nova.compute.claims [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:05:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:20.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:21 np0005465987 nova_compute[230713]: 2025-10-02 12:05:21.174 2 DEBUG oslo_concurrency.processutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:05:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:05:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1678619395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:05:21 np0005465987 nova_compute[230713]: 2025-10-02 12:05:21.613 2 DEBUG oslo_concurrency.processutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:05:21 np0005465987 nova_compute[230713]: 2025-10-02 12:05:21.620 2 DEBUG nova.compute.provider_tree [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:05:21 np0005465987 nova_compute[230713]: 2025-10-02 12:05:21.857 2 DEBUG nova.scheduler.client.report [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:05:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:05:21 np0005465987 nova_compute[230713]: 2025-10-02 12:05:21.982 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:05:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:22.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:22 np0005465987 nova_compute[230713]: 2025-10-02 12:05:22.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:05:22 np0005465987 nova_compute[230713]: 2025-10-02 12:05:22.462 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Acquiring lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:05:22 np0005465987 nova_compute[230713]: 2025-10-02 12:05:22.463 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Acquired lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:05:22 np0005465987 nova_compute[230713]: 2025-10-02 12:05:22.463 2 DEBUG nova.network.neutron [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:05:22 np0005465987 nova_compute[230713]: 2025-10-02 12:05:22.726 2 DEBUG nova.network.neutron [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:05:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:22.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:05:22.981 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:05:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:05:22.982 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:05:22 np0005465987 nova_compute[230713]: 2025-10-02 12:05:22.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:05:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:05:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:05:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:05:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:05:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:05:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:05:23 np0005465987 nova_compute[230713]: 2025-10-02 12:05:23.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:05:23 np0005465987 nova_compute[230713]: 2025-10-02 12:05:23.446 2 DEBUG nova.network.neutron [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:05:23 np0005465987 nova_compute[230713]: 2025-10-02 12:05:23.495 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Releasing lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:05:23 np0005465987 nova_compute[230713]: 2025-10-02 12:05:23.497 2 DEBUG nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:05:23 np0005465987 nova_compute[230713]: 2025-10-02 12:05:23.498 2 INFO nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Creating image(s)
Oct  2 08:05:23 np0005465987 nova_compute[230713]: 2025-10-02 12:05:23.530 2 DEBUG nova.storage.rbd_utils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:05:23 np0005465987 nova_compute[230713]: 2025-10-02 12:05:23.534 2 DEBUG nova.objects.instance [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:05:23 np0005465987 nova_compute[230713]: 2025-10-02 12:05:23.614 2 DEBUG nova.storage.rbd_utils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:05:23 np0005465987 nova_compute[230713]: 2025-10-02 12:05:23.641 2 DEBUG nova.storage.rbd_utils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:05:23 np0005465987 nova_compute[230713]: 2025-10-02 12:05:23.645 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Acquiring lock "70a66607176196546b9a89566c47a7c7a8ba718c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:05:23 np0005465987 nova_compute[230713]: 2025-10-02 12:05:23.645 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "70a66607176196546b9a89566c47a7c7a8ba718c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:05:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:24.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:24 np0005465987 nova_compute[230713]: 2025-10-02 12:05:24.585 2 DEBUG nova.virt.libvirt.imagebackend [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/bed55885-843b-4de0-bacc-58bcdc0a44ca/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/bed55885-843b-4de0-bacc-58bcdc0a44ca/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct  2 08:05:24 np0005465987 nova_compute[230713]: 2025-10-02 12:05:24.637 2 DEBUG nova.virt.libvirt.imagebackend [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Selected location: {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/bed55885-843b-4de0-bacc-58bcdc0a44ca/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct  2 08:05:24 np0005465987 nova_compute[230713]: 2025-10-02 12:05:24.637 2 DEBUG nova.storage.rbd_utils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] cloning images/bed55885-843b-4de0-bacc-58bcdc0a44ca@snap to None/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct  2 08:05:24 np0005465987 nova_compute[230713]: 2025-10-02 12:05:24.850 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "70a66607176196546b9a89566c47a7c7a8ba718c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:05:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:05:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:24.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:05:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:05:24.984 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:05:24 np0005465987 nova_compute[230713]: 2025-10-02 12:05:24.993 2 DEBUG nova.objects.instance [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'migration_context' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:05:25 np0005465987 nova_compute[230713]: 2025-10-02 12:05:25.079 2 DEBUG nova.storage.rbd_utils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] flattening vms/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct  2 08:05:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:26.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.347 2 DEBUG nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Image rbd:vms/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.349 2 DEBUG nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.349 2 DEBUG nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Ensure instance console log exists: /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.349 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.350 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.350 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.352 2 DEBUG nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:04:43Z,direct_url=<?>,disk_format='raw',id=bed55885-843b-4de0-bacc-58bcdc0a44ca,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1602823290-shelved',owner='39f16ad971c74b0296dabeb9b59464da',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:05:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.356 2 WARNING nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.359 2 DEBUG nova.virt.libvirt.host [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.360 2 DEBUG nova.virt.libvirt.host [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.364 2 DEBUG nova.virt.libvirt.host [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.364 2 DEBUG nova.virt.libvirt.host [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.365 2 DEBUG nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.365 2 DEBUG nova.virt.hardware [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:04:43Z,direct_url=<?>,disk_format='raw',id=bed55885-843b-4de0-bacc-58bcdc0a44ca,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1602823290-shelved',owner='39f16ad971c74b0296dabeb9b59464da',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:05:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.366 2 DEBUG nova.virt.hardware [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.366 2 DEBUG nova.virt.hardware [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.366 2 DEBUG nova.virt.hardware [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.367 2 DEBUG nova.virt.hardware [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.367 2 DEBUG nova.virt.hardware [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.367 2 DEBUG nova.virt.hardware [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.367 2 DEBUG nova.virt.hardware [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.368 2 DEBUG nova.virt.hardware [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.368 2 DEBUG nova.virt.hardware [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.368 2 DEBUG nova.virt.hardware [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:05:26 np0005465987 nova_compute[230713]: 2025-10-02 12:05:26.369 2 DEBUG nova.objects.instance [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:05:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:05:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:26.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:27 np0005465987 nova_compute[230713]: 2025-10-02 12:05:27.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:05:27 np0005465987 nova_compute[230713]: 2025-10-02 12:05:27.735 2 DEBUG oslo_concurrency.processutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:05:27.932 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:05:27.933 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:05:27.933 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:05:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/270720575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:05:28 np0005465987 nova_compute[230713]: 2025-10-02 12:05:28.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:05:28 np0005465987 nova_compute[230713]: 2025-10-02 12:05:28.162 2 DEBUG oslo_concurrency.processutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:28 np0005465987 nova_compute[230713]: 2025-10-02 12:05:28.188 2 DEBUG nova.storage.rbd_utils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:05:28 np0005465987 nova_compute[230713]: 2025-10-02 12:05:28.192 2 DEBUG oslo_concurrency.processutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:28.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:05:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1291441108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:05:28 np0005465987 nova_compute[230713]: 2025-10-02 12:05:28.734 2 DEBUG oslo_concurrency.processutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:28 np0005465987 nova_compute[230713]: 2025-10-02 12:05:28.736 2 DEBUG nova.objects.instance [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'pci_devices' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:05:28 np0005465987 nova_compute[230713]: 2025-10-02 12:05:28.767 2 DEBUG nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  <uuid>81cd8274-bb25-4b6c-aa66-89669fd098d5</uuid>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  <name>instance-00000019</name>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1602823290</nova:name>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:05:26</nova:creationTime>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <nova:user uuid="a10a935aed2d40ff8ee890b3089c7d03">tempest-UnshelveToHostMultiNodesTest-32053853-project-member</nova:user>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <nova:project uuid="39f16ad971c74b0296dabeb9b59464da">tempest-UnshelveToHostMultiNodesTest-32053853</nova:project>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="bed55885-843b-4de0-bacc-58bcdc0a44ca"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <entry name="serial">81cd8274-bb25-4b6c-aa66-89669fd098d5</entry>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <entry name="uuid">81cd8274-bb25-4b6c-aa66-89669fd098d5</entry>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/console.log" append="off"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <input type="keyboard" bus="usb"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:05:28 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:05:28 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:05:28 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:05:28 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:05:28 np0005465987 nova_compute[230713]: 2025-10-02 12:05:28.848 2 DEBUG nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:05:28 np0005465987 nova_compute[230713]: 2025-10-02 12:05:28.849 2 DEBUG nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:05:28 np0005465987 nova_compute[230713]: 2025-10-02 12:05:28.849 2 INFO nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Using config drive#033[00m
Oct  2 08:05:28 np0005465987 podman[243160]: 2025-10-02 12:05:28.876520472 +0000 UTC m=+0.090998300 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:05:28 np0005465987 nova_compute[230713]: 2025-10-02 12:05:28.878 2 DEBUG nova.storage.rbd_utils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:05:28 np0005465987 nova_compute[230713]: 2025-10-02 12:05:28.909 2 DEBUG nova.objects.instance [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:05:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:05:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:28.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.070 2 DEBUG nova.objects.instance [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lazy-loading 'keypairs' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.209 2 DEBUG oslo_concurrency.lockutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Acquiring lock "c278f3ec-d2b1-46a9-bcb7-76e90670b7d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.210 2 DEBUG oslo_concurrency.lockutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "c278f3ec-d2b1-46a9-bcb7-76e90670b7d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.237 2 DEBUG nova.compute.manager [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.343 2 DEBUG oslo_concurrency.lockutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.344 2 DEBUG oslo_concurrency.lockutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.354 2 DEBUG nova.virt.hardware [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.354 2 INFO nova.compute.claims [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.386 2 INFO nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Creating config drive at /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.396 2 DEBUG oslo_concurrency.processutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp19zx98h_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.536 2 DEBUG oslo_concurrency.processutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp19zx98h_" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.567 2 DEBUG nova.storage.rbd_utils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] rbd image 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.571 2 DEBUG oslo_concurrency.processutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.616 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:05:29 np0005465987 nova_compute[230713]: 2025-10-02 12:05:29.968 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.002 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.003 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.003 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:05:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:05:30 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2480881078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.043 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.056 2 DEBUG nova.compute.provider_tree [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.074 2 DEBUG nova.scheduler.client.report [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.104 2 DEBUG oslo_concurrency.lockutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.105 2 DEBUG nova.compute.manager [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.169 2 DEBUG nova.compute.manager [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.170 2 DEBUG nova.network.neutron [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.197 2 INFO nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:05:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:30.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.228 2 DEBUG nova.compute.manager [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.303 2 DEBUG oslo_concurrency.processutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config 81cd8274-bb25-4b6c-aa66-89669fd098d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.732s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.304 2 INFO nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Deleting local config drive /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5/disk.config because it was imported into RBD.
Oct  2 08:05:30 np0005465987 systemd-machined[188335]: New machine qemu-14-instance-00000019.
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.392 2 DEBUG nova.compute.manager [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.393 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.393 2 INFO nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Creating image(s)
Oct  2 08:05:30 np0005465987 systemd[1]: Started Virtual Machine qemu-14-instance-00000019.
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.421 2 DEBUG nova.storage.rbd_utils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] rbd image c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.455 2 DEBUG nova.storage.rbd_utils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] rbd image c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.480 2 DEBUG nova.storage.rbd_utils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] rbd image c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.485 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.512 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.554 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.554 2 DEBUG oslo_concurrency.lockutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.555 2 DEBUG oslo_concurrency.lockutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.555 2 DEBUG oslo_concurrency.lockutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.583 2 DEBUG nova.storage.rbd_utils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] rbd image c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.589 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.936 2 DEBUG nova.network.neutron [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 08:05:30 np0005465987 nova_compute[230713]: 2025-10-02 12:05:30.937 2 DEBUG nova.compute.manager [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:05:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:30.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:31 np0005465987 nova_compute[230713]: 2025-10-02 12:05:31.092 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:05:31 np0005465987 nova_compute[230713]: 2025-10-02 12:05:31.128 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:05:31 np0005465987 nova_compute[230713]: 2025-10-02 12:05:31.129 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 08:05:31 np0005465987 nova_compute[230713]: 2025-10-02 12:05:31.130 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:05:31 np0005465987 nova_compute[230713]: 2025-10-02 12:05:31.701 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406731.6999974, 81cd8274-bb25-4b6c-aa66-89669fd098d5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:05:31 np0005465987 nova_compute[230713]: 2025-10-02 12:05:31.704 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] VM Resumed (Lifecycle Event)
Oct  2 08:05:31 np0005465987 nova_compute[230713]: 2025-10-02 12:05:31.707 2 DEBUG nova.compute.manager [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:05:31 np0005465987 nova_compute[230713]: 2025-10-02 12:05:31.708 2 DEBUG nova.virt.libvirt.driver [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:05:31 np0005465987 nova_compute[230713]: 2025-10-02 12:05:31.712 2 INFO nova.virt.libvirt.driver [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance spawned successfully.
Oct  2 08:05:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:05:31 np0005465987 nova_compute[230713]: 2025-10-02 12:05:31.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:05:31 np0005465987 nova_compute[230713]: 2025-10-02 12:05:31.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.210 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:05:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:32.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.294 2 DEBUG nova.storage.rbd_utils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] resizing rbd image c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.426662) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406732426811, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2415, "num_deletes": 258, "total_data_size": 5616106, "memory_usage": 5708128, "flush_reason": "Manual Compaction"}
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406732454187, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3607201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25403, "largest_seqno": 27813, "table_properties": {"data_size": 3597676, "index_size": 5891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20655, "raw_average_key_size": 20, "raw_value_size": 3577976, "raw_average_value_size": 3497, "num_data_blocks": 260, "num_entries": 1023, "num_filter_entries": 1023, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406543, "oldest_key_time": 1759406543, "file_creation_time": 1759406732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 27554 microseconds, and 8712 cpu microseconds.
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.454236) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3607201 bytes OK
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.454258) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.456285) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.456301) EVENT_LOG_v1 {"time_micros": 1759406732456295, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.456321) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 5605387, prev total WAL file size 5605387, number of live WAL files 2.
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.457730) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3522KB)], [51(8828KB)]
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406732457761, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 12647975, "oldest_snapshot_seqno": -1}
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5366 keys, 12528781 bytes, temperature: kUnknown
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406732543359, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 12528781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12488443, "index_size": 25828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 134731, "raw_average_key_size": 25, "raw_value_size": 12387444, "raw_average_value_size": 2308, "num_data_blocks": 1069, "num_entries": 5366, "num_filter_entries": 5366, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759406732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.543592) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 12528781 bytes
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.545032) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.6 rd, 146.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.6 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(7.0) write-amplify(3.5) OK, records in: 5903, records dropped: 537 output_compression: NoCompression
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.545050) EVENT_LOG_v1 {"time_micros": 1759406732545041, "job": 30, "event": "compaction_finished", "compaction_time_micros": 85678, "compaction_time_cpu_micros": 23565, "output_level": 6, "num_output_files": 1, "total_output_size": 12528781, "num_input_records": 5903, "num_output_records": 5366, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406732545678, "job": 30, "event": "table_file_deletion", "file_number": 53}
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406732547346, "job": 30, "event": "table_file_deletion", "file_number": 51}
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.457626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.547428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.547433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.547434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.547436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:05:32 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:32.547437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.671 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.681 2 DEBUG nova.objects.instance [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lazy-loading 'migration_context' on Instance uuid c278f3ec-d2b1-46a9-bcb7-76e90670b7d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.689 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.725 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.726 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Ensure instance console log exists: /var/lib/nova/instances/c278f3ec-d2b1-46a9-bcb7-76e90670b7d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.727 2 DEBUG oslo_concurrency.lockutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.728 2 DEBUG oslo_concurrency.lockutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.729 2 DEBUG oslo_concurrency.lockutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.731 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.738 2 WARNING nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.743 2 DEBUG nova.virt.libvirt.host [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.745 2 DEBUG nova.virt.libvirt.host [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.753 2 DEBUG nova.virt.libvirt.host [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.754 2 DEBUG nova.virt.libvirt.host [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.756 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.756 2 DEBUG nova.virt.hardware [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.758 2 DEBUG nova.virt.hardware [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.758 2 DEBUG nova.virt.hardware [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.759 2 DEBUG nova.virt.hardware [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.760 2 DEBUG nova.virt.hardware [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.760 2 DEBUG nova.virt.hardware [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.761 2 DEBUG nova.virt.hardware [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.762 2 DEBUG nova.virt.hardware [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.762 2 DEBUG nova.virt.hardware [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.763 2 DEBUG nova.virt.hardware [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.764 2 DEBUG nova.virt.hardware [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.770 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.809 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.810 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406731.7003725, 81cd8274-bb25-4b6c-aa66-89669fd098d5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.810 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] VM Started (Lifecycle Event)#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.966 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:05:32 np0005465987 nova_compute[230713]: 2025-10-02 12:05:32.970 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:05:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:05:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:32.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:05:33 np0005465987 nova_compute[230713]: 2025-10-02 12:05:33.004 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:05:33 np0005465987 nova_compute[230713]: 2025-10-02 12:05:33.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:05:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:05:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/633674661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:05:33 np0005465987 nova_compute[230713]: 2025-10-02 12:05:33.250 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:33 np0005465987 nova_compute[230713]: 2025-10-02 12:05:33.275 2 DEBUG nova.storage.rbd_utils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] rbd image c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:05:33 np0005465987 nova_compute[230713]: 2025-10-02 12:05:33.280 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:05:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:05:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3930794615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:05:33 np0005465987 nova_compute[230713]: 2025-10-02 12:05:33.721 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:33 np0005465987 nova_compute[230713]: 2025-10-02 12:05:33.725 2 DEBUG nova.objects.instance [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lazy-loading 'pci_devices' on Instance uuid c278f3ec-d2b1-46a9-bcb7-76e90670b7d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:05:33 np0005465987 nova_compute[230713]: 2025-10-02 12:05:33.760 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  <uuid>c278f3ec-d2b1-46a9-bcb7-76e90670b7d9</uuid>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  <name>instance-0000001e</name>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerExternalEventsTest-server-549723668</nova:name>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:05:32</nova:creationTime>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <nova:user uuid="415050d5c96b4fcfa58872eb1a8b1ff8">tempest-ServerExternalEventsTest-72804171-project-member</nova:user>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <nova:project uuid="463bf1f08e744cea8f9afb27d568bdbe">tempest-ServerExternalEventsTest-72804171</nova:project>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <entry name="serial">c278f3ec-d2b1-46a9-bcb7-76e90670b7d9</entry>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <entry name="uuid">c278f3ec-d2b1-46a9-bcb7-76e90670b7d9</entry>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk.config">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/c278f3ec-d2b1-46a9-bcb7-76e90670b7d9/console.log" append="off"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:05:33 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:05:33 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:05:33 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:05:33 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:05:33 np0005465987 nova_compute[230713]: 2025-10-02 12:05:33.973 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:05:33 np0005465987 nova_compute[230713]: 2025-10-02 12:05:33.974 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:05:33 np0005465987 nova_compute[230713]: 2025-10-02 12:05:33.974 2 INFO nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Using config drive#033[00m
Oct  2 08:05:34 np0005465987 nova_compute[230713]: 2025-10-02 12:05:33.999 2 DEBUG nova.storage.rbd_utils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] rbd image c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:05:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:34.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:34 np0005465987 nova_compute[230713]: 2025-10-02 12:05:34.267 2 INFO nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Creating config drive at /var/lib/nova/instances/c278f3ec-d2b1-46a9-bcb7-76e90670b7d9/disk.config#033[00m
Oct  2 08:05:34 np0005465987 nova_compute[230713]: 2025-10-02 12:05:34.278 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c278f3ec-d2b1-46a9-bcb7-76e90670b7d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp84_twyk9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:34 np0005465987 nova_compute[230713]: 2025-10-02 12:05:34.411 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c278f3ec-d2b1-46a9-bcb7-76e90670b7d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp84_twyk9" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:34 np0005465987 nova_compute[230713]: 2025-10-02 12:05:34.452 2 DEBUG nova.storage.rbd_utils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] rbd image c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:05:34 np0005465987 nova_compute[230713]: 2025-10-02 12:05:34.457 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c278f3ec-d2b1-46a9-bcb7-76e90670b7d9/disk.config c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:34 np0005465987 nova_compute[230713]: 2025-10-02 12:05:34.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:05:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:34.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.234423) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406735234468, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 291, "num_deletes": 251, "total_data_size": 101757, "memory_usage": 108576, "flush_reason": "Manual Compaction"}
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e154 e154: 3 total, 3 up, 3 in
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406735237097, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 66542, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27818, "largest_seqno": 28104, "table_properties": {"data_size": 64637, "index_size": 133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5160, "raw_average_key_size": 18, "raw_value_size": 60742, "raw_average_value_size": 219, "num_data_blocks": 6, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406733, "oldest_key_time": 1759406733, "file_creation_time": 1759406735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 2721 microseconds, and 1126 cpu microseconds.
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.237144) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 66542 bytes OK
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.237164) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.238733) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.238759) EVENT_LOG_v1 {"time_micros": 1759406735238750, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.238774) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 99565, prev total WAL file size 99606, number of live WAL files 2.
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.239155) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(64KB)], [54(11MB)]
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406735239183, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 12595323, "oldest_snapshot_seqno": -1}
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 5131 keys, 10659672 bytes, temperature: kUnknown
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406735299340, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 10659672, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10622532, "index_size": 23206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 130590, "raw_average_key_size": 25, "raw_value_size": 10527130, "raw_average_value_size": 2051, "num_data_blocks": 951, "num_entries": 5131, "num_filter_entries": 5131, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759406735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.299571) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10659672 bytes
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.302576) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.1 rd, 177.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.9 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(349.5) write-amplify(160.2) OK, records in: 5643, records dropped: 512 output_compression: NoCompression
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.302599) EVENT_LOG_v1 {"time_micros": 1759406735302587, "job": 32, "event": "compaction_finished", "compaction_time_micros": 60230, "compaction_time_cpu_micros": 19496, "output_level": 6, "num_output_files": 1, "total_output_size": 10659672, "num_input_records": 5643, "num_output_records": 5131, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406735302722, "job": 32, "event": "table_file_deletion", "file_number": 56}
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406735304504, "job": 32, "event": "table_file_deletion", "file_number": 54}
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.239099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.304597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.304605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.304609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.304614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:05:35.304617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:05:35 np0005465987 nova_compute[230713]: 2025-10-02 12:05:35.410 2 DEBUG oslo_concurrency.processutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c278f3ec-d2b1-46a9-bcb7-76e90670b7d9/disk.config c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.952s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:35 np0005465987 nova_compute[230713]: 2025-10-02 12:05:35.411 2 INFO nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Deleting local config drive /var/lib/nova/instances/c278f3ec-d2b1-46a9-bcb7-76e90670b7d9/disk.config because it was imported into RBD.#033[00m
Oct  2 08:05:35 np0005465987 systemd-machined[188335]: New machine qemu-15-instance-0000001e.
Oct  2 08:05:35 np0005465987 systemd[1]: Started Virtual Machine qemu-15-instance-0000001e.
Oct  2 08:05:35 np0005465987 nova_compute[230713]: 2025-10-02 12:05:35.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:05:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:05:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:36.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:05:36 np0005465987 podman[243711]: 2025-10-02 12:05:36.559561343 +0000 UTC m=+0.062819773 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 08:05:36 np0005465987 podman[243712]: 2025-10-02 12:05:36.578311699 +0000 UTC m=+0.092997666 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 08:05:36 np0005465987 podman[243709]: 2025-10-02 12:05:36.608575093 +0000 UTC m=+0.131694178 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller)
Oct  2 08:05:36 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  2 08:05:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:05:36 np0005465987 nova_compute[230713]: 2025-10-02 12:05:36.974 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406736.9742014, c278f3ec-d2b1-46a9-bcb7-76e90670b7d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:05:36 np0005465987 nova_compute[230713]: 2025-10-02 12:05:36.975 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:05:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:05:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:36.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:05:36 np0005465987 nova_compute[230713]: 2025-10-02 12:05:36.990 2 DEBUG nova.compute.manager [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:05:36 np0005465987 nova_compute[230713]: 2025-10-02 12:05:36.990 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:05:36 np0005465987 nova_compute[230713]: 2025-10-02 12:05:36.993 2 INFO nova.virt.libvirt.driver [-] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Instance spawned successfully.#033[00m
Oct  2 08:05:36 np0005465987 nova_compute[230713]: 2025-10-02 12:05:36.993 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:05:37 np0005465987 nova_compute[230713]: 2025-10-02 12:05:37.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:05:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:05:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:38.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.342 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.342 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.343 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.343 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.343 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.416 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.417 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.417 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.418 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.418 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.419 2 DEBUG nova.virt.libvirt.driver [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.422 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.425 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.499 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.501 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406736.9899175, c278f3ec-d2b1-46a9-bcb7-76e90670b7d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.501 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] VM Started (Lifecycle Event)#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.578 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.582 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.646 2 INFO nova.compute.manager [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Took 8.25 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.647 2 DEBUG nova.compute.manager [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.804 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:05:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:05:38 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1365960670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.827 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.868 2 INFO nova.compute.manager [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Took 9.56 seconds to build instance.#033[00m
Oct  2 08:05:38 np0005465987 nova_compute[230713]: 2025-10-02 12:05:38.982 2 DEBUG oslo_concurrency.lockutils [None req-17108d31-07d8-4660-9e9d-4c4bab3ab125 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "c278f3ec-d2b1-46a9-bcb7-76e90670b7d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:05:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:38.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.022 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.022 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.025 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.025 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.166 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.167 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4586MB free_disk=20.764575958251953GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.168 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.168 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.271 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 81cd8274-bb25-4b6c-aa66-89669fd098d5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.272 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance c278f3ec-d2b1-46a9-bcb7-76e90670b7d9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.272 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.273 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.348 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:05:39 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3195151013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.778 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.784 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.811 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.850 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:05:39 np0005465987 nova_compute[230713]: 2025-10-02 12:05:39.850 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:40.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:40 np0005465987 nova_compute[230713]: 2025-10-02 12:05:40.794 2 DEBUG nova.compute.manager [None req-2843cae7-26a0-4924-b183-0749a39ba2b9 afe47cfd3bf742b685c5c028f4f2c52e 1ba010187ec042669294de7ed68dd41e - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:05:40 np0005465987 nova_compute[230713]: 2025-10-02 12:05:40.795 2 DEBUG nova.compute.manager [None req-2843cae7-26a0-4924-b183-0749a39ba2b9 afe47cfd3bf742b685c5c028f4f2c52e 1ba010187ec042669294de7ed68dd41e - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:05:40 np0005465987 nova_compute[230713]: 2025-10-02 12:05:40.796 2 DEBUG oslo_concurrency.lockutils [None req-2843cae7-26a0-4924-b183-0749a39ba2b9 afe47cfd3bf742b685c5c028f4f2c52e 1ba010187ec042669294de7ed68dd41e - - default default] Acquiring lock "refresh_cache-c278f3ec-d2b1-46a9-bcb7-76e90670b7d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:05:40 np0005465987 nova_compute[230713]: 2025-10-02 12:05:40.796 2 DEBUG oslo_concurrency.lockutils [None req-2843cae7-26a0-4924-b183-0749a39ba2b9 afe47cfd3bf742b685c5c028f4f2c52e 1ba010187ec042669294de7ed68dd41e - - default default] Acquired lock "refresh_cache-c278f3ec-d2b1-46a9-bcb7-76e90670b7d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:05:40 np0005465987 nova_compute[230713]: 2025-10-02 12:05:40.797 2 DEBUG nova.network.neutron [None req-2843cae7-26a0-4924-b183-0749a39ba2b9 afe47cfd3bf742b685c5c028f4f2c52e 1ba010187ec042669294de7ed68dd41e - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:05:40 np0005465987 nova_compute[230713]: 2025-10-02 12:05:40.852 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:05:40 np0005465987 nova_compute[230713]: 2025-10-02 12:05:40.853 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:05:40 np0005465987 nova_compute[230713]: 2025-10-02 12:05:40.853 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:05:40 np0005465987 nova_compute[230713]: 2025-10-02 12:05:40.854 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:05:40 np0005465987 nova_compute[230713]: 2025-10-02 12:05:40.873 2 DEBUG nova.compute.manager [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:05:40 np0005465987 nova_compute[230713]: 2025-10-02 12:05:40.960 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:05:40 np0005465987 nova_compute[230713]: 2025-10-02 12:05:40.979 2 DEBUG oslo_concurrency.lockutils [None req-da90c91f-4e99-45b7-bb53-751aff29d05f 9fbbb9af77d7442996ea19a4e8e03c07 e75d09bf728d4df195c9df2fa73baa9c - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 20.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:40.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:41 np0005465987 nova_compute[230713]: 2025-10-02 12:05:41.069 2 DEBUG oslo_concurrency.lockutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Acquiring lock "c278f3ec-d2b1-46a9-bcb7-76e90670b7d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:41 np0005465987 nova_compute[230713]: 2025-10-02 12:05:41.070 2 DEBUG oslo_concurrency.lockutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "c278f3ec-d2b1-46a9-bcb7-76e90670b7d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:41 np0005465987 nova_compute[230713]: 2025-10-02 12:05:41.070 2 DEBUG oslo_concurrency.lockutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Acquiring lock "c278f3ec-d2b1-46a9-bcb7-76e90670b7d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:41 np0005465987 nova_compute[230713]: 2025-10-02 12:05:41.071 2 DEBUG oslo_concurrency.lockutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "c278f3ec-d2b1-46a9-bcb7-76e90670b7d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:41 np0005465987 nova_compute[230713]: 2025-10-02 12:05:41.071 2 DEBUG oslo_concurrency.lockutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "c278f3ec-d2b1-46a9-bcb7-76e90670b7d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:41 np0005465987 nova_compute[230713]: 2025-10-02 12:05:41.072 2 INFO nova.compute.manager [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Terminating instance#033[00m
Oct  2 08:05:41 np0005465987 nova_compute[230713]: 2025-10-02 12:05:41.074 2 DEBUG oslo_concurrency.lockutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Acquiring lock "refresh_cache-c278f3ec-d2b1-46a9-bcb7-76e90670b7d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:05:41 np0005465987 nova_compute[230713]: 2025-10-02 12:05:41.141 2 DEBUG nova.network.neutron [None req-2843cae7-26a0-4924-b183-0749a39ba2b9 afe47cfd3bf742b685c5c028f4f2c52e 1ba010187ec042669294de7ed68dd41e - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:05:41 np0005465987 nova_compute[230713]: 2025-10-02 12:05:41.609 2 DEBUG nova.network.neutron [None req-2843cae7-26a0-4924-b183-0749a39ba2b9 afe47cfd3bf742b685c5c028f4f2c52e 1ba010187ec042669294de7ed68dd41e - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:05:41 np0005465987 nova_compute[230713]: 2025-10-02 12:05:41.641 2 DEBUG oslo_concurrency.lockutils [None req-2843cae7-26a0-4924-b183-0749a39ba2b9 afe47cfd3bf742b685c5c028f4f2c52e 1ba010187ec042669294de7ed68dd41e - - default default] Releasing lock "refresh_cache-c278f3ec-d2b1-46a9-bcb7-76e90670b7d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:05:41 np0005465987 nova_compute[230713]: 2025-10-02 12:05:41.643 2 DEBUG oslo_concurrency.lockutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Acquired lock "refresh_cache-c278f3ec-d2b1-46a9-bcb7-76e90670b7d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:05:41 np0005465987 nova_compute[230713]: 2025-10-02 12:05:41.643 2 DEBUG nova.network.neutron [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:05:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:05:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:42.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:42 np0005465987 nova_compute[230713]: 2025-10-02 12:05:42.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:05:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e155 e155: 3 total, 3 up, 3 in
Oct  2 08:05:42 np0005465987 nova_compute[230713]: 2025-10-02 12:05:42.468 2 DEBUG nova.network.neutron [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:05:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:43.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:43 np0005465987 nova_compute[230713]: 2025-10-02 12:05:43.130 2 DEBUG nova.network.neutron [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:05:43 np0005465987 nova_compute[230713]: 2025-10-02 12:05:43.157 2 DEBUG oslo_concurrency.lockutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Releasing lock "refresh_cache-c278f3ec-d2b1-46a9-bcb7-76e90670b7d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:05:43 np0005465987 nova_compute[230713]: 2025-10-02 12:05:43.157 2 DEBUG nova.compute.manager [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:05:43 np0005465987 nova_compute[230713]: 2025-10-02 12:05:43.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:05:43 np0005465987 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct  2 08:05:43 np0005465987 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001e.scope: Consumed 7.607s CPU time.
Oct  2 08:05:43 np0005465987 systemd-machined[188335]: Machine qemu-15-instance-0000001e terminated.
Oct  2 08:05:43 np0005465987 nova_compute[230713]: 2025-10-02 12:05:43.384 2 INFO nova.virt.libvirt.driver [-] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Instance destroyed successfully.#033[00m
Oct  2 08:05:43 np0005465987 nova_compute[230713]: 2025-10-02 12:05:43.386 2 DEBUG nova.objects.instance [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lazy-loading 'resources' on Instance uuid c278f3ec-d2b1-46a9-bcb7-76e90670b7d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.212 2 DEBUG oslo_concurrency.lockutils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.213 2 DEBUG oslo_concurrency.lockutils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.213 2 INFO nova.compute.manager [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Shelving#033[00m
Oct  2 08:05:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:44.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.253 2 DEBUG nova.virt.libvirt.driver [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.482 2 INFO nova.virt.libvirt.driver [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Deleting instance files /var/lib/nova/instances/c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_del
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.483 2 INFO nova.virt.libvirt.driver [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Deletion of /var/lib/nova/instances/c278f3ec-d2b1-46a9-bcb7-76e90670b7d9_del complete
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.611 2 INFO nova.compute.manager [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Took 1.45 seconds to destroy the instance on the hypervisor.
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.612 2 DEBUG oslo.service.loopingcall [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.613 2 DEBUG nova.compute.manager [-] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.613 2 DEBUG nova.network.neutron [-] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.807 2 DEBUG nova.network.neutron [-] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.841 2 DEBUG nova.network.neutron [-] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.868 2 INFO nova.compute.manager [-] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Took 0.26 seconds to deallocate network for instance.
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.972 2 DEBUG oslo_concurrency.lockutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:05:44 np0005465987 nova_compute[230713]: 2025-10-02 12:05:44.973 2 DEBUG oslo_concurrency.lockutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:05:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:45.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:45 np0005465987 nova_compute[230713]: 2025-10-02 12:05:45.038 2 DEBUG oslo_concurrency.processutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:05:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:05:45 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3478073098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:05:45 np0005465987 nova_compute[230713]: 2025-10-02 12:05:45.446 2 DEBUG oslo_concurrency.processutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:05:45 np0005465987 nova_compute[230713]: 2025-10-02 12:05:45.452 2 DEBUG nova.compute.provider_tree [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:05:45 np0005465987 nova_compute[230713]: 2025-10-02 12:05:45.485 2 DEBUG nova.scheduler.client.report [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:05:45 np0005465987 nova_compute[230713]: 2025-10-02 12:05:45.517 2 DEBUG oslo_concurrency.lockutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:05:45 np0005465987 nova_compute[230713]: 2025-10-02 12:05:45.567 2 INFO nova.scheduler.client.report [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Deleted allocations for instance c278f3ec-d2b1-46a9-bcb7-76e90670b7d9
Oct  2 08:05:45 np0005465987 nova_compute[230713]: 2025-10-02 12:05:45.762 2 DEBUG oslo_concurrency.lockutils [None req-54c32861-d60a-434e-8c7d-174c8bde6fca 415050d5c96b4fcfa58872eb1a8b1ff8 463bf1f08e744cea8f9afb27d568bdbe - - default default] Lock "c278f3ec-d2b1-46a9-bcb7-76e90670b7d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:05:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:46.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:05:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:47.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:47 np0005465987 nova_compute[230713]: 2025-10-02 12:05:47.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:05:47 np0005465987 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct  2 08:05:47 np0005465987 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000019.scope: Consumed 13.661s CPU time.
Oct  2 08:05:47 np0005465987 systemd-machined[188335]: Machine qemu-14-instance-00000019 terminated.
Oct  2 08:05:47 np0005465987 nova_compute[230713]: 2025-10-02 12:05:47.445 2 INFO nova.virt.libvirt.driver [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance shutdown successfully after 3 seconds.
Oct  2 08:05:47 np0005465987 nova_compute[230713]: 2025-10-02 12:05:47.451 2 INFO nova.virt.libvirt.driver [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance destroyed successfully.
Oct  2 08:05:47 np0005465987 nova_compute[230713]: 2025-10-02 12:05:47.452 2 DEBUG nova.objects.instance [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lazy-loading 'numa_topology' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:05:47 np0005465987 nova_compute[230713]: 2025-10-02 12:05:47.880 2 INFO nova.virt.libvirt.driver [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Beginning cold snapshot process
Oct  2 08:05:48 np0005465987 nova_compute[230713]: 2025-10-02 12:05:48.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:05:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:48.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:05:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:49.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:05:49 np0005465987 nova_compute[230713]: 2025-10-02 12:05:49.947 2 DEBUG nova.virt.libvirt.imagebackend [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct  2 08:05:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:05:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:50.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:05:50 np0005465987 nova_compute[230713]: 2025-10-02 12:05:50.267 2 DEBUG nova.storage.rbd_utils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] creating snapshot(8e112ed42705463abfb5700e1cbd7db2) on rbd image(81cd8274-bb25-4b6c-aa66-89669fd098d5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  2 08:05:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:51.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e156 e156: 3 total, 3 up, 3 in
Oct  2 08:05:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:05:52 np0005465987 nova_compute[230713]: 2025-10-02 12:05:52.077 2 DEBUG nova.storage.rbd_utils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] cloning vms/81cd8274-bb25-4b6c-aa66-89669fd098d5_disk@8e112ed42705463abfb5700e1cbd7db2 to images/a5e65e3c-7180-41c3-815b-a56c1c0389b1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct  2 08:05:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:05:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:52.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:05:52 np0005465987 nova_compute[230713]: 2025-10-02 12:05:52.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:05:52 np0005465987 nova_compute[230713]: 2025-10-02 12:05:52.426 2 DEBUG nova.storage.rbd_utils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] flattening images/a5e65e3c-7180-41c3-815b-a56c1c0389b1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct  2 08:05:52 np0005465987 nova_compute[230713]: 2025-10-02 12:05:52.884 2 DEBUG nova.storage.rbd_utils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] removing snapshot(8e112ed42705463abfb5700e1cbd7db2) on rbd image(81cd8274-bb25-4b6c-aa66-89669fd098d5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct  2 08:05:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:53.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:53 np0005465987 nova_compute[230713]: 2025-10-02 12:05:53.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:05:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e157 e157: 3 total, 3 up, 3 in
Oct  2 08:05:53 np0005465987 nova_compute[230713]: 2025-10-02 12:05:53.488 2 DEBUG nova.storage.rbd_utils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] creating snapshot(snap) on rbd image(a5e65e3c-7180-41c3-815b-a56c1c0389b1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  2 08:05:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:05:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:54.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:05:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e158 e158: 3 total, 3 up, 3 in
Oct  2 08:05:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:05:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3634991860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:05:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:05:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3634991860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:05:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:05:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:55.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:05:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:56.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:05:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:57.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:57 np0005465987 nova_compute[230713]: 2025-10-02 12:05:57.164 2 INFO nova.virt.libvirt.driver [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Snapshot image upload complete
Oct  2 08:05:57 np0005465987 nova_compute[230713]: 2025-10-02 12:05:57.165 2 DEBUG nova.compute.manager [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:05:57 np0005465987 nova_compute[230713]: 2025-10-02 12:05:57.254 2 INFO nova.compute.manager [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Shelve offloading
Oct  2 08:05:57 np0005465987 nova_compute[230713]: 2025-10-02 12:05:57.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:05:57 np0005465987 nova_compute[230713]: 2025-10-02 12:05:57.317 2 INFO nova.virt.libvirt.driver [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance destroyed successfully.
Oct  2 08:05:57 np0005465987 nova_compute[230713]: 2025-10-02 12:05:57.317 2 DEBUG nova.compute.manager [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:05:57 np0005465987 nova_compute[230713]: 2025-10-02 12:05:57.319 2 DEBUG oslo_concurrency.lockutils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:05:57 np0005465987 nova_compute[230713]: 2025-10-02 12:05:57.319 2 DEBUG oslo_concurrency.lockutils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquired lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:05:57 np0005465987 nova_compute[230713]: 2025-10-02 12:05:57.319 2 DEBUG nova.network.neutron [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:05:57 np0005465987 nova_compute[230713]: 2025-10-02 12:05:57.595 2 DEBUG nova.network.neutron [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:05:58 np0005465987 nova_compute[230713]: 2025-10-02 12:05:58.146 2 DEBUG nova.network.neutron [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:05:58 np0005465987 nova_compute[230713]: 2025-10-02 12:05:58.173 2 DEBUG oslo_concurrency.lockutils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Releasing lock "refresh_cache-81cd8274-bb25-4b6c-aa66-89669fd098d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:05:58 np0005465987 nova_compute[230713]: 2025-10-02 12:05:58.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:05:58 np0005465987 nova_compute[230713]: 2025-10-02 12:05:58.181 2 INFO nova.virt.libvirt.driver [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Instance destroyed successfully.
Oct  2 08:05:58 np0005465987 nova_compute[230713]: 2025-10-02 12:05:58.181 2 DEBUG nova.objects.instance [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lazy-loading 'resources' on Instance uuid 81cd8274-bb25-4b6c-aa66-89669fd098d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:05:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:58.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:58 np0005465987 nova_compute[230713]: 2025-10-02 12:05:58.383 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406743.3822389, c278f3ec-d2b1-46a9-bcb7-76e90670b7d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:05:58 np0005465987 nova_compute[230713]: 2025-10-02 12:05:58.384 2 INFO nova.compute.manager [-] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] VM Stopped (Lifecycle Event)
Oct  2 08:05:58 np0005465987 nova_compute[230713]: 2025-10-02 12:05:58.414 2 DEBUG nova.compute.manager [None req-ef558813-7971-430b-b39d-ddd9a4fdd2a1 - - - - - -] [instance: c278f3ec-d2b1-46a9-bcb7-76e90670b7d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:05:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:05:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:05:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:59.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:05:59 np0005465987 podman[244025]: 2025-10-02 12:05:59.843767515 +0000 UTC m=+0.057829579 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:06:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:00.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:06:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:01.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:06:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:06:02 np0005465987 nova_compute[230713]: 2025-10-02 12:06:02.158 2 INFO nova.virt.libvirt.driver [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Deleting instance files /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5_del
Oct  2 08:06:02 np0005465987 nova_compute[230713]: 2025-10-02 12:06:02.159 2 INFO nova.virt.libvirt.driver [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Deletion of /var/lib/nova/instances/81cd8274-bb25-4b6c-aa66-89669fd098d5_del complete
Oct  2 08:06:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:02.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:02 np0005465987 nova_compute[230713]: 2025-10-02 12:06:02.345 2 INFO nova.scheduler.client.report [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Deleted allocations for instance 81cd8274-bb25-4b6c-aa66-89669fd098d5
Oct  2 08:06:02 np0005465987 nova_compute[230713]: 2025-10-02 12:06:02.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:06:02 np0005465987 nova_compute[230713]: 2025-10-02 12:06:02.395 2 DEBUG oslo_concurrency.lockutils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:06:02 np0005465987 nova_compute[230713]: 2025-10-02 12:06:02.395 2 DEBUG oslo_concurrency.lockutils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:06:02 np0005465987 nova_compute[230713]: 2025-10-02 12:06:02.430 2 DEBUG oslo_concurrency.processutils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:06:02 np0005465987 nova_compute[230713]: 2025-10-02 12:06:02.466 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406747.4446483, 81cd8274-bb25-4b6c-aa66-89669fd098d5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:06:02 np0005465987 nova_compute[230713]: 2025-10-02 12:06:02.467 2 INFO nova.compute.manager [-] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] VM Stopped (Lifecycle Event)
Oct  2 08:06:02 np0005465987 nova_compute[230713]: 2025-10-02 12:06:02.508 2 DEBUG nova.compute.manager [None req-e52f9e0a-1944-4647-83ea-c23a55ca2c64 - - - - - -] [instance: 81cd8274-bb25-4b6c-aa66-89669fd098d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:06:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e159 e159: 3 total, 3 up, 3 in
Oct  2 08:06:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:06:02 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3861229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:06:02 np0005465987 nova_compute[230713]: 2025-10-02 12:06:02.961 2 DEBUG oslo_concurrency.processutils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:02 np0005465987 nova_compute[230713]: 2025-10-02 12:06:02.967 2 DEBUG nova.compute.provider_tree [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:06:02 np0005465987 nova_compute[230713]: 2025-10-02 12:06:02.984 2 DEBUG nova.scheduler.client.report [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:06:03 np0005465987 nova_compute[230713]: 2025-10-02 12:06:03.009 2 DEBUG oslo_concurrency.lockutils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:03.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:03 np0005465987 nova_compute[230713]: 2025-10-02 12:06:03.070 2 DEBUG oslo_concurrency.lockutils [None req-d984d544-d74d-40cc-946e-30a64be814c8 a10a935aed2d40ff8ee890b3089c7d03 39f16ad971c74b0296dabeb9b59464da - - default default] Lock "81cd8274-bb25-4b6c-aa66-89669fd098d5" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 18.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:03 np0005465987 nova_compute[230713]: 2025-10-02 12:06:03.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:04.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:04 np0005465987 nova_compute[230713]: 2025-10-02 12:06:04.512 2 DEBUG oslo_concurrency.lockutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Acquiring lock "c7915dc0-12f0-455a-85b5-74fe8115ced4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:04 np0005465987 nova_compute[230713]: 2025-10-02 12:06:04.513 2 DEBUG oslo_concurrency.lockutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "c7915dc0-12f0-455a-85b5-74fe8115ced4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:04 np0005465987 nova_compute[230713]: 2025-10-02 12:06:04.566 2 DEBUG nova.compute.manager [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:06:04 np0005465987 nova_compute[230713]: 2025-10-02 12:06:04.672 2 DEBUG oslo_concurrency.lockutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:04 np0005465987 nova_compute[230713]: 2025-10-02 12:06:04.673 2 DEBUG oslo_concurrency.lockutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:04 np0005465987 nova_compute[230713]: 2025-10-02 12:06:04.682 2 DEBUG nova.virt.hardware [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:06:04 np0005465987 nova_compute[230713]: 2025-10-02 12:06:04.682 2 INFO nova.compute.claims [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:06:04 np0005465987 nova_compute[230713]: 2025-10-02 12:06:04.829 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:06:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:05.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:06:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:06:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/381002376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.250 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.259 2 DEBUG nova.compute.provider_tree [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.285 2 DEBUG nova.scheduler.client.report [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.338 2 DEBUG oslo_concurrency.lockutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.339 2 DEBUG nova.compute.manager [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.410 2 DEBUG nova.compute.manager [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.461 2 INFO nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.510 2 DEBUG nova.compute.manager [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.631 2 DEBUG nova.compute.manager [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.632 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.633 2 INFO nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Creating image(s)#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.662 2 DEBUG nova.storage.rbd_utils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.693 2 DEBUG nova.storage.rbd_utils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.726 2 DEBUG nova.storage.rbd_utils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.730 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.807 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.808 2 DEBUG oslo_concurrency.lockutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.809 2 DEBUG oslo_concurrency.lockutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.810 2 DEBUG oslo_concurrency.lockutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.844 2 DEBUG nova.storage.rbd_utils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:06:05 np0005465987 nova_compute[230713]: 2025-10-02 12:06:05.847 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c7915dc0-12f0-455a-85b5-74fe8115ced4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:06:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:06.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:06:06 np0005465987 podman[244185]: 2025-10-02 12:06:06.89681045 +0000 UTC m=+0.094125095 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct  2 08:06:06 np0005465987 podman[244184]: 2025-10-02 12:06:06.906201384 +0000 UTC m=+0.111355460 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:06:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:06:06 np0005465987 podman[244183]: 2025-10-02 12:06:06.988996923 +0000 UTC m=+0.191454786 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 08:06:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:07.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:07 np0005465987 nova_compute[230713]: 2025-10-02 12:06:07.197 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c7915dc0-12f0-455a-85b5-74fe8115ced4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:07 np0005465987 nova_compute[230713]: 2025-10-02 12:06:07.277 2 DEBUG nova.storage.rbd_utils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] resizing rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:06:07 np0005465987 nova_compute[230713]: 2025-10-02 12:06:07.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:06:08.108 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:06:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:06:08.109 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:06:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:06:08.110 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:08.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.439 2 DEBUG nova.objects.instance [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lazy-loading 'migration_context' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.464 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.464 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Ensure instance console log exists: /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.465 2 DEBUG oslo_concurrency.lockutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.465 2 DEBUG oslo_concurrency.lockutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.465 2 DEBUG oslo_concurrency.lockutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.467 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.473 2 WARNING nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.492 2 DEBUG nova.virt.libvirt.host [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.493 2 DEBUG nova.virt.libvirt.host [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.503 2 DEBUG nova.virt.libvirt.host [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.504 2 DEBUG nova.virt.libvirt.host [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.505 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.506 2 DEBUG nova.virt.hardware [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.506 2 DEBUG nova.virt.hardware [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.506 2 DEBUG nova.virt.hardware [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.507 2 DEBUG nova.virt.hardware [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.507 2 DEBUG nova.virt.hardware [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.507 2 DEBUG nova.virt.hardware [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.508 2 DEBUG nova.virt.hardware [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.508 2 DEBUG nova.virt.hardware [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.508 2 DEBUG nova.virt.hardware [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.508 2 DEBUG nova.virt.hardware [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.509 2 DEBUG nova.virt.hardware [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:06:08 np0005465987 nova_compute[230713]: 2025-10-02 12:06:08.512 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:06:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4084862216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:06:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:09.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:09 np0005465987 nova_compute[230713]: 2025-10-02 12:06:09.044 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:09 np0005465987 nova_compute[230713]: 2025-10-02 12:06:09.087 2 DEBUG nova.storage.rbd_utils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:06:09 np0005465987 nova_compute[230713]: 2025-10-02 12:06:09.092 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:06:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/375416715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:06:09 np0005465987 nova_compute[230713]: 2025-10-02 12:06:09.552 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:09 np0005465987 nova_compute[230713]: 2025-10-02 12:06:09.556 2 DEBUG nova.objects.instance [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lazy-loading 'pci_devices' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:09 np0005465987 nova_compute[230713]: 2025-10-02 12:06:09.650 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  <uuid>c7915dc0-12f0-455a-85b5-74fe8115ced4</uuid>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  <name>instance-0000001f</name>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServersAdmin275Test-server-1762019275</nova:name>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:06:08</nova:creationTime>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <nova:user uuid="c5ffcae7415e4e65bdf37996eab10f90">tempest-ServersAdmin275Test-157392629-project-member</nova:user>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <nova:project uuid="7730d14683074be79cbff93dd183fca0">tempest-ServersAdmin275Test-157392629</nova:project>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <entry name="serial">c7915dc0-12f0-455a-85b5-74fe8115ced4</entry>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <entry name="uuid">c7915dc0-12f0-455a-85b5-74fe8115ced4</entry>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/c7915dc0-12f0-455a-85b5-74fe8115ced4_disk">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/console.log" append="off"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:06:09 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:06:09 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:06:09 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:06:09 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:06:10 np0005465987 nova_compute[230713]: 2025-10-02 12:06:10.072 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:06:10 np0005465987 nova_compute[230713]: 2025-10-02 12:06:10.074 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:06:10 np0005465987 nova_compute[230713]: 2025-10-02 12:06:10.075 2 INFO nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Using config drive#033[00m
Oct  2 08:06:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:10.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:10 np0005465987 nova_compute[230713]: 2025-10-02 12:06:10.347 2 DEBUG nova.storage.rbd_utils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:06:10 np0005465987 nova_compute[230713]: 2025-10-02 12:06:10.582 2 INFO nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Creating config drive at /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config#033[00m
Oct  2 08:06:10 np0005465987 nova_compute[230713]: 2025-10-02 12:06:10.587 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj_npx4vz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:10 np0005465987 nova_compute[230713]: 2025-10-02 12:06:10.717 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj_npx4vz" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:10 np0005465987 nova_compute[230713]: 2025-10-02 12:06:10.746 2 DEBUG nova.storage.rbd_utils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:06:10 np0005465987 nova_compute[230713]: 2025-10-02 12:06:10.750 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:11.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:06:12 np0005465987 nova_compute[230713]: 2025-10-02 12:06:12.231 2 DEBUG oslo_concurrency.processutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:12 np0005465987 nova_compute[230713]: 2025-10-02 12:06:12.232 2 INFO nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Deleting local config drive /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config because it was imported into RBD.#033[00m
Oct  2 08:06:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:12.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:12 np0005465987 systemd-machined[188335]: New machine qemu-16-instance-0000001f.
Oct  2 08:06:12 np0005465987 systemd[1]: Started Virtual Machine qemu-16-instance-0000001f.
Oct  2 08:06:12 np0005465987 nova_compute[230713]: 2025-10-02 12:06:12.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:13.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:13 np0005465987 nova_compute[230713]: 2025-10-02 12:06:13.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:13 np0005465987 nova_compute[230713]: 2025-10-02 12:06:13.878 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406773.8783617, c7915dc0-12f0-455a-85b5-74fe8115ced4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:06:13 np0005465987 nova_compute[230713]: 2025-10-02 12:06:13.879 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:06:13 np0005465987 nova_compute[230713]: 2025-10-02 12:06:13.881 2 DEBUG nova.compute.manager [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:06:13 np0005465987 nova_compute[230713]: 2025-10-02 12:06:13.881 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:06:13 np0005465987 nova_compute[230713]: 2025-10-02 12:06:13.884 2 INFO nova.virt.libvirt.driver [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance spawned successfully.#033[00m
Oct  2 08:06:13 np0005465987 nova_compute[230713]: 2025-10-02 12:06:13.884 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.062 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.065 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:06:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:14.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.492 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.493 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.494 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.494 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.495 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.495 2 DEBUG nova.virt.libvirt.driver [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.516 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.517 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406773.879096, c7915dc0-12f0-455a-85b5-74fe8115ced4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.517 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] VM Started (Lifecycle Event)#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.556 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.559 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.636 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.695 2 INFO nova.compute.manager [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Took 9.06 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.695 2 DEBUG nova.compute.manager [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.815 2 INFO nova.compute.manager [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Took 10.17 seconds to build instance.#033[00m
Oct  2 08:06:14 np0005465987 nova_compute[230713]: 2025-10-02 12:06:14.912 2 DEBUG oslo_concurrency.lockutils [None req-acf63906-c4cb-4c0f-b115-05a6a223c535 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "c7915dc0-12f0-455a-85b5-74fe8115ced4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:06:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:15.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:06:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e160 e160: 3 total, 3 up, 3 in
Oct  2 08:06:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:16.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:06:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:17.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:17 np0005465987 nova_compute[230713]: 2025-10-02 12:06:17.092 2 INFO nova.compute.manager [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Rebuilding instance#033[00m
Oct  2 08:06:17 np0005465987 nova_compute[230713]: 2025-10-02 12:06:17.353 2 DEBUG nova.objects.instance [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:17 np0005465987 nova_compute[230713]: 2025-10-02 12:06:17.375 2 DEBUG nova.compute.manager [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:06:17 np0005465987 nova_compute[230713]: 2025-10-02 12:06:17.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:17 np0005465987 nova_compute[230713]: 2025-10-02 12:06:17.431 2 DEBUG nova.objects.instance [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lazy-loading 'pci_requests' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:17 np0005465987 nova_compute[230713]: 2025-10-02 12:06:17.445 2 DEBUG nova.objects.instance [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lazy-loading 'pci_devices' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:17 np0005465987 nova_compute[230713]: 2025-10-02 12:06:17.461 2 DEBUG nova.objects.instance [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lazy-loading 'resources' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:17 np0005465987 nova_compute[230713]: 2025-10-02 12:06:17.474 2 DEBUG nova.objects.instance [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lazy-loading 'migration_context' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:17 np0005465987 nova_compute[230713]: 2025-10-02 12:06:17.499 2 DEBUG nova.objects.instance [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:06:17 np0005465987 nova_compute[230713]: 2025-10-02 12:06:17.503 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:06:18 np0005465987 nova_compute[230713]: 2025-10-02 12:06:18.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:18.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:19.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:20.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:06:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:21.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:06:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:06:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:22.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:22 np0005465987 nova_compute[230713]: 2025-10-02 12:06:22.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e161 e161: 3 total, 3 up, 3 in
Oct  2 08:06:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:23.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:23 np0005465987 nova_compute[230713]: 2025-10-02 12:06:23.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:24.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:25.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:26.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:06:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:06:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:27.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:06:27 np0005465987 nova_compute[230713]: 2025-10-02 12:06:27.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:27 np0005465987 nova_compute[230713]: 2025-10-02 12:06:27.541 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:06:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:06:27.934 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:06:27.934 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:06:27.934 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:28 np0005465987 nova_compute[230713]: 2025-10-02 12:06:28.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:28.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:29.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:30.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:30 np0005465987 podman[244499]: 2025-10-02 12:06:30.862843529 +0000 UTC m=+0.077993565 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:06:30 np0005465987 nova_compute[230713]: 2025-10-02 12:06:30.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:30 np0005465987 nova_compute[230713]: 2025-10-02 12:06:30.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:06:30 np0005465987 nova_compute[230713]: 2025-10-02 12:06:30.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:06:30 np0005465987 nova_compute[230713]: 2025-10-02 12:06:30.987 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-c7915dc0-12f0-455a-85b5-74fe8115ced4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:06:30 np0005465987 nova_compute[230713]: 2025-10-02 12:06:30.987 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-c7915dc0-12f0-455a-85b5-74fe8115ced4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:06:30 np0005465987 nova_compute[230713]: 2025-10-02 12:06:30.988 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:06:30 np0005465987 nova_compute[230713]: 2025-10-02 12:06:30.988 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:31.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:31 np0005465987 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct  2 08:06:31 np0005465987 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001f.scope: Consumed 13.480s CPU time.
Oct  2 08:06:31 np0005465987 systemd-machined[188335]: Machine qemu-16-instance-0000001f terminated.
Oct  2 08:06:31 np0005465987 nova_compute[230713]: 2025-10-02 12:06:31.522 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:06:31 np0005465987 nova_compute[230713]: 2025-10-02 12:06:31.557 2 INFO nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance shutdown successfully after 14 seconds.#033[00m
Oct  2 08:06:31 np0005465987 nova_compute[230713]: 2025-10-02 12:06:31.561 2 INFO nova.virt.libvirt.driver [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance destroyed successfully.#033[00m
Oct  2 08:06:31 np0005465987 nova_compute[230713]: 2025-10-02 12:06:31.565 2 INFO nova.virt.libvirt.driver [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance destroyed successfully.#033[00m
Oct  2 08:06:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:06:32 np0005465987 nova_compute[230713]: 2025-10-02 12:06:32.085 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:06:32 np0005465987 nova_compute[230713]: 2025-10-02 12:06:32.101 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-c7915dc0-12f0-455a-85b5-74fe8115ced4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:06:32 np0005465987 nova_compute[230713]: 2025-10-02 12:06:32.101 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:06:32 np0005465987 nova_compute[230713]: 2025-10-02 12:06:32.102 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:32.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:32 np0005465987 nova_compute[230713]: 2025-10-02 12:06:32.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:32 np0005465987 nova_compute[230713]: 2025-10-02 12:06:32.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:32 np0005465987 nova_compute[230713]: 2025-10-02 12:06:32.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:32 np0005465987 nova_compute[230713]: 2025-10-02 12:06:32.992 2 INFO nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Deleting instance files /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4_del#033[00m
Oct  2 08:06:32 np0005465987 nova_compute[230713]: 2025-10-02 12:06:32.993 2 INFO nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Deletion of /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4_del complete#033[00m
Oct  2 08:06:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:33.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:33 np0005465987 nova_compute[230713]: 2025-10-02 12:06:33.153 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:06:33 np0005465987 nova_compute[230713]: 2025-10-02 12:06:33.153 2 INFO nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Creating image(s)
Oct  2 08:06:33 np0005465987 nova_compute[230713]: 2025-10-02 12:06:33.180 2 DEBUG nova.storage.rbd_utils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:06:33 np0005465987 nova_compute[230713]: 2025-10-02 12:06:33.205 2 DEBUG nova.storage.rbd_utils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:06:33 np0005465987 nova_compute[230713]: 2025-10-02 12:06:33.232 2 DEBUG nova.storage.rbd_utils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:06:33 np0005465987 nova_compute[230713]: 2025-10-02 12:06:33.236 2 DEBUG oslo_concurrency.lockutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Acquiring lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:06:33 np0005465987 nova_compute[230713]: 2025-10-02 12:06:33.236 2 DEBUG oslo_concurrency.lockutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:06:33 np0005465987 nova_compute[230713]: 2025-10-02 12:06:33.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:06:33 np0005465987 nova_compute[230713]: 2025-10-02 12:06:33.792 2 DEBUG nova.virt.libvirt.imagebackend [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/db05f54c-61f8-42d6-a1e2-da3219a77b12/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/db05f54c-61f8-42d6-a1e2-da3219a77b12/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
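The `Image locations` entry above lists two `rbd://` URLs for the Glance image. Nova's imagebackend splits each location into `fsid/pool/image/snapshot` before deciding whether it can COW-clone directly from Ceph; in this trace the image turns out to be qcow2, so the driver instead downloads, converts, and imports it, as the later lines show. A minimal sketch of that URL split (the helper name is mine, not nova's):

```python
def parse_rbd_url(url):
    """Split an rbd://<fsid>/<pool>/<image>/<snapshot> Glance location
    into its components (sketch; not nova's actual parser)."""
    prefix = "rbd://"
    if not url.startswith(prefix):
        raise ValueError("not an rbd location: %s" % url)
    pieces = url[len(prefix):].split("/")
    if len(pieces) != 4:
        raise ValueError("expected fsid/pool/image/snapshot in %s" % url)
    fsid, pool, image, snapshot = pieces
    return {"fsid": fsid, "pool": pool, "image": image, "snapshot": snapshot}

# The first location logged above:
loc = parse_rbd_url(
    "rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/"
    "db05f54c-61f8-42d6-a1e2-da3219a77b12/snap")
```

Only when the parsed location points at a readable pool and the image is already raw can nova skip the download entirely and clone the snapshot.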
Oct  2 08:06:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:34.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:34 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:06:34 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:06:34 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:06:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:35.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:35 np0005465987 nova_compute[230713]: 2025-10-02 12:06:35.369 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:06:35 np0005465987 nova_compute[230713]: 2025-10-02 12:06:35.432 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.part --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:06:35 np0005465987 nova_compute[230713]: 2025-10-02 12:06:35.433 2 DEBUG nova.virt.images [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] db05f54c-61f8-42d6-a1e2-da3219a77b12 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct  2 08:06:35 np0005465987 nova_compute[230713]: 2025-10-02 12:06:35.441 2 DEBUG nova.privsep.utils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  2 08:06:35 np0005465987 nova_compute[230713]: 2025-10-02 12:06:35.441 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.part /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:06:35 np0005465987 nova_compute[230713]: 2025-10-02 12:06:35.636 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.part /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.converted" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:06:35 np0005465987 nova_compute[230713]: 2025-10-02 12:06:35.643 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:06:35 np0005465987 nova_compute[230713]: 2025-10-02 12:06:35.709 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609.converted --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:06:35 np0005465987 nova_compute[230713]: 2025-10-02 12:06:35.711 2 DEBUG oslo_concurrency.lockutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
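The fetch sequence above runs `qemu-img info` twice under `oslo_concurrency.prlimit` (a 1 GiB address-space cap and a 30 s CPU cap, guarding against malformed images), with the conversion in between done with host page caching disabled because the path supports direct I/O. A rough reconstruction of those argv lists, with every flag copied from the log lines themselves (the helper names are mine):

```python
# Base-image cache path taken verbatim from the log.
base = "/var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609"

def qemu_img_info_cmd(path, mem_limit=1073741824, cpu_limit=30):
    # prlimit wrapper caps address space and CPU time before probing
    # the (untrusted) image header.
    return ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=%d" % mem_limit, "--cpu=%d" % cpu_limit, "--",
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json"]

def qemu_img_convert_cmd(src, dst):
    # -t none: writethrough-free, cache-bypassing writes (direct I/O)
    return ["qemu-img", "convert", "-t", "none", "-O", "raw",
            "-f", "qcow2", src, dst]

info = qemu_img_info_cmd(base + ".part")
conv = qemu_img_convert_cmd(base + ".part", base + ".converted")
```

The second `qemu-img info` run validates the `.converted` output before the cache lock is released.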
Oct  2 08:06:35 np0005465987 nova_compute[230713]: 2025-10-02 12:06:35.744 2 DEBUG nova.storage.rbd_utils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:06:35 np0005465987 nova_compute[230713]: 2025-10-02 12:06:35.747 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 c7915dc0-12f0-455a-85b5-74fe8115ced4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:06:35 np0005465987 nova_compute[230713]: 2025-10-02 12:06:35.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:06:35 np0005465987 nova_compute[230713]: 2025-10-02 12:06:35.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.015 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.016 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.016 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.016 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.017 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.157 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 c7915dc0-12f0-455a-85b5-74fe8115ced4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.246 2 DEBUG nova.storage.rbd_utils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] resizing rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
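The raw base image is then pushed into the `vms` pool with `rbd import` and resized to the flavor's root disk. The logged resize target, 1073741824 bytes, is just the m1.nano flavor's `root_gb=1` expressed in bytes. A sketch, with arguments copied from the logged command (the helper name is mine):

```python
def rbd_import_cmd(pool, src, image_name, ceph_user, conf):
    # Mirrors the `rbd import` invocation in the log; --image-format=2
    # selects the modern RBD format that supports layering and resize.
    return ["rbd", "import", "--pool", pool, src, image_name,
            "--image-format=2", "--id", ceph_user, "--conf", conf]

cmd = rbd_import_cmd(
    "vms",
    "/var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609",
    "c7915dc0-12f0-455a-85b5-74fe8115ced4_disk",
    "openstack",
    "/etc/ceph/ceph.conf")

GiB = 1024 ** 3
resize_target_bytes = 1 * GiB  # m1.nano flavor: root_gb=1
```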
Oct  2 08:06:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:06:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:36.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.363 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.363 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Ensure instance console log exists: /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.364 2 DEBUG oslo_concurrency.lockutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.364 2 DEBUG oslo_concurrency.lockutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.364 2 DEBUG oslo_concurrency.lockutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.366 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.370 2 WARNING nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.376 2 DEBUG nova.virt.libvirt.host [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.377 2 DEBUG nova.virt.libvirt.host [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.379 2 DEBUG nova.virt.libvirt.host [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.380 2 DEBUG nova.virt.libvirt.host [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.381 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.382 2 DEBUG nova.virt.hardware [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.382 2 DEBUG nova.virt.hardware [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.382 2 DEBUG nova.virt.hardware [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.383 2 DEBUG nova.virt.hardware [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.383 2 DEBUG nova.virt.hardware [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.383 2 DEBUG nova.virt.hardware [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.383 2 DEBUG nova.virt.hardware [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.383 2 DEBUG nova.virt.hardware [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.384 2 DEBUG nova.virt.hardware [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.384 2 DEBUG nova.virt.hardware [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.384 2 DEBUG nova.virt.hardware [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
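The topology lines above enumerate every `(sockets, cores, threads)` triple whose product equals the flavor's vCPU count, bounded by the 65536-per-dimension limits; with 1 vCPU only `1:1:1` survives, which is exactly the "Got 1 possible topologies" result. A simplified sketch of that enumeration (not nova's actual implementation):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product equals
    the vCPU count, within the given limits. Simplified sketch of
    nova.virt.hardware._get_possible_cpu_topologies."""
    found = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    found.append((s, c, t))
    return found

# One vCPU admits a single topology, matching the log:
topologies = possible_topologies(1)  # [(1, 1, 1)]
```

Nova then sorts the candidates against the preferred topology (all zeros here, i.e. no preference) and takes the first.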
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.384 2 DEBUG nova.objects.instance [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.401 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:06:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:06:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/182866723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.453 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.601 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.603 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4775MB free_disk=20.80560302734375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.603 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.604 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.682 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance c7915dc0-12f0-455a-85b5-74fe8115ced4 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.682 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.682 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
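The `used_ram=640MB` figure in the final resource view is consistent with the host's reserved memory (512 MB, per the MEMORY_MB inventory this node reports to placement) plus the single m1.nano instance's 128 MB allocation; likewise `used_disk=1GB` and `used_vcpus=1` come from that instance's placement allocation. The arithmetic, with values read from the log:

```python
# Values taken from the resource-tracker and inventory log lines;
# the interpretation (reserved + per-instance sum) is mine.
reserved_host_memory_mb = 512   # nova's reserved_host_memory_mb
instance_memory_mb = 128        # m1.nano flavor
used_ram_mb = reserved_host_memory_mb + instance_memory_mb

used_vcpus = 1                  # one allocated vCPU
used_disk_gb = 1                # the instance's root_gb
```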
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.719 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:06:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:06:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/151764086' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.860 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.886 2 DEBUG nova.storage.rbd_utils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:06:36 np0005465987 nova_compute[230713]: 2025-10-02 12:06:36.890 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:06:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:06:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:06:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:37.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:06:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:06:37 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4004864019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.135 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.142 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.156 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.183 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.183 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:06:37 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/106709009' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.345 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.348 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  <uuid>c7915dc0-12f0-455a-85b5-74fe8115ced4</uuid>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  <name>instance-0000001f</name>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServersAdmin275Test-server-1762019275</nova:name>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:06:36</nova:creationTime>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <nova:user uuid="c5ffcae7415e4e65bdf37996eab10f90">tempest-ServersAdmin275Test-157392629-project-member</nova:user>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <nova:project uuid="7730d14683074be79cbff93dd183fca0">tempest-ServersAdmin275Test-157392629</nova:project>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="db05f54c-61f8-42d6-a1e2-da3219a77b12"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <entry name="serial">c7915dc0-12f0-455a-85b5-74fe8115ced4</entry>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <entry name="uuid">c7915dc0-12f0-455a-85b5-74fe8115ced4</entry>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/c7915dc0-12f0-455a-85b5-74fe8115ced4_disk">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/console.log" append="off"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:06:37 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:06:37 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:06:37 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:06:37 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.477 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.478 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.479 2 INFO nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Using config drive#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.510 2 DEBUG nova.storage.rbd_utils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.545 2 DEBUG oslo_concurrency.lockutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Acquiring lock "6a417beb-4cf3-49ed-919d-cc0294e6e426" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.546 2 DEBUG oslo_concurrency.lockutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "6a417beb-4cf3-49ed-919d-cc0294e6e426" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.558 2 DEBUG nova.objects.instance [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.568 2 DEBUG nova.compute.manager [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.609 2 DEBUG nova.objects.instance [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lazy-loading 'keypairs' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.654 2 DEBUG oslo_concurrency.lockutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.655 2 DEBUG oslo_concurrency.lockutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.664 2 DEBUG nova.virt.hardware [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.665 2 INFO nova.compute.claims [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.786 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.815 2 INFO nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Creating config drive at /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.822 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxdu_xsxl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:37 np0005465987 podman[244978]: 2025-10-02 12:06:37.853145694 +0000 UTC m=+0.070223972 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct  2 08:06:37 np0005465987 podman[244979]: 2025-10-02 12:06:37.855370204 +0000 UTC m=+0.072877565 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:06:37 np0005465987 podman[244977]: 2025-10-02 12:06:37.884593388 +0000 UTC m=+0.104883195 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.958 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxdu_xsxl" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.987 2 DEBUG nova.storage.rbd_utils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:06:37 np0005465987 nova_compute[230713]: 2025-10-02 12:06:37.990 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:06:38 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3591459830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.207 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.214 2 DEBUG nova.compute.provider_tree [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.238 2 DEBUG oslo_concurrency.processutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.239 2 INFO nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Deleting local config drive /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config because it was imported into RBD.
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.241 2 DEBUG nova.scheduler.client.report [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.266 2 DEBUG oslo_concurrency.lockutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.267 2 DEBUG nova.compute.manager [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:06:38 np0005465987 systemd-machined[188335]: New machine qemu-17-instance-0000001f.
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.311 2 DEBUG nova.compute.manager [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.311 2 DEBUG nova.network.neutron [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:06:38 np0005465987 systemd[1]: Started Virtual Machine qemu-17-instance-0000001f.
Oct  2 08:06:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:38.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.345 2 INFO nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.367 2 DEBUG nova.compute.manager [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.473 2 DEBUG nova.compute.manager [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.475 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.475 2 INFO nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Creating image(s)
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.498 2 DEBUG nova.storage.rbd_utils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] rbd image 6a417beb-4cf3-49ed-919d-cc0294e6e426_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.528 2 DEBUG nova.storage.rbd_utils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] rbd image 6a417beb-4cf3-49ed-919d-cc0294e6e426_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.558 2 DEBUG nova.storage.rbd_utils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] rbd image 6a417beb-4cf3-49ed-919d-cc0294e6e426_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.562 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.652 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.653 2 DEBUG oslo_concurrency.lockutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.654 2 DEBUG oslo_concurrency.lockutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.654 2 DEBUG oslo_concurrency.lockutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.678 2 DEBUG nova.storage.rbd_utils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] rbd image 6a417beb-4cf3-49ed-919d-cc0294e6e426_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.682 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6a417beb-4cf3-49ed-919d-cc0294e6e426_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.722 2 DEBUG nova.network.neutron [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 08:06:38 np0005465987 nova_compute[230713]: 2025-10-02 12:06:38.724 2 DEBUG nova.compute.manager [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.084 2 DEBUG nova.compute.manager [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:06:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:06:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:39.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.085 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.087 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for c7915dc0-12f0-455a-85b5-74fe8115ced4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.087 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406799.0833592, c7915dc0-12f0-455a-85b5-74fe8115ced4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.088 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] VM Resumed (Lifecycle Event)
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.094 2 INFO nova.virt.libvirt.driver [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance spawned successfully.
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.094 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.102 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6a417beb-4cf3-49ed-919d-cc0294e6e426_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.138 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.178 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.180 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.181 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.181 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.182 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.182 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.182 2 DEBUG nova.virt.libvirt.driver [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.189 2 DEBUG nova.storage.rbd_utils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] resizing rbd image 6a417beb-4cf3-49ed-919d-cc0294e6e426_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.227 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.227 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406799.0834992, c7915dc0-12f0-455a-85b5-74fe8115ced4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.227 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] VM Started (Lifecycle Event)
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.260 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.264 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.304 2 DEBUG nova.compute.manager [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.309 2 DEBUG nova.objects.instance [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a417beb-4cf3-49ed-919d-cc0294e6e426 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.311 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.358 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.359 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Ensure instance console log exists: /var/lib/nova/instances/6a417beb-4cf3-49ed-919d-cc0294e6e426/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.361 2 DEBUG oslo_concurrency.lockutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.361 2 DEBUG oslo_concurrency.lockutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.362 2 DEBUG oslo_concurrency.lockutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.363 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.367 2 WARNING nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.372 2 DEBUG nova.virt.libvirt.host [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.372 2 DEBUG nova.virt.libvirt.host [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.374 2 DEBUG nova.virt.libvirt.host [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.374 2 DEBUG nova.virt.libvirt.host [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.376 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.376 2 DEBUG nova.virt.hardware [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.376 2 DEBUG nova.virt.hardware [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.376 2 DEBUG nova.virt.hardware [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.377 2 DEBUG nova.virt.hardware [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.377 2 DEBUG nova.virt.hardware [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.377 2 DEBUG nova.virt.hardware [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.377 2 DEBUG nova.virt.hardware [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.377 2 DEBUG nova.virt.hardware [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.377 2 DEBUG nova.virt.hardware [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.377 2 DEBUG nova.virt.hardware [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.378 2 DEBUG nova.virt.hardware [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.380 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.411 2 DEBUG oslo_concurrency.lockutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.412 2 DEBUG oslo_concurrency.lockutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.412 2 DEBUG nova.objects.instance [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.463 2 DEBUG oslo_concurrency.lockutils [None req-79909e6b-d756-4525-99e9-c3dcc7ee8de6 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.810 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.834 2 DEBUG nova.storage.rbd_utils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] rbd image 6a417beb-4cf3-49ed-919d-cc0294e6e426_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:06:39 np0005465987 nova_compute[230713]: 2025-10-02 12:06:39.837 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.183 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.184 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.184 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.184 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:06:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:06:40 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2206464622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.274 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.276 2 DEBUG nova.objects.instance [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a417beb-4cf3-49ed-919d-cc0294e6e426 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.297 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  <uuid>6a417beb-4cf3-49ed-919d-cc0294e6e426</uuid>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  <name>instance-00000021</name>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <nova:name>tempest-TenantUsagesTestJSON-server-1543065628</nova:name>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:06:39</nova:creationTime>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <nova:user uuid="91b8ff043ce84cc2ae2074da24b942fb">tempest-TenantUsagesTestJSON-908359732-project-member</nova:user>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <nova:project uuid="52b5ddca38684163936ded32b4972365">tempest-TenantUsagesTestJSON-908359732</nova:project>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <entry name="serial">6a417beb-4cf3-49ed-919d-cc0294e6e426</entry>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <entry name="uuid">6a417beb-4cf3-49ed-919d-cc0294e6e426</entry>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6a417beb-4cf3-49ed-919d-cc0294e6e426_disk">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6a417beb-4cf3-49ed-919d-cc0294e6e426_disk.config">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/6a417beb-4cf3-49ed-919d-cc0294e6e426/console.log" append="off"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:06:40 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:06:40 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:06:40 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:06:40 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:06:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:40.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.369 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.369 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.370 2 INFO nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Using config drive#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.393 2 DEBUG nova.storage.rbd_utils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] rbd image 6a417beb-4cf3-49ed-919d-cc0294e6e426_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.518 2 INFO nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Creating config drive at /var/lib/nova/instances/6a417beb-4cf3-49ed-919d-cc0294e6e426/disk.config#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.627 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a417beb-4cf3-49ed-919d-cc0294e6e426/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxlmuwl67 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.759 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a417beb-4cf3-49ed-919d-cc0294e6e426/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxlmuwl67" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.786 2 DEBUG nova.storage.rbd_utils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] rbd image 6a417beb-4cf3-49ed-919d-cc0294e6e426_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:06:40 np0005465987 nova_compute[230713]: 2025-10-02 12:06:40.789 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a417beb-4cf3-49ed-919d-cc0294e6e426/disk.config 6a417beb-4cf3-49ed-919d-cc0294e6e426_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:41 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:06:41 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:06:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000055s ======
Oct  2 08:06:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:41.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Oct  2 08:06:41 np0005465987 nova_compute[230713]: 2025-10-02 12:06:41.103 2 DEBUG oslo_concurrency.processutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a417beb-4cf3-49ed-919d-cc0294e6e426/disk.config 6a417beb-4cf3-49ed-919d-cc0294e6e426_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:41 np0005465987 nova_compute[230713]: 2025-10-02 12:06:41.104 2 INFO nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Deleting local config drive /var/lib/nova/instances/6a417beb-4cf3-49ed-919d-cc0294e6e426/disk.config because it was imported into RBD.#033[00m
Oct  2 08:06:41 np0005465987 systemd-machined[188335]: New machine qemu-18-instance-00000021.
Oct  2 08:06:41 np0005465987 systemd[1]: Started Virtual Machine qemu-18-instance-00000021.
Oct  2 08:06:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.013 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406802.01275, 6a417beb-4cf3-49ed-919d-cc0294e6e426 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.013 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.017 2 DEBUG nova.compute.manager [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.018 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.021 2 INFO nova.virt.libvirt.driver [-] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Instance spawned successfully.#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.021 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.264 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.265 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.265 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.265 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.266 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.266 2 DEBUG nova.virt.libvirt.driver [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.274 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.276 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:06:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:42.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.341 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.341 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406802.0176487, 6a417beb-4cf3-49ed-919d-cc0294e6e426 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.341 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] VM Started (Lifecycle Event)#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.357 2 INFO nova.compute.manager [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Rebuilding instance#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.403 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.410 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.428 2 INFO nova.compute.manager [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Took 3.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.429 2 DEBUG nova.compute.manager [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.453 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.569 2 INFO nova.compute.manager [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Took 4.94 seconds to build instance.#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.659 2 DEBUG oslo_concurrency.lockutils [None req-e838c58d-c2f0-4d24-9574-7edc642ca9f9 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "6a417beb-4cf3-49ed-919d-cc0294e6e426" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.679 2 DEBUG nova.objects.instance [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lazy-loading 'trusted_certs' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.768 2 DEBUG nova.compute.manager [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.892 2 DEBUG nova.objects.instance [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lazy-loading 'pci_requests' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.933 2 DEBUG nova.objects.instance [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lazy-loading 'pci_devices' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.962 2 DEBUG nova.objects.instance [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lazy-loading 'resources' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:42 np0005465987 nova_compute[230713]: 2025-10-02 12:06:42.996 2 DEBUG nova.objects.instance [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lazy-loading 'migration_context' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:43 np0005465987 nova_compute[230713]: 2025-10-02 12:06:43.011 2 DEBUG nova.objects.instance [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:06:43 np0005465987 nova_compute[230713]: 2025-10-02 12:06:43.014 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:06:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:43.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:43 np0005465987 nova_compute[230713]: 2025-10-02 12:06:43.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:44.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:44 np0005465987 nova_compute[230713]: 2025-10-02 12:06:44.752 2 DEBUG oslo_concurrency.lockutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Acquiring lock "6a417beb-4cf3-49ed-919d-cc0294e6e426" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:44 np0005465987 nova_compute[230713]: 2025-10-02 12:06:44.753 2 DEBUG oslo_concurrency.lockutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "6a417beb-4cf3-49ed-919d-cc0294e6e426" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:44 np0005465987 nova_compute[230713]: 2025-10-02 12:06:44.753 2 DEBUG oslo_concurrency.lockutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Acquiring lock "6a417beb-4cf3-49ed-919d-cc0294e6e426-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:44 np0005465987 nova_compute[230713]: 2025-10-02 12:06:44.753 2 DEBUG oslo_concurrency.lockutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "6a417beb-4cf3-49ed-919d-cc0294e6e426-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:44 np0005465987 nova_compute[230713]: 2025-10-02 12:06:44.753 2 DEBUG oslo_concurrency.lockutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "6a417beb-4cf3-49ed-919d-cc0294e6e426-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:44 np0005465987 nova_compute[230713]: 2025-10-02 12:06:44.754 2 INFO nova.compute.manager [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Terminating instance#033[00m
Oct  2 08:06:44 np0005465987 nova_compute[230713]: 2025-10-02 12:06:44.755 2 DEBUG oslo_concurrency.lockutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Acquiring lock "refresh_cache-6a417beb-4cf3-49ed-919d-cc0294e6e426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:06:44 np0005465987 nova_compute[230713]: 2025-10-02 12:06:44.755 2 DEBUG oslo_concurrency.lockutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Acquired lock "refresh_cache-6a417beb-4cf3-49ed-919d-cc0294e6e426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:06:44 np0005465987 nova_compute[230713]: 2025-10-02 12:06:44.755 2 DEBUG nova.network.neutron [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:06:44 np0005465987 nova_compute[230713]: 2025-10-02 12:06:44.975 2 DEBUG nova.network.neutron [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:06:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:06:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:45.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:06:45 np0005465987 nova_compute[230713]: 2025-10-02 12:06:45.388 2 DEBUG nova.network.neutron [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:06:45 np0005465987 nova_compute[230713]: 2025-10-02 12:06:45.477 2 DEBUG oslo_concurrency.lockutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Releasing lock "refresh_cache-6a417beb-4cf3-49ed-919d-cc0294e6e426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:06:45 np0005465987 nova_compute[230713]: 2025-10-02 12:06:45.478 2 DEBUG nova.compute.manager [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:06:45 np0005465987 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000021.scope: Deactivated successfully.
Oct  2 08:06:45 np0005465987 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000021.scope: Consumed 4.274s CPU time.
Oct  2 08:06:45 np0005465987 systemd-machined[188335]: Machine qemu-18-instance-00000021 terminated.
Oct  2 08:06:45 np0005465987 nova_compute[230713]: 2025-10-02 12:06:45.695 2 INFO nova.virt.libvirt.driver [-] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Instance destroyed successfully.#033[00m
Oct  2 08:06:45 np0005465987 nova_compute[230713]: 2025-10-02 12:06:45.695 2 DEBUG nova.objects.instance [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lazy-loading 'resources' on Instance uuid 6a417beb-4cf3-49ed-919d-cc0294e6e426 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:06:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:46.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:46 np0005465987 nova_compute[230713]: 2025-10-02 12:06:46.539 2 INFO nova.virt.libvirt.driver [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Deleting instance files /var/lib/nova/instances/6a417beb-4cf3-49ed-919d-cc0294e6e426_del#033[00m
Oct  2 08:06:46 np0005465987 nova_compute[230713]: 2025-10-02 12:06:46.540 2 INFO nova.virt.libvirt.driver [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Deletion of /var/lib/nova/instances/6a417beb-4cf3-49ed-919d-cc0294e6e426_del complete#033[00m
Oct  2 08:06:46 np0005465987 nova_compute[230713]: 2025-10-02 12:06:46.743 2 INFO nova.compute.manager [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Took 1.27 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:06:46 np0005465987 nova_compute[230713]: 2025-10-02 12:06:46.744 2 DEBUG oslo.service.loopingcall [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:06:46 np0005465987 nova_compute[230713]: 2025-10-02 12:06:46.744 2 DEBUG nova.compute.manager [-] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:06:46 np0005465987 nova_compute[230713]: 2025-10-02 12:06:46.745 2 DEBUG nova.network.neutron [-] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:06:46 np0005465987 nova_compute[230713]: 2025-10-02 12:06:46.846 2 DEBUG nova.network.neutron [-] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:06:46 np0005465987 nova_compute[230713]: 2025-10-02 12:06:46.919 2 DEBUG nova.network.neutron [-] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:06:46 np0005465987 nova_compute[230713]: 2025-10-02 12:06:46.966 2 INFO nova.compute.manager [-] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Took 0.22 seconds to deallocate network for instance.#033[00m
Oct  2 08:06:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:06:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:47.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:47 np0005465987 nova_compute[230713]: 2025-10-02 12:06:47.265 2 DEBUG oslo_concurrency.lockutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:47 np0005465987 nova_compute[230713]: 2025-10-02 12:06:47.265 2 DEBUG oslo_concurrency.lockutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:47 np0005465987 nova_compute[230713]: 2025-10-02 12:06:47.355 2 DEBUG oslo_concurrency.processutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:47 np0005465987 nova_compute[230713]: 2025-10-02 12:06:47.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:06:47 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1432307146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:06:47 np0005465987 nova_compute[230713]: 2025-10-02 12:06:47.794 2 DEBUG oslo_concurrency.processutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:47 np0005465987 nova_compute[230713]: 2025-10-02 12:06:47.800 2 DEBUG nova.compute.provider_tree [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:06:47 np0005465987 nova_compute[230713]: 2025-10-02 12:06:47.826 2 DEBUG nova.scheduler.client.report [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:06:47 np0005465987 nova_compute[230713]: 2025-10-02 12:06:47.874 2 DEBUG oslo_concurrency.lockutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:47 np0005465987 nova_compute[230713]: 2025-10-02 12:06:47.946 2 INFO nova.scheduler.client.report [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Deleted allocations for instance 6a417beb-4cf3-49ed-919d-cc0294e6e426#033[00m
Oct  2 08:06:48 np0005465987 nova_compute[230713]: 2025-10-02 12:06:48.164 2 DEBUG oslo_concurrency.lockutils [None req-933f89de-a736-4cc0-800b-d0ddd687986d 91b8ff043ce84cc2ae2074da24b942fb 52b5ddca38684163936ded32b4972365 - - default default] Lock "6a417beb-4cf3-49ed-919d-cc0294e6e426" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:48 np0005465987 nova_compute[230713]: 2025-10-02 12:06:48.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:48.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:06:48.990 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:06:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:06:48.991 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:06:48 np0005465987 nova_compute[230713]: 2025-10-02 12:06:48.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:49.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:50.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:06:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:51.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:06:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:06:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:06:51.992 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:06:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:06:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:52.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:06:52 np0005465987 nova_compute[230713]: 2025-10-02 12:06:52.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:53 np0005465987 nova_compute[230713]: 2025-10-02 12:06:53.059 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:06:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:53.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:53 np0005465987 nova_compute[230713]: 2025-10-02 12:06:53.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:54.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:06:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:55.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:06:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:56.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:06:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:57.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:57 np0005465987 nova_compute[230713]: 2025-10-02 12:06:57.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:58 np0005465987 nova_compute[230713]: 2025-10-02 12:06:58.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:06:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:58.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:06:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:06:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:59.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:07:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:00.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:00 np0005465987 nova_compute[230713]: 2025-10-02 12:07:00.694 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406805.6929324, 6a417beb-4cf3-49ed-919d-cc0294e6e426 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:07:00 np0005465987 nova_compute[230713]: 2025-10-02 12:07:00.695 2 INFO nova.compute.manager [-] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:07:00 np0005465987 nova_compute[230713]: 2025-10-02 12:07:00.720 2 DEBUG nova.compute.manager [None req-62f1c800-2853-4bb7-9cf8-06ad69142708 - - - - - -] [instance: 6a417beb-4cf3-49ed-919d-cc0294e6e426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:01.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:01 np0005465987 podman[245592]: 2025-10-02 12:07:01.877477786 +0000 UTC m=+0.075249980 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:07:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:07:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:02.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:02 np0005465987 nova_compute[230713]: 2025-10-02 12:07:02.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:03.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:03 np0005465987 nova_compute[230713]: 2025-10-02 12:07:03.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:04 np0005465987 nova_compute[230713]: 2025-10-02 12:07:04.103 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:07:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:04.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:05.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:06.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:07:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:07.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:07 np0005465987 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct  2 08:07:07 np0005465987 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001f.scope: Consumed 14.397s CPU time.
Oct  2 08:07:07 np0005465987 systemd-machined[188335]: Machine qemu-17-instance-0000001f terminated.
Oct  2 08:07:07 np0005465987 nova_compute[230713]: 2025-10-02 12:07:07.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:08 np0005465987 nova_compute[230713]: 2025-10-02 12:07:08.121 2 INFO nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance shutdown successfully after 25 seconds.#033[00m
Oct  2 08:07:08 np0005465987 nova_compute[230713]: 2025-10-02 12:07:08.127 2 INFO nova.virt.libvirt.driver [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance destroyed successfully.#033[00m
Oct  2 08:07:08 np0005465987 nova_compute[230713]: 2025-10-02 12:07:08.131 2 INFO nova.virt.libvirt.driver [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance destroyed successfully.#033[00m
Oct  2 08:07:08 np0005465987 nova_compute[230713]: 2025-10-02 12:07:08.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:08.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:08 np0005465987 podman[245637]: 2025-10-02 12:07:08.848742663 +0000 UTC m=+0.055592150 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:07:08 np0005465987 podman[245636]: 2025-10-02 12:07:08.858528383 +0000 UTC m=+0.068483625 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:07:08 np0005465987 podman[245635]: 2025-10-02 12:07:08.889204115 +0000 UTC m=+0.099781814 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  2 08:07:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:09.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:10.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:11.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.202 2 INFO nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Deleting instance files /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4_del#033[00m
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.203 2 INFO nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Deletion of /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4_del complete#033[00m
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.771 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.772 2 INFO nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Creating image(s)#033[00m
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.798 2 DEBUG nova.storage.rbd_utils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.832 2 DEBUG nova.storage.rbd_utils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.864 2 DEBUG nova.storage.rbd_utils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.868 2 DEBUG oslo_concurrency.processutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.930 2 DEBUG oslo_concurrency.processutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.931 2 DEBUG oslo_concurrency.lockutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.932 2 DEBUG oslo_concurrency.lockutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.932 2 DEBUG oslo_concurrency.lockutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.960 2 DEBUG nova.storage.rbd_utils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:11 np0005465987 nova_compute[230713]: 2025-10-02 12:07:11.963 2 DEBUG oslo_concurrency.processutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c7915dc0-12f0-455a-85b5-74fe8115ced4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:07:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:12.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:12 np0005465987 nova_compute[230713]: 2025-10-02 12:07:12.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:13.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.383 2 DEBUG oslo_concurrency.processutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c7915dc0-12f0-455a-85b5-74fe8115ced4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.446 2 DEBUG nova.storage.rbd_utils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] resizing rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.771 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.772 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Ensure instance console log exists: /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.772 2 DEBUG oslo_concurrency.lockutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.773 2 DEBUG oslo_concurrency.lockutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.773 2 DEBUG oslo_concurrency.lockutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.774 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.778 2 WARNING nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.788 2 DEBUG nova.virt.libvirt.host [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.789 2 DEBUG nova.virt.libvirt.host [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.791 2 DEBUG nova.virt.libvirt.host [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.792 2 DEBUG nova.virt.libvirt.host [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.793 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.793 2 DEBUG nova.virt.hardware [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.793 2 DEBUG nova.virt.hardware [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.794 2 DEBUG nova.virt.hardware [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.794 2 DEBUG nova.virt.hardware [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.794 2 DEBUG nova.virt.hardware [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.794 2 DEBUG nova.virt.hardware [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.794 2 DEBUG nova.virt.hardware [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.795 2 DEBUG nova.virt.hardware [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.795 2 DEBUG nova.virt.hardware [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.795 2 DEBUG nova.virt.hardware [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.795 2 DEBUG nova.virt.hardware [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.795 2 DEBUG nova.objects.instance [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lazy-loading 'vcpu_model' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:07:13 np0005465987 nova_compute[230713]: 2025-10-02 12:07:13.810 2 DEBUG oslo_concurrency.processutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:07:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1463599971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:07:14 np0005465987 nova_compute[230713]: 2025-10-02 12:07:14.253 2 DEBUG oslo_concurrency.processutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:14 np0005465987 nova_compute[230713]: 2025-10-02 12:07:14.277 2 DEBUG nova.storage.rbd_utils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:14 np0005465987 nova_compute[230713]: 2025-10-02 12:07:14.280 2 DEBUG oslo_concurrency.processutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:14.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:07:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3394220777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:07:14 np0005465987 nova_compute[230713]: 2025-10-02 12:07:14.699 2 DEBUG oslo_concurrency.processutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:14 np0005465987 nova_compute[230713]: 2025-10-02 12:07:14.703 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  <uuid>c7915dc0-12f0-455a-85b5-74fe8115ced4</uuid>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  <name>instance-0000001f</name>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServersAdmin275Test-server-1762019275</nova:name>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:07:13</nova:creationTime>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <nova:user uuid="c5ffcae7415e4e65bdf37996eab10f90">tempest-ServersAdmin275Test-157392629-project-member</nova:user>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <nova:project uuid="7730d14683074be79cbff93dd183fca0">tempest-ServersAdmin275Test-157392629</nova:project>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <entry name="serial">c7915dc0-12f0-455a-85b5-74fe8115ced4</entry>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <entry name="uuid">c7915dc0-12f0-455a-85b5-74fe8115ced4</entry>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/c7915dc0-12f0-455a-85b5-74fe8115ced4_disk">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/console.log" append="off"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:07:14 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:07:14 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:07:14 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:07:14 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:07:14 np0005465987 nova_compute[230713]: 2025-10-02 12:07:14.761 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:07:14 np0005465987 nova_compute[230713]: 2025-10-02 12:07:14.761 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:07:14 np0005465987 nova_compute[230713]: 2025-10-02 12:07:14.762 2 INFO nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Using config drive#033[00m
Oct  2 08:07:14 np0005465987 nova_compute[230713]: 2025-10-02 12:07:14.793 2 DEBUG nova.storage.rbd_utils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:14 np0005465987 nova_compute[230713]: 2025-10-02 12:07:14.821 2 DEBUG nova.objects.instance [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lazy-loading 'ec2_ids' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:07:14 np0005465987 nova_compute[230713]: 2025-10-02 12:07:14.873 2 DEBUG nova.objects.instance [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lazy-loading 'keypairs' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:07:15 np0005465987 nova_compute[230713]: 2025-10-02 12:07:15.065 2 INFO nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Creating config drive at /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config#033[00m
Oct  2 08:07:15 np0005465987 nova_compute[230713]: 2025-10-02 12:07:15.074 2 DEBUG oslo_concurrency.processutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpppfc047f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:15.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:15 np0005465987 nova_compute[230713]: 2025-10-02 12:07:15.209 2 DEBUG oslo_concurrency.processutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpppfc047f" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:15 np0005465987 nova_compute[230713]: 2025-10-02 12:07:15.250 2 DEBUG nova.storage.rbd_utils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] rbd image c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:15 np0005465987 nova_compute[230713]: 2025-10-02 12:07:15.256 2 DEBUG oslo_concurrency.processutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:15 np0005465987 nova_compute[230713]: 2025-10-02 12:07:15.686 2 DEBUG oslo_concurrency.processutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config c7915dc0-12f0-455a-85b5-74fe8115ced4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:15 np0005465987 nova_compute[230713]: 2025-10-02 12:07:15.688 2 INFO nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Deleting local config drive /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4/disk.config because it was imported into RBD.#033[00m
Oct  2 08:07:15 np0005465987 systemd-machined[188335]: New machine qemu-19-instance-0000001f.
Oct  2 08:07:15 np0005465987 systemd[1]: Started Virtual Machine qemu-19-instance-0000001f.
Oct  2 08:07:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:07:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:16.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.531 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for c7915dc0-12f0-455a-85b5-74fe8115ced4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.532 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406836.5309408, c7915dc0-12f0-455a-85b5-74fe8115ced4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.532 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.536 2 DEBUG nova.compute.manager [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.537 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.541 2 INFO nova.virt.libvirt.driver [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance spawned successfully.#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.542 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.589 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.594 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.610 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.611 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.611 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.612 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.612 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.613 2 DEBUG nova.virt.libvirt.driver [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 e162: 3 total, 3 up, 3 in
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.710 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.711 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406836.535632, c7915dc0-12f0-455a-85b5-74fe8115ced4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.712 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] VM Started (Lifecycle Event)#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.782 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.787 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.828 2 DEBUG nova.compute.manager [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:16 np0005465987 nova_compute[230713]: 2025-10-02 12:07:16.836 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:07:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:07:17 np0005465987 nova_compute[230713]: 2025-10-02 12:07:17.061 2 DEBUG oslo_concurrency.lockutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:17 np0005465987 nova_compute[230713]: 2025-10-02 12:07:17.062 2 DEBUG oslo_concurrency.lockutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:17 np0005465987 nova_compute[230713]: 2025-10-02 12:07:17.062 2 DEBUG nova.objects.instance [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:07:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:07:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:17.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:07:17 np0005465987 nova_compute[230713]: 2025-10-02 12:07:17.467 2 DEBUG oslo_concurrency.lockutils [None req-abf28241-8c4d-4977-9b61-a0bcc572fddd 4bd0e9bbd4074e4b9b0df87098b17ed5 0365a979bc904cdab4f4a40a11c3f6ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:07:17 np0005465987 nova_compute[230713]: 2025-10-02 12:07:17.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:07:18 np0005465987 nova_compute[230713]: 2025-10-02 12:07:18.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:07:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:07:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:18.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:07:18 np0005465987 nova_compute[230713]: 2025-10-02 12:07:18.712 2 DEBUG oslo_concurrency.lockutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Acquiring lock "c7915dc0-12f0-455a-85b5-74fe8115ced4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:07:18 np0005465987 nova_compute[230713]: 2025-10-02 12:07:18.713 2 DEBUG oslo_concurrency.lockutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "c7915dc0-12f0-455a-85b5-74fe8115ced4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:07:18 np0005465987 nova_compute[230713]: 2025-10-02 12:07:18.713 2 DEBUG oslo_concurrency.lockutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Acquiring lock "c7915dc0-12f0-455a-85b5-74fe8115ced4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:07:18 np0005465987 nova_compute[230713]: 2025-10-02 12:07:18.714 2 DEBUG oslo_concurrency.lockutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "c7915dc0-12f0-455a-85b5-74fe8115ced4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:07:18 np0005465987 nova_compute[230713]: 2025-10-02 12:07:18.714 2 DEBUG oslo_concurrency.lockutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "c7915dc0-12f0-455a-85b5-74fe8115ced4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:07:18 np0005465987 nova_compute[230713]: 2025-10-02 12:07:18.716 2 INFO nova.compute.manager [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Terminating instance
Oct  2 08:07:18 np0005465987 nova_compute[230713]: 2025-10-02 12:07:18.717 2 DEBUG oslo_concurrency.lockutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Acquiring lock "refresh_cache-c7915dc0-12f0-455a-85b5-74fe8115ced4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:07:18 np0005465987 nova_compute[230713]: 2025-10-02 12:07:18.717 2 DEBUG oslo_concurrency.lockutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Acquired lock "refresh_cache-c7915dc0-12f0-455a-85b5-74fe8115ced4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:07:18 np0005465987 nova_compute[230713]: 2025-10-02 12:07:18.717 2 DEBUG nova.network.neutron [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:07:18 np0005465987 nova_compute[230713]: 2025-10-02 12:07:18.900 2 DEBUG nova.network.neutron [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:07:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:19.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:19 np0005465987 nova_compute[230713]: 2025-10-02 12:07:19.199 2 DEBUG nova.network.neutron [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:07:19 np0005465987 nova_compute[230713]: 2025-10-02 12:07:19.223 2 DEBUG oslo_concurrency.lockutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Releasing lock "refresh_cache-c7915dc0-12f0-455a-85b5-74fe8115ced4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:07:19 np0005465987 nova_compute[230713]: 2025-10-02 12:07:19.225 2 DEBUG nova.compute.manager [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:07:19 np0005465987 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct  2 08:07:19 np0005465987 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000001f.scope: Consumed 3.516s CPU time.
Oct  2 08:07:19 np0005465987 systemd-machined[188335]: Machine qemu-19-instance-0000001f terminated.
Oct  2 08:07:19 np0005465987 nova_compute[230713]: 2025-10-02 12:07:19.444 2 INFO nova.virt.libvirt.driver [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance destroyed successfully.
Oct  2 08:07:19 np0005465987 nova_compute[230713]: 2025-10-02 12:07:19.445 2 DEBUG nova.objects.instance [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lazy-loading 'resources' on Instance uuid c7915dc0-12f0-455a-85b5-74fe8115ced4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:07:20 np0005465987 nova_compute[230713]: 2025-10-02 12:07:20.212 2 INFO nova.virt.libvirt.driver [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Deleting instance files /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4_del
Oct  2 08:07:20 np0005465987 nova_compute[230713]: 2025-10-02 12:07:20.212 2 INFO nova.virt.libvirt.driver [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Deletion of /var/lib/nova/instances/c7915dc0-12f0-455a-85b5-74fe8115ced4_del complete
Oct  2 08:07:20 np0005465987 nova_compute[230713]: 2025-10-02 12:07:20.290 2 INFO nova.compute.manager [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Took 1.06 seconds to destroy the instance on the hypervisor.
Oct  2 08:07:20 np0005465987 nova_compute[230713]: 2025-10-02 12:07:20.291 2 DEBUG oslo.service.loopingcall [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:07:20 np0005465987 nova_compute[230713]: 2025-10-02 12:07:20.291 2 DEBUG nova.compute.manager [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:07:20 np0005465987 nova_compute[230713]: 2025-10-02 12:07:20.291 2 DEBUG nova.network.neutron [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:07:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:20.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:20 np0005465987 nova_compute[230713]: 2025-10-02 12:07:20.472 2 DEBUG nova.network.neutron [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:07:20 np0005465987 nova_compute[230713]: 2025-10-02 12:07:20.492 2 DEBUG nova.network.neutron [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:07:20 np0005465987 nova_compute[230713]: 2025-10-02 12:07:20.510 2 INFO nova.compute.manager [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Took 0.22 seconds to deallocate network for instance.
Oct  2 08:07:20 np0005465987 nova_compute[230713]: 2025-10-02 12:07:20.559 2 DEBUG oslo_concurrency.lockutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:07:20 np0005465987 nova_compute[230713]: 2025-10-02 12:07:20.560 2 DEBUG oslo_concurrency.lockutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:07:20 np0005465987 nova_compute[230713]: 2025-10-02 12:07:20.606 2 DEBUG oslo_concurrency.processutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:07:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:07:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3728058607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.022 2 DEBUG oslo_concurrency.processutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.028 2 DEBUG nova.compute.provider_tree [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.043 2 DEBUG nova.scheduler.client.report [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.060 2 DEBUG oslo_concurrency.lockutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.089 2 INFO nova.scheduler.client.report [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Deleted allocations for instance c7915dc0-12f0-455a-85b5-74fe8115ced4
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.143 2 DEBUG oslo_concurrency.lockutils [None req-63135890-59fe-438e-8fa7-37f9b159b563 c5ffcae7415e4e65bdf37996eab10f90 7730d14683074be79cbff93dd183fca0 - - default default] Lock "c7915dc0-12f0-455a-85b5-74fe8115ced4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:07:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:21.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.606 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Acquiring lock "863eacb0-4381-4b09-adb0-93c628d97576" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.606 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.624 2 DEBUG nova.compute.manager [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.686 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.687 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.697 2 DEBUG nova.virt.hardware [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.697 2 INFO nova.compute.claims [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:07:21 np0005465987 nova_compute[230713]: 2025-10-02 12:07:21.805 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:07:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:07:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:07:22 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/581658048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.233 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.240 2 DEBUG nova.compute.provider_tree [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.259 2 DEBUG nova.scheduler.client.report [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.302 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.303 2 DEBUG nova.compute.manager [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:07:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:07:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:22.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.389 2 DEBUG nova.compute.manager [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.390 2 DEBUG nova.network.neutron [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.418 2 INFO nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.444 2 DEBUG nova.compute.manager [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.565 2 DEBUG nova.compute.manager [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.566 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.566 2 INFO nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Creating image(s)
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.594 2 DEBUG nova.storage.rbd_utils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] rbd image 863eacb0-4381-4b09-adb0-93c628d97576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.624 2 DEBUG nova.storage.rbd_utils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] rbd image 863eacb0-4381-4b09-adb0-93c628d97576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.653 2 DEBUG nova.storage.rbd_utils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] rbd image 863eacb0-4381-4b09-adb0-93c628d97576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.656 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.684 2 DEBUG nova.policy [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ed46b991b844fe2bfa1d9dedc3d16b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '087032a23f1343b69dad71ab288c8d03', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.720 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.721 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.722 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.722 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.754 2 DEBUG nova.storage.rbd_utils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] rbd image 863eacb0-4381-4b09-adb0-93c628d97576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:07:22 np0005465987 nova_compute[230713]: 2025-10-02 12:07:22.761 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 863eacb0-4381-4b09-adb0-93c628d97576_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:07:23 np0005465987 nova_compute[230713]: 2025-10-02 12:07:23.102 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 863eacb0-4381-4b09-adb0-93c628d97576_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:07:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:23.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:23 np0005465987 nova_compute[230713]: 2025-10-02 12:07:23.261 2 DEBUG nova.storage.rbd_utils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] resizing rbd image 863eacb0-4381-4b09-adb0-93c628d97576_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:07:23 np0005465987 nova_compute[230713]: 2025-10-02 12:07:23.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:23 np0005465987 nova_compute[230713]: 2025-10-02 12:07:23.433 2 DEBUG nova.objects.instance [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lazy-loading 'migration_context' on Instance uuid 863eacb0-4381-4b09-adb0-93c628d97576 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:07:23 np0005465987 nova_compute[230713]: 2025-10-02 12:07:23.455 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:07:23 np0005465987 nova_compute[230713]: 2025-10-02 12:07:23.456 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Ensure instance console log exists: /var/lib/nova/instances/863eacb0-4381-4b09-adb0-93c628d97576/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:07:23 np0005465987 nova_compute[230713]: 2025-10-02 12:07:23.456 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:23 np0005465987 nova_compute[230713]: 2025-10-02 12:07:23.457 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:23 np0005465987 nova_compute[230713]: 2025-10-02 12:07:23.458 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:24 np0005465987 nova_compute[230713]: 2025-10-02 12:07:24.192 2 DEBUG nova.network.neutron [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Successfully created port: 4f87305e-6ca7-4098-974d-1977b1bc7747 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:07:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:24.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:25.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:25 np0005465987 nova_compute[230713]: 2025-10-02 12:07:25.238 2 DEBUG nova.network.neutron [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Successfully updated port: 4f87305e-6ca7-4098-974d-1977b1bc7747 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:07:25 np0005465987 nova_compute[230713]: 2025-10-02 12:07:25.253 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Acquiring lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:07:25 np0005465987 nova_compute[230713]: 2025-10-02 12:07:25.253 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Acquired lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:07:25 np0005465987 nova_compute[230713]: 2025-10-02 12:07:25.253 2 DEBUG nova.network.neutron [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:07:25 np0005465987 nova_compute[230713]: 2025-10-02 12:07:25.538 2 DEBUG nova.compute.manager [req-d8f918bf-8450-4d99-82b2-0a009a6e6581 req-823dae3d-e885-46dc-8402-af123a15d453 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Received event network-changed-4f87305e-6ca7-4098-974d-1977b1bc7747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:07:25 np0005465987 nova_compute[230713]: 2025-10-02 12:07:25.538 2 DEBUG nova.compute.manager [req-d8f918bf-8450-4d99-82b2-0a009a6e6581 req-823dae3d-e885-46dc-8402-af123a15d453 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Refreshing instance network info cache due to event network-changed-4f87305e-6ca7-4098-974d-1977b1bc7747. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:07:25 np0005465987 nova_compute[230713]: 2025-10-02 12:07:25.539 2 DEBUG oslo_concurrency.lockutils [req-d8f918bf-8450-4d99-82b2-0a009a6e6581 req-823dae3d-e885-46dc-8402-af123a15d453 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:07:25 np0005465987 nova_compute[230713]: 2025-10-02 12:07:25.584 2 DEBUG nova.network.neutron [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:07:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:26.355 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:07:26 np0005465987 nova_compute[230713]: 2025-10-02 12:07:26.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:26.356 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:07:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:26.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:07:26 np0005465987 nova_compute[230713]: 2025-10-02 12:07:26.989 2 DEBUG nova.network.neutron [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Updating instance_info_cache with network_info: [{"id": "4f87305e-6ca7-4098-974d-1977b1bc7747", "address": "fa:16:3e:90:16:0a", "network": {"id": "95daee14-6460-43d2-8a30-748e3256cf3f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-217994894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "087032a23f1343b69dad71ab288c8d03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f87305e-6c", "ovs_interfaceid": "4f87305e-6ca7-4098-974d-1977b1bc7747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.010 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Releasing lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.010 2 DEBUG nova.compute.manager [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Instance network_info: |[{"id": "4f87305e-6ca7-4098-974d-1977b1bc7747", "address": "fa:16:3e:90:16:0a", "network": {"id": "95daee14-6460-43d2-8a30-748e3256cf3f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-217994894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "087032a23f1343b69dad71ab288c8d03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f87305e-6c", "ovs_interfaceid": "4f87305e-6ca7-4098-974d-1977b1bc7747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.010 2 DEBUG oslo_concurrency.lockutils [req-d8f918bf-8450-4d99-82b2-0a009a6e6581 req-823dae3d-e885-46dc-8402-af123a15d453 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.011 2 DEBUG nova.network.neutron [req-d8f918bf-8450-4d99-82b2-0a009a6e6581 req-823dae3d-e885-46dc-8402-af123a15d453 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Refreshing network info cache for port 4f87305e-6ca7-4098-974d-1977b1bc7747 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.015 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Start _get_guest_xml network_info=[{"id": "4f87305e-6ca7-4098-974d-1977b1bc7747", "address": "fa:16:3e:90:16:0a", "network": {"id": "95daee14-6460-43d2-8a30-748e3256cf3f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-217994894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "087032a23f1343b69dad71ab288c8d03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f87305e-6c", "ovs_interfaceid": "4f87305e-6ca7-4098-974d-1977b1bc7747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.021 2 WARNING nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.025 2 DEBUG nova.virt.libvirt.host [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.025 2 DEBUG nova.virt.libvirt.host [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.031 2 DEBUG nova.virt.libvirt.host [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.032 2 DEBUG nova.virt.libvirt.host [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.033 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.033 2 DEBUG nova.virt.hardware [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.034 2 DEBUG nova.virt.hardware [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.034 2 DEBUG nova.virt.hardware [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.035 2 DEBUG nova.virt.hardware [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.035 2 DEBUG nova.virt.hardware [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.035 2 DEBUG nova.virt.hardware [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.035 2 DEBUG nova.virt.hardware [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.036 2 DEBUG nova.virt.hardware [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.036 2 DEBUG nova.virt.hardware [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.036 2 DEBUG nova.virt.hardware [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.036 2 DEBUG nova.virt.hardware [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.039 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:27.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:27.357 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:07:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:07:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4089154726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.522 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.558 2 DEBUG nova.storage.rbd_utils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] rbd image 863eacb0-4381-4b09-adb0-93c628d97576_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.563 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:27.935 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:27.935 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:27.935 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:07:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1584143293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.983 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.985 2 DEBUG nova.virt.libvirt.vif [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-452288390',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-452288390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-452288390',id=35,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='087032a23f1343b69dad71ab288c8d03',ramdisk_id='',reservation_id='r-5zu2i66r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-2033711876',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-2033711876-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:07:22Z,user_data=None,user_id='7ed46b991b844fe2bfa1d9dedc3d16b9',uuid=863eacb0-4381-4b09-adb0-93c628d97576,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f87305e-6ca7-4098-974d-1977b1bc7747", "address": "fa:16:3e:90:16:0a", "network": {"id": "95daee14-6460-43d2-8a30-748e3256cf3f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-217994894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "087032a23f1343b69dad71ab288c8d03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f87305e-6c", "ovs_interfaceid": "4f87305e-6ca7-4098-974d-1977b1bc7747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.986 2 DEBUG nova.network.os_vif_util [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Converting VIF {"id": "4f87305e-6ca7-4098-974d-1977b1bc7747", "address": "fa:16:3e:90:16:0a", "network": {"id": "95daee14-6460-43d2-8a30-748e3256cf3f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-217994894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "087032a23f1343b69dad71ab288c8d03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f87305e-6c", "ovs_interfaceid": "4f87305e-6ca7-4098-974d-1977b1bc7747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.988 2 DEBUG nova.network.os_vif_util [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:16:0a,bridge_name='br-int',has_traffic_filtering=True,id=4f87305e-6ca7-4098-974d-1977b1bc7747,network=Network(95daee14-6460-43d2-8a30-748e3256cf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f87305e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:07:27 np0005465987 nova_compute[230713]: 2025-10-02 12:07:27.990 2 DEBUG nova.objects.instance [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lazy-loading 'pci_devices' on Instance uuid 863eacb0-4381-4b09-adb0-93c628d97576 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.011 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  <uuid>863eacb0-4381-4b09-adb0-93c628d97576</uuid>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  <name>instance-00000023</name>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-452288390</nova:name>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:07:27</nova:creationTime>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <nova:user uuid="7ed46b991b844fe2bfa1d9dedc3d16b9">tempest-FloatingIPsAssociationNegativeTestJSON-2033711876-project-member</nova:user>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <nova:project uuid="087032a23f1343b69dad71ab288c8d03">tempest-FloatingIPsAssociationNegativeTestJSON-2033711876</nova:project>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <nova:port uuid="4f87305e-6ca7-4098-974d-1977b1bc7747">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <entry name="serial">863eacb0-4381-4b09-adb0-93c628d97576</entry>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <entry name="uuid">863eacb0-4381-4b09-adb0-93c628d97576</entry>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/863eacb0-4381-4b09-adb0-93c628d97576_disk">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/863eacb0-4381-4b09-adb0-93c628d97576_disk.config">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:90:16:0a"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <target dev="tap4f87305e-6c"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/863eacb0-4381-4b09-adb0-93c628d97576/console.log" append="off"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:07:28 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:07:28 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:07:28 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:07:28 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.012 2 DEBUG nova.compute.manager [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Preparing to wait for external event network-vif-plugged-4f87305e-6ca7-4098-974d-1977b1bc7747 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.013 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Acquiring lock "863eacb0-4381-4b09-adb0-93c628d97576-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.013 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.013 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.014 2 DEBUG nova.virt.libvirt.vif [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-452288390',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-452288390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-452288390',id=35,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='087032a23f1343b69dad71ab288c8d03',ramdisk_id='',reservation_id='r-5zu2i66r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-2033711876',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-2033711876-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:07:22Z,user_data=None,user_id='7ed46b991b844fe2bfa1d9dedc3d16b9',uuid=863eacb0-4381-4b09-adb0-93c628d97576,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f87305e-6ca7-4098-974d-1977b1bc7747", "address": "fa:16:3e:90:16:0a", "network": {"id": "95daee14-6460-43d2-8a30-748e3256cf3f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-217994894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "087032a23f1343b69dad71ab288c8d03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f87305e-6c", "ovs_interfaceid": "4f87305e-6ca7-4098-974d-1977b1bc7747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.015 2 DEBUG nova.network.os_vif_util [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Converting VIF {"id": "4f87305e-6ca7-4098-974d-1977b1bc7747", "address": "fa:16:3e:90:16:0a", "network": {"id": "95daee14-6460-43d2-8a30-748e3256cf3f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-217994894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "087032a23f1343b69dad71ab288c8d03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f87305e-6c", "ovs_interfaceid": "4f87305e-6ca7-4098-974d-1977b1bc7747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.015 2 DEBUG nova.network.os_vif_util [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:16:0a,bridge_name='br-int',has_traffic_filtering=True,id=4f87305e-6ca7-4098-974d-1977b1bc7747,network=Network(95daee14-6460-43d2-8a30-748e3256cf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f87305e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.016 2 DEBUG os_vif [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:16:0a,bridge_name='br-int',has_traffic_filtering=True,id=4f87305e-6ca7-4098-974d-1977b1bc7747,network=Network(95daee14-6460-43d2-8a30-748e3256cf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f87305e-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.017 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.017 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.021 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f87305e-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.022 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f87305e-6c, col_values=(('external_ids', {'iface-id': '4f87305e-6ca7-4098-974d-1977b1bc7747', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:16:0a', 'vm-uuid': '863eacb0-4381-4b09-adb0-93c628d97576'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:28 np0005465987 NetworkManager[44910]: <info>  [1759406848.0244] manager: (tap4f87305e-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.033 2 INFO os_vif [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:16:0a,bridge_name='br-int',has_traffic_filtering=True,id=4f87305e-6ca7-4098-974d-1977b1bc7747,network=Network(95daee14-6460-43d2-8a30-748e3256cf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f87305e-6c')#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.084 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.085 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.085 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] No VIF found with MAC fa:16:3e:90:16:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.086 2 INFO nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Using config drive#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.121 2 DEBUG nova.storage.rbd_utils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] rbd image 863eacb0-4381-4b09-adb0-93c628d97576_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:07:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:28.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.639 2 INFO nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Creating config drive at /var/lib/nova/instances/863eacb0-4381-4b09-adb0-93c628d97576/disk.config#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.645 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/863eacb0-4381-4b09-adb0-93c628d97576/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfkjekuun execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.681 2 DEBUG nova.network.neutron [req-d8f918bf-8450-4d99-82b2-0a009a6e6581 req-823dae3d-e885-46dc-8402-af123a15d453 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Updated VIF entry in instance network info cache for port 4f87305e-6ca7-4098-974d-1977b1bc7747. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.683 2 DEBUG nova.network.neutron [req-d8f918bf-8450-4d99-82b2-0a009a6e6581 req-823dae3d-e885-46dc-8402-af123a15d453 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Updating instance_info_cache with network_info: [{"id": "4f87305e-6ca7-4098-974d-1977b1bc7747", "address": "fa:16:3e:90:16:0a", "network": {"id": "95daee14-6460-43d2-8a30-748e3256cf3f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-217994894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "087032a23f1343b69dad71ab288c8d03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f87305e-6c", "ovs_interfaceid": "4f87305e-6ca7-4098-974d-1977b1bc7747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.700 2 DEBUG oslo_concurrency.lockutils [req-d8f918bf-8450-4d99-82b2-0a009a6e6581 req-823dae3d-e885-46dc-8402-af123a15d453 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.791 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/863eacb0-4381-4b09-adb0-93c628d97576/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfkjekuun" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.822 2 DEBUG nova.storage.rbd_utils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] rbd image 863eacb0-4381-4b09-adb0-93c628d97576_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:28 np0005465987 nova_compute[230713]: 2025-10-02 12:07:28.826 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/863eacb0-4381-4b09-adb0-93c628d97576/disk.config 863eacb0-4381-4b09-adb0-93c628d97576_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:29.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:29 np0005465987 nova_compute[230713]: 2025-10-02 12:07:29.337 2 DEBUG oslo_concurrency.processutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/863eacb0-4381-4b09-adb0-93c628d97576/disk.config 863eacb0-4381-4b09-adb0-93c628d97576_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:29 np0005465987 nova_compute[230713]: 2025-10-02 12:07:29.338 2 INFO nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Deleting local config drive /var/lib/nova/instances/863eacb0-4381-4b09-adb0-93c628d97576/disk.config because it was imported into RBD.#033[00m
Oct  2 08:07:29 np0005465987 kernel: tap4f87305e-6c: entered promiscuous mode
Oct  2 08:07:29 np0005465987 NetworkManager[44910]: <info>  [1759406849.4163] manager: (tap4f87305e-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Oct  2 08:07:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:07:29Z|00099|binding|INFO|Claiming lport 4f87305e-6ca7-4098-974d-1977b1bc7747 for this chassis.
Oct  2 08:07:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:07:29Z|00100|binding|INFO|4f87305e-6ca7-4098-974d-1977b1bc7747: Claiming fa:16:3e:90:16:0a 10.100.0.10
Oct  2 08:07:29 np0005465987 nova_compute[230713]: 2025-10-02 12:07:29.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:29 np0005465987 nova_compute[230713]: 2025-10-02 12:07:29.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:29 np0005465987 nova_compute[230713]: 2025-10-02 12:07:29.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.508 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:16:0a 10.100.0.10'], port_security=['fa:16:3e:90:16:0a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '863eacb0-4381-4b09-adb0-93c628d97576', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95daee14-6460-43d2-8a30-748e3256cf3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '087032a23f1343b69dad71ab288c8d03', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1748a0a5-e9d3-4fc5-a019-12aaecb57007', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56bb32ff-e9b0-4e17-a55b-865690c85cb8, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=4f87305e-6ca7-4098-974d-1977b1bc7747) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.510 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 4f87305e-6ca7-4098-974d-1977b1bc7747 in datapath 95daee14-6460-43d2-8a30-748e3256cf3f bound to our chassis#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.513 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95daee14-6460-43d2-8a30-748e3256cf3f#033[00m
Oct  2 08:07:29 np0005465987 systemd-machined[188335]: New machine qemu-20-instance-00000023.
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.536 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cafcb947-8c3e-412b-8354-15fe7f8cbdda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.538 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap95daee14-61 in ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.541 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap95daee14-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.541 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[81d3af1c-cbe8-42d9-b9d1-0caf2b89f929]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.543 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[638fb7cf-d50a-4236-a9dd-49b4d97a48a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.562 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[98b35324-54f4-460a-be35-ff3700ebfcd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 systemd[1]: Started Virtual Machine qemu-20-instance-00000023.
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.596 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c164488e-734c-4852-9cbc-f5026764921b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:07:29Z|00101|binding|INFO|Setting lport 4f87305e-6ca7-4098-974d-1977b1bc7747 ovn-installed in OVS
Oct  2 08:07:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:07:29Z|00102|binding|INFO|Setting lport 4f87305e-6ca7-4098-974d-1977b1bc7747 up in Southbound
Oct  2 08:07:29 np0005465987 nova_compute[230713]: 2025-10-02 12:07:29.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:29 np0005465987 systemd-udevd[246414]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:07:29 np0005465987 NetworkManager[44910]: <info>  [1759406849.6379] device (tap4f87305e-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.634 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[47bca427-6eb0-4a61-84ee-4a23ad90fbc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 NetworkManager[44910]: <info>  [1759406849.6389] device (tap4f87305e-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.644 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[034b3305-2312-4cd5-880c-2a2d3447b0c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 systemd-udevd[246418]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:07:29 np0005465987 NetworkManager[44910]: <info>  [1759406849.6454] manager: (tap95daee14-60): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.675 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[cf96721a-6c01-45ce-b617-1922609c1b0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.681 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[665c37e6-a7bf-49da-92ca-fab26545f54c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 NetworkManager[44910]: <info>  [1759406849.7025] device (tap95daee14-60): carrier: link connected
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.708 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[eed967fb-f947-4cc8-99ae-a6acb1ece921]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.722 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8a627598-7511-44f8-abfc-04002806944c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95daee14-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:c5:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491827, 'reachable_time': 44344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246443, 'error': None, 'target': 'ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.735 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0984d90b-ba11-45ef-a4fc-2b812fca8424]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:c55d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491827, 'tstamp': 491827}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246444, 'error': None, 'target': 'ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.748 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e5116cba-b3ad-49c6-b0e6-507de89c72fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95daee14-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:c5:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491827, 'reachable_time': 44344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 246445, 'error': None, 'target': 'ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.781 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[34810a8e-a780-4c48-92d7-84c603d77c94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.830 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[14559e72-103f-444c-a8c2-ff632fab9d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.831 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95daee14-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.831 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.832 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95daee14-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:07:29 np0005465987 nova_compute[230713]: 2025-10-02 12:07:29.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:29 np0005465987 NetworkManager[44910]: <info>  [1759406849.8375] manager: (tap95daee14-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct  2 08:07:29 np0005465987 kernel: tap95daee14-60: entered promiscuous mode
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.842 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95daee14-60, col_values=(('external_ids', {'iface-id': 'feaa83df-4134-4b1d-9521-c1e3cc3517b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:07:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:07:29Z|00103|binding|INFO|Releasing lport feaa83df-4134-4b1d-9521-c1e3cc3517b6 from this chassis (sb_readonly=0)
Oct  2 08:07:29 np0005465987 nova_compute[230713]: 2025-10-02 12:07:29.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.878 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/95daee14-6460-43d2-8a30-748e3256cf3f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/95daee14-6460-43d2-8a30-748e3256cf3f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:07:29 np0005465987 nova_compute[230713]: 2025-10-02 12:07:29.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.879 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b41555-4591-4457-8141-4c2132e525a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.880 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-95daee14-6460-43d2-8a30-748e3256cf3f
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/95daee14-6460-43d2-8a30-748e3256cf3f.pid.haproxy
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 95daee14-6460-43d2-8a30-748e3256cf3f
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:07:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:29.881 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f', 'env', 'PROCESS_TAG=haproxy-95daee14-6460-43d2-8a30-748e3256cf3f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/95daee14-6460-43d2-8a30-748e3256cf3f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:07:30 np0005465987 podman[246509]: 2025-10-02 12:07:30.271018095 +0000 UTC m=+0.060945067 container create 56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:07:30 np0005465987 systemd[1]: Started libpod-conmon-56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da.scope.
Oct  2 08:07:30 np0005465987 podman[246509]: 2025-10-02 12:07:30.240965359 +0000 UTC m=+0.030892381 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:07:30 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:07:30 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8c8a511eee70eb523af1ae638577f26faf6cf60d323bd3ddd634b1224463a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:07:30 np0005465987 podman[246509]: 2025-10-02 12:07:30.364748893 +0000 UTC m=+0.154675905 container init 56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:07:30 np0005465987 podman[246509]: 2025-10-02 12:07:30.370089639 +0000 UTC m=+0.160016621 container start 56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:07:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:30.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:30 np0005465987 neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f[246534]: [NOTICE]   (246538) : New worker (246540) forked
Oct  2 08:07:30 np0005465987 neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f[246534]: [NOTICE]   (246538) : Loading success.
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.606 2 DEBUG oslo_concurrency.lockutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "e98dd135-f74b-4cac-a0c4-6abd92c62605" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.607 2 DEBUG oslo_concurrency.lockutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "e98dd135-f74b-4cac-a0c4-6abd92c62605" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.635 2 DEBUG nova.compute.manager [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.717 2 DEBUG nova.compute.manager [req-f8987327-e3c5-4ed9-ab25-e701d182457c req-3214f549-0b1f-4f81-ab69-9a2cf241d7ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Received event network-vif-plugged-4f87305e-6ca7-4098-974d-1977b1bc7747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.718 2 DEBUG oslo_concurrency.lockutils [req-f8987327-e3c5-4ed9-ab25-e701d182457c req-3214f549-0b1f-4f81-ab69-9a2cf241d7ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "863eacb0-4381-4b09-adb0-93c628d97576-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.718 2 DEBUG oslo_concurrency.lockutils [req-f8987327-e3c5-4ed9-ab25-e701d182457c req-3214f549-0b1f-4f81-ab69-9a2cf241d7ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.719 2 DEBUG oslo_concurrency.lockutils [req-f8987327-e3c5-4ed9-ab25-e701d182457c req-3214f549-0b1f-4f81-ab69-9a2cf241d7ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.719 2 DEBUG nova.compute.manager [req-f8987327-e3c5-4ed9-ab25-e701d182457c req-3214f549-0b1f-4f81-ab69-9a2cf241d7ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Processing event network-vif-plugged-4f87305e-6ca7-4098-974d-1977b1bc7747 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.720 2 DEBUG nova.compute.manager [req-f8987327-e3c5-4ed9-ab25-e701d182457c req-3214f549-0b1f-4f81-ab69-9a2cf241d7ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Received event network-vif-plugged-4f87305e-6ca7-4098-974d-1977b1bc7747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.720 2 DEBUG oslo_concurrency.lockutils [req-f8987327-e3c5-4ed9-ab25-e701d182457c req-3214f549-0b1f-4f81-ab69-9a2cf241d7ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "863eacb0-4381-4b09-adb0-93c628d97576-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.721 2 DEBUG oslo_concurrency.lockutils [req-f8987327-e3c5-4ed9-ab25-e701d182457c req-3214f549-0b1f-4f81-ab69-9a2cf241d7ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.721 2 DEBUG oslo_concurrency.lockutils [req-f8987327-e3c5-4ed9-ab25-e701d182457c req-3214f549-0b1f-4f81-ab69-9a2cf241d7ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.721 2 DEBUG nova.compute.manager [req-f8987327-e3c5-4ed9-ab25-e701d182457c req-3214f549-0b1f-4f81-ab69-9a2cf241d7ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] No waiting events found dispatching network-vif-plugged-4f87305e-6ca7-4098-974d-1977b1bc7747 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.722 2 WARNING nova.compute.manager [req-f8987327-e3c5-4ed9-ab25-e701d182457c req-3214f549-0b1f-4f81-ab69-9a2cf241d7ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Received unexpected event network-vif-plugged-4f87305e-6ca7-4098-974d-1977b1bc7747 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.742 2 DEBUG oslo_concurrency.lockutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.743 2 DEBUG oslo_concurrency.lockutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.755 2 DEBUG nova.virt.hardware [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.756 2 INFO nova.compute.claims [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.784 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406850.7836635, 863eacb0-4381-4b09-adb0-93c628d97576 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.784 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] VM Started (Lifecycle Event)#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.786 2 DEBUG nova.compute.manager [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.790 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.793 2 INFO nova.virt.libvirt.driver [-] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Instance spawned successfully.#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.794 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.820 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.830 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.832 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.832 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.833 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.833 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.833 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.834 2 DEBUG nova.virt.libvirt.driver [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.861 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.861 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406850.7844756, 863eacb0-4381-4b09-adb0-93c628d97576 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.862 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.890 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.901 2 INFO nova.compute.manager [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Took 8.34 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.901 2 DEBUG nova.compute.manager [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.905 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406850.788804, 863eacb0-4381-4b09-adb0-93c628d97576 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.905 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.908 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.971 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:30 np0005465987 nova_compute[230713]: 2025-10-02 12:07:30.978 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.004 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.017 2 INFO nova.compute.manager [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Took 9.35 seconds to build instance.#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.056 2 DEBUG oslo_concurrency.lockutils [None req-ce1dd513-87c1-4211-b986-5644475ae4a9 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:31.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:07:31 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4205512689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.384 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.392 2 DEBUG nova.compute.provider_tree [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.409 2 DEBUG nova.scheduler.client.report [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.439 2 DEBUG oslo_concurrency.lockutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.440 2 DEBUG nova.compute.manager [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.499 2 DEBUG nova.compute.manager [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.500 2 DEBUG nova.network.neutron [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.520 2 INFO nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.545 2 DEBUG nova.compute.manager [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.639 2 DEBUG nova.compute.manager [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.641 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.641 2 INFO nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Creating image(s)#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.670 2 DEBUG nova.storage.rbd_utils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image e98dd135-f74b-4cac-a0c4-6abd92c62605_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.702 2 DEBUG nova.storage.rbd_utils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image e98dd135-f74b-4cac-a0c4-6abd92c62605_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.729 2 DEBUG nova.storage.rbd_utils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image e98dd135-f74b-4cac-a0c4-6abd92c62605_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.735 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.767 2 DEBUG nova.network.neutron [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.768 2 DEBUG nova.compute.manager [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.830 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.831 2 DEBUG oslo_concurrency.lockutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.831 2 DEBUG oslo_concurrency.lockutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.832 2 DEBUG oslo_concurrency.lockutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.862 2 DEBUG nova.storage.rbd_utils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image e98dd135-f74b-4cac-a0c4-6abd92c62605_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:31 np0005465987 nova_compute[230713]: 2025-10-02 12:07:31.867 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e98dd135-f74b-4cac-a0c4-6abd92c62605_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.305 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e98dd135-f74b-4cac-a0c4-6abd92c62605_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.387 2 DEBUG nova.storage.rbd_utils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] resizing rbd image e98dd135-f74b-4cac-a0c4-6abd92c62605_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:07:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:32.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.529 2 DEBUG nova.objects.instance [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lazy-loading 'migration_context' on Instance uuid e98dd135-f74b-4cac-a0c4-6abd92c62605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.550 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.551 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Ensure instance console log exists: /var/lib/nova/instances/e98dd135-f74b-4cac-a0c4-6abd92c62605/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.551 2 DEBUG oslo_concurrency.lockutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.551 2 DEBUG oslo_concurrency.lockutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.552 2 DEBUG oslo_concurrency.lockutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.553 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.557 2 WARNING nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.563 2 DEBUG nova.virt.libvirt.host [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.564 2 DEBUG nova.virt.libvirt.host [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.567 2 DEBUG nova.virt.libvirt.host [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.567 2 DEBUG nova.virt.libvirt.host [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.568 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.569 2 DEBUG nova.virt.hardware [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.569 2 DEBUG nova.virt.hardware [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.569 2 DEBUG nova.virt.hardware [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.570 2 DEBUG nova.virt.hardware [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.570 2 DEBUG nova.virt.hardware [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.570 2 DEBUG nova.virt.hardware [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.570 2 DEBUG nova.virt.hardware [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.571 2 DEBUG nova.virt.hardware [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.571 2 DEBUG nova.virt.hardware [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.571 2 DEBUG nova.virt.hardware [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.571 2 DEBUG nova.virt.hardware [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.574 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:32 np0005465987 podman[246757]: 2025-10-02 12:07:32.835500342 +0000 UTC m=+0.053034729 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:07:32 np0005465987 nova_compute[230713]: 2025-10-02 12:07:32.988 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:07:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:07:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2092060022' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.090 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.128 2 DEBUG nova.storage.rbd_utils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image e98dd135-f74b-4cac-a0c4-6abd92c62605_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.134 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:33.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.172 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.173 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.174 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.175 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 863eacb0-4381-4b09-adb0-93c628d97576 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:07:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:07:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/626484903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.581 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.583 2 DEBUG nova.objects.instance [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lazy-loading 'pci_devices' on Instance uuid e98dd135-f74b-4cac-a0c4-6abd92c62605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.602 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  <uuid>e98dd135-f74b-4cac-a0c4-6abd92c62605</uuid>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  <name>instance-00000025</name>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServersOnMultiNodesTest-server-1317398619</nova:name>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:07:32</nova:creationTime>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <nova:user uuid="199c0d9541a04c4db07e50bfba9fddb1">tempest-ServersOnMultiNodesTest-10966744-project-member</nova:user>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <nova:project uuid="62aa9c47ee2841139cd7066168f59650">tempest-ServersOnMultiNodesTest-10966744</nova:project>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <entry name="serial">e98dd135-f74b-4cac-a0c4-6abd92c62605</entry>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <entry name="uuid">e98dd135-f74b-4cac-a0c4-6abd92c62605</entry>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/e98dd135-f74b-4cac-a0c4-6abd92c62605_disk">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/e98dd135-f74b-4cac-a0c4-6abd92c62605_disk.config">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/e98dd135-f74b-4cac-a0c4-6abd92c62605/console.log" append="off"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:07:33 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:07:33 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:07:33 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:07:33 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.663 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.663 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.664 2 INFO nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Using config drive#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.693 2 DEBUG nova.storage.rbd_utils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image e98dd135-f74b-4cac-a0c4-6abd92c62605_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.986 2 INFO nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Creating config drive at /var/lib/nova/instances/e98dd135-f74b-4cac-a0c4-6abd92c62605/disk.config#033[00m
Oct  2 08:07:33 np0005465987 nova_compute[230713]: 2025-10-02 12:07:33.990 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e98dd135-f74b-4cac-a0c4-6abd92c62605/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwgq_ye1y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:34 np0005465987 nova_compute[230713]: 2025-10-02 12:07:34.122 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e98dd135-f74b-4cac-a0c4-6abd92c62605/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwgq_ye1y" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:34 np0005465987 nova_compute[230713]: 2025-10-02 12:07:34.162 2 DEBUG nova.storage.rbd_utils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image e98dd135-f74b-4cac-a0c4-6abd92c62605_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:34 np0005465987 nova_compute[230713]: 2025-10-02 12:07:34.167 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e98dd135-f74b-4cac-a0c4-6abd92c62605/disk.config e98dd135-f74b-4cac-a0c4-6abd92c62605_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:34.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:34 np0005465987 nova_compute[230713]: 2025-10-02 12:07:34.442 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406839.4412842, c7915dc0-12f0-455a-85b5-74fe8115ced4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:07:34 np0005465987 nova_compute[230713]: 2025-10-02 12:07:34.443 2 INFO nova.compute.manager [-] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:07:34 np0005465987 nova_compute[230713]: 2025-10-02 12:07:34.464 2 DEBUG nova.compute.manager [None req-89174a02-d8cf-4edb-a2df-284049cfc10e - - - - - -] [instance: c7915dc0-12f0-455a-85b5-74fe8115ced4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:34 np0005465987 nova_compute[230713]: 2025-10-02 12:07:34.532 2 DEBUG oslo_concurrency.processutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e98dd135-f74b-4cac-a0c4-6abd92c62605/disk.config e98dd135-f74b-4cac-a0c4-6abd92c62605_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:34 np0005465987 nova_compute[230713]: 2025-10-02 12:07:34.533 2 INFO nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Deleting local config drive /var/lib/nova/instances/e98dd135-f74b-4cac-a0c4-6abd92c62605/disk.config because it was imported into RBD.#033[00m
Oct  2 08:07:34 np0005465987 systemd-machined[188335]: New machine qemu-21-instance-00000025.
Oct  2 08:07:34 np0005465987 systemd[1]: Started Virtual Machine qemu-21-instance-00000025.
Oct  2 08:07:34 np0005465987 nova_compute[230713]: 2025-10-02 12:07:34.695 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Updating instance_info_cache with network_info: [{"id": "4f87305e-6ca7-4098-974d-1977b1bc7747", "address": "fa:16:3e:90:16:0a", "network": {"id": "95daee14-6460-43d2-8a30-748e3256cf3f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-217994894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "087032a23f1343b69dad71ab288c8d03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f87305e-6c", "ovs_interfaceid": "4f87305e-6ca7-4098-974d-1977b1bc7747", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:07:34 np0005465987 nova_compute[230713]: 2025-10-02 12:07:34.712 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:07:34 np0005465987 nova_compute[230713]: 2025-10-02 12:07:34.712 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:07:34 np0005465987 nova_compute[230713]: 2025-10-02 12:07:34.712 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:34 np0005465987 nova_compute[230713]: 2025-10-02 12:07:34.712 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:35.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:35 np0005465987 NetworkManager[44910]: <info>  [1759406855.2177] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:35 np0005465987 NetworkManager[44910]: <info>  [1759406855.2191] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:07:35Z|00104|binding|INFO|Releasing lport feaa83df-4134-4b1d-9521-c1e3cc3517b6 from this chassis (sb_readonly=0)
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.502 2 DEBUG nova.compute.manager [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.502 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.503 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406855.5016034, e98dd135-f74b-4cac-a0c4-6abd92c62605 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.503 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.514 2 INFO nova.virt.libvirt.driver [-] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Instance spawned successfully.#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.515 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.520 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.534 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.543 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.544 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.544 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.544 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.545 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.545 2 DEBUG nova.virt.libvirt.driver [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.567 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.568 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406855.502828, e98dd135-f74b-4cac-a0c4-6abd92c62605 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.568 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] VM Started (Lifecycle Event)#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.595 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.598 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.603 2 INFO nova.compute.manager [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Took 3.96 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.603 2 DEBUG nova.compute.manager [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.637 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.674 2 INFO nova.compute.manager [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Took 4.97 seconds to build instance.#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.693 2 DEBUG oslo_concurrency.lockutils [None req-e35f2445-3728-490e-b9ec-2b55276923ab 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "e98dd135-f74b-4cac-a0c4-6abd92c62605" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:35 np0005465987 nova_compute[230713]: 2025-10-02 12:07:35.704 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:07:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:36.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:07:36 np0005465987 nova_compute[230713]: 2025-10-02 12:07:36.702 2 DEBUG nova.compute.manager [req-93507cfb-f9ff-4e1b-8720-ad407405b9fc req-74df4424-8dee-43bc-a8a4-ad0b68c690bf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Received event network-changed-4f87305e-6ca7-4098-974d-1977b1bc7747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:07:36 np0005465987 nova_compute[230713]: 2025-10-02 12:07:36.703 2 DEBUG nova.compute.manager [req-93507cfb-f9ff-4e1b-8720-ad407405b9fc req-74df4424-8dee-43bc-a8a4-ad0b68c690bf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Refreshing instance network info cache due to event network-changed-4f87305e-6ca7-4098-974d-1977b1bc7747. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:07:36 np0005465987 nova_compute[230713]: 2025-10-02 12:07:36.703 2 DEBUG oslo_concurrency.lockutils [req-93507cfb-f9ff-4e1b-8720-ad407405b9fc req-74df4424-8dee-43bc-a8a4-ad0b68c690bf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:07:36 np0005465987 nova_compute[230713]: 2025-10-02 12:07:36.703 2 DEBUG oslo_concurrency.lockutils [req-93507cfb-f9ff-4e1b-8720-ad407405b9fc req-74df4424-8dee-43bc-a8a4-ad0b68c690bf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:07:36 np0005465987 nova_compute[230713]: 2025-10-02 12:07:36.704 2 DEBUG nova.network.neutron [req-93507cfb-f9ff-4e1b-8720-ad407405b9fc req-74df4424-8dee-43bc-a8a4-ad0b68c690bf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Refreshing network info cache for port 4f87305e-6ca7-4098-974d-1977b1bc7747 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:07:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:07:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:37.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:37 np0005465987 nova_compute[230713]: 2025-10-02 12:07:37.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:37 np0005465987 nova_compute[230713]: 2025-10-02 12:07:37.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:37 np0005465987 nova_compute[230713]: 2025-10-02 12:07:37.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:37 np0005465987 nova_compute[230713]: 2025-10-02 12:07:37.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:37 np0005465987 nova_compute[230713]: 2025-10-02 12:07:37.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:37 np0005465987 nova_compute[230713]: 2025-10-02 12:07:37.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:37 np0005465987 nova_compute[230713]: 2025-10-02 12:07:37.996 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:07:37 np0005465987 nova_compute[230713]: 2025-10-02 12:07:37.996 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:07:38 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4200805076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:07:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:38.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.409 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.475 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.476 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.480 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.480 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.635 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.636 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4515MB free_disk=20.859054565429688GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.636 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.637 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.705 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 863eacb0-4381-4b09-adb0-93c628d97576 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.706 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance e98dd135-f74b-4cac-a0c4-6abd92c62605 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.706 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.706 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:07:38 np0005465987 nova_compute[230713]: 2025-10-02 12:07:38.759 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:07:39 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2210858305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:07:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:39.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:39 np0005465987 nova_compute[230713]: 2025-10-02 12:07:39.218 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:39 np0005465987 nova_compute[230713]: 2025-10-02 12:07:39.224 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:07:39 np0005465987 nova_compute[230713]: 2025-10-02 12:07:39.238 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:07:39 np0005465987 nova_compute[230713]: 2025-10-02 12:07:39.294 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:07:39 np0005465987 nova_compute[230713]: 2025-10-02 12:07:39.294 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:39 np0005465987 podman[246983]: 2025-10-02 12:07:39.858378888 +0000 UTC m=+0.070071428 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:07:39 np0005465987 podman[246984]: 2025-10-02 12:07:39.861787942 +0000 UTC m=+0.066452668 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:07:39 np0005465987 podman[246982]: 2025-10-02 12:07:39.891313924 +0000 UTC m=+0.107622120 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:07:40 np0005465987 nova_compute[230713]: 2025-10-02 12:07:40.294 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:40 np0005465987 nova_compute[230713]: 2025-10-02 12:07:40.294 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:40 np0005465987 nova_compute[230713]: 2025-10-02 12:07:40.295 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:40 np0005465987 nova_compute[230713]: 2025-10-02 12:07:40.295 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:07:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:40.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:41 np0005465987 nova_compute[230713]: 2025-10-02 12:07:41.096 2 DEBUG nova.network.neutron [req-93507cfb-f9ff-4e1b-8720-ad407405b9fc req-74df4424-8dee-43bc-a8a4-ad0b68c690bf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Updated VIF entry in instance network info cache for port 4f87305e-6ca7-4098-974d-1977b1bc7747. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:07:41 np0005465987 nova_compute[230713]: 2025-10-02 12:07:41.097 2 DEBUG nova.network.neutron [req-93507cfb-f9ff-4e1b-8720-ad407405b9fc req-74df4424-8dee-43bc-a8a4-ad0b68c690bf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Updating instance_info_cache with network_info: [{"id": "4f87305e-6ca7-4098-974d-1977b1bc7747", "address": "fa:16:3e:90:16:0a", "network": {"id": "95daee14-6460-43d2-8a30-748e3256cf3f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-217994894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "087032a23f1343b69dad71ab288c8d03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f87305e-6c", "ovs_interfaceid": "4f87305e-6ca7-4098-974d-1977b1bc7747", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:07:41 np0005465987 nova_compute[230713]: 2025-10-02 12:07:41.116 2 DEBUG oslo_concurrency.lockutils [req-93507cfb-f9ff-4e1b-8720-ad407405b9fc req-74df4424-8dee-43bc-a8a4-ad0b68c690bf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:07:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:41.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:41 np0005465987 nova_compute[230713]: 2025-10-02 12:07:41.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:07:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:07:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:42.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:07:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:07:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:07:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:07:42 np0005465987 nova_compute[230713]: 2025-10-02 12:07:42.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:43 np0005465987 nova_compute[230713]: 2025-10-02 12:07:43.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:43.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:44.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:07:44Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:16:0a 10.100.0.10
Oct  2 08:07:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:07:44Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:16:0a 10.100.0.10
Oct  2 08:07:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:45.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:46.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:07:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:47.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:47 np0005465987 nova_compute[230713]: 2025-10-02 12:07:47.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:47 np0005465987 nova_compute[230713]: 2025-10-02 12:07:47.789 2 DEBUG nova.compute.manager [req-5c9a2177-3c6d-46c3-b59b-9d6bc86a7be9 req-097cb5db-d142-459c-b5c7-69b3ecf615a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Received event network-changed-4f87305e-6ca7-4098-974d-1977b1bc7747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:07:47 np0005465987 nova_compute[230713]: 2025-10-02 12:07:47.789 2 DEBUG nova.compute.manager [req-5c9a2177-3c6d-46c3-b59b-9d6bc86a7be9 req-097cb5db-d142-459c-b5c7-69b3ecf615a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Refreshing instance network info cache due to event network-changed-4f87305e-6ca7-4098-974d-1977b1bc7747. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:07:47 np0005465987 nova_compute[230713]: 2025-10-02 12:07:47.790 2 DEBUG oslo_concurrency.lockutils [req-5c9a2177-3c6d-46c3-b59b-9d6bc86a7be9 req-097cb5db-d142-459c-b5c7-69b3ecf615a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:07:47 np0005465987 nova_compute[230713]: 2025-10-02 12:07:47.790 2 DEBUG oslo_concurrency.lockutils [req-5c9a2177-3c6d-46c3-b59b-9d6bc86a7be9 req-097cb5db-d142-459c-b5c7-69b3ecf615a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:07:47 np0005465987 nova_compute[230713]: 2025-10-02 12:07:47.791 2 DEBUG nova.network.neutron [req-5c9a2177-3c6d-46c3-b59b-9d6bc86a7be9 req-097cb5db-d142-459c-b5c7-69b3ecf615a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Refreshing network info cache for port 4f87305e-6ca7-4098-974d-1977b1bc7747 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:07:48 np0005465987 nova_compute[230713]: 2025-10-02 12:07:48.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:48.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:49 np0005465987 nova_compute[230713]: 2025-10-02 12:07:49.159 2 DEBUG nova.network.neutron [req-5c9a2177-3c6d-46c3-b59b-9d6bc86a7be9 req-097cb5db-d142-459c-b5c7-69b3ecf615a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Updated VIF entry in instance network info cache for port 4f87305e-6ca7-4098-974d-1977b1bc7747. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:07:49 np0005465987 nova_compute[230713]: 2025-10-02 12:07:49.160 2 DEBUG nova.network.neutron [req-5c9a2177-3c6d-46c3-b59b-9d6bc86a7be9 req-097cb5db-d142-459c-b5c7-69b3ecf615a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Updating instance_info_cache with network_info: [{"id": "4f87305e-6ca7-4098-974d-1977b1bc7747", "address": "fa:16:3e:90:16:0a", "network": {"id": "95daee14-6460-43d2-8a30-748e3256cf3f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-217994894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "087032a23f1343b69dad71ab288c8d03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f87305e-6c", "ovs_interfaceid": "4f87305e-6ca7-4098-974d-1977b1bc7747", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:07:49 np0005465987 nova_compute[230713]: 2025-10-02 12:07:49.192 2 DEBUG oslo_concurrency.lockutils [req-5c9a2177-3c6d-46c3-b59b-9d6bc86a7be9 req-097cb5db-d142-459c-b5c7-69b3ecf615a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-863eacb0-4381-4b09-adb0-93c628d97576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:07:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:49.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:07:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3071389385' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:07:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:50.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:07:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:07:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:51.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.198 2 DEBUG oslo_concurrency.lockutils [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Acquiring lock "863eacb0-4381-4b09-adb0-93c628d97576" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.199 2 DEBUG oslo_concurrency.lockutils [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.199 2 DEBUG oslo_concurrency.lockutils [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Acquiring lock "863eacb0-4381-4b09-adb0-93c628d97576-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.199 2 DEBUG oslo_concurrency.lockutils [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.200 2 DEBUG oslo_concurrency.lockutils [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.201 2 INFO nova.compute.manager [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Terminating instance#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.201 2 DEBUG nova.compute.manager [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:07:52 np0005465987 kernel: tap4f87305e-6c (unregistering): left promiscuous mode
Oct  2 08:07:52 np0005465987 NetworkManager[44910]: <info>  [1759406872.2936] device (tap4f87305e-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:07:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:07:52Z|00105|binding|INFO|Releasing lport 4f87305e-6ca7-4098-974d-1977b1bc7747 from this chassis (sb_readonly=0)
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:07:52Z|00106|binding|INFO|Setting lport 4f87305e-6ca7-4098-974d-1977b1bc7747 down in Southbound
Oct  2 08:07:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:07:52Z|00107|binding|INFO|Removing iface tap4f87305e-6c ovn-installed in OVS
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.314 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:16:0a 10.100.0.10'], port_security=['fa:16:3e:90:16:0a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '863eacb0-4381-4b09-adb0-93c628d97576', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95daee14-6460-43d2-8a30-748e3256cf3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '087032a23f1343b69dad71ab288c8d03', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1748a0a5-e9d3-4fc5-a019-12aaecb57007', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56bb32ff-e9b0-4e17-a55b-865690c85cb8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=4f87305e-6ca7-4098-974d-1977b1bc7747) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.315 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 4f87305e-6ca7-4098-974d-1977b1bc7747 in datapath 95daee14-6460-43d2-8a30-748e3256cf3f unbound from our chassis#033[00m
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.316 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 95daee14-6460-43d2-8a30-748e3256cf3f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.317 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[43184aac-2f38-438c-9a9c-b5d41d08fca0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.318 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f namespace which is not needed anymore#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:52 np0005465987 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000023.scope: Deactivated successfully.
Oct  2 08:07:52 np0005465987 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000023.scope: Consumed 14.301s CPU time.
Oct  2 08:07:52 np0005465987 systemd-machined[188335]: Machine qemu-20-instance-00000023 terminated.
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:52.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.436 2 INFO nova.virt.libvirt.driver [-] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Instance destroyed successfully.#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.436 2 DEBUG nova.objects.instance [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lazy-loading 'resources' on Instance uuid 863eacb0-4381-4b09-adb0-93c628d97576 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:07:52 np0005465987 neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f[246534]: [NOTICE]   (246538) : haproxy version is 2.8.14-c23fe91
Oct  2 08:07:52 np0005465987 neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f[246534]: [NOTICE]   (246538) : path to executable is /usr/sbin/haproxy
Oct  2 08:07:52 np0005465987 neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f[246534]: [WARNING]  (246538) : Exiting Master process...
Oct  2 08:07:52 np0005465987 neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f[246534]: [WARNING]  (246538) : Exiting Master process...
Oct  2 08:07:52 np0005465987 neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f[246534]: [ALERT]    (246538) : Current worker (246540) exited with code 143 (Terminated)
Oct  2 08:07:52 np0005465987 neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f[246534]: [WARNING]  (246538) : All workers exited. Exiting... (0)
Oct  2 08:07:52 np0005465987 systemd[1]: libpod-56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da.scope: Deactivated successfully.
Oct  2 08:07:52 np0005465987 podman[247249]: 2025-10-02 12:07:52.45325126 +0000 UTC m=+0.046351795 container died 56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.455 2 DEBUG nova.virt.libvirt.vif [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-452288390',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-452288390',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-452288390',id=35,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:07:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='087032a23f1343b69dad71ab288c8d03',ramdisk_id='',reservation_id='r-5zu2i66r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model=
'virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-2033711876',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-2033711876-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:07:30Z,user_data=None,user_id='7ed46b991b844fe2bfa1d9dedc3d16b9',uuid=863eacb0-4381-4b09-adb0-93c628d97576,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f87305e-6ca7-4098-974d-1977b1bc7747", "address": "fa:16:3e:90:16:0a", "network": {"id": "95daee14-6460-43d2-8a30-748e3256cf3f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-217994894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "087032a23f1343b69dad71ab288c8d03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f87305e-6c", "ovs_interfaceid": "4f87305e-6ca7-4098-974d-1977b1bc7747", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.457 2 DEBUG nova.network.os_vif_util [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Converting VIF {"id": "4f87305e-6ca7-4098-974d-1977b1bc7747", "address": "fa:16:3e:90:16:0a", "network": {"id": "95daee14-6460-43d2-8a30-748e3256cf3f", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-217994894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "087032a23f1343b69dad71ab288c8d03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f87305e-6c", "ovs_interfaceid": "4f87305e-6ca7-4098-974d-1977b1bc7747", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.459 2 DEBUG nova.network.os_vif_util [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:16:0a,bridge_name='br-int',has_traffic_filtering=True,id=4f87305e-6ca7-4098-974d-1977b1bc7747,network=Network(95daee14-6460-43d2-8a30-748e3256cf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f87305e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.459 2 DEBUG os_vif [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:16:0a,bridge_name='br-int',has_traffic_filtering=True,id=4f87305e-6ca7-4098-974d-1977b1bc7747,network=Network(95daee14-6460-43d2-8a30-748e3256cf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f87305e-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f87305e-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.502 2 INFO os_vif [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:16:0a,bridge_name='br-int',has_traffic_filtering=True,id=4f87305e-6ca7-4098-974d-1977b1bc7747,network=Network(95daee14-6460-43d2-8a30-748e3256cf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f87305e-6c')#033[00m
Oct  2 08:07:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:07:52 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1382076632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:07:52 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da-userdata-shm.mount: Deactivated successfully.
Oct  2 08:07:52 np0005465987 systemd[1]: var-lib-containers-storage-overlay-2d8c8a511eee70eb523af1ae638577f26faf6cf60d323bd3ddd634b1224463a9-merged.mount: Deactivated successfully.
Oct  2 08:07:52 np0005465987 podman[247249]: 2025-10-02 12:07:52.543959355 +0000 UTC m=+0.137059880 container cleanup 56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:07:52 np0005465987 systemd[1]: libpod-conmon-56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da.scope: Deactivated successfully.
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.578 2 DEBUG nova.compute.manager [req-f0e98e14-9d78-4d26-95f7-5e2268148aad req-d214fa6a-017f-4d65-a264-9d6fa40607ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Received event network-vif-unplugged-4f87305e-6ca7-4098-974d-1977b1bc7747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.579 2 DEBUG oslo_concurrency.lockutils [req-f0e98e14-9d78-4d26-95f7-5e2268148aad req-d214fa6a-017f-4d65-a264-9d6fa40607ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "863eacb0-4381-4b09-adb0-93c628d97576-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.579 2 DEBUG oslo_concurrency.lockutils [req-f0e98e14-9d78-4d26-95f7-5e2268148aad req-d214fa6a-017f-4d65-a264-9d6fa40607ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.579 2 DEBUG oslo_concurrency.lockutils [req-f0e98e14-9d78-4d26-95f7-5e2268148aad req-d214fa6a-017f-4d65-a264-9d6fa40607ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.580 2 DEBUG nova.compute.manager [req-f0e98e14-9d78-4d26-95f7-5e2268148aad req-d214fa6a-017f-4d65-a264-9d6fa40607ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] No waiting events found dispatching network-vif-unplugged-4f87305e-6ca7-4098-974d-1977b1bc7747 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.580 2 DEBUG nova.compute.manager [req-f0e98e14-9d78-4d26-95f7-5e2268148aad req-d214fa6a-017f-4d65-a264-9d6fa40607ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Received event network-vif-unplugged-4f87305e-6ca7-4098-974d-1977b1bc7747 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:07:52 np0005465987 podman[247306]: 2025-10-02 12:07:52.621181928 +0000 UTC m=+0.054680494 container remove 56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.626 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0c416852-5c0c-456f-9fba-c807fd9c55fe]: (4, ('Thu Oct  2 12:07:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f (56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da)\n56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da\nThu Oct  2 12:07:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f (56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da)\n56b53c2e47227730e222e6ea998ac756a90ae6aa357016050f8e67699ff0d4da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.628 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0f69ec09-af85-446e-bae4-320d83e715b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.629 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95daee14-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:52 np0005465987 kernel: tap95daee14-60: left promiscuous mode
Oct  2 08:07:52 np0005465987 nova_compute[230713]: 2025-10-02 12:07:52.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.650 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[832348db-1afc-4579-b5ff-ffb8ae26cec0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.681 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a8520b55-8121-478a-9ef5-e4540a345c79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.682 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[30c8c5fc-2715-4dea-b29b-6229eaedb36c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.694 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[39902ace-90cb-4856-8fe6-3166fd090ffe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491820, 'reachable_time': 37522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247321, 'error': None, 'target': 'ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.697 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-95daee14-6460-43d2-8a30-748e3256cf3f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:07:52 np0005465987 systemd[1]: run-netns-ovnmeta\x2d95daee14\x2d6460\x2d43d2\x2d8a30\x2d748e3256cf3f.mount: Deactivated successfully.
Oct  2 08:07:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:07:52.697 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[25ebc611-00c8-4b33-a819-fc228cf7d1da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.032 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "daeabf52-b90b-4f52-9065-119a5324777b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.033 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "daeabf52-b90b-4f52-9065-119a5324777b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.050 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.130 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.130 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.137 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.138 2 INFO nova.compute.claims [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.177 2 INFO nova.virt.libvirt.driver [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Deleting instance files /var/lib/nova/instances/863eacb0-4381-4b09-adb0-93c628d97576_del#033[00m
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.178 2 INFO nova.virt.libvirt.driver [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Deletion of /var/lib/nova/instances/863eacb0-4381-4b09-adb0-93c628d97576_del complete#033[00m
Oct  2 08:07:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:53.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.224 2 INFO nova.compute.manager [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.225 2 DEBUG oslo.service.loopingcall [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.226 2 DEBUG nova.compute.manager [-] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.226 2 DEBUG nova.network.neutron [-] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.285 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:07:53 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/574804538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.731 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.737 2 DEBUG nova.compute.provider_tree [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.761 2 DEBUG nova.scheduler.client.report [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.791 2 DEBUG nova.network.neutron [-] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.794 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.831 2 INFO nova.compute.manager [-] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Took 0.60 seconds to deallocate network for instance.
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.867 2 DEBUG nova.compute.manager [req-e7d3fd25-0ec1-4ec7-9577-a2a0917db3f6 req-06b1e0ec-309f-4643-a5f0-ba75ccac9f86 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Received event network-vif-deleted-4f87305e-6ca7-4098-974d-1977b1bc7747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.868 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "291b0dc2-0839-4e82-9af2-898ec4675ef3" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.868 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "291b0dc2-0839-4e82-9af2-898ec4675ef3" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.901 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] No node specified, defaulting to compute-1.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.907 2 DEBUG oslo_concurrency.lockutils [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.907 2 DEBUG oslo_concurrency.lockutils [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.960 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "291b0dc2-0839-4e82-9af2-898ec4675ef3" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:07:53 np0005465987 nova_compute[230713]: 2025-10-02 12:07:53.961 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.020 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.021 2 DEBUG nova.network.neutron [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.042 2 INFO nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.046 2 DEBUG oslo_concurrency.processutils [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.078 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.169 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.170 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.171 2 INFO nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Creating image(s)
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.207 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image daeabf52-b90b-4f52-9065-119a5324777b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.238 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image daeabf52-b90b-4f52-9065-119a5324777b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.270 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image daeabf52-b90b-4f52-9065-119a5324777b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.274 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.348 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.349 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.350 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.350 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.382 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image daeabf52-b90b-4f52-9065-119a5324777b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.387 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 daeabf52-b90b-4f52-9065-119a5324777b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:07:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:54.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.446 2 DEBUG nova.network.neutron [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.447 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:07:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:07:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3063043164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.534 2 DEBUG oslo_concurrency.processutils [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.539 2 DEBUG nova.compute.provider_tree [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.555 2 DEBUG nova.scheduler.client.report [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.579 2 DEBUG oslo_concurrency.lockutils [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.635 2 INFO nova.scheduler.client.report [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Deleted allocations for instance 863eacb0-4381-4b09-adb0-93c628d97576
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.650 2 DEBUG nova.compute.manager [req-74e7b2e3-d264-4523-b07a-e7b5c3e5af5f req-41e5e4cc-e150-4885-8d36-2f24d4ea034c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Received event network-vif-plugged-4f87305e-6ca7-4098-974d-1977b1bc7747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.650 2 DEBUG oslo_concurrency.lockutils [req-74e7b2e3-d264-4523-b07a-e7b5c3e5af5f req-41e5e4cc-e150-4885-8d36-2f24d4ea034c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "863eacb0-4381-4b09-adb0-93c628d97576-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.651 2 DEBUG oslo_concurrency.lockutils [req-74e7b2e3-d264-4523-b07a-e7b5c3e5af5f req-41e5e4cc-e150-4885-8d36-2f24d4ea034c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.651 2 DEBUG oslo_concurrency.lockutils [req-74e7b2e3-d264-4523-b07a-e7b5c3e5af5f req-41e5e4cc-e150-4885-8d36-2f24d4ea034c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.651 2 DEBUG nova.compute.manager [req-74e7b2e3-d264-4523-b07a-e7b5c3e5af5f req-41e5e4cc-e150-4885-8d36-2f24d4ea034c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] No waiting events found dispatching network-vif-plugged-4f87305e-6ca7-4098-974d-1977b1bc7747 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.651 2 WARNING nova.compute.manager [req-74e7b2e3-d264-4523-b07a-e7b5c3e5af5f req-41e5e4cc-e150-4885-8d36-2f24d4ea034c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Received unexpected event network-vif-plugged-4f87305e-6ca7-4098-974d-1977b1bc7747 for instance with vm_state deleted and task_state None.
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.698 2 DEBUG oslo_concurrency.lockutils [None req-2d566286-5610-4698-86b7-5b655881c647 7ed46b991b844fe2bfa1d9dedc3d16b9 087032a23f1343b69dad71ab288c8d03 - - default default] Lock "863eacb0-4381-4b09-adb0-93c628d97576" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.822 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 daeabf52-b90b-4f52-9065-119a5324777b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:07:54 np0005465987 nova_compute[230713]: 2025-10-02 12:07:54.916 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] resizing rbd image daeabf52-b90b-4f52-9065-119a5324777b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.133 2 DEBUG nova.objects.instance [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lazy-loading 'migration_context' on Instance uuid daeabf52-b90b-4f52-9065-119a5324777b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.146 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.147 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Ensure instance console log exists: /var/lib/nova/instances/daeabf52-b90b-4f52-9065-119a5324777b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.147 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.147 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.147 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.149 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.152 2 WARNING nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.155 2 DEBUG nova.virt.libvirt.host [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.155 2 DEBUG nova.virt.libvirt.host [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.158 2 DEBUG nova.virt.libvirt.host [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.158 2 DEBUG nova.virt.libvirt.host [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.159 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.159 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.160 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.160 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.160 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.160 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.160 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.160 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.161 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.161 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.161 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.161 2 DEBUG nova.virt.hardware [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.163 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:55.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:07:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/46126558' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.592 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.619 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image daeabf52-b90b-4f52-9065-119a5324777b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:55 np0005465987 nova_compute[230713]: 2025-10-02 12:07:55.623 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:07:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/421099192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.070 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.073 2 DEBUG nova.objects.instance [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lazy-loading 'pci_devices' on Instance uuid daeabf52-b90b-4f52-9065-119a5324777b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.096 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  <uuid>daeabf52-b90b-4f52-9065-119a5324777b</uuid>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  <name>instance-00000029</name>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServersOnMultiNodesTest-server-2057920136-2</nova:name>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:07:55</nova:creationTime>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <nova:user uuid="199c0d9541a04c4db07e50bfba9fddb1">tempest-ServersOnMultiNodesTest-10966744-project-member</nova:user>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <nova:project uuid="62aa9c47ee2841139cd7066168f59650">tempest-ServersOnMultiNodesTest-10966744</nova:project>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <entry name="serial">daeabf52-b90b-4f52-9065-119a5324777b</entry>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <entry name="uuid">daeabf52-b90b-4f52-9065-119a5324777b</entry>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/daeabf52-b90b-4f52-9065-119a5324777b_disk">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/daeabf52-b90b-4f52-9065-119a5324777b_disk.config">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/daeabf52-b90b-4f52-9065-119a5324777b/console.log" append="off"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:07:56 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:07:56 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:07:56 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:07:56 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.148 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.148 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.149 2 INFO nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Using config drive#033[00m
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.173 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image daeabf52-b90b-4f52-9065-119a5324777b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.345 2 INFO nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Creating config drive at /var/lib/nova/instances/daeabf52-b90b-4f52-9065-119a5324777b/disk.config#033[00m
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.350 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/daeabf52-b90b-4f52-9065-119a5324777b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ckez21i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:56.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.479 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/daeabf52-b90b-4f52-9065-119a5324777b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ckez21i" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.503 2 DEBUG nova.storage.rbd_utils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] rbd image daeabf52-b90b-4f52-9065-119a5324777b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.506 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/daeabf52-b90b-4f52-9065-119a5324777b/disk.config daeabf52-b90b-4f52-9065-119a5324777b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.657 2 DEBUG oslo_concurrency.processutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/daeabf52-b90b-4f52-9065-119a5324777b/disk.config daeabf52-b90b-4f52-9065-119a5324777b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:56 np0005465987 nova_compute[230713]: 2025-10-02 12:07:56.659 2 INFO nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Deleting local config drive /var/lib/nova/instances/daeabf52-b90b-4f52-9065-119a5324777b/disk.config because it was imported into RBD.#033[00m
Oct  2 08:07:56 np0005465987 systemd-machined[188335]: New machine qemu-22-instance-00000029.
Oct  2 08:07:56 np0005465987 systemd[1]: Started Virtual Machine qemu-22-instance-00000029.
Oct  2 08:07:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:07:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:57.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.543 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406877.5433455, daeabf52-b90b-4f52-9065-119a5324777b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.544 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: daeabf52-b90b-4f52-9065-119a5324777b] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.546 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.546 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.551 2 INFO nova.virt.libvirt.driver [-] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Instance spawned successfully.#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.551 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.617 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.622 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.625 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.625 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.626 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.626 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.626 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.627 2 DEBUG nova.virt.libvirt.driver [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.767 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: daeabf52-b90b-4f52-9065-119a5324777b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.768 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406877.5442283, daeabf52-b90b-4f52-9065-119a5324777b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.768 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: daeabf52-b90b-4f52-9065-119a5324777b] VM Started (Lifecycle Event)#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.926 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.931 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.991 2 INFO nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Took 3.82 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:07:57 np0005465987 nova_compute[230713]: 2025-10-02 12:07:57.992 2 DEBUG nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:07:58 np0005465987 nova_compute[230713]: 2025-10-02 12:07:58.040 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: daeabf52-b90b-4f52-9065-119a5324777b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:07:58 np0005465987 nova_compute[230713]: 2025-10-02 12:07:58.209 2 INFO nova.compute.manager [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Took 5.11 seconds to build instance.#033[00m
Oct  2 08:07:58 np0005465987 nova_compute[230713]: 2025-10-02 12:07:58.233 2 DEBUG oslo_concurrency.lockutils [None req-3f9d936e-cf9a-44c7-b86c-df1a52877f10 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "daeabf52-b90b-4f52-9065-119a5324777b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:07:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:58.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:07:59 np0005465987 nova_compute[230713]: 2025-10-02 12:07:59.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:07:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:07:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:59.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:59 np0005465987 nova_compute[230713]: 2025-10-02 12:07:59.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:00.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:01.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:01 np0005465987 nova_compute[230713]: 2025-10-02 12:08:01.823 2 DEBUG oslo_concurrency.lockutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "daeabf52-b90b-4f52-9065-119a5324777b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:01 np0005465987 nova_compute[230713]: 2025-10-02 12:08:01.825 2 DEBUG oslo_concurrency.lockutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "daeabf52-b90b-4f52-9065-119a5324777b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:01 np0005465987 nova_compute[230713]: 2025-10-02 12:08:01.825 2 DEBUG oslo_concurrency.lockutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "daeabf52-b90b-4f52-9065-119a5324777b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:01 np0005465987 nova_compute[230713]: 2025-10-02 12:08:01.826 2 DEBUG oslo_concurrency.lockutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "daeabf52-b90b-4f52-9065-119a5324777b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:01 np0005465987 nova_compute[230713]: 2025-10-02 12:08:01.826 2 DEBUG oslo_concurrency.lockutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "daeabf52-b90b-4f52-9065-119a5324777b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:01 np0005465987 nova_compute[230713]: 2025-10-02 12:08:01.827 2 INFO nova.compute.manager [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Terminating instance#033[00m
Oct  2 08:08:01 np0005465987 nova_compute[230713]: 2025-10-02 12:08:01.829 2 DEBUG oslo_concurrency.lockutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "refresh_cache-daeabf52-b90b-4f52-9065-119a5324777b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:08:01 np0005465987 nova_compute[230713]: 2025-10-02 12:08:01.829 2 DEBUG oslo_concurrency.lockutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquired lock "refresh_cache-daeabf52-b90b-4f52-9065-119a5324777b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:08:01 np0005465987 nova_compute[230713]: 2025-10-02 12:08:01.829 2 DEBUG nova.network.neutron [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:08:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:02 np0005465987 nova_compute[230713]: 2025-10-02 12:08:02.018 2 DEBUG nova.network.neutron [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:08:02 np0005465987 nova_compute[230713]: 2025-10-02 12:08:02.429 2 DEBUG nova.network.neutron [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:08:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:02.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:02 np0005465987 nova_compute[230713]: 2025-10-02 12:08:02.445 2 DEBUG oslo_concurrency.lockutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Releasing lock "refresh_cache-daeabf52-b90b-4f52-9065-119a5324777b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:08:02 np0005465987 nova_compute[230713]: 2025-10-02 12:08:02.446 2 DEBUG nova.compute.manager [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:08:02 np0005465987 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000029.scope: Deactivated successfully.
Oct  2 08:08:02 np0005465987 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000029.scope: Consumed 5.806s CPU time.
Oct  2 08:08:02 np0005465987 systemd-machined[188335]: Machine qemu-22-instance-00000029 terminated.
Oct  2 08:08:02 np0005465987 nova_compute[230713]: 2025-10-02 12:08:02.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:02 np0005465987 nova_compute[230713]: 2025-10-02 12:08:02.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:02 np0005465987 nova_compute[230713]: 2025-10-02 12:08:02.667 2 INFO nova.virt.libvirt.driver [-] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Instance destroyed successfully.#033[00m
Oct  2 08:08:02 np0005465987 nova_compute[230713]: 2025-10-02 12:08:02.669 2 DEBUG nova.objects.instance [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lazy-loading 'resources' on Instance uuid daeabf52-b90b-4f52-9065-119a5324777b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:08:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:08:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:03.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:08:03 np0005465987 podman[247736]: 2025-10-02 12:08:03.83550842 +0000 UTC m=+0.053642786 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:08:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:04.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.125 2 INFO nova.virt.libvirt.driver [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Deleting instance files /var/lib/nova/instances/daeabf52-b90b-4f52-9065-119a5324777b_del#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.126 2 INFO nova.virt.libvirt.driver [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Deletion of /var/lib/nova/instances/daeabf52-b90b-4f52-9065-119a5324777b_del complete#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.172 2 INFO nova.compute.manager [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Took 2.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.173 2 DEBUG oslo.service.loopingcall [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.173 2 DEBUG nova.compute.manager [-] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.174 2 DEBUG nova.network.neutron [-] [instance: daeabf52-b90b-4f52-9065-119a5324777b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:08:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:08:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:05.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.319 2 DEBUG nova.network.neutron [-] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.329 2 DEBUG nova.network.neutron [-] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.356 2 INFO nova.compute.manager [-] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Took 0.18 seconds to deallocate network for instance.#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.399 2 DEBUG oslo_concurrency.lockutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.399 2 DEBUG oslo_concurrency.lockutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.469 2 DEBUG oslo_concurrency.processutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:08:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:08:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1827984313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.904 2 DEBUG oslo_concurrency.processutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.913 2 DEBUG nova.compute.provider_tree [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.936 2 DEBUG nova.scheduler.client.report [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.961 2 DEBUG oslo_concurrency.lockutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:05 np0005465987 nova_compute[230713]: 2025-10-02 12:08:05.994 2 INFO nova.scheduler.client.report [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Deleted allocations for instance daeabf52-b90b-4f52-9065-119a5324777b#033[00m
Oct  2 08:08:06 np0005465987 nova_compute[230713]: 2025-10-02 12:08:06.099 2 DEBUG oslo_concurrency.lockutils [None req-efa43a80-097b-43cd-abc8-65dad6e384f3 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "daeabf52-b90b-4f52-9065-119a5324777b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:06.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:08:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:07.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:08:07 np0005465987 nova_compute[230713]: 2025-10-02 12:08:07.434 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406872.4331684, 863eacb0-4381-4b09-adb0-93c628d97576 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:08:07 np0005465987 nova_compute[230713]: 2025-10-02 12:08:07.435 2 INFO nova.compute.manager [-] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:08:07 np0005465987 nova_compute[230713]: 2025-10-02 12:08:07.467 2 DEBUG nova.compute.manager [None req-6097353a-9074-479a-8679-388c670c1da8 - - - - - -] [instance: 863eacb0-4381-4b09-adb0-93c628d97576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:08:07 np0005465987 nova_compute[230713]: 2025-10-02 12:08:07.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:07 np0005465987 nova_compute[230713]: 2025-10-02 12:08:07.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:08.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:09.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:09 np0005465987 nova_compute[230713]: 2025-10-02 12:08:09.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:08:09.814 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:08:09.815 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:08:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:08:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4233098128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:08:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:10.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:10 np0005465987 podman[247780]: 2025-10-02 12:08:10.879061898 +0000 UTC m=+0.082049328 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:08:10 np0005465987 podman[247779]: 2025-10-02 12:08:10.898190664 +0000 UTC m=+0.101160624 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:08:10 np0005465987 podman[247778]: 2025-10-02 12:08:10.921619268 +0000 UTC m=+0.130997674 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Oct  2 08:08:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:11.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:12.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:12 np0005465987 nova_compute[230713]: 2025-10-02 12:08:12.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:12 np0005465987 nova_compute[230713]: 2025-10-02 12:08:12.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:13.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:08:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:14.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:08:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:15.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:15 np0005465987 nova_compute[230713]: 2025-10-02 12:08:15.254 2 DEBUG oslo_concurrency.lockutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "e98dd135-f74b-4cac-a0c4-6abd92c62605" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:15 np0005465987 nova_compute[230713]: 2025-10-02 12:08:15.255 2 DEBUG oslo_concurrency.lockutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "e98dd135-f74b-4cac-a0c4-6abd92c62605" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:15 np0005465987 nova_compute[230713]: 2025-10-02 12:08:15.255 2 DEBUG oslo_concurrency.lockutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "e98dd135-f74b-4cac-a0c4-6abd92c62605-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:15 np0005465987 nova_compute[230713]: 2025-10-02 12:08:15.256 2 DEBUG oslo_concurrency.lockutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "e98dd135-f74b-4cac-a0c4-6abd92c62605-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:15 np0005465987 nova_compute[230713]: 2025-10-02 12:08:15.256 2 DEBUG oslo_concurrency.lockutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "e98dd135-f74b-4cac-a0c4-6abd92c62605-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:15 np0005465987 nova_compute[230713]: 2025-10-02 12:08:15.258 2 INFO nova.compute.manager [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Terminating instance#033[00m
Oct  2 08:08:15 np0005465987 nova_compute[230713]: 2025-10-02 12:08:15.259 2 DEBUG oslo_concurrency.lockutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "refresh_cache-e98dd135-f74b-4cac-a0c4-6abd92c62605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:08:15 np0005465987 nova_compute[230713]: 2025-10-02 12:08:15.260 2 DEBUG oslo_concurrency.lockutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquired lock "refresh_cache-e98dd135-f74b-4cac-a0c4-6abd92c62605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:08:15 np0005465987 nova_compute[230713]: 2025-10-02 12:08:15.260 2 DEBUG nova.network.neutron [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:08:15 np0005465987 nova_compute[230713]: 2025-10-02 12:08:15.512 2 DEBUG nova.network.neutron [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:08:15 np0005465987 nova_compute[230713]: 2025-10-02 12:08:15.814 2 DEBUG nova.network.neutron [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:08:15 np0005465987 nova_compute[230713]: 2025-10-02 12:08:15.866 2 DEBUG oslo_concurrency.lockutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Releasing lock "refresh_cache-e98dd135-f74b-4cac-a0c4-6abd92c62605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:08:15 np0005465987 nova_compute[230713]: 2025-10-02 12:08:15.867 2 DEBUG nova.compute.manager [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:08:16 np0005465987 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000025.scope: Deactivated successfully.
Oct  2 08:08:16 np0005465987 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000025.scope: Consumed 15.232s CPU time.
Oct  2 08:08:16 np0005465987 systemd-machined[188335]: Machine qemu-21-instance-00000025 terminated.
Oct  2 08:08:16 np0005465987 nova_compute[230713]: 2025-10-02 12:08:16.091 2 INFO nova.virt.libvirt.driver [-] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Instance destroyed successfully.#033[00m
Oct  2 08:08:16 np0005465987 nova_compute[230713]: 2025-10-02 12:08:16.092 2 DEBUG nova.objects.instance [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lazy-loading 'resources' on Instance uuid e98dd135-f74b-4cac-a0c4-6abd92c62605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:08:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:16.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:17.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:17 np0005465987 nova_compute[230713]: 2025-10-02 12:08:17.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:17 np0005465987 nova_compute[230713]: 2025-10-02 12:08:17.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:17 np0005465987 nova_compute[230713]: 2025-10-02 12:08:17.665 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406882.6646004, daeabf52-b90b-4f52-9065-119a5324777b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:08:17 np0005465987 nova_compute[230713]: 2025-10-02 12:08:17.665 2 INFO nova.compute.manager [-] [instance: daeabf52-b90b-4f52-9065-119a5324777b] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:08:17 np0005465987 nova_compute[230713]: 2025-10-02 12:08:17.685 2 DEBUG nova.compute.manager [None req-dc4d8d69-eff4-4b5b-b6ec-a69b110e5e21 - - - - - -] [instance: daeabf52-b90b-4f52-9065-119a5324777b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:08:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:08:17.817 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:08:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:18.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:19.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:20.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:08:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:21.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:08:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:08:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2127436171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:08:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:08:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2127436171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:08:21 np0005465987 nova_compute[230713]: 2025-10-02 12:08:21.588 2 INFO nova.virt.libvirt.driver [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Deleting instance files /var/lib/nova/instances/e98dd135-f74b-4cac-a0c4-6abd92c62605_del#033[00m
Oct  2 08:08:21 np0005465987 nova_compute[230713]: 2025-10-02 12:08:21.589 2 INFO nova.virt.libvirt.driver [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Deletion of /var/lib/nova/instances/e98dd135-f74b-4cac-a0c4-6abd92c62605_del complete#033[00m
Oct  2 08:08:21 np0005465987 nova_compute[230713]: 2025-10-02 12:08:21.799 2 INFO nova.compute.manager [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Took 5.93 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:08:21 np0005465987 nova_compute[230713]: 2025-10-02 12:08:21.800 2 DEBUG oslo.service.loopingcall [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:08:21 np0005465987 nova_compute[230713]: 2025-10-02 12:08:21.801 2 DEBUG nova.compute.manager [-] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:08:21 np0005465987 nova_compute[230713]: 2025-10-02 12:08:21.801 2 DEBUG nova.network.neutron [-] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:08:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.122 2 DEBUG nova.network.neutron [-] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.147 2 DEBUG nova.network.neutron [-] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.163 2 INFO nova.compute.manager [-] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Took 0.36 seconds to deallocate network for instance.#033[00m
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.208 2 DEBUG oslo_concurrency.lockutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.208 2 DEBUG oslo_concurrency.lockutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.324 2 DEBUG oslo_concurrency.processutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:08:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000054s ======
Oct  2 08:08:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:22.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:08:22 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/827202599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:08:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:08:22 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1446007030' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:08:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:08:22 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1446007030' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.789 2 DEBUG oslo_concurrency.processutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.796 2 DEBUG nova.compute.provider_tree [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.814 2 DEBUG nova.scheduler.client.report [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.838 2 DEBUG oslo_concurrency.lockutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.861 2 INFO nova.scheduler.client.report [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Deleted allocations for instance e98dd135-f74b-4cac-a0c4-6abd92c62605#033[00m
Oct  2 08:08:22 np0005465987 nova_compute[230713]: 2025-10-02 12:08:22.941 2 DEBUG oslo_concurrency.lockutils [None req-42d2f175-227d-4ca5-9751-02c651e92005 199c0d9541a04c4db07e50bfba9fddb1 62aa9c47ee2841139cd7066168f59650 - - default default] Lock "e98dd135-f74b-4cac-a0c4-6abd92c62605" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:23.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:24.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:25.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e163 e163: 3 total, 3 up, 3 in
Oct  2 08:08:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:26.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:27.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:27 np0005465987 nova_compute[230713]: 2025-10-02 12:08:27.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:27 np0005465987 nova_compute[230713]: 2025-10-02 12:08:27.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:08:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2285490029' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:08:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:08:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2285490029' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:08:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:08:27.936 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:08:27.937 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:08:27.937 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:28.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:29.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:29 np0005465987 nova_compute[230713]: 2025-10-02 12:08:29.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:30.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:31 np0005465987 nova_compute[230713]: 2025-10-02 12:08:31.088 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406896.0874946, e98dd135-f74b-4cac-a0c4-6abd92c62605 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:08:31 np0005465987 nova_compute[230713]: 2025-10-02 12:08:31.089 2 INFO nova.compute.manager [-] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:08:31 np0005465987 nova_compute[230713]: 2025-10-02 12:08:31.124 2 DEBUG nova.compute.manager [None req-a2414711-6617-4a81-aca1-418b030b39fe - - - - - -] [instance: e98dd135-f74b-4cac-a0c4-6abd92c62605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:08:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:31.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:32 np0005465987 nova_compute[230713]: 2025-10-02 12:08:32.054 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:32 np0005465987 nova_compute[230713]: 2025-10-02 12:08:32.055 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:08:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:32.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:32 np0005465987 nova_compute[230713]: 2025-10-02 12:08:32.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:32 np0005465987 nova_compute[230713]: 2025-10-02 12:08:32.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:32 np0005465987 nova_compute[230713]: 2025-10-02 12:08:32.979 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:32 np0005465987 nova_compute[230713]: 2025-10-02 12:08:32.980 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:08:32 np0005465987 nova_compute[230713]: 2025-10-02 12:08:32.980 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:08:32 np0005465987 nova_compute[230713]: 2025-10-02 12:08:32.994 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:08:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e164 e164: 3 total, 3 up, 3 in
Oct  2 08:08:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:33.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:33 np0005465987 nova_compute[230713]: 2025-10-02 12:08:33.973 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:34.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:34 np0005465987 podman[247882]: 2025-10-02 12:08:34.873199865 +0000 UTC m=+0.083032585 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  2 08:08:34 np0005465987 nova_compute[230713]: 2025-10-02 12:08:34.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:35.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:35 np0005465987 nova_compute[230713]: 2025-10-02 12:08:35.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:36.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:36 np0005465987 nova_compute[230713]: 2025-10-02 12:08:36.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:36 np0005465987 nova_compute[230713]: 2025-10-02 12:08:36.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:08:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:37 np0005465987 nova_compute[230713]: 2025-10-02 12:08:37.026 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:08:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:08:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:37.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:08:37 np0005465987 nova_compute[230713]: 2025-10-02 12:08:37.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:37 np0005465987 nova_compute[230713]: 2025-10-02 12:08:37.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:38 np0005465987 nova_compute[230713]: 2025-10-02 12:08:38.026 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:38 np0005465987 nova_compute[230713]: 2025-10-02 12:08:38.130 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:38 np0005465987 nova_compute[230713]: 2025-10-02 12:08:38.130 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:38 np0005465987 nova_compute[230713]: 2025-10-02 12:08:38.131 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:38 np0005465987 nova_compute[230713]: 2025-10-02 12:08:38.131 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:08:38 np0005465987 nova_compute[230713]: 2025-10-02 12:08:38.131 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:08:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:38.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:08:38 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3749972049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:08:38 np0005465987 nova_compute[230713]: 2025-10-02 12:08:38.581 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:08:38 np0005465987 nova_compute[230713]: 2025-10-02 12:08:38.769 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:08:38 np0005465987 nova_compute[230713]: 2025-10-02 12:08:38.771 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4771MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:08:38 np0005465987 nova_compute[230713]: 2025-10-02 12:08:38.772 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:38 np0005465987 nova_compute[230713]: 2025-10-02 12:08:38.772 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:39 np0005465987 nova_compute[230713]: 2025-10-02 12:08:39.009 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:08:39 np0005465987 nova_compute[230713]: 2025-10-02 12:08:39.009 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:08:39 np0005465987 nova_compute[230713]: 2025-10-02 12:08:39.058 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:08:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:08:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:39.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:08:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:08:39 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2462526934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:08:39 np0005465987 nova_compute[230713]: 2025-10-02 12:08:39.511 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:08:39 np0005465987 nova_compute[230713]: 2025-10-02 12:08:39.519 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:08:39 np0005465987 nova_compute[230713]: 2025-10-02 12:08:39.540 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:08:39 np0005465987 nova_compute[230713]: 2025-10-02 12:08:39.572 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:08:39 np0005465987 nova_compute[230713]: 2025-10-02 12:08:39.572 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:08:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:40.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:08:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:41.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:41 np0005465987 nova_compute[230713]: 2025-10-02 12:08:41.512 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:41 np0005465987 nova_compute[230713]: 2025-10-02 12:08:41.513 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:41 np0005465987 nova_compute[230713]: 2025-10-02 12:08:41.513 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:41 np0005465987 nova_compute[230713]: 2025-10-02 12:08:41.513 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:08:41 np0005465987 podman[247948]: 2025-10-02 12:08:41.850739019 +0000 UTC m=+0.057166983 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:08:41 np0005465987 podman[247949]: 2025-10-02 12:08:41.880474196 +0000 UTC m=+0.072745131 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:08:41 np0005465987 podman[247947]: 2025-10-02 12:08:41.888681022 +0000 UTC m=+0.093420020 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  2 08:08:41 np0005465987 nova_compute[230713]: 2025-10-02 12:08:41.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:08:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:42.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:08:42 np0005465987 nova_compute[230713]: 2025-10-02 12:08:42.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:42 np0005465987 nova_compute[230713]: 2025-10-02 12:08:42.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:43.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:44.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e165 e165: 3 total, 3 up, 3 in
Oct  2 08:08:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:08:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:45.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:08:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:46.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:47.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e166 e166: 3 total, 3 up, 3 in
Oct  2 08:08:47 np0005465987 nova_compute[230713]: 2025-10-02 12:08:47.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:08:47 np0005465987 nova_compute[230713]: 2025-10-02 12:08:47.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:08:47 np0005465987 nova_compute[230713]: 2025-10-02 12:08:47.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 08:08:47 np0005465987 nova_compute[230713]: 2025-10-02 12:08:47.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:08:47 np0005465987 nova_compute[230713]: 2025-10-02 12:08:47.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:47 np0005465987 nova_compute[230713]: 2025-10-02 12:08:47.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:08:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:48.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:49.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.574734) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406929574764, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2699, "num_deletes": 505, "total_data_size": 5439719, "memory_usage": 5530624, "flush_reason": "Manual Compaction"}
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406929606782, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3338723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28111, "largest_seqno": 30803, "table_properties": {"data_size": 3328772, "index_size": 5677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 25702, "raw_average_key_size": 20, "raw_value_size": 3306237, "raw_average_value_size": 2630, "num_data_blocks": 247, "num_entries": 1257, "num_filter_entries": 1257, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406735, "oldest_key_time": 1759406735, "file_creation_time": 1759406929, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 32091 microseconds, and 6383 cpu microseconds.
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.606822) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3338723 bytes OK
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.606839) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.612771) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.612789) EVENT_LOG_v1 {"time_micros": 1759406929612783, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.612807) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5426969, prev total WAL file size 5426969, number of live WAL files 2.
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.614109) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3260KB)], [57(10MB)]
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406929614207, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 13998395, "oldest_snapshot_seqno": -1}
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5376 keys, 8583229 bytes, temperature: kUnknown
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406929674135, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 8583229, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8547773, "index_size": 20946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 137197, "raw_average_key_size": 25, "raw_value_size": 8451397, "raw_average_value_size": 1572, "num_data_blocks": 843, "num_entries": 5376, "num_filter_entries": 5376, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759406929, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.674389) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8583229 bytes
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.677779) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 233.2 rd, 143.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.2 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(6.8) write-amplify(2.6) OK, records in: 6388, records dropped: 1012 output_compression: NoCompression
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.677800) EVENT_LOG_v1 {"time_micros": 1759406929677790, "job": 34, "event": "compaction_finished", "compaction_time_micros": 60017, "compaction_time_cpu_micros": 20591, "output_level": 6, "num_output_files": 1, "total_output_size": 8583229, "num_input_records": 6388, "num_output_records": 5376, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406929678546, "job": 34, "event": "table_file_deletion", "file_number": 59}
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406929681002, "job": 34, "event": "table_file_deletion", "file_number": 57}
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.613988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.681038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.681043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.681045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.681047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:08:49 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:08:49.681049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:08:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:50.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:51.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:51 np0005465987 podman[248184]: 2025-10-02 12:08:51.886991245 +0000 UTC m=+0.094081477 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 08:08:51 np0005465987 podman[248184]: 2025-10-02 12:08:51.970882472 +0000 UTC m=+0.177972694 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct  2 08:08:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:52.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:52 np0005465987 nova_compute[230713]: 2025-10-02 12:08:52.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:08:52 np0005465987 nova_compute[230713]: 2025-10-02 12:08:52.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:08:52 np0005465987 nova_compute[230713]: 2025-10-02 12:08:52.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 08:08:52 np0005465987 nova_compute[230713]: 2025-10-02 12:08:52.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:08:52 np0005465987 nova_compute[230713]: 2025-10-02 12:08:52.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:52 np0005465987 nova_compute[230713]: 2025-10-02 12:08:52.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:08:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:08:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:08:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e167 e167: 3 total, 3 up, 3 in
Oct  2 08:08:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:53.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:54 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:08:54 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:08:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:54.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:08:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4268355803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:08:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:08:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4268355803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:08:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:08:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:55.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:08:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:08:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:08:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:08:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:08:55.794 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:08:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:08:55.795 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:08:55 np0005465987 nova_compute[230713]: 2025-10-02 12:08:55.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:56.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:08:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:57.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:08:57 np0005465987 nova_compute[230713]: 2025-10-02 12:08:57.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:08:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:58.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:08:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:59.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:00.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:09:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:09:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:01.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:02.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:02 np0005465987 nova_compute[230713]: 2025-10-02 12:09:02.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:09:02 np0005465987 nova_compute[230713]: 2025-10-02 12:09:02.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:09:02 np0005465987 nova_compute[230713]: 2025-10-02 12:09:02.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 08:09:02 np0005465987 nova_compute[230713]: 2025-10-02 12:09:02.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:09:02 np0005465987 nova_compute[230713]: 2025-10-02 12:09:02.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:02 np0005465987 nova_compute[230713]: 2025-10-02 12:09:02.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:09:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:03.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:04.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:04.797 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:09:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:05.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:05 np0005465987 podman[248490]: 2025-10-02 12:09:05.842684338 +0000 UTC m=+0.056947007 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:09:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:06.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:07 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:07Z|00108|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct  2 08:09:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:07.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:07 np0005465987 nova_compute[230713]: 2025-10-02 12:09:07.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:08.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:09.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:10.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:11.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:12.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:12 np0005465987 podman[248513]: 2025-10-02 12:09:12.844223001 +0000 UTC m=+0.061739339 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:09:12 np0005465987 nova_compute[230713]: 2025-10-02 12:09:12.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:09:12 np0005465987 nova_compute[230713]: 2025-10-02 12:09:12.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:12 np0005465987 nova_compute[230713]: 2025-10-02 12:09:12.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 08:09:12 np0005465987 nova_compute[230713]: 2025-10-02 12:09:12.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:09:12 np0005465987 nova_compute[230713]: 2025-10-02 12:09:12.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:09:12 np0005465987 nova_compute[230713]: 2025-10-02 12:09:12.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:12 np0005465987 podman[248512]: 2025-10-02 12:09:12.856436557 +0000 UTC m=+0.078765507 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:09:12 np0005465987 podman[248514]: 2025-10-02 12:09:12.871405809 +0000 UTC m=+0.087636421 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:09:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:13.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:14 np0005465987 nova_compute[230713]: 2025-10-02 12:09:14.177 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Acquiring lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:14 np0005465987 nova_compute[230713]: 2025-10-02 12:09:14.178 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:14 np0005465987 nova_compute[230713]: 2025-10-02 12:09:14.203 2 DEBUG nova.compute.manager [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:09:14 np0005465987 nova_compute[230713]: 2025-10-02 12:09:14.443 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:14 np0005465987 nova_compute[230713]: 2025-10-02 12:09:14.444 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:14 np0005465987 nova_compute[230713]: 2025-10-02 12:09:14.452 2 DEBUG nova.virt.hardware [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:09:14 np0005465987 nova_compute[230713]: 2025-10-02 12:09:14.452 2 INFO nova.compute.claims [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:09:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:14.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:14 np0005465987 nova_compute[230713]: 2025-10-02 12:09:14.583 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2248352384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.019 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.026 2 DEBUG nova.compute.provider_tree [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.039 2 DEBUG nova.scheduler.client.report [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.059 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.060 2 DEBUG nova.compute.manager [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.111 2 DEBUG nova.compute.manager [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.112 2 DEBUG nova.network.neutron [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.134 2 INFO nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.155 2 DEBUG nova.compute.manager [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.283 2 DEBUG nova.compute.manager [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.284 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.285 2 INFO nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Creating image(s)#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.309 2 DEBUG nova.storage.rbd_utils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] rbd image 412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.337 2 DEBUG nova.storage.rbd_utils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] rbd image 412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:15.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.365 2 DEBUG nova.storage.rbd_utils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] rbd image 412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.369 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.395 2 DEBUG nova.policy [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '245477e4901945099a0da748199456bc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1d9c4d04247d43b086698f34cdea3ffb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.430 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.431 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.431 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.432 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.455 2 DEBUG nova.storage.rbd_utils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] rbd image 412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.458 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.927 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:15 np0005465987 nova_compute[230713]: 2025-10-02 12:09:15.987 2 DEBUG nova.storage.rbd_utils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] resizing rbd image 412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:09:16 np0005465987 nova_compute[230713]: 2025-10-02 12:09:16.215 2 DEBUG nova.objects.instance [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lazy-loading 'migration_context' on Instance uuid 412cfeea-a5cf-4b59-b59b-679a7b7d4d62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:09:16 np0005465987 nova_compute[230713]: 2025-10-02 12:09:16.232 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:09:16 np0005465987 nova_compute[230713]: 2025-10-02 12:09:16.232 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Ensure instance console log exists: /var/lib/nova/instances/412cfeea-a5cf-4b59-b59b-679a7b7d4d62/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:09:16 np0005465987 nova_compute[230713]: 2025-10-02 12:09:16.233 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:16 np0005465987 nova_compute[230713]: 2025-10-02 12:09:16.233 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:16 np0005465987 nova_compute[230713]: 2025-10-02 12:09:16.233 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:16 np0005465987 nova_compute[230713]: 2025-10-02 12:09:16.440 2 DEBUG nova.network.neutron [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Successfully created port: 9e8826e5-88b4-4834-9e63-a92924187d96 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:09:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:16.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:17.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:17 np0005465987 nova_compute[230713]: 2025-10-02 12:09:17.677 2 DEBUG nova.network.neutron [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Successfully updated port: 9e8826e5-88b4-4834-9e63-a92924187d96 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:09:17 np0005465987 nova_compute[230713]: 2025-10-02 12:09:17.692 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Acquiring lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:09:17 np0005465987 nova_compute[230713]: 2025-10-02 12:09:17.692 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Acquired lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:09:17 np0005465987 nova_compute[230713]: 2025-10-02 12:09:17.692 2 DEBUG nova.network.neutron [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:09:17 np0005465987 nova_compute[230713]: 2025-10-02 12:09:17.798 2 DEBUG nova.compute.manager [req-9f4cce38-8971-42ad-a33c-c8b58cdc9eed req-466ba7cf-e600-4e96-a5a7-0ddb1bcf36d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Received event network-changed-9e8826e5-88b4-4834-9e63-a92924187d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:09:17 np0005465987 nova_compute[230713]: 2025-10-02 12:09:17.799 2 DEBUG nova.compute.manager [req-9f4cce38-8971-42ad-a33c-c8b58cdc9eed req-466ba7cf-e600-4e96-a5a7-0ddb1bcf36d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Refreshing instance network info cache due to event network-changed-9e8826e5-88b4-4834-9e63-a92924187d96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:09:17 np0005465987 nova_compute[230713]: 2025-10-02 12:09:17.799 2 DEBUG oslo_concurrency.lockutils [req-9f4cce38-8971-42ad-a33c-c8b58cdc9eed req-466ba7cf-e600-4e96-a5a7-0ddb1bcf36d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:09:17 np0005465987 nova_compute[230713]: 2025-10-02 12:09:17.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:17 np0005465987 nova_compute[230713]: 2025-10-02 12:09:17.869 2 DEBUG nova.network.neutron [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:09:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:18.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:19.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.286 2 DEBUG nova.network.neutron [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Updating instance_info_cache with network_info: [{"id": "9e8826e5-88b4-4834-9e63-a92924187d96", "address": "fa:16:3e:3d:51:ef", "network": {"id": "0ac7c314-5717-432f-9a9d-1e92ec61cf23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2132560831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d9c4d04247d43b086698f34cdea3ffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e8826e5-88", "ovs_interfaceid": "9e8826e5-88b4-4834-9e63-a92924187d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.315 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Releasing lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.315 2 DEBUG nova.compute.manager [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Instance network_info: |[{"id": "9e8826e5-88b4-4834-9e63-a92924187d96", "address": "fa:16:3e:3d:51:ef", "network": {"id": "0ac7c314-5717-432f-9a9d-1e92ec61cf23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2132560831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d9c4d04247d43b086698f34cdea3ffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e8826e5-88", "ovs_interfaceid": "9e8826e5-88b4-4834-9e63-a92924187d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.316 2 DEBUG oslo_concurrency.lockutils [req-9f4cce38-8971-42ad-a33c-c8b58cdc9eed req-466ba7cf-e600-4e96-a5a7-0ddb1bcf36d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.316 2 DEBUG nova.network.neutron [req-9f4cce38-8971-42ad-a33c-c8b58cdc9eed req-466ba7cf-e600-4e96-a5a7-0ddb1bcf36d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Refreshing network info cache for port 9e8826e5-88b4-4834-9e63-a92924187d96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.319 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Start _get_guest_xml network_info=[{"id": "9e8826e5-88b4-4834-9e63-a92924187d96", "address": "fa:16:3e:3d:51:ef", "network": {"id": "0ac7c314-5717-432f-9a9d-1e92ec61cf23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2132560831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d9c4d04247d43b086698f34cdea3ffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e8826e5-88", "ovs_interfaceid": "9e8826e5-88b4-4834-9e63-a92924187d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.324 2 WARNING nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.329 2 DEBUG nova.virt.libvirt.host [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.330 2 DEBUG nova.virt.libvirt.host [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.338 2 DEBUG nova.virt.libvirt.host [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.339 2 DEBUG nova.virt.libvirt.host [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.341 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.341 2 DEBUG nova.virt.hardware [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.342 2 DEBUG nova.virt.hardware [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.342 2 DEBUG nova.virt.hardware [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.343 2 DEBUG nova.virt.hardware [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.343 2 DEBUG nova.virt.hardware [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.344 2 DEBUG nova.virt.hardware [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.344 2 DEBUG nova.virt.hardware [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.344 2 DEBUG nova.virt.hardware [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.345 2 DEBUG nova.virt.hardware [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.345 2 DEBUG nova.virt.hardware [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.345 2 DEBUG nova.virt.hardware [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.349 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:20.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:09:20 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1450001861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.772 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.797 2 DEBUG nova.storage.rbd_utils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] rbd image 412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:20 np0005465987 nova_compute[230713]: 2025-10-02 12:09:20.800 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:09:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2175521674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.218 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.220 2 DEBUG nova.virt.libvirt.vif [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-447739829',display_name='tempest-FloatingIPsAssociationTestJSON-server-447739829',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-447739829',id=44,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d9c4d04247d43b086698f34cdea3ffb',ramdisk_id='',reservation_id='r-wk70kvvt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1186616354',owner_user_name=
'tempest-FloatingIPsAssociationTestJSON-1186616354-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:15Z,user_data=None,user_id='245477e4901945099a0da748199456bc',uuid=412cfeea-a5cf-4b59-b59b-679a7b7d4d62,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e8826e5-88b4-4834-9e63-a92924187d96", "address": "fa:16:3e:3d:51:ef", "network": {"id": "0ac7c314-5717-432f-9a9d-1e92ec61cf23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2132560831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d9c4d04247d43b086698f34cdea3ffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e8826e5-88", "ovs_interfaceid": "9e8826e5-88b4-4834-9e63-a92924187d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.220 2 DEBUG nova.network.os_vif_util [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Converting VIF {"id": "9e8826e5-88b4-4834-9e63-a92924187d96", "address": "fa:16:3e:3d:51:ef", "network": {"id": "0ac7c314-5717-432f-9a9d-1e92ec61cf23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2132560831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d9c4d04247d43b086698f34cdea3ffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e8826e5-88", "ovs_interfaceid": "9e8826e5-88b4-4834-9e63-a92924187d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.221 2 DEBUG nova.network.os_vif_util [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:51:ef,bridge_name='br-int',has_traffic_filtering=True,id=9e8826e5-88b4-4834-9e63-a92924187d96,network=Network(0ac7c314-5717-432f-9a9d-1e92ec61cf23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e8826e5-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.222 2 DEBUG nova.objects.instance [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lazy-loading 'pci_devices' on Instance uuid 412cfeea-a5cf-4b59-b59b-679a7b7d4d62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.241 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  <uuid>412cfeea-a5cf-4b59-b59b-679a7b7d4d62</uuid>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  <name>instance-0000002c</name>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-447739829</nova:name>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:09:20</nova:creationTime>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <nova:user uuid="245477e4901945099a0da748199456bc">tempest-FloatingIPsAssociationTestJSON-1186616354-project-member</nova:user>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <nova:project uuid="1d9c4d04247d43b086698f34cdea3ffb">tempest-FloatingIPsAssociationTestJSON-1186616354</nova:project>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <nova:port uuid="9e8826e5-88b4-4834-9e63-a92924187d96">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <entry name="serial">412cfeea-a5cf-4b59-b59b-679a7b7d4d62</entry>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <entry name="uuid">412cfeea-a5cf-4b59-b59b-679a7b7d4d62</entry>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk.config">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:3d:51:ef"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <target dev="tap9e8826e5-88"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/412cfeea-a5cf-4b59-b59b-679a7b7d4d62/console.log" append="off"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:09:21 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:09:21 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:09:21 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:09:21 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.242 2 DEBUG nova.compute.manager [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Preparing to wait for external event network-vif-plugged-9e8826e5-88b4-4834-9e63-a92924187d96 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.243 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Acquiring lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.243 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.243 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.244 2 DEBUG nova.virt.libvirt.vif [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-447739829',display_name='tempest-FloatingIPsAssociationTestJSON-server-447739829',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-447739829',id=44,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d9c4d04247d43b086698f34cdea3ffb',ramdisk_id='',reservation_id='r-wk70kvvt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1186616354',owner_
user_name='tempest-FloatingIPsAssociationTestJSON-1186616354-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:15Z,user_data=None,user_id='245477e4901945099a0da748199456bc',uuid=412cfeea-a5cf-4b59-b59b-679a7b7d4d62,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e8826e5-88b4-4834-9e63-a92924187d96", "address": "fa:16:3e:3d:51:ef", "network": {"id": "0ac7c314-5717-432f-9a9d-1e92ec61cf23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2132560831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d9c4d04247d43b086698f34cdea3ffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e8826e5-88", "ovs_interfaceid": "9e8826e5-88b4-4834-9e63-a92924187d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.245 2 DEBUG nova.network.os_vif_util [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Converting VIF {"id": "9e8826e5-88b4-4834-9e63-a92924187d96", "address": "fa:16:3e:3d:51:ef", "network": {"id": "0ac7c314-5717-432f-9a9d-1e92ec61cf23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2132560831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d9c4d04247d43b086698f34cdea3ffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e8826e5-88", "ovs_interfaceid": "9e8826e5-88b4-4834-9e63-a92924187d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.245 2 DEBUG nova.network.os_vif_util [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:51:ef,bridge_name='br-int',has_traffic_filtering=True,id=9e8826e5-88b4-4834-9e63-a92924187d96,network=Network(0ac7c314-5717-432f-9a9d-1e92ec61cf23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e8826e5-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.246 2 DEBUG os_vif [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:51:ef,bridge_name='br-int',has_traffic_filtering=True,id=9e8826e5-88b4-4834-9e63-a92924187d96,network=Network(0ac7c314-5717-432f-9a9d-1e92ec61cf23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e8826e5-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.251 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e8826e5-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.252 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e8826e5-88, col_values=(('external_ids', {'iface-id': '9e8826e5-88b4-4834-9e63-a92924187d96', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:51:ef', 'vm-uuid': '412cfeea-a5cf-4b59-b59b-679a7b7d4d62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:21 np0005465987 NetworkManager[44910]: <info>  [1759406961.2541] manager: (tap9e8826e5-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.263 2 INFO os_vif [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:51:ef,bridge_name='br-int',has_traffic_filtering=True,id=9e8826e5-88b4-4834-9e63-a92924187d96,network=Network(0ac7c314-5717-432f-9a9d-1e92ec61cf23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e8826e5-88')#033[00m
Oct  2 08:09:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:21.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.400 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.400 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.400 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] No VIF found with MAC fa:16:3e:3d:51:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.401 2 INFO nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Using config drive#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.430 2 DEBUG nova.storage.rbd_utils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] rbd image 412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.768 2 INFO nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Creating config drive at /var/lib/nova/instances/412cfeea-a5cf-4b59-b59b-679a7b7d4d62/disk.config#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.772 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/412cfeea-a5cf-4b59-b59b-679a7b7d4d62/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmproz4g_lr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.838 2 DEBUG nova.network.neutron [req-9f4cce38-8971-42ad-a33c-c8b58cdc9eed req-466ba7cf-e600-4e96-a5a7-0ddb1bcf36d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Updated VIF entry in instance network info cache for port 9e8826e5-88b4-4834-9e63-a92924187d96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.839 2 DEBUG nova.network.neutron [req-9f4cce38-8971-42ad-a33c-c8b58cdc9eed req-466ba7cf-e600-4e96-a5a7-0ddb1bcf36d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Updating instance_info_cache with network_info: [{"id": "9e8826e5-88b4-4834-9e63-a92924187d96", "address": "fa:16:3e:3d:51:ef", "network": {"id": "0ac7c314-5717-432f-9a9d-1e92ec61cf23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2132560831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d9c4d04247d43b086698f34cdea3ffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e8826e5-88", "ovs_interfaceid": "9e8826e5-88b4-4834-9e63-a92924187d96", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.866 2 DEBUG oslo_concurrency.lockutils [req-9f4cce38-8971-42ad-a33c-c8b58cdc9eed req-466ba7cf-e600-4e96-a5a7-0ddb1bcf36d0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.899 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/412cfeea-a5cf-4b59-b59b-679a7b7d4d62/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmproz4g_lr" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.926 2 DEBUG nova.storage.rbd_utils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] rbd image 412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:21 np0005465987 nova_compute[230713]: 2025-10-02 12:09:21.929 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/412cfeea-a5cf-4b59-b59b-679a7b7d4d62/disk.config 412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.112 2 DEBUG oslo_concurrency.processutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/412cfeea-a5cf-4b59-b59b-679a7b7d4d62/disk.config 412cfeea-a5cf-4b59-b59b-679a7b7d4d62_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.113 2 INFO nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Deleting local config drive /var/lib/nova/instances/412cfeea-a5cf-4b59-b59b-679a7b7d4d62/disk.config because it was imported into RBD.#033[00m
Oct  2 08:09:22 np0005465987 kernel: tap9e8826e5-88: entered promiscuous mode
Oct  2 08:09:22 np0005465987 NetworkManager[44910]: <info>  [1759406962.1556] manager: (tap9e8826e5-88): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Oct  2 08:09:22 np0005465987 systemd-udevd[248898]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:09:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:22Z|00109|binding|INFO|Claiming lport 9e8826e5-88b4-4834-9e63-a92924187d96 for this chassis.
Oct  2 08:09:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:22Z|00110|binding|INFO|9e8826e5-88b4-4834-9e63-a92924187d96: Claiming fa:16:3e:3d:51:ef 10.100.0.6
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:22 np0005465987 NetworkManager[44910]: <info>  [1759406962.2167] device (tap9e8826e5-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:09:22 np0005465987 NetworkManager[44910]: <info>  [1759406962.2177] device (tap9e8826e5-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:09:22 np0005465987 systemd-machined[188335]: New machine qemu-23-instance-0000002c.
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.231 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:51:ef 10.100.0.6'], port_security=['fa:16:3e:3d:51:ef 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '412cfeea-a5cf-4b59-b59b-679a7b7d4d62', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ac7c314-5717-432f-9a9d-1e92ec61cf23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d9c4d04247d43b086698f34cdea3ffb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '34d2a56b-4b67-4c37-8020-bb8559e6c196', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bc1f92c-d069-4844-8f9d-c573877b2411, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=9e8826e5-88b4-4834-9e63-a92924187d96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.232 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 9e8826e5-88b4-4834-9e63-a92924187d96 in datapath 0ac7c314-5717-432f-9a9d-1e92ec61cf23 bound to our chassis#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.234 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ac7c314-5717-432f-9a9d-1e92ec61cf23#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.244 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2d278b9a-306c-4d92-be0e-8ee432b933eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.245 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0ac7c314-51 in ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.247 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0ac7c314-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.247 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[07306e23-01ab-4645-bfff-dda24708a880]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.247 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cce9384b-a20a-4293-975a-fa4b96979a83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 systemd[1]: Started Virtual Machine qemu-23-instance-0000002c.
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.259 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[23117fee-6bf0-4e17-ade3-c539266717db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.282 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e552dfe8-f9af-481c-8075-8a71374e08ba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:22Z|00111|binding|INFO|Setting lport 9e8826e5-88b4-4834-9e63-a92924187d96 ovn-installed in OVS
Oct  2 08:09:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:22Z|00112|binding|INFO|Setting lport 9e8826e5-88b4-4834-9e63-a92924187d96 up in Southbound
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.322 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[effc235a-a1ca-47dc-ad24-6496dae1f85c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.326 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0958fc-ce2b-462b-a4c3-944ab142385a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 NetworkManager[44910]: <info>  [1759406962.3280] manager: (tap0ac7c314-50): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.359 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[f2566a9c-2ecc-4d7c-8ef9-3165f488eba8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.362 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f0ce52-450e-4ab7-8da4-6aa8ef6cead6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 NetworkManager[44910]: <info>  [1759406962.3863] device (tap0ac7c314-50): carrier: link connected
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.398 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6237be53-e279-41bd-a544-27cf6d0265fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.415 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a97e2997-53b9-48f3-8cb7-454ec5497154]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ac7c314-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:ce:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503096, 'reachable_time': 40626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248934, 'error': None, 'target': 'ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.432 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[88e9296c-5e64-4277-be3a-a5c882ca2d60]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:cebd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503096, 'tstamp': 503096}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248935, 'error': None, 'target': 'ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.449 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[929c5329-b19c-412a-9bbb-31146c92f4d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ac7c314-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:ce:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503096, 'reachable_time': 40626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248936, 'error': None, 'target': 'ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.478 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[797f2bad-7ca3-467a-8c14-6b509a1f3cad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.525 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6909e7-52ff-4394-8a1f-657768cafa2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.526 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ac7c314-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.526 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.527 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ac7c314-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:22 np0005465987 kernel: tap0ac7c314-50: entered promiscuous mode
Oct  2 08:09:22 np0005465987 NetworkManager[44910]: <info>  [1759406962.5293] manager: (tap0ac7c314-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.531 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ac7c314-50, col_values=(('external_ids', {'iface-id': 'bbad9949-ff39-435a-9230-c3cf2c6c1571'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:09:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:22Z|00113|binding|INFO|Releasing lport bbad9949-ff39-435a-9230-c3cf2c6c1571 from this chassis (sb_readonly=0)
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.546 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0ac7c314-5717-432f-9a9d-1e92ec61cf23.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0ac7c314-5717-432f-9a9d-1e92ec61cf23.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.547 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0aba7edf-31a8-4be4-b567-df0fc004c419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.547 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-0ac7c314-5717-432f-9a9d-1e92ec61cf23
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/0ac7c314-5717-432f-9a9d-1e92ec61cf23.pid.haproxy
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 0ac7c314-5717-432f-9a9d-1e92ec61cf23
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:09:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:22.548 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23', 'env', 'PROCESS_TAG=haproxy-0ac7c314-5717-432f-9a9d-1e92ec61cf23', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0ac7c314-5717-432f-9a9d-1e92ec61cf23.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:09:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:22.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.699 2 DEBUG nova.compute.manager [req-56425c2e-803d-4080-af8b-b92444840586 req-82631f53-289a-466f-8b19-2dca199d63b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Received event network-vif-plugged-9e8826e5-88b4-4834-9e63-a92924187d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.699 2 DEBUG oslo_concurrency.lockutils [req-56425c2e-803d-4080-af8b-b92444840586 req-82631f53-289a-466f-8b19-2dca199d63b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.700 2 DEBUG oslo_concurrency.lockutils [req-56425c2e-803d-4080-af8b-b92444840586 req-82631f53-289a-466f-8b19-2dca199d63b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.700 2 DEBUG oslo_concurrency.lockutils [req-56425c2e-803d-4080-af8b-b92444840586 req-82631f53-289a-466f-8b19-2dca199d63b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.700 2 DEBUG nova.compute.manager [req-56425c2e-803d-4080-af8b-b92444840586 req-82631f53-289a-466f-8b19-2dca199d63b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Processing event network-vif-plugged-9e8826e5-88b4-4834-9e63-a92924187d96 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:09:22 np0005465987 nova_compute[230713]: 2025-10-02 12:09:22.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:22 np0005465987 podman[249010]: 2025-10-02 12:09:22.935888514 +0000 UTC m=+0.083607490 container create 2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:09:22 np0005465987 podman[249010]: 2025-10-02 12:09:22.879203075 +0000 UTC m=+0.026922071 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:09:23 np0005465987 systemd[1]: Started libpod-conmon-2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541.scope.
Oct  2 08:09:23 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:09:23 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7dc130747d59c8e885c4876ecbf0a1c4717e961966d56ed8e2d541bd50ff0c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:09:23 np0005465987 podman[249010]: 2025-10-02 12:09:23.099794491 +0000 UTC m=+0.247513477 container init 2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:09:23 np0005465987 podman[249010]: 2025-10-02 12:09:23.106550137 +0000 UTC m=+0.254269113 container start 2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:09:23 np0005465987 neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23[249025]: [NOTICE]   (249029) : New worker (249031) forked
Oct  2 08:09:23 np0005465987 neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23[249025]: [NOTICE]   (249029) : Loading success.
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.151 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406963.151314, 412cfeea-a5cf-4b59-b59b-679a7b7d4d62 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.152 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] VM Started (Lifecycle Event)#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.154 2 DEBUG nova.compute.manager [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.157 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.160 2 INFO nova.virt.libvirt.driver [-] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Instance spawned successfully.#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.160 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.183 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.189 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.190 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.190 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.191 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.191 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.193 2 DEBUG nova.virt.libvirt.driver [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.196 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.237 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.238 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406963.1515057, 412cfeea-a5cf-4b59-b59b-679a7b7d4d62 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.238 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.268 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.270 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759406963.1571755, 412cfeea-a5cf-4b59-b59b-679a7b7d4d62 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.271 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.283 2 INFO nova.compute.manager [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Took 8.00 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.284 2 DEBUG nova.compute.manager [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.294 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.297 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.327 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.348 2 INFO nova.compute.manager [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Took 9.10 seconds to build instance.#033[00m
Oct  2 08:09:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:23.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:23 np0005465987 nova_compute[230713]: 2025-10-02 12:09:23.366 2 DEBUG oslo_concurrency.lockutils [None req-be91896a-6926-498c-89d9-eb11fe4dc7ff 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e168 e168: 3 total, 3 up, 3 in
Oct  2 08:09:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:24.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:24 np0005465987 nova_compute[230713]: 2025-10-02 12:09:24.792 2 DEBUG nova.compute.manager [req-618c98d8-3121-434d-a475-f414f88c05f8 req-aec8c7b4-16eb-4825-9a3d-5debd6d1db4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Received event network-vif-plugged-9e8826e5-88b4-4834-9e63-a92924187d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:09:24 np0005465987 nova_compute[230713]: 2025-10-02 12:09:24.793 2 DEBUG oslo_concurrency.lockutils [req-618c98d8-3121-434d-a475-f414f88c05f8 req-aec8c7b4-16eb-4825-9a3d-5debd6d1db4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:24 np0005465987 nova_compute[230713]: 2025-10-02 12:09:24.794 2 DEBUG oslo_concurrency.lockutils [req-618c98d8-3121-434d-a475-f414f88c05f8 req-aec8c7b4-16eb-4825-9a3d-5debd6d1db4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:24 np0005465987 nova_compute[230713]: 2025-10-02 12:09:24.794 2 DEBUG oslo_concurrency.lockutils [req-618c98d8-3121-434d-a475-f414f88c05f8 req-aec8c7b4-16eb-4825-9a3d-5debd6d1db4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:24 np0005465987 nova_compute[230713]: 2025-10-02 12:09:24.794 2 DEBUG nova.compute.manager [req-618c98d8-3121-434d-a475-f414f88c05f8 req-aec8c7b4-16eb-4825-9a3d-5debd6d1db4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] No waiting events found dispatching network-vif-plugged-9e8826e5-88b4-4834-9e63-a92924187d96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:09:24 np0005465987 nova_compute[230713]: 2025-10-02 12:09:24.795 2 WARNING nova.compute.manager [req-618c98d8-3121-434d-a475-f414f88c05f8 req-aec8c7b4-16eb-4825-9a3d-5debd6d1db4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Received unexpected event network-vif-plugged-9e8826e5-88b4-4834-9e63-a92924187d96 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:09:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:25.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:25 np0005465987 nova_compute[230713]: 2025-10-02 12:09:25.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:25 np0005465987 NetworkManager[44910]: <info>  [1759406965.5259] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Oct  2 08:09:25 np0005465987 NetworkManager[44910]: <info>  [1759406965.5274] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct  2 08:09:25 np0005465987 nova_compute[230713]: 2025-10-02 12:09:25.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:25 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:25Z|00114|binding|INFO|Releasing lport bbad9949-ff39-435a-9230-c3cf2c6c1571 from this chassis (sb_readonly=0)
Oct  2 08:09:25 np0005465987 nova_compute[230713]: 2025-10-02 12:09:25.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:26 np0005465987 nova_compute[230713]: 2025-10-02 12:09:26.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:26.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:27.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:27 np0005465987 nova_compute[230713]: 2025-10-02 12:09:27.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:27.937 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:27.938 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:27.938 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:28.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:29.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:30.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:31 np0005465987 nova_compute[230713]: 2025-10-02 12:09:31.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:09:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:31.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:09:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:32.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:32 np0005465987 nova_compute[230713]: 2025-10-02 12:09:32.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:33.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:33 np0005465987 nova_compute[230713]: 2025-10-02 12:09:33.748 2 DEBUG nova.compute.manager [req-a89e959e-a370-48f2-92f5-b5203a72dcf3 req-57e260cd-f2ca-4e93-9800-36671047edbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Received event network-changed-9e8826e5-88b4-4834-9e63-a92924187d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:09:33 np0005465987 nova_compute[230713]: 2025-10-02 12:09:33.749 2 DEBUG nova.compute.manager [req-a89e959e-a370-48f2-92f5-b5203a72dcf3 req-57e260cd-f2ca-4e93-9800-36671047edbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Refreshing instance network info cache due to event network-changed-9e8826e5-88b4-4834-9e63-a92924187d96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:09:33 np0005465987 nova_compute[230713]: 2025-10-02 12:09:33.749 2 DEBUG oslo_concurrency.lockutils [req-a89e959e-a370-48f2-92f5-b5203a72dcf3 req-57e260cd-f2ca-4e93-9800-36671047edbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:09:33 np0005465987 nova_compute[230713]: 2025-10-02 12:09:33.749 2 DEBUG oslo_concurrency.lockutils [req-a89e959e-a370-48f2-92f5-b5203a72dcf3 req-57e260cd-f2ca-4e93-9800-36671047edbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:09:33 np0005465987 nova_compute[230713]: 2025-10-02 12:09:33.749 2 DEBUG nova.network.neutron [req-a89e959e-a370-48f2-92f5-b5203a72dcf3 req-57e260cd-f2ca-4e93-9800-36671047edbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Refreshing network info cache for port 9e8826e5-88b4-4834-9e63-a92924187d96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:09:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:34.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:34 np0005465987 nova_compute[230713]: 2025-10-02 12:09:34.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:34 np0005465987 nova_compute[230713]: 2025-10-02 12:09:34.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:09:34 np0005465987 nova_compute[230713]: 2025-10-02 12:09:34.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:09:35 np0005465987 nova_compute[230713]: 2025-10-02 12:09:35.195 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:09:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:35.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:35 np0005465987 nova_compute[230713]: 2025-10-02 12:09:35.593 2 DEBUG nova.network.neutron [req-a89e959e-a370-48f2-92f5-b5203a72dcf3 req-57e260cd-f2ca-4e93-9800-36671047edbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Updated VIF entry in instance network info cache for port 9e8826e5-88b4-4834-9e63-a92924187d96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:09:35 np0005465987 nova_compute[230713]: 2025-10-02 12:09:35.594 2 DEBUG nova.network.neutron [req-a89e959e-a370-48f2-92f5-b5203a72dcf3 req-57e260cd-f2ca-4e93-9800-36671047edbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Updating instance_info_cache with network_info: [{"id": "9e8826e5-88b4-4834-9e63-a92924187d96", "address": "fa:16:3e:3d:51:ef", "network": {"id": "0ac7c314-5717-432f-9a9d-1e92ec61cf23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2132560831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d9c4d04247d43b086698f34cdea3ffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e8826e5-88", "ovs_interfaceid": "9e8826e5-88b4-4834-9e63-a92924187d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:09:35 np0005465987 nova_compute[230713]: 2025-10-02 12:09:35.766 2 DEBUG oslo_concurrency.lockutils [req-a89e959e-a370-48f2-92f5-b5203a72dcf3 req-57e260cd-f2ca-4e93-9800-36671047edbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:09:35 np0005465987 nova_compute[230713]: 2025-10-02 12:09:35.767 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:09:35 np0005465987 nova_compute[230713]: 2025-10-02 12:09:35.767 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:09:35 np0005465987 nova_compute[230713]: 2025-10-02 12:09:35.767 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 412cfeea-a5cf-4b59-b59b-679a7b7d4d62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:09:36 np0005465987 nova_compute[230713]: 2025-10-02 12:09:36.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:36 np0005465987 podman[249041]: 2025-10-02 12:09:36.574464496 +0000 UTC m=+0.054323254 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:09:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:36.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:37Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:51:ef 10.100.0.6
Oct  2 08:09:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:37Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:51:ef 10.100.0.6
Oct  2 08:09:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:37.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:37 np0005465987 nova_compute[230713]: 2025-10-02 12:09:37.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:38 np0005465987 nova_compute[230713]: 2025-10-02 12:09:38.573 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Updating instance_info_cache with network_info: [{"id": "9e8826e5-88b4-4834-9e63-a92924187d96", "address": "fa:16:3e:3d:51:ef", "network": {"id": "0ac7c314-5717-432f-9a9d-1e92ec61cf23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2132560831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d9c4d04247d43b086698f34cdea3ffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e8826e5-88", "ovs_interfaceid": "9e8826e5-88b4-4834-9e63-a92924187d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:09:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:38.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:38 np0005465987 nova_compute[230713]: 2025-10-02 12:09:38.599 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:09:38 np0005465987 nova_compute[230713]: 2025-10-02 12:09:38.600 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:09:38 np0005465987 nova_compute[230713]: 2025-10-02 12:09:38.600 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:38 np0005465987 nova_compute[230713]: 2025-10-02 12:09:38.600 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:39.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:39 np0005465987 nova_compute[230713]: 2025-10-02 12:09:39.593 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:39 np0005465987 nova_compute[230713]: 2025-10-02 12:09:39.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:39 np0005465987 nova_compute[230713]: 2025-10-02 12:09:39.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:39 np0005465987 nova_compute[230713]: 2025-10-02 12:09:39.993 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:39 np0005465987 nova_compute[230713]: 2025-10-02 12:09:39.993 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:39 np0005465987 nova_compute[230713]: 2025-10-02 12:09:39.993 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:39 np0005465987 nova_compute[230713]: 2025-10-02 12:09:39.993 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:09:39 np0005465987 nova_compute[230713]: 2025-10-02 12:09:39.994 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:40Z|00115|binding|INFO|Releasing lport bbad9949-ff39-435a-9230-c3cf2c6c1571 from this chassis (sb_readonly=0)
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:40Z|00116|binding|INFO|Releasing lport bbad9949-ff39-435a-9230-c3cf2c6c1571 from this chassis (sb_readonly=0)
Oct  2 08:09:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:40 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1818087645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.423 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.431 2 DEBUG nova.compute.manager [req-f0fd9c48-6f4e-4fe0-bf6b-267bc488a440 req-7a7a37da-8a8b-44b4-9952-11861d8b2c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Received event network-changed-9e8826e5-88b4-4834-9e63-a92924187d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.432 2 DEBUG nova.compute.manager [req-f0fd9c48-6f4e-4fe0-bf6b-267bc488a440 req-7a7a37da-8a8b-44b4-9952-11861d8b2c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Refreshing instance network info cache due to event network-changed-9e8826e5-88b4-4834-9e63-a92924187d96. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.432 2 DEBUG oslo_concurrency.lockutils [req-f0fd9c48-6f4e-4fe0-bf6b-267bc488a440 req-7a7a37da-8a8b-44b4-9952-11861d8b2c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.433 2 DEBUG oslo_concurrency.lockutils [req-f0fd9c48-6f4e-4fe0-bf6b-267bc488a440 req-7a7a37da-8a8b-44b4-9952-11861d8b2c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.433 2 DEBUG nova.network.neutron [req-f0fd9c48-6f4e-4fe0-bf6b-267bc488a440 req-7a7a37da-8a8b-44b4-9952-11861d8b2c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Refreshing network info cache for port 9e8826e5-88b4-4834-9e63-a92924187d96 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.515 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.515 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:09:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:40.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.708 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.709 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4607MB free_disk=20.852458953857422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.709 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.710 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.782 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 412cfeea-a5cf-4b59-b59b-679a7b7d4d62 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.782 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.783 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.799 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.818 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.819 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.839 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.858 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:09:40 np0005465987 nova_compute[230713]: 2025-10-02 12:09:40.931 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:41 np0005465987 nova_compute[230713]: 2025-10-02 12:09:41.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:41 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3853346440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:41 np0005465987 nova_compute[230713]: 2025-10-02 12:09:41.372 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:41 np0005465987 nova_compute[230713]: 2025-10-02 12:09:41.377 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:09:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:41.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:41 np0005465987 nova_compute[230713]: 2025-10-02 12:09:41.471 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:09:41 np0005465987 nova_compute[230713]: 2025-10-02 12:09:41.575 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:09:41 np0005465987 nova_compute[230713]: 2025-10-02 12:09:41.575 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.064 2 DEBUG oslo_concurrency.lockutils [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Acquiring lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.065 2 DEBUG oslo_concurrency.lockutils [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.065 2 DEBUG oslo_concurrency.lockutils [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Acquiring lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.066 2 DEBUG oslo_concurrency.lockutils [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.066 2 DEBUG oslo_concurrency.lockutils [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.067 2 INFO nova.compute.manager [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Terminating instance#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.068 2 DEBUG nova.compute.manager [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:09:42 np0005465987 kernel: tap9e8826e5-88 (unregistering): left promiscuous mode
Oct  2 08:09:42 np0005465987 NetworkManager[44910]: <info>  [1759406982.1526] device (tap9e8826e5-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:42Z|00117|binding|INFO|Releasing lport 9e8826e5-88b4-4834-9e63-a92924187d96 from this chassis (sb_readonly=0)
Oct  2 08:09:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:42Z|00118|binding|INFO|Setting lport 9e8826e5-88b4-4834-9e63-a92924187d96 down in Southbound
Oct  2 08:09:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:09:42Z|00119|binding|INFO|Removing iface tap9e8826e5-88 ovn-installed in OVS
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.185 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:51:ef 10.100.0.6'], port_security=['fa:16:3e:3d:51:ef 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '412cfeea-a5cf-4b59-b59b-679a7b7d4d62', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ac7c314-5717-432f-9a9d-1e92ec61cf23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d9c4d04247d43b086698f34cdea3ffb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '34d2a56b-4b67-4c37-8020-bb8559e6c196', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bc1f92c-d069-4844-8f9d-c573877b2411, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=9e8826e5-88b4-4834-9e63-a92924187d96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.187 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 9e8826e5-88b4-4834-9e63-a92924187d96 in datapath 0ac7c314-5717-432f-9a9d-1e92ec61cf23 unbound from our chassis#033[00m
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.189 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ac7c314-5717-432f-9a9d-1e92ec61cf23, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.191 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[af0c8f14-d768-455b-9d4e-54f99888998e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.192 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23 namespace which is not needed anymore#033[00m
Oct  2 08:09:42 np0005465987 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Oct  2 08:09:42 np0005465987 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002c.scope: Consumed 13.556s CPU time.
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.224 2 DEBUG nova.network.neutron [req-f0fd9c48-6f4e-4fe0-bf6b-267bc488a440 req-7a7a37da-8a8b-44b4-9952-11861d8b2c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Updated VIF entry in instance network info cache for port 9e8826e5-88b4-4834-9e63-a92924187d96. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.225 2 DEBUG nova.network.neutron [req-f0fd9c48-6f4e-4fe0-bf6b-267bc488a440 req-7a7a37da-8a8b-44b4-9952-11861d8b2c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Updating instance_info_cache with network_info: [{"id": "9e8826e5-88b4-4834-9e63-a92924187d96", "address": "fa:16:3e:3d:51:ef", "network": {"id": "0ac7c314-5717-432f-9a9d-1e92ec61cf23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2132560831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d9c4d04247d43b086698f34cdea3ffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e8826e5-88", "ovs_interfaceid": "9e8826e5-88b4-4834-9e63-a92924187d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:09:42 np0005465987 systemd-machined[188335]: Machine qemu-23-instance-0000002c terminated.
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.245 2 DEBUG oslo_concurrency.lockutils [req-f0fd9c48-6f4e-4fe0-bf6b-267bc488a440 req-7a7a37da-8a8b-44b4-9952-11861d8b2c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-412cfeea-a5cf-4b59-b59b-679a7b7d4d62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.302 2 INFO nova.virt.libvirt.driver [-] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Instance destroyed successfully.#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.302 2 DEBUG nova.objects.instance [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lazy-loading 'resources' on Instance uuid 412cfeea-a5cf-4b59-b59b-679a7b7d4d62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:09:42 np0005465987 neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23[249025]: [NOTICE]   (249029) : haproxy version is 2.8.14-c23fe91
Oct  2 08:09:42 np0005465987 neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23[249025]: [NOTICE]   (249029) : path to executable is /usr/sbin/haproxy
Oct  2 08:09:42 np0005465987 neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23[249025]: [WARNING]  (249029) : Exiting Master process...
Oct  2 08:09:42 np0005465987 neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23[249025]: [ALERT]    (249029) : Current worker (249031) exited with code 143 (Terminated)
Oct  2 08:09:42 np0005465987 neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23[249025]: [WARNING]  (249029) : All workers exited. Exiting... (0)
Oct  2 08:09:42 np0005465987 systemd[1]: libpod-2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541.scope: Deactivated successfully.
Oct  2 08:09:42 np0005465987 podman[249131]: 2025-10-02 12:09:42.311037547 +0000 UTC m=+0.038885070 container died 2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.320 2 DEBUG nova.virt.libvirt.vif [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:09:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-447739829',display_name='tempest-FloatingIPsAssociationTestJSON-server-447739829',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-447739829',id=44,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:09:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1d9c4d04247d43b086698f34cdea3ffb',ramdisk_id='',reservation_id='r-wk70kvvt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1186616354',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1186616354-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:09:23Z,user_data=None,user_id='245477e4901945099a0da748199456bc',uuid=412cfeea-a5cf-4b59-b59b-679a7b7d4d62,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e8826e5-88b4-4834-9e63-a92924187d96", "address": "fa:16:3e:3d:51:ef", "network": {"id": "0ac7c314-5717-432f-9a9d-1e92ec61cf23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2132560831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d9c4d04247d43b086698f34cdea3ffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e8826e5-88", "ovs_interfaceid": "9e8826e5-88b4-4834-9e63-a92924187d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.321 2 DEBUG nova.network.os_vif_util [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Converting VIF {"id": "9e8826e5-88b4-4834-9e63-a92924187d96", "address": "fa:16:3e:3d:51:ef", "network": {"id": "0ac7c314-5717-432f-9a9d-1e92ec61cf23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2132560831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d9c4d04247d43b086698f34cdea3ffb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e8826e5-88", "ovs_interfaceid": "9e8826e5-88b4-4834-9e63-a92924187d96", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.321 2 DEBUG nova.network.os_vif_util [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3d:51:ef,bridge_name='br-int',has_traffic_filtering=True,id=9e8826e5-88b4-4834-9e63-a92924187d96,network=Network(0ac7c314-5717-432f-9a9d-1e92ec61cf23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e8826e5-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.322 2 DEBUG os_vif [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:51:ef,bridge_name='br-int',has_traffic_filtering=True,id=9e8826e5-88b4-4834-9e63-a92924187d96,network=Network(0ac7c314-5717-432f-9a9d-1e92ec61cf23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e8826e5-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.324 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e8826e5-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.357 2 INFO os_vif [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:51:ef,bridge_name='br-int',has_traffic_filtering=True,id=9e8826e5-88b4-4834-9e63-a92924187d96,network=Network(0ac7c314-5717-432f-9a9d-1e92ec61cf23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e8826e5-88')#033[00m
Oct  2 08:09:42 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541-userdata-shm.mount: Deactivated successfully.
Oct  2 08:09:42 np0005465987 systemd[1]: var-lib-containers-storage-overlay-cc7dc130747d59c8e885c4876ecbf0a1c4717e961966d56ed8e2d541bd50ff0c-merged.mount: Deactivated successfully.
Oct  2 08:09:42 np0005465987 podman[249131]: 2025-10-02 12:09:42.394970255 +0000 UTC m=+0.122817778 container cleanup 2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:09:42 np0005465987 systemd[1]: libpod-conmon-2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541.scope: Deactivated successfully.
Oct  2 08:09:42 np0005465987 podman[249186]: 2025-10-02 12:09:42.463988422 +0000 UTC m=+0.049502441 container remove 2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.469 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[13529478-0f17-443e-86ab-cf1a4153b340]: (4, ('Thu Oct  2 12:09:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23 (2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541)\n2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541\nThu Oct  2 12:09:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23 (2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541)\n2968e8f1f9fa54723a1728b437731e719263fbb33bc6a72085c382e7e5479541\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.471 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cc496651-3b93-418a-946a-fa305135ef53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.472 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ac7c314-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:42 np0005465987 kernel: tap0ac7c314-50: left promiscuous mode
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.477 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f4736aaf-6212-4791-84f7-84a2a606ee7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.507 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5b3c8f-dbff-4bd1-bc16-95cbbffe1d9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.508 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8593a84c-de8a-46f0-980b-285fb1e08e66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.523 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e633bdaa-444d-4a3f-aa2c-41052268fe0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503088, 'reachable_time': 32096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249202, 'error': None, 'target': 'ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:42 np0005465987 systemd[1]: run-netns-ovnmeta\x2d0ac7c314\x2d5717\x2d432f\x2d9a9d\x2d1e92ec61cf23.mount: Deactivated successfully.
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.528 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0ac7c314-5717-432f-9a9d-1e92ec61cf23 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:09:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:42.529 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[6248d28d-1e62-4baa-a5f4-be480dceb39d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.575 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.575 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.586 2 DEBUG nova.compute.manager [req-dcff5e96-418a-425c-933e-511ea2dcb291 req-7622f207-6f52-42dd-a5e1-ced6d854e124 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Received event network-vif-unplugged-9e8826e5-88b4-4834-9e63-a92924187d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.587 2 DEBUG oslo_concurrency.lockutils [req-dcff5e96-418a-425c-933e-511ea2dcb291 req-7622f207-6f52-42dd-a5e1-ced6d854e124 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.587 2 DEBUG oslo_concurrency.lockutils [req-dcff5e96-418a-425c-933e-511ea2dcb291 req-7622f207-6f52-42dd-a5e1-ced6d854e124 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.587 2 DEBUG oslo_concurrency.lockutils [req-dcff5e96-418a-425c-933e-511ea2dcb291 req-7622f207-6f52-42dd-a5e1-ced6d854e124 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.587 2 DEBUG nova.compute.manager [req-dcff5e96-418a-425c-933e-511ea2dcb291 req-7622f207-6f52-42dd-a5e1-ced6d854e124 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] No waiting events found dispatching network-vif-unplugged-9e8826e5-88b4-4834-9e63-a92924187d96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.587 2 DEBUG nova.compute.manager [req-dcff5e96-418a-425c-933e-511ea2dcb291 req-7622f207-6f52-42dd-a5e1-ced6d854e124 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Received event network-vif-unplugged-9e8826e5-88b4-4834-9e63-a92924187d96 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:09:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:42.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.987 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:42 np0005465987 nova_compute[230713]: 2025-10-02 12:09:42.987 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:09:43 np0005465987 nova_compute[230713]: 2025-10-02 12:09:43.158 2 INFO nova.virt.libvirt.driver [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Deleting instance files /var/lib/nova/instances/412cfeea-a5cf-4b59-b59b-679a7b7d4d62_del#033[00m
Oct  2 08:09:43 np0005465987 nova_compute[230713]: 2025-10-02 12:09:43.158 2 INFO nova.virt.libvirt.driver [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Deletion of /var/lib/nova/instances/412cfeea-a5cf-4b59-b59b-679a7b7d4d62_del complete#033[00m
Oct  2 08:09:43 np0005465987 nova_compute[230713]: 2025-10-02 12:09:43.224 2 INFO nova.compute.manager [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Took 1.16 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:09:43 np0005465987 nova_compute[230713]: 2025-10-02 12:09:43.225 2 DEBUG oslo.service.loopingcall [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:09:43 np0005465987 nova_compute[230713]: 2025-10-02 12:09:43.225 2 DEBUG nova.compute.manager [-] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:09:43 np0005465987 nova_compute[230713]: 2025-10-02 12:09:43.225 2 DEBUG nova.network.neutron [-] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:09:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:43.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:09:43 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3761977314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:09:43 np0005465987 nova_compute[230713]: 2025-10-02 12:09:43.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:43 np0005465987 NetworkManager[44910]: <info>  [1759406983.8077] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct  2 08:09:43 np0005465987 NetworkManager[44910]: <info>  [1759406983.8086] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Oct  2 08:09:43 np0005465987 podman[249205]: 2025-10-02 12:09:43.85493096 +0000 UTC m=+0.070758547 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:09:43 np0005465987 podman[249206]: 2025-10-02 12:09:43.882473447 +0000 UTC m=+0.095554948 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct  2 08:09:43 np0005465987 nova_compute[230713]: 2025-10-02 12:09:43.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:43 np0005465987 podman[249204]: 2025-10-02 12:09:43.94842369 +0000 UTC m=+0.164033441 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:09:43 np0005465987 nova_compute[230713]: 2025-10-02 12:09:43.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.206 2 DEBUG nova.network.neutron [-] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.222 2 INFO nova.compute.manager [-] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Took 1.00 seconds to deallocate network for instance.#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.288 2 DEBUG oslo_concurrency.lockutils [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.289 2 DEBUG oslo_concurrency.lockutils [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.352 2 DEBUG oslo_concurrency.processutils [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.412 2 DEBUG nova.compute.manager [req-aba9c989-4e54-4a11-a145-e4df894e9cce req-0dbb3523-8dd3-44fe-affe-3c5a35e35109 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Received event network-vif-deleted-9e8826e5-88b4-4834-9e63-a92924187d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:09:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:44.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e169 e169: 3 total, 3 up, 3 in
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.690 2 DEBUG nova.compute.manager [req-01cacd8d-b2c5-45d6-b626-ad9ee0e755ba req-13982090-1968-40fc-8ecb-c8cd4c96de18 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Received event network-vif-plugged-9e8826e5-88b4-4834-9e63-a92924187d96 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.690 2 DEBUG oslo_concurrency.lockutils [req-01cacd8d-b2c5-45d6-b626-ad9ee0e755ba req-13982090-1968-40fc-8ecb-c8cd4c96de18 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.691 2 DEBUG oslo_concurrency.lockutils [req-01cacd8d-b2c5-45d6-b626-ad9ee0e755ba req-13982090-1968-40fc-8ecb-c8cd4c96de18 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.692 2 DEBUG oslo_concurrency.lockutils [req-01cacd8d-b2c5-45d6-b626-ad9ee0e755ba req-13982090-1968-40fc-8ecb-c8cd4c96de18 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.692 2 DEBUG nova.compute.manager [req-01cacd8d-b2c5-45d6-b626-ad9ee0e755ba req-13982090-1968-40fc-8ecb-c8cd4c96de18 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] No waiting events found dispatching network-vif-plugged-9e8826e5-88b4-4834-9e63-a92924187d96 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.693 2 WARNING nova.compute.manager [req-01cacd8d-b2c5-45d6-b626-ad9ee0e755ba req-13982090-1968-40fc-8ecb-c8cd4c96de18 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Received unexpected event network-vif-plugged-9e8826e5-88b4-4834-9e63-a92924187d96 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:09:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:44 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3936337925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.859 2 DEBUG oslo_concurrency.processutils [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.863 2 DEBUG nova.compute.provider_tree [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.878 2 DEBUG nova.scheduler.client.report [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.915 2 DEBUG oslo_concurrency.lockutils [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:44 np0005465987 nova_compute[230713]: 2025-10-02 12:09:44.964 2 INFO nova.scheduler.client.report [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Deleted allocations for instance 412cfeea-a5cf-4b59-b59b-679a7b7d4d62#033[00m
Oct  2 08:09:45 np0005465987 nova_compute[230713]: 2025-10-02 12:09:45.036 2 DEBUG oslo_concurrency.lockutils [None req-1c75bb1a-7388-443b-8472-b76e3ebbe8a4 245477e4901945099a0da748199456bc 1d9c4d04247d43b086698f34cdea3ffb - - default default] Lock "412cfeea-a5cf-4b59-b59b-679a7b7d4d62" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:45.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:46.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:47 np0005465987 nova_compute[230713]: 2025-10-02 12:09:47.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:47.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:47 np0005465987 nova_compute[230713]: 2025-10-02 12:09:47.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:48.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:49.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:50.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:51.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:52 np0005465987 nova_compute[230713]: 2025-10-02 12:09:52.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:52.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:52 np0005465987 nova_compute[230713]: 2025-10-02 12:09:52.648 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "908f1b97-bcb8-45f0-a30c-234bd252cab7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:52 np0005465987 nova_compute[230713]: 2025-10-02 12:09:52.649 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:52 np0005465987 nova_compute[230713]: 2025-10-02 12:09:52.735 2 DEBUG nova.compute.manager [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:09:52 np0005465987 nova_compute[230713]: 2025-10-02 12:09:52.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:52 np0005465987 nova_compute[230713]: 2025-10-02 12:09:52.904 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:52 np0005465987 nova_compute[230713]: 2025-10-02 12:09:52.905 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:52 np0005465987 nova_compute[230713]: 2025-10-02 12:09:52.911 2 DEBUG nova.virt.hardware [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:09:52 np0005465987 nova_compute[230713]: 2025-10-02 12:09:52.911 2 INFO nova.compute.claims [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:09:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e170 e170: 3 total, 3 up, 3 in
Oct  2 08:09:53 np0005465987 nova_compute[230713]: 2025-10-02 12:09:53.177 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:53.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:53 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1175183872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:53 np0005465987 nova_compute[230713]: 2025-10-02 12:09:53.617 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:53 np0005465987 nova_compute[230713]: 2025-10-02 12:09:53.622 2 DEBUG nova.compute.provider_tree [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:09:53 np0005465987 nova_compute[230713]: 2025-10-02 12:09:53.719 2 DEBUG nova.scheduler.client.report [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:09:53 np0005465987 nova_compute[230713]: 2025-10-02 12:09:53.908 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:53 np0005465987 nova_compute[230713]: 2025-10-02 12:09:53.909 2 DEBUG nova.compute.manager [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.130 2 DEBUG nova.compute.manager [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.130 2 DEBUG nova.network.neutron [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.243 2 INFO nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.407 2 DEBUG nova.compute.manager [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:09:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:54.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.657 2 DEBUG nova.compute.manager [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.660 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.661 2 INFO nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Creating image(s)#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.704 2 DEBUG nova.storage.rbd_utils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image 908f1b97-bcb8-45f0-a30c-234bd252cab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.743 2 DEBUG nova.storage.rbd_utils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image 908f1b97-bcb8-45f0-a30c-234bd252cab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.777 2 DEBUG nova.storage.rbd_utils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image 908f1b97-bcb8-45f0-a30c-234bd252cab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.782 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.809 2 DEBUG nova.policy [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5ca95bd9fa148c7948c062421011d76', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '95056cabad5b4f32916e46a46b10f677', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.858 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.859 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.860 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.860 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.881 2 DEBUG nova.storage.rbd_utils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image 908f1b97-bcb8-45f0-a30c-234bd252cab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:54 np0005465987 nova_compute[230713]: 2025-10-02 12:09:54.885 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 908f1b97-bcb8-45f0-a30c-234bd252cab7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:55 np0005465987 nova_compute[230713]: 2025-10-02 12:09:55.174 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 908f1b97-bcb8-45f0-a30c-234bd252cab7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:55 np0005465987 nova_compute[230713]: 2025-10-02 12:09:55.249 2 DEBUG nova.storage.rbd_utils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] resizing rbd image 908f1b97-bcb8-45f0-a30c-234bd252cab7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:09:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:55.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:55 np0005465987 nova_compute[230713]: 2025-10-02 12:09:55.538 2 DEBUG nova.objects.instance [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lazy-loading 'migration_context' on Instance uuid 908f1b97-bcb8-45f0-a30c-234bd252cab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:09:55 np0005465987 nova_compute[230713]: 2025-10-02 12:09:55.557 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:09:55 np0005465987 nova_compute[230713]: 2025-10-02 12:09:55.557 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Ensure instance console log exists: /var/lib/nova/instances/908f1b97-bcb8-45f0-a30c-234bd252cab7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:09:55 np0005465987 nova_compute[230713]: 2025-10-02 12:09:55.558 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:55 np0005465987 nova_compute[230713]: 2025-10-02 12:09:55.558 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:55 np0005465987 nova_compute[230713]: 2025-10-02 12:09:55.559 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:09:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:56.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:09:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:57 np0005465987 nova_compute[230713]: 2025-10-02 12:09:57.299 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759406982.2983823, 412cfeea-a5cf-4b59-b59b-679a7b7d4d62 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:09:57 np0005465987 nova_compute[230713]: 2025-10-02 12:09:57.300 2 INFO nova.compute.manager [-] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:09:57 np0005465987 nova_compute[230713]: 2025-10-02 12:09:57.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:57 np0005465987 nova_compute[230713]: 2025-10-02 12:09:57.379 2 DEBUG nova.compute.manager [None req-9615417b-dea3-4b92-ad5d-e8104ce07b46 - - - - - -] [instance: 412cfeea-a5cf-4b59-b59b-679a7b7d4d62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:09:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:57.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:57 np0005465987 nova_compute[230713]: 2025-10-02 12:09:57.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:58 np0005465987 nova_compute[230713]: 2025-10-02 12:09:58.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:09:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:58.503 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:09:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:09:58.504 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:09:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:58.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:58 np0005465987 nova_compute[230713]: 2025-10-02 12:09:58.813 2 DEBUG nova.network.neutron [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Successfully created port: d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:09:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:09:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:59.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:00 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 08:10:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:00.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:01.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:01 np0005465987 nova_compute[230713]: 2025-10-02 12:10:01.659 2 DEBUG nova.network.neutron [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Successfully updated port: d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:10:01 np0005465987 nova_compute[230713]: 2025-10-02 12:10:01.680 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "refresh_cache-908f1b97-bcb8-45f0-a30c-234bd252cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:10:01 np0005465987 nova_compute[230713]: 2025-10-02 12:10:01.680 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquired lock "refresh_cache-908f1b97-bcb8-45f0-a30c-234bd252cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:10:01 np0005465987 nova_compute[230713]: 2025-10-02 12:10:01.680 2 DEBUG nova.network.neutron [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:10:01 np0005465987 nova_compute[230713]: 2025-10-02 12:10:01.851 2 DEBUG nova.network.neutron [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:10:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:02 np0005465987 nova_compute[230713]: 2025-10-02 12:10:02.075 2 DEBUG nova.compute.manager [req-13881c86-8f1f-4f72-9b42-554c4f752305 req-caae24c3-e7aa-4475-beec-6b2ef7ecf37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Received event network-changed-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:02 np0005465987 nova_compute[230713]: 2025-10-02 12:10:02.076 2 DEBUG nova.compute.manager [req-13881c86-8f1f-4f72-9b42-554c4f752305 req-caae24c3-e7aa-4475-beec-6b2ef7ecf37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Refreshing instance network info cache due to event network-changed-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:10:02 np0005465987 nova_compute[230713]: 2025-10-02 12:10:02.077 2 DEBUG oslo_concurrency.lockutils [req-13881c86-8f1f-4f72-9b42-554c4f752305 req-caae24c3-e7aa-4475-beec-6b2ef7ecf37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-908f1b97-bcb8-45f0-a30c-234bd252cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:10:02 np0005465987 nova_compute[230713]: 2025-10-02 12:10:02.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:02.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:10:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:10:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:10:02 np0005465987 nova_compute[230713]: 2025-10-02 12:10:02.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:03.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.610 2 DEBUG nova.network.neutron [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Updating instance_info_cache with network_info: [{"id": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "address": "fa:16:3e:65:90:46", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3ef3a14-3b", "ovs_interfaceid": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:04.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.648 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Releasing lock "refresh_cache-908f1b97-bcb8-45f0-a30c-234bd252cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.648 2 DEBUG nova.compute.manager [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Instance network_info: |[{"id": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "address": "fa:16:3e:65:90:46", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3ef3a14-3b", "ovs_interfaceid": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.649 2 DEBUG oslo_concurrency.lockutils [req-13881c86-8f1f-4f72-9b42-554c4f752305 req-caae24c3-e7aa-4475-beec-6b2ef7ecf37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-908f1b97-bcb8-45f0-a30c-234bd252cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.649 2 DEBUG nova.network.neutron [req-13881c86-8f1f-4f72-9b42-554c4f752305 req-caae24c3-e7aa-4475-beec-6b2ef7ecf37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Refreshing network info cache for port d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.655 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Start _get_guest_xml network_info=[{"id": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "address": "fa:16:3e:65:90:46", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3ef3a14-3b", "ovs_interfaceid": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.662 2 WARNING nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.668 2 DEBUG nova.virt.libvirt.host [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.669 2 DEBUG nova.virt.libvirt.host [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.677 2 DEBUG nova.virt.libvirt.host [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.677 2 DEBUG nova.virt.libvirt.host [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.679 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.679 2 DEBUG nova.virt.hardware [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.680 2 DEBUG nova.virt.hardware [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.680 2 DEBUG nova.virt.hardware [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.680 2 DEBUG nova.virt.hardware [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.680 2 DEBUG nova.virt.hardware [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.681 2 DEBUG nova.virt.hardware [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.681 2 DEBUG nova.virt.hardware [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.681 2 DEBUG nova.virt.hardware [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.682 2 DEBUG nova.virt.hardware [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.682 2 DEBUG nova.virt.hardware [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.682 2 DEBUG nova.virt.hardware [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:10:04 np0005465987 nova_compute[230713]: 2025-10-02 12:10:04.686 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:10:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2391065681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.159 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.182 2 DEBUG nova.storage.rbd_utils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image 908f1b97-bcb8-45f0-a30c-234bd252cab7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.186 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:05.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:10:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/64052507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.587 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.589 2 DEBUG nova.virt.libvirt.vif [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-873935076',display_name='tempest-VolumesAdminNegativeTest-server-873935076',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-873935076',id=45,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95056cabad5b4f32916e46a46b10f677',ramdisk_id='',reservation_id='r-i1n3fdo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-494036391',owner_user_name='tempest-VolumesAdminNega
tiveTest-494036391-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:54Z,user_data=None,user_id='c5ca95bd9fa148c7948c062421011d76',uuid=908f1b97-bcb8-45f0-a30c-234bd252cab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "address": "fa:16:3e:65:90:46", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3ef3a14-3b", "ovs_interfaceid": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.589 2 DEBUG nova.network.os_vif_util [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Converting VIF {"id": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "address": "fa:16:3e:65:90:46", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3ef3a14-3b", "ovs_interfaceid": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.590 2 DEBUG nova.network.os_vif_util [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:90:46,bridge_name='br-int',has_traffic_filtering=True,id=d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3ef3a14-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.591 2 DEBUG nova.objects.instance [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lazy-loading 'pci_devices' on Instance uuid 908f1b97-bcb8-45f0-a30c-234bd252cab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.676 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  <uuid>908f1b97-bcb8-45f0-a30c-234bd252cab7</uuid>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  <name>instance-0000002d</name>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <nova:name>tempest-VolumesAdminNegativeTest-server-873935076</nova:name>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:10:04</nova:creationTime>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <nova:user uuid="c5ca95bd9fa148c7948c062421011d76">tempest-VolumesAdminNegativeTest-494036391-project-member</nova:user>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <nova:project uuid="95056cabad5b4f32916e46a46b10f677">tempest-VolumesAdminNegativeTest-494036391</nova:project>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <nova:port uuid="d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <entry name="serial">908f1b97-bcb8-45f0-a30c-234bd252cab7</entry>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <entry name="uuid">908f1b97-bcb8-45f0-a30c-234bd252cab7</entry>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/908f1b97-bcb8-45f0-a30c-234bd252cab7_disk">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/908f1b97-bcb8-45f0-a30c-234bd252cab7_disk.config">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:65:90:46"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <target dev="tapd3ef3a14-3b"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/908f1b97-bcb8-45f0-a30c-234bd252cab7/console.log" append="off"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:10:05 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:10:05 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:10:05 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:10:05 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.678 2 DEBUG nova.compute.manager [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Preparing to wait for external event network-vif-plugged-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.678 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.678 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.679 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.679 2 DEBUG nova.virt.libvirt.vif [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-873935076',display_name='tempest-VolumesAdminNegativeTest-server-873935076',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-873935076',id=45,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95056cabad5b4f32916e46a46b10f677',ramdisk_id='',reservation_id='r-i1n3fdo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-494036391',owner_user_name='tempest-VolumesAdminNegativeTest-494036391-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:54Z,user_data=None,user_id='c5ca95bd9fa148c7948c062421011d76',uuid=908f1b97-bcb8-45f0-a30c-234bd252cab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "address": "fa:16:3e:65:90:46", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3ef3a14-3b", "ovs_interfaceid": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.679 2 DEBUG nova.network.os_vif_util [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Converting VIF {"id": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "address": "fa:16:3e:65:90:46", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3ef3a14-3b", "ovs_interfaceid": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.680 2 DEBUG nova.network.os_vif_util [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:90:46,bridge_name='br-int',has_traffic_filtering=True,id=d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3ef3a14-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.680 2 DEBUG os_vif [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:90:46,bridge_name='br-int',has_traffic_filtering=True,id=d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3ef3a14-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3ef3a14-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3ef3a14-3b, col_values=(('external_ids', {'iface-id': 'd3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:90:46', 'vm-uuid': '908f1b97-bcb8-45f0-a30c-234bd252cab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:05 np0005465987 NetworkManager[44910]: <info>  [1759407005.6869] manager: (tapd3ef3a14-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.692 2 INFO os_vif [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:90:46,bridge_name='br-int',has_traffic_filtering=True,id=d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3ef3a14-3b')#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.790 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.791 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.792 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] No VIF found with MAC fa:16:3e:65:90:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.792 2 INFO nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Using config drive#033[00m
Oct  2 08:10:05 np0005465987 nova_compute[230713]: 2025-10-02 12:10:05.818 2 DEBUG nova.storage.rbd_utils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image 908f1b97-bcb8-45f0-a30c-234bd252cab7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:06.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:06 np0005465987 nova_compute[230713]: 2025-10-02 12:10:06.803 2 DEBUG nova.network.neutron [req-13881c86-8f1f-4f72-9b42-554c4f752305 req-caae24c3-e7aa-4475-beec-6b2ef7ecf37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Updated VIF entry in instance network info cache for port d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:10:06 np0005465987 nova_compute[230713]: 2025-10-02 12:10:06.803 2 DEBUG nova.network.neutron [req-13881c86-8f1f-4f72-9b42-554c4f752305 req-caae24c3-e7aa-4475-beec-6b2ef7ecf37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Updating instance_info_cache with network_info: [{"id": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "address": "fa:16:3e:65:90:46", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3ef3a14-3b", "ovs_interfaceid": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:06 np0005465987 podman[249689]: 2025-10-02 12:10:06.848071317 +0000 UTC m=+0.064845814 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:10:06 np0005465987 nova_compute[230713]: 2025-10-02 12:10:06.907 2 DEBUG oslo_concurrency.lockutils [req-13881c86-8f1f-4f72-9b42-554c4f752305 req-caae24c3-e7aa-4475-beec-6b2ef7ecf37a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-908f1b97-bcb8-45f0-a30c-234bd252cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:10:06 np0005465987 nova_compute[230713]: 2025-10-02 12:10:06.947 2 INFO nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Creating config drive at /var/lib/nova/instances/908f1b97-bcb8-45f0-a30c-234bd252cab7/disk.config#033[00m
Oct  2 08:10:06 np0005465987 nova_compute[230713]: 2025-10-02 12:10:06.951 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/908f1b97-bcb8-45f0-a30c-234bd252cab7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9rc2a_em execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.076 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/908f1b97-bcb8-45f0-a30c-234bd252cab7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9rc2a_em" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.102 2 DEBUG nova.storage.rbd_utils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] rbd image 908f1b97-bcb8-45f0-a30c-234bd252cab7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.105 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/908f1b97-bcb8-45f0-a30c-234bd252cab7/disk.config 908f1b97-bcb8-45f0-a30c-234bd252cab7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:07.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.450 2 DEBUG oslo_concurrency.processutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/908f1b97-bcb8-45f0-a30c-234bd252cab7/disk.config 908f1b97-bcb8-45f0-a30c-234bd252cab7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.451 2 INFO nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Deleting local config drive /var/lib/nova/instances/908f1b97-bcb8-45f0-a30c-234bd252cab7/disk.config because it was imported into RBD.#033[00m
Oct  2 08:10:07 np0005465987 kernel: tapd3ef3a14-3b: entered promiscuous mode
Oct  2 08:10:07 np0005465987 NetworkManager[44910]: <info>  [1759407007.5039] manager: (tapd3ef3a14-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Oct  2 08:10:07 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:07Z|00120|binding|INFO|Claiming lport d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 for this chassis.
Oct  2 08:10:07 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:07Z|00121|binding|INFO|d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7: Claiming fa:16:3e:65:90:46 10.100.0.12
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005465987 NetworkManager[44910]: <info>  [1759407007.5232] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Oct  2 08:10:07 np0005465987 NetworkManager[44910]: <info>  [1759407007.5241] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Oct  2 08:10:07 np0005465987 systemd-udevd[249761]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:10:07 np0005465987 systemd-machined[188335]: New machine qemu-24-instance-0000002d.
Oct  2 08:10:07 np0005465987 NetworkManager[44910]: <info>  [1759407007.5518] device (tapd3ef3a14-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:10:07 np0005465987 NetworkManager[44910]: <info>  [1759407007.5526] device (tapd3ef3a14-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.556 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:90:46 10.100.0.12'], port_security=['fa:16:3e:65:90:46 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '908f1b97-bcb8-45f0-a30c-234bd252cab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95056cabad5b4f32916e46a46b10f677', 'neutron:revision_number': '2', 'neutron:security_group_ids': '92292547-d192-4aee-be40-b1c814b5700e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c365843-a96d-4209-bcbf-c5f27ee9a0dd, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.557 138325 INFO neutron.agent.ovn.metadata.agent [-] Port d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 in datapath 032ac993-ac13-41ba-b86e-9d12c97f66b8 bound to our chassis#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.559 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 032ac993-ac13-41ba-b86e-9d12c97f66b8#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.574 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c62a877c-15a7-46d3-a221-c1afb04d8a9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.574 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap032ac993-a1 in ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:10:07 np0005465987 systemd[1]: Started Virtual Machine qemu-24-instance-0000002d.
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.577 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap032ac993-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.577 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dadfd67c-4957-4ff7-b577-cec62591e2ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.578 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[330edce8-81e6-4d56-96b0-78468b86afd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.593 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[5c38ed54-2074-4886-b3f3-71026d515047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.620 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae90a05-ebf3-40ac-b3ad-a3aa65a8730a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.670 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2893a7-0b21-42bb-9737-c51b16743052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 NetworkManager[44910]: <info>  [1759407007.6857] manager: (tap032ac993-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/79)
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.684 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f2dd0f-70b0-4e65-a262-d14d23de4e92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.727 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[82236f6a-372a-418e-9185-8b1b34341130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.730 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[232168e0-7fee-4608-87ae-5f73995ae477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:07Z|00122|binding|INFO|Setting lport d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 ovn-installed in OVS
Oct  2 08:10:07 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:07Z|00123|binding|INFO|Setting lport d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 up in Southbound
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005465987 NetworkManager[44910]: <info>  [1759407007.7566] device (tap032ac993-a0): carrier: link connected
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.764 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[f573e87b-f063-4b22-b73b-da1c9d4263d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.781 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf69ad7-f64e-4f99-949a-2c9cfb27aee9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap032ac993-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:a3:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507633, 'reachable_time': 28381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249794, 'error': None, 'target': 'ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.805 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8c283304-a7ca-440a-bc79-d7e60f752921]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:a367'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507633, 'tstamp': 507633}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249795, 'error': None, 'target': 'ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.827 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[992ac8a5-5d65-4e78-84da-8cb87fb2eb75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap032ac993-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:a3:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507633, 'reachable_time': 28381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249796, 'error': None, 'target': 'ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.857 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[da44ebfd-0d12-4a6b-9c76-4c81ab6af991]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.915 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f22282-10e7-40e9-9c8c-2198be12c409]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.917 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap032ac993-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.917 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.918 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap032ac993-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005465987 NetworkManager[44910]: <info>  [1759407007.9201] manager: (tap032ac993-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct  2 08:10:07 np0005465987 kernel: tap032ac993-a0: entered promiscuous mode
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.924 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap032ac993-a0, col_values=(('external_ids', {'iface-id': 'c7800c5b-63a7-46df-a2e3-8e9385f2b5c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:07 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:07Z|00124|binding|INFO|Releasing lport c7800c5b-63a7-46df-a2e3-8e9385f2b5c7 from this chassis (sb_readonly=0)
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.929 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/032ac993-ac13-41ba-b86e-9d12c97f66b8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/032ac993-ac13-41ba-b86e-9d12c97f66b8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.930 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[113f7f40-e26a-433d-bc70-71842dcbf53a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.931 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-032ac993-ac13-41ba-b86e-9d12c97f66b8
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/032ac993-ac13-41ba-b86e-9d12c97f66b8.pid.haproxy
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 032ac993-ac13-41ba-b86e-9d12c97f66b8
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:10:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:07.931 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'env', 'PROCESS_TAG=haproxy-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/032ac993-ac13-41ba-b86e-9d12c97f66b8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:10:07 np0005465987 nova_compute[230713]: 2025-10-02 12:10:07.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:08 np0005465987 podman[249870]: 2025-10-02 12:10:08.318319415 +0000 UTC m=+0.054642604 container create 0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:10:08 np0005465987 systemd[1]: Started libpod-conmon-0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015.scope.
Oct  2 08:10:08 np0005465987 podman[249870]: 2025-10-02 12:10:08.288212527 +0000 UTC m=+0.024535736 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:10:08 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:10:08 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da488891b0b9b6121aeabb8343811fde6746970d02fdbd6e6ea1758a38fa0303/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:10:08 np0005465987 podman[249870]: 2025-10-02 12:10:08.431445185 +0000 UTC m=+0.167768464 container init 0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:10:08 np0005465987 podman[249870]: 2025-10-02 12:10:08.437396329 +0000 UTC m=+0.173719518 container start 0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 08:10:08 np0005465987 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[249886]: [NOTICE]   (249890) : New worker (249892) forked
Oct  2 08:10:08 np0005465987 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[249886]: [NOTICE]   (249890) : Loading success.
Oct  2 08:10:08 np0005465987 nova_compute[230713]: 2025-10-02 12:10:08.480 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407008.4799433, 908f1b97-bcb8-45f0-a30c-234bd252cab7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:08 np0005465987 nova_compute[230713]: 2025-10-02 12:10:08.481 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] VM Started (Lifecycle Event)#033[00m
Oct  2 08:10:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:08.506 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:08 np0005465987 nova_compute[230713]: 2025-10-02 12:10:08.531 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:08 np0005465987 nova_compute[230713]: 2025-10-02 12:10:08.535 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407008.481326, 908f1b97-bcb8-45f0-a30c-234bd252cab7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:08 np0005465987 nova_compute[230713]: 2025-10-02 12:10:08.536 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:10:08 np0005465987 nova_compute[230713]: 2025-10-02 12:10:08.612 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:08Z|00125|binding|INFO|Releasing lport c7800c5b-63a7-46df-a2e3-8e9385f2b5c7 from this chassis (sb_readonly=0)
Oct  2 08:10:08 np0005465987 nova_compute[230713]: 2025-10-02 12:10:08.617 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:10:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:08.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:08 np0005465987 nova_compute[230713]: 2025-10-02 12:10:08.644 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:10:08 np0005465987 nova_compute[230713]: 2025-10-02 12:10:08.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:09.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.735 2 DEBUG nova.compute.manager [req-6cd6d871-eac5-4905-8585-c94757c467b7 req-af2d6901-e71f-4d0f-aadb-da523d2fbc27 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Received event network-vif-plugged-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.736 2 DEBUG oslo_concurrency.lockutils [req-6cd6d871-eac5-4905-8585-c94757c467b7 req-af2d6901-e71f-4d0f-aadb-da523d2fbc27 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.736 2 DEBUG oslo_concurrency.lockutils [req-6cd6d871-eac5-4905-8585-c94757c467b7 req-af2d6901-e71f-4d0f-aadb-da523d2fbc27 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.736 2 DEBUG oslo_concurrency.lockutils [req-6cd6d871-eac5-4905-8585-c94757c467b7 req-af2d6901-e71f-4d0f-aadb-da523d2fbc27 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.736 2 DEBUG nova.compute.manager [req-6cd6d871-eac5-4905-8585-c94757c467b7 req-af2d6901-e71f-4d0f-aadb-da523d2fbc27 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Processing event network-vif-plugged-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.737 2 DEBUG nova.compute.manager [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.740 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407009.7402046, 908f1b97-bcb8-45f0-a30c-234bd252cab7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.740 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.742 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.744 2 INFO nova.virt.libvirt.driver [-] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Instance spawned successfully.#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.744 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.797 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.804 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.811 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.812 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.813 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.813 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.814 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:09 np0005465987 nova_compute[230713]: 2025-10-02 12:10:09.815 2 DEBUG nova.virt.libvirt.driver [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:10 np0005465987 nova_compute[230713]: 2025-10-02 12:10:10.039 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:10:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:10:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:10:10 np0005465987 nova_compute[230713]: 2025-10-02 12:10:10.549 2 INFO nova.compute.manager [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Took 15.89 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:10:10 np0005465987 nova_compute[230713]: 2025-10-02 12:10:10.549 2 DEBUG nova.compute.manager [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:10:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:10.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:10:10 np0005465987 nova_compute[230713]: 2025-10-02 12:10:10.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:11 np0005465987 nova_compute[230713]: 2025-10-02 12:10:11.149 2 INFO nova.compute.manager [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Took 18.28 seconds to build instance.#033[00m
Oct  2 08:10:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:11.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:11 np0005465987 nova_compute[230713]: 2025-10-02 12:10:11.431 2 DEBUG oslo_concurrency.lockutils [None req-9bc7dc52-1e6d-4529-b668-77f41936cc0c c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:12 np0005465987 nova_compute[230713]: 2025-10-02 12:10:12.014 2 DEBUG nova.compute.manager [req-ad98b24b-3247-4609-ac01-d86d7e531682 req-ea66da22-5241-4c9e-83d4-93200bcfb582 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Received event network-vif-plugged-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:12 np0005465987 nova_compute[230713]: 2025-10-02 12:10:12.014 2 DEBUG oslo_concurrency.lockutils [req-ad98b24b-3247-4609-ac01-d86d7e531682 req-ea66da22-5241-4c9e-83d4-93200bcfb582 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:12 np0005465987 nova_compute[230713]: 2025-10-02 12:10:12.014 2 DEBUG oslo_concurrency.lockutils [req-ad98b24b-3247-4609-ac01-d86d7e531682 req-ea66da22-5241-4c9e-83d4-93200bcfb582 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:12 np0005465987 nova_compute[230713]: 2025-10-02 12:10:12.015 2 DEBUG oslo_concurrency.lockutils [req-ad98b24b-3247-4609-ac01-d86d7e531682 req-ea66da22-5241-4c9e-83d4-93200bcfb582 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:12 np0005465987 nova_compute[230713]: 2025-10-02 12:10:12.015 2 DEBUG nova.compute.manager [req-ad98b24b-3247-4609-ac01-d86d7e531682 req-ea66da22-5241-4c9e-83d4-93200bcfb582 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] No waiting events found dispatching network-vif-plugged-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:10:12 np0005465987 nova_compute[230713]: 2025-10-02 12:10:12.015 2 WARNING nova.compute.manager [req-ad98b24b-3247-4609-ac01-d86d7e531682 req-ea66da22-5241-4c9e-83d4-93200bcfb582 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Received unexpected event network-vif-plugged-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:10:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:12.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:12 np0005465987 nova_compute[230713]: 2025-10-02 12:10:12.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:13.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:14.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:14 np0005465987 podman[249952]: 2025-10-02 12:10:14.839096067 +0000 UTC m=+0.053471081 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:10:14 np0005465987 podman[249953]: 2025-10-02 12:10:14.869405821 +0000 UTC m=+0.082180540 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:10:14 np0005465987 podman[249951]: 2025-10-02 12:10:14.888401204 +0000 UTC m=+0.097030720 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:10:15 np0005465987 nova_compute[230713]: 2025-10-02 12:10:15.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:15.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:15 np0005465987 nova_compute[230713]: 2025-10-02 12:10:15.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:10:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2082683193' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:10:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:10:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2082683193' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:10:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:16.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:17.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.668 2 DEBUG oslo_concurrency.lockutils [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "908f1b97-bcb8-45f0-a30c-234bd252cab7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.668 2 DEBUG oslo_concurrency.lockutils [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.669 2 DEBUG oslo_concurrency.lockutils [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.669 2 DEBUG oslo_concurrency.lockutils [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.669 2 DEBUG oslo_concurrency.lockutils [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.670 2 INFO nova.compute.manager [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Terminating instance#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.671 2 DEBUG nova.compute.manager [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:10:17 np0005465987 kernel: tapd3ef3a14-3b (unregistering): left promiscuous mode
Oct  2 08:10:17 np0005465987 NetworkManager[44910]: <info>  [1759407017.7141] device (tapd3ef3a14-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:17Z|00126|binding|INFO|Releasing lport d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 from this chassis (sb_readonly=0)
Oct  2 08:10:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:17Z|00127|binding|INFO|Setting lport d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 down in Southbound
Oct  2 08:10:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:17Z|00128|binding|INFO|Removing iface tapd3ef3a14-3b ovn-installed in OVS
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:17.807 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:90:46 10.100.0.12'], port_security=['fa:16:3e:65:90:46 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '908f1b97-bcb8-45f0-a30c-234bd252cab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95056cabad5b4f32916e46a46b10f677', 'neutron:revision_number': '4', 'neutron:security_group_ids': '92292547-d192-4aee-be40-b1c814b5700e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c365843-a96d-4209-bcbf-c5f27ee9a0dd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:10:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:17.808 138325 INFO neutron.agent.ovn.metadata.agent [-] Port d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 in datapath 032ac993-ac13-41ba-b86e-9d12c97f66b8 unbound from our chassis#033[00m
Oct  2 08:10:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:17.810 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 032ac993-ac13-41ba-b86e-9d12c97f66b8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:10:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:17.811 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a2374906-1ce4-47d6-8e0b-bad4bee3586e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:17.812 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8 namespace which is not needed anymore#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:17 np0005465987 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Oct  2 08:10:17 np0005465987 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000002d.scope: Consumed 8.905s CPU time.
Oct  2 08:10:17 np0005465987 systemd-machined[188335]: Machine qemu-24-instance-0000002d terminated.
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.908 2 INFO nova.virt.libvirt.driver [-] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Instance destroyed successfully.#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.909 2 DEBUG nova.objects.instance [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lazy-loading 'resources' on Instance uuid 908f1b97-bcb8-45f0-a30c-234bd252cab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.926 2 DEBUG nova.virt.libvirt.vif [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:09:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-873935076',display_name='tempest-VolumesAdminNegativeTest-server-873935076',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-873935076',id=45,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:10:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95056cabad5b4f32916e46a46b10f677',ramdisk_id='',reservation_id='r-i1n3fdo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-494036391',owner_user_name='tempest-VolumesAdminNegativeTest-494036391-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:10:10Z,user_data=None,user_id='c5ca95bd9fa148c7948c062421011d76',uuid=908f1b97-bcb8-45f0-a30c-234bd252cab7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "address": "fa:16:3e:65:90:46", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3ef3a14-3b", "ovs_interfaceid": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.926 2 DEBUG nova.network.os_vif_util [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Converting VIF {"id": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "address": "fa:16:3e:65:90:46", "network": {"id": "032ac993-ac13-41ba-b86e-9d12c97f66b8", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1769597820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95056cabad5b4f32916e46a46b10f677", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3ef3a14-3b", "ovs_interfaceid": "d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.927 2 DEBUG nova.network.os_vif_util [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:90:46,bridge_name='br-int',has_traffic_filtering=True,id=d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3ef3a14-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.928 2 DEBUG os_vif [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:90:46,bridge_name='br-int',has_traffic_filtering=True,id=d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3ef3a14-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.930 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3ef3a14-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.937 2 INFO os_vif [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:90:46,bridge_name='br-int',has_traffic_filtering=True,id=d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7,network=Network(032ac993-ac13-41ba-b86e-9d12c97f66b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3ef3a14-3b')#033[00m
Oct  2 08:10:17 np0005465987 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[249886]: [NOTICE]   (249890) : haproxy version is 2.8.14-c23fe91
Oct  2 08:10:17 np0005465987 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[249886]: [NOTICE]   (249890) : path to executable is /usr/sbin/haproxy
Oct  2 08:10:17 np0005465987 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[249886]: [WARNING]  (249890) : Exiting Master process...
Oct  2 08:10:17 np0005465987 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[249886]: [ALERT]    (249890) : Current worker (249892) exited with code 143 (Terminated)
Oct  2 08:10:17 np0005465987 neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8[249886]: [WARNING]  (249890) : All workers exited. Exiting... (0)
Oct  2 08:10:17 np0005465987 systemd[1]: libpod-0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015.scope: Deactivated successfully.
Oct  2 08:10:17 np0005465987 nova_compute[230713]: 2025-10-02 12:10:17.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:17 np0005465987 podman[250048]: 2025-10-02 12:10:17.966703449 +0000 UTC m=+0.057397160 container died 0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:10:18 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015-userdata-shm.mount: Deactivated successfully.
Oct  2 08:10:18 np0005465987 systemd[1]: var-lib-containers-storage-overlay-da488891b0b9b6121aeabb8343811fde6746970d02fdbd6e6ea1758a38fa0303-merged.mount: Deactivated successfully.
Oct  2 08:10:18 np0005465987 podman[250048]: 2025-10-02 12:10:18.025793343 +0000 UTC m=+0.116487054 container cleanup 0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:10:18 np0005465987 systemd[1]: libpod-conmon-0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015.scope: Deactivated successfully.
Oct  2 08:10:18 np0005465987 podman[250100]: 2025-10-02 12:10:18.100204199 +0000 UTC m=+0.051181578 container remove 0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 08:10:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:18.108 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebaa5ec-ff34-4945-9005-d66c8529cf7b]: (4, ('Thu Oct  2 12:10:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8 (0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015)\n0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015\nThu Oct  2 12:10:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8 (0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015)\n0a875ecb970608ab7df9725d5cd2979ad998f895e43355f725b407ca7d7b8015\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:18.111 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff10556-7c05-4c11-9556-be62d3a8c36e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:18.112 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap032ac993-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:18 np0005465987 nova_compute[230713]: 2025-10-02 12:10:18.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:18 np0005465987 kernel: tap032ac993-a0: left promiscuous mode
Oct  2 08:10:18 np0005465987 nova_compute[230713]: 2025-10-02 12:10:18.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:18.135 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[241bba7d-c758-47c4-9bef-52e53f597e20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:18.157 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fedff9f0-2289-4148-b48f-d0119cc83de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:18.158 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4b82c59a-df65-47b9-a7d1-ca38cbf59928]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:18.171 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[423cb7e1-32c5-4b9f-8c71-acb7ee882474]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507624, 'reachable_time': 37385, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250115, 'error': None, 'target': 'ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:18.173 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-032ac993-ac13-41ba-b86e-9d12c97f66b8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:10:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:18.174 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[353380b9-273d-4f21-ba77-21c069109206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:18 np0005465987 systemd[1]: run-netns-ovnmeta\x2d032ac993\x2dac13\x2d41ba\x2db86e\x2d9d12c97f66b8.mount: Deactivated successfully.
Oct  2 08:10:18 np0005465987 nova_compute[230713]: 2025-10-02 12:10:18.562 2 INFO nova.virt.libvirt.driver [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Deleting instance files /var/lib/nova/instances/908f1b97-bcb8-45f0-a30c-234bd252cab7_del#033[00m
Oct  2 08:10:18 np0005465987 nova_compute[230713]: 2025-10-02 12:10:18.563 2 INFO nova.virt.libvirt.driver [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Deletion of /var/lib/nova/instances/908f1b97-bcb8-45f0-a30c-234bd252cab7_del complete#033[00m
Oct  2 08:10:18 np0005465987 nova_compute[230713]: 2025-10-02 12:10:18.618 2 INFO nova.compute.manager [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:10:18 np0005465987 nova_compute[230713]: 2025-10-02 12:10:18.619 2 DEBUG oslo.service.loopingcall [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:10:18 np0005465987 nova_compute[230713]: 2025-10-02 12:10:18.619 2 DEBUG nova.compute.manager [-] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:10:18 np0005465987 nova_compute[230713]: 2025-10-02 12:10:18.619 2 DEBUG nova.network.neutron [-] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:10:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:10:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:18.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.026 2 DEBUG nova.compute.manager [req-b1e90265-d9e1-4c37-9cf8-277498b8b4d5 req-18391530-b575-4509-8114-4a5a383a7d73 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Received event network-vif-unplugged-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.026 2 DEBUG oslo_concurrency.lockutils [req-b1e90265-d9e1-4c37-9cf8-277498b8b4d5 req-18391530-b575-4509-8114-4a5a383a7d73 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.027 2 DEBUG oslo_concurrency.lockutils [req-b1e90265-d9e1-4c37-9cf8-277498b8b4d5 req-18391530-b575-4509-8114-4a5a383a7d73 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.027 2 DEBUG oslo_concurrency.lockutils [req-b1e90265-d9e1-4c37-9cf8-277498b8b4d5 req-18391530-b575-4509-8114-4a5a383a7d73 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.027 2 DEBUG nova.compute.manager [req-b1e90265-d9e1-4c37-9cf8-277498b8b4d5 req-18391530-b575-4509-8114-4a5a383a7d73 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] No waiting events found dispatching network-vif-unplugged-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.027 2 DEBUG nova.compute.manager [req-b1e90265-d9e1-4c37-9cf8-277498b8b4d5 req-18391530-b575-4509-8114-4a5a383a7d73 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Received event network-vif-unplugged-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.360 2 DEBUG nova.network.neutron [-] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.382 2 INFO nova.compute.manager [-] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Took 0.76 seconds to deallocate network for instance.#033[00m
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.433 2 DEBUG oslo_concurrency.lockutils [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.433 2 DEBUG oslo_concurrency.lockutils [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:19.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.496 2 DEBUG oslo_concurrency.processutils [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:10:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/819940276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.928 2 DEBUG oslo_concurrency.processutils [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.934 2 DEBUG nova.compute.provider_tree [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.953 2 DEBUG nova.scheduler.client.report [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:10:19 np0005465987 nova_compute[230713]: 2025-10-02 12:10:19.988 2 DEBUG oslo_concurrency.lockutils [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:20 np0005465987 nova_compute[230713]: 2025-10-02 12:10:20.018 2 INFO nova.scheduler.client.report [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Deleted allocations for instance 908f1b97-bcb8-45f0-a30c-234bd252cab7
Oct  2 08:10:20 np0005465987 nova_compute[230713]: 2025-10-02 12:10:20.104 2 DEBUG oslo_concurrency.lockutils [None req-4cd1de80-dcf9-4a9f-af19-9a5e8693410b c5ca95bd9fa148c7948c062421011d76 95056cabad5b4f32916e46a46b10f677 - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:20.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:21 np0005465987 nova_compute[230713]: 2025-10-02 12:10:21.136 2 DEBUG nova.compute.manager [req-36bd4f3e-00b6-4f62-92b8-23d43ec24788 req-ff240ee7-d1a1-4b15-9e38-231d46ae65ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Received event network-vif-plugged-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:10:21 np0005465987 nova_compute[230713]: 2025-10-02 12:10:21.136 2 DEBUG oslo_concurrency.lockutils [req-36bd4f3e-00b6-4f62-92b8-23d43ec24788 req-ff240ee7-d1a1-4b15-9e38-231d46ae65ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:21 np0005465987 nova_compute[230713]: 2025-10-02 12:10:21.136 2 DEBUG oslo_concurrency.lockutils [req-36bd4f3e-00b6-4f62-92b8-23d43ec24788 req-ff240ee7-d1a1-4b15-9e38-231d46ae65ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:21 np0005465987 nova_compute[230713]: 2025-10-02 12:10:21.136 2 DEBUG oslo_concurrency.lockutils [req-36bd4f3e-00b6-4f62-92b8-23d43ec24788 req-ff240ee7-d1a1-4b15-9e38-231d46ae65ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "908f1b97-bcb8-45f0-a30c-234bd252cab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:21 np0005465987 nova_compute[230713]: 2025-10-02 12:10:21.137 2 DEBUG nova.compute.manager [req-36bd4f3e-00b6-4f62-92b8-23d43ec24788 req-ff240ee7-d1a1-4b15-9e38-231d46ae65ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] No waiting events found dispatching network-vif-plugged-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:10:21 np0005465987 nova_compute[230713]: 2025-10-02 12:10:21.137 2 WARNING nova.compute.manager [req-36bd4f3e-00b6-4f62-92b8-23d43ec24788 req-ff240ee7-d1a1-4b15-9e38-231d46ae65ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Received unexpected event network-vif-plugged-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 for instance with vm_state deleted and task_state None.
Oct  2 08:10:21 np0005465987 nova_compute[230713]: 2025-10-02 12:10:21.137 2 DEBUG nova.compute.manager [req-36bd4f3e-00b6-4f62-92b8-23d43ec24788 req-ff240ee7-d1a1-4b15-9e38-231d46ae65ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Received event network-vif-deleted-d3ef3a14-3be6-4043-a2fb-ef5c2d6bd3b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:10:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:21.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:21 np0005465987 nova_compute[230713]: 2025-10-02 12:10:21.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:22.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:22 np0005465987 nova_compute[230713]: 2025-10-02 12:10:22.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:22 np0005465987 nova_compute[230713]: 2025-10-02 12:10:22.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:23.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:24.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:25.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:26.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:26 np0005465987 nova_compute[230713]: 2025-10-02 12:10:26.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:27.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:27.938 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:27.938 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:27.938 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:27 np0005465987 nova_compute[230713]: 2025-10-02 12:10:27.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:10:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:28.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:10:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:29.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:29 np0005465987 nova_compute[230713]: 2025-10-02 12:10:29.854 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "9a277afb-614b-4bf4-af98-8d08d450f263" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:29 np0005465987 nova_compute[230713]: 2025-10-02 12:10:29.855 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:29 np0005465987 nova_compute[230713]: 2025-10-02 12:10:29.871 2 DEBUG nova.compute.manager [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:10:29 np0005465987 nova_compute[230713]: 2025-10-02 12:10:29.964 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:29 np0005465987 nova_compute[230713]: 2025-10-02 12:10:29.965 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:29 np0005465987 nova_compute[230713]: 2025-10-02 12:10:29.974 2 DEBUG nova.virt.hardware [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:10:29 np0005465987 nova_compute[230713]: 2025-10-02 12:10:29.975 2 INFO nova.compute.claims [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:10:30 np0005465987 nova_compute[230713]: 2025-10-02 12:10:30.081 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:10:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:10:30 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/962521600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:10:30 np0005465987 nova_compute[230713]: 2025-10-02 12:10:30.516 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:10:30 np0005465987 nova_compute[230713]: 2025-10-02 12:10:30.525 2 DEBUG nova.compute.provider_tree [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:10:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:30.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:30 np0005465987 nova_compute[230713]: 2025-10-02 12:10:30.738 2 DEBUG nova.scheduler.client.report [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:10:30 np0005465987 nova_compute[230713]: 2025-10-02 12:10:30.781 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:30 np0005465987 nova_compute[230713]: 2025-10-02 12:10:30.781 2 DEBUG nova.compute.manager [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:10:30 np0005465987 nova_compute[230713]: 2025-10-02 12:10:30.920 2 DEBUG nova.compute.manager [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:10:30 np0005465987 nova_compute[230713]: 2025-10-02 12:10:30.921 2 DEBUG nova.network.neutron [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:10:30 np0005465987 nova_compute[230713]: 2025-10-02 12:10:30.994 2 INFO nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.029 2 DEBUG nova.compute.manager [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.138 2 DEBUG nova.compute.manager [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.140 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.140 2 INFO nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Creating image(s)
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.169 2 DEBUG nova.storage.rbd_utils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 9a277afb-614b-4bf4-af98-8d08d450f263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.200 2 DEBUG nova.storage.rbd_utils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 9a277afb-614b-4bf4-af98-8d08d450f263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.228 2 DEBUG nova.storage.rbd_utils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 9a277afb-614b-4bf4-af98-8d08d450f263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.232 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.319 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.320 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.320 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.321 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.347 2 DEBUG nova.storage.rbd_utils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 9a277afb-614b-4bf4-af98-8d08d450f263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.351 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 9a277afb-614b-4bf4-af98-8d08d450f263_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:10:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:31.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.678 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 9a277afb-614b-4bf4-af98-8d08d450f263_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.745 2 DEBUG nova.storage.rbd_utils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] resizing rbd image 9a277afb-614b-4bf4-af98-8d08d450f263_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.782 2 DEBUG nova.policy [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0df47040f1ff4ce69a6fbdfd9eba4955', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55d20ae21b6d4f0abfff3bccc371ee7a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:10:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e171 e171: 3 total, 3 up, 3 in
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.868 2 DEBUG nova.objects.instance [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'migration_context' on Instance uuid 9a277afb-614b-4bf4-af98-8d08d450f263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.931 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.932 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Ensure instance console log exists: /var/lib/nova/instances/9a277afb-614b-4bf4-af98-8d08d450f263/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.932 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.933 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:31 np0005465987 nova_compute[230713]: 2025-10-02 12:10:31.933 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:32.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e172 e172: 3 total, 3 up, 3 in
Oct  2 08:10:32 np0005465987 nova_compute[230713]: 2025-10-02 12:10:32.905 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407017.904904, 908f1b97-bcb8-45f0-a30c-234bd252cab7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:10:32 np0005465987 nova_compute[230713]: 2025-10-02 12:10:32.906 2 INFO nova.compute.manager [-] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] VM Stopped (Lifecycle Event)
Oct  2 08:10:32 np0005465987 nova_compute[230713]: 2025-10-02 12:10:32.961 2 DEBUG nova.compute.manager [None req-bf7c874e-3080-49ef-9830-dbc8f43caa25 - - - - - -] [instance: 908f1b97-bcb8-45f0-a30c-234bd252cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:10:32 np0005465987 nova_compute[230713]: 2025-10-02 12:10:32.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:33 np0005465987 nova_compute[230713]: 2025-10-02 12:10:33.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:33.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:33 np0005465987 nova_compute[230713]: 2025-10-02 12:10:33.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:33 np0005465987 nova_compute[230713]: 2025-10-02 12:10:33.578 2 DEBUG nova.network.neutron [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Successfully created port: 178d0d5b-7796-4f82-8941-f3d45e6597df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:10:34 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct  2 08:10:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e173 e173: 3 total, 3 up, 3 in
Oct  2 08:10:34 np0005465987 nova_compute[230713]: 2025-10-02 12:10:34.622 2 DEBUG nova.network.neutron [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Successfully updated port: 178d0d5b-7796-4f82-8941-f3d45e6597df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:10:34 np0005465987 nova_compute[230713]: 2025-10-02 12:10:34.651 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "refresh_cache-9a277afb-614b-4bf4-af98-8d08d450f263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:10:34 np0005465987 nova_compute[230713]: 2025-10-02 12:10:34.652 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquired lock "refresh_cache-9a277afb-614b-4bf4-af98-8d08d450f263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:10:34 np0005465987 nova_compute[230713]: 2025-10-02 12:10:34.652 2 DEBUG nova.network.neutron [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:10:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:34.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:34 np0005465987 nova_compute[230713]: 2025-10-02 12:10:34.782 2 DEBUG nova.compute.manager [req-ad53f92d-0b74-4d68-b5bd-b29607dd7044 req-fdc47a89-09c8-492d-bd5b-f65c274a0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Received event network-changed-178d0d5b-7796-4f82-8941-f3d45e6597df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:34 np0005465987 nova_compute[230713]: 2025-10-02 12:10:34.782 2 DEBUG nova.compute.manager [req-ad53f92d-0b74-4d68-b5bd-b29607dd7044 req-fdc47a89-09c8-492d-bd5b-f65c274a0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Refreshing instance network info cache due to event network-changed-178d0d5b-7796-4f82-8941-f3d45e6597df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:10:34 np0005465987 nova_compute[230713]: 2025-10-02 12:10:34.782 2 DEBUG oslo_concurrency.lockutils [req-ad53f92d-0b74-4d68-b5bd-b29607dd7044 req-fdc47a89-09c8-492d-bd5b-f65c274a0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-9a277afb-614b-4bf4-af98-8d08d450f263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:10:34 np0005465987 nova_compute[230713]: 2025-10-02 12:10:34.939 2 DEBUG nova.network.neutron [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:10:34 np0005465987 nova_compute[230713]: 2025-10-02 12:10:34.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:34 np0005465987 nova_compute[230713]: 2025-10-02 12:10:34.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:10:34 np0005465987 nova_compute[230713]: 2025-10-02 12:10:34.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.005 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.006 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:10:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:35.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.616 2 DEBUG nova.network.neutron [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Updating instance_info_cache with network_info: [{"id": "178d0d5b-7796-4f82-8941-f3d45e6597df", "address": "fa:16:3e:d9:3e:87", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178d0d5b-77", "ovs_interfaceid": "178d0d5b-7796-4f82-8941-f3d45e6597df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.709 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Releasing lock "refresh_cache-9a277afb-614b-4bf4-af98-8d08d450f263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.709 2 DEBUG nova.compute.manager [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Instance network_info: |[{"id": "178d0d5b-7796-4f82-8941-f3d45e6597df", "address": "fa:16:3e:d9:3e:87", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178d0d5b-77", "ovs_interfaceid": "178d0d5b-7796-4f82-8941-f3d45e6597df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.709 2 DEBUG oslo_concurrency.lockutils [req-ad53f92d-0b74-4d68-b5bd-b29607dd7044 req-fdc47a89-09c8-492d-bd5b-f65c274a0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-9a277afb-614b-4bf4-af98-8d08d450f263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.710 2 DEBUG nova.network.neutron [req-ad53f92d-0b74-4d68-b5bd-b29607dd7044 req-fdc47a89-09c8-492d-bd5b-f65c274a0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Refreshing network info cache for port 178d0d5b-7796-4f82-8941-f3d45e6597df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.713 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Start _get_guest_xml network_info=[{"id": "178d0d5b-7796-4f82-8941-f3d45e6597df", "address": "fa:16:3e:d9:3e:87", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178d0d5b-77", "ovs_interfaceid": "178d0d5b-7796-4f82-8941-f3d45e6597df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.717 2 WARNING nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.723 2 DEBUG nova.virt.libvirt.host [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.723 2 DEBUG nova.virt.libvirt.host [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.733 2 DEBUG nova.virt.libvirt.host [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.733 2 DEBUG nova.virt.libvirt.host [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.734 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.735 2 DEBUG nova.virt.hardware [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.735 2 DEBUG nova.virt.hardware [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.735 2 DEBUG nova.virt.hardware [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.736 2 DEBUG nova.virt.hardware [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.736 2 DEBUG nova.virt.hardware [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.736 2 DEBUG nova.virt.hardware [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.736 2 DEBUG nova.virt.hardware [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.736 2 DEBUG nova.virt.hardware [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.737 2 DEBUG nova.virt.hardware [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.737 2 DEBUG nova.virt.hardware [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.737 2 DEBUG nova.virt.hardware [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:10:35 np0005465987 nova_compute[230713]: 2025-10-02 12:10:35.739 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:10:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3447389853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.164 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.203 2 DEBUG nova.storage.rbd_utils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 9a277afb-614b-4bf4-af98-8d08d450f263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.209 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:10:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1647860312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.639 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.642 2 DEBUG nova.virt.libvirt.vif [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:10:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-701902188',display_name='tempest-ImagesTestJSON-server-701902188',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-701902188',id=48,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d20ae21b6d4f0abfff3bccc371ee7a',ramdisk_id='',reservation_id='r-tlseua0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2116266493',owner_user_name='tempest-ImagesTestJSON-2116266493-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:10:31Z,user_data=None,user_id='0df47040f1ff4ce69a6fbdfd9eba4955',uuid=9a277afb-614b-4bf4-af98-8d08d450f263,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "178d0d5b-7796-4f82-8941-f3d45e6597df", "address": "fa:16:3e:d9:3e:87", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178d0d5b-77", "ovs_interfaceid": "178d0d5b-7796-4f82-8941-f3d45e6597df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.643 2 DEBUG nova.network.os_vif_util [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converting VIF {"id": "178d0d5b-7796-4f82-8941-f3d45e6597df", "address": "fa:16:3e:d9:3e:87", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178d0d5b-77", "ovs_interfaceid": "178d0d5b-7796-4f82-8941-f3d45e6597df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.644 2 DEBUG nova.network.os_vif_util [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:3e:87,bridge_name='br-int',has_traffic_filtering=True,id=178d0d5b-7796-4f82-8941-f3d45e6597df,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178d0d5b-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.645 2 DEBUG nova.objects.instance [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a277afb-614b-4bf4-af98-8d08d450f263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.661 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  <uuid>9a277afb-614b-4bf4-af98-8d08d450f263</uuid>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  <name>instance-00000030</name>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <nova:name>tempest-ImagesTestJSON-server-701902188</nova:name>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:10:35</nova:creationTime>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <nova:user uuid="0df47040f1ff4ce69a6fbdfd9eba4955">tempest-ImagesTestJSON-2116266493-project-member</nova:user>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <nova:project uuid="55d20ae21b6d4f0abfff3bccc371ee7a">tempest-ImagesTestJSON-2116266493</nova:project>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <nova:port uuid="178d0d5b-7796-4f82-8941-f3d45e6597df">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <entry name="serial">9a277afb-614b-4bf4-af98-8d08d450f263</entry>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <entry name="uuid">9a277afb-614b-4bf4-af98-8d08d450f263</entry>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/9a277afb-614b-4bf4-af98-8d08d450f263_disk">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/9a277afb-614b-4bf4-af98-8d08d450f263_disk.config">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:d9:3e:87"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <target dev="tap178d0d5b-77"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/9a277afb-614b-4bf4-af98-8d08d450f263/console.log" append="off"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:10:36 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:10:36 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:10:36 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:10:36 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.663 2 DEBUG nova.compute.manager [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Preparing to wait for external event network-vif-plugged-178d0d5b-7796-4f82-8941-f3d45e6597df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.664 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.664 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.664 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.666 2 DEBUG nova.virt.libvirt.vif [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:10:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-701902188',display_name='tempest-ImagesTestJSON-server-701902188',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-701902188',id=48,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d20ae21b6d4f0abfff3bccc371ee7a',ramdisk_id='',reservation_id='r-tlseua0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2116266493',owner_user_name='tempest-ImagesTestJSON-2116266493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:10:31Z,user_data=None,user_id='0df47040f1ff4ce69a6fbdfd9eba4955',uuid=9a277afb-614b-4bf4-af98-8d08d450f263,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "178d0d5b-7796-4f82-8941-f3d45e6597df", "address": "fa:16:3e:d9:3e:87", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178d0d5b-77", "ovs_interfaceid": "178d0d5b-7796-4f82-8941-f3d45e6597df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.666 2 DEBUG nova.network.os_vif_util [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converting VIF {"id": "178d0d5b-7796-4f82-8941-f3d45e6597df", "address": "fa:16:3e:d9:3e:87", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178d0d5b-77", "ovs_interfaceid": "178d0d5b-7796-4f82-8941-f3d45e6597df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.667 2 DEBUG nova.network.os_vif_util [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:3e:87,bridge_name='br-int',has_traffic_filtering=True,id=178d0d5b-7796-4f82-8941-f3d45e6597df,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178d0d5b-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.668 2 DEBUG os_vif [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:3e:87,bridge_name='br-int',has_traffic_filtering=True,id=178d0d5b-7796-4f82-8941-f3d45e6597df,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178d0d5b-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.669 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.670 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:10:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:36.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.674 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap178d0d5b-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap178d0d5b-77, col_values=(('external_ids', {'iface-id': '178d0d5b-7796-4f82-8941-f3d45e6597df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:3e:87', 'vm-uuid': '9a277afb-614b-4bf4-af98-8d08d450f263'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:36 np0005465987 NetworkManager[44910]: <info>  [1759407036.6777] manager: (tap178d0d5b-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.683 2 INFO os_vif [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:3e:87,bridge_name='br-int',has_traffic_filtering=True,id=178d0d5b-7796-4f82-8941-f3d45e6597df,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178d0d5b-77')#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.818 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.819 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.819 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No VIF found with MAC fa:16:3e:d9:3e:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.820 2 INFO nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Using config drive#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.842 2 DEBUG nova.storage.rbd_utils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 9a277afb-614b-4bf4-af98-8d08d450f263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:36 np0005465987 nova_compute[230713]: 2025-10-02 12:10:36.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:37.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:37 np0005465987 podman[250411]: 2025-10-02 12:10:37.825955585 +0000 UTC m=+0.048536084 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:10:37 np0005465987 nova_compute[230713]: 2025-10-02 12:10:37.957 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:37 np0005465987 nova_compute[230713]: 2025-10-02 12:10:37.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:37 np0005465987 nova_compute[230713]: 2025-10-02 12:10:37.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:38.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:38 np0005465987 nova_compute[230713]: 2025-10-02 12:10:38.796 2 INFO nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Creating config drive at /var/lib/nova/instances/9a277afb-614b-4bf4-af98-8d08d450f263/disk.config#033[00m
Oct  2 08:10:38 np0005465987 nova_compute[230713]: 2025-10-02 12:10:38.801 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a277afb-614b-4bf4-af98-8d08d450f263/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc8_9zhw2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:38 np0005465987 nova_compute[230713]: 2025-10-02 12:10:38.939 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a277afb-614b-4bf4-af98-8d08d450f263/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc8_9zhw2" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:38 np0005465987 nova_compute[230713]: 2025-10-02 12:10:38.968 2 DEBUG nova.storage.rbd_utils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 9a277afb-614b-4bf4-af98-8d08d450f263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:38 np0005465987 nova_compute[230713]: 2025-10-02 12:10:38.971 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a277afb-614b-4bf4-af98-8d08d450f263/disk.config 9a277afb-614b-4bf4-af98-8d08d450f263_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:39 np0005465987 nova_compute[230713]: 2025-10-02 12:10:39.397 2 DEBUG oslo_concurrency.processutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a277afb-614b-4bf4-af98-8d08d450f263/disk.config 9a277afb-614b-4bf4-af98-8d08d450f263_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:39 np0005465987 nova_compute[230713]: 2025-10-02 12:10:39.398 2 INFO nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Deleting local config drive /var/lib/nova/instances/9a277afb-614b-4bf4-af98-8d08d450f263/disk.config because it was imported into RBD.#033[00m
Oct  2 08:10:39 np0005465987 kernel: tap178d0d5b-77: entered promiscuous mode
Oct  2 08:10:39 np0005465987 NetworkManager[44910]: <info>  [1759407039.4454] manager: (tap178d0d5b-77): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Oct  2 08:10:39 np0005465987 nova_compute[230713]: 2025-10-02 12:10:39.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:39 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:39Z|00129|binding|INFO|Claiming lport 178d0d5b-7796-4f82-8941-f3d45e6597df for this chassis.
Oct  2 08:10:39 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:39Z|00130|binding|INFO|178d0d5b-7796-4f82-8941-f3d45e6597df: Claiming fa:16:3e:d9:3e:87 10.100.0.5
Oct  2 08:10:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:39.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:39 np0005465987 systemd-machined[188335]: New machine qemu-25-instance-00000030.
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.503 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:3e:87 10.100.0.5'], port_security=['fa:16:3e:d9:3e:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9a277afb-614b-4bf4-af98-8d08d450f263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d20ae21b6d4f0abfff3bccc371ee7a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a015800-2f8b-4fd4-818b-829a4dcb7912', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9678136b-02f9-4c61-b96e-15935f11dca7, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=178d0d5b-7796-4f82-8941-f3d45e6597df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.504 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 178d0d5b-7796-4f82-8941-f3d45e6597df in datapath 6d00de8e-203c-4e94-b60f-36ba9ccef805 bound to our chassis#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.505 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d00de8e-203c-4e94-b60f-36ba9ccef805#033[00m
Oct  2 08:10:39 np0005465987 nova_compute[230713]: 2025-10-02 12:10:39.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:39 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:39Z|00131|binding|INFO|Setting lport 178d0d5b-7796-4f82-8941-f3d45e6597df ovn-installed in OVS
Oct  2 08:10:39 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:39Z|00132|binding|INFO|Setting lport 178d0d5b-7796-4f82-8941-f3d45e6597df up in Southbound
Oct  2 08:10:39 np0005465987 nova_compute[230713]: 2025-10-02 12:10:39.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.518 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[75edd50b-0ddc-4a17-8377-c6f73ca723c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.518 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d00de8e-21 in ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.520 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d00de8e-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.520 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[42bc32c6-71fe-401e-bdb0-39270467da4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 systemd[1]: Started Virtual Machine qemu-25-instance-00000030.
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.521 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[faae8617-013c-44a3-baab-d1f6556608b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 systemd-udevd[250485]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.536 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[698cd694-b8e1-47dc-a169-a63552bc95f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 NetworkManager[44910]: <info>  [1759407039.5419] device (tap178d0d5b-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:10:39 np0005465987 NetworkManager[44910]: <info>  [1759407039.5435] device (tap178d0d5b-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.563 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[21f4c651-ce1f-495d-9d28-7276e4bff17f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.595 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[2b9b9487-bebb-4fc2-94a5-d51872820592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 NetworkManager[44910]: <info>  [1759407039.6026] manager: (tap6d00de8e-20): new Veth device (/org/freedesktop/NetworkManager/Devices/83)
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.602 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d1bbdc81-c281-4fa4-a007-aabd0318d28e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.636 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[246c4ba8-d2b3-4417-a431-9b9acd6cd598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.639 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[75111107-d35d-4bc2-91bb-ddb9071fff27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 NetworkManager[44910]: <info>  [1759407039.6626] device (tap6d00de8e-20): carrier: link connected
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.669 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[63ec85dd-f9d9-495c-bc03-baa6526fc036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.685 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[022d7bb2-a3cf-42ff-bdee-3645813d32bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d00de8e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:28:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510823, 'reachable_time': 19422, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250519, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.700 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d02266bb-f741-48f6-83d7-de509e5d4edb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:28f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510823, 'tstamp': 510823}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250520, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.715 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6a418956-e966-46bd-b00c-1bc0afb0118e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d00de8e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:28:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510823, 'reachable_time': 19422, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250521, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.743 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4e869ae9-e255-492c-8bc0-61dab64d5557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.794 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b48961be-9541-4488-9e38-bb8d3681d9da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.796 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d00de8e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.796 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.796 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d00de8e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:39 np0005465987 nova_compute[230713]: 2025-10-02 12:10:39.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:39 np0005465987 NetworkManager[44910]: <info>  [1759407039.7986] manager: (tap6d00de8e-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct  2 08:10:39 np0005465987 kernel: tap6d00de8e-20: entered promiscuous mode
Oct  2 08:10:39 np0005465987 nova_compute[230713]: 2025-10-02 12:10:39.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.806 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d00de8e-20, col_values=(('external_ids', {'iface-id': '4d0b2163-acbb-4b6a-b6d8-84f8212e1e02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:39 np0005465987 nova_compute[230713]: 2025-10-02 12:10:39.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:39 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:39Z|00133|binding|INFO|Releasing lport 4d0b2163-acbb-4b6a-b6d8-84f8212e1e02 from this chassis (sb_readonly=0)
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.817 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d00de8e-203c-4e94-b60f-36ba9ccef805.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d00de8e-203c-4e94-b60f-36ba9ccef805.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.819 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[314e8efa-8ed3-4f8b-a542-f476540c80b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.820 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-6d00de8e-203c-4e94-b60f-36ba9ccef805
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/6d00de8e-203c-4e94-b60f-36ba9ccef805.pid.haproxy
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 6d00de8e-203c-4e94-b60f-36ba9ccef805
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:10:39 np0005465987 nova_compute[230713]: 2025-10-02 12:10:39.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:39.823 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'env', 'PROCESS_TAG=haproxy-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d00de8e-203c-4e94-b60f-36ba9ccef805.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:10:40 np0005465987 podman[250595]: 2025-10-02 12:10:40.191860324 +0000 UTC m=+0.069059388 container create 537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:10:40 np0005465987 systemd[1]: Started libpod-conmon-537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0.scope.
Oct  2 08:10:40 np0005465987 podman[250595]: 2025-10-02 12:10:40.141211383 +0000 UTC m=+0.018410467 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:10:40 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:10:40 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74b57c06dd44c1c4aa8ad5657bfdaea266c99bee0f5053169b99d9f041ffa73/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:10:40 np0005465987 podman[250595]: 2025-10-02 12:10:40.280241973 +0000 UTC m=+0.157441037 container init 537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:10:40 np0005465987 podman[250595]: 2025-10-02 12:10:40.286132934 +0000 UTC m=+0.163331998 container start 537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:10:40 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[250610]: [NOTICE]   (250614) : New worker (250616) forked
Oct  2 08:10:40 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[250610]: [NOTICE]   (250614) : Loading success.
Oct  2 08:10:40 np0005465987 nova_compute[230713]: 2025-10-02 12:10:40.428 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407040.4277973, 9a277afb-614b-4bf4-af98-8d08d450f263 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:40 np0005465987 nova_compute[230713]: 2025-10-02 12:10:40.428 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] VM Started (Lifecycle Event)#033[00m
Oct  2 08:10:40 np0005465987 nova_compute[230713]: 2025-10-02 12:10:40.541 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:40 np0005465987 nova_compute[230713]: 2025-10-02 12:10:40.547 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407040.4289093, 9a277afb-614b-4bf4-af98-8d08d450f263 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:40 np0005465987 nova_compute[230713]: 2025-10-02 12:10:40.548 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:10:40 np0005465987 nova_compute[230713]: 2025-10-02 12:10:40.667 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:40 np0005465987 nova_compute[230713]: 2025-10-02 12:10:40.671 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:10:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:40.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:40 np0005465987 nova_compute[230713]: 2025-10-02 12:10:40.778 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:10:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e174 e174: 3 total, 3 up, 3 in
Oct  2 08:10:40 np0005465987 nova_compute[230713]: 2025-10-02 12:10:40.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.018 2 DEBUG nova.network.neutron [req-ad53f92d-0b74-4d68-b5bd-b29607dd7044 req-fdc47a89-09c8-492d-bd5b-f65c274a0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Updated VIF entry in instance network info cache for port 178d0d5b-7796-4f82-8941-f3d45e6597df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.019 2 DEBUG nova.network.neutron [req-ad53f92d-0b74-4d68-b5bd-b29607dd7044 req-fdc47a89-09c8-492d-bd5b-f65c274a0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Updating instance_info_cache with network_info: [{"id": "178d0d5b-7796-4f82-8941-f3d45e6597df", "address": "fa:16:3e:d9:3e:87", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178d0d5b-77", "ovs_interfaceid": "178d0d5b-7796-4f82-8941-f3d45e6597df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.184 2 DEBUG oslo_concurrency.lockutils [req-ad53f92d-0b74-4d68-b5bd-b29607dd7044 req-fdc47a89-09c8-492d-bd5b-f65c274a0810 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-9a277afb-614b-4bf4-af98-8d08d450f263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:10:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:41.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.739 2 DEBUG nova.compute.manager [req-f20c73da-262d-4586-9b79-36d46494a9d2 req-bcd7f40e-335c-4f4b-a697-eab9869bc00e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Received event network-vif-plugged-178d0d5b-7796-4f82-8941-f3d45e6597df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.739 2 DEBUG oslo_concurrency.lockutils [req-f20c73da-262d-4586-9b79-36d46494a9d2 req-bcd7f40e-335c-4f4b-a697-eab9869bc00e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.739 2 DEBUG oslo_concurrency.lockutils [req-f20c73da-262d-4586-9b79-36d46494a9d2 req-bcd7f40e-335c-4f4b-a697-eab9869bc00e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.739 2 DEBUG oslo_concurrency.lockutils [req-f20c73da-262d-4586-9b79-36d46494a9d2 req-bcd7f40e-335c-4f4b-a697-eab9869bc00e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.740 2 DEBUG nova.compute.manager [req-f20c73da-262d-4586-9b79-36d46494a9d2 req-bcd7f40e-335c-4f4b-a697-eab9869bc00e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Processing event network-vif-plugged-178d0d5b-7796-4f82-8941-f3d45e6597df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.740 2 DEBUG nova.compute.manager [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.743 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407041.7434332, 9a277afb-614b-4bf4-af98-8d08d450f263 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.743 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.745 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.747 2 INFO nova.virt.libvirt.driver [-] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Instance spawned successfully.#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.748 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.837 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.841 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.844 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.845 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.845 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.846 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.846 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.846 2 DEBUG nova.virt.libvirt.driver [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:41 np0005465987 nova_compute[230713]: 2025-10-02 12:10:41.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.046 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.081 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.081 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.082 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.082 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.082 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.155 2 INFO nova.compute.manager [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Took 11.02 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.156 2 DEBUG nova.compute.manager [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.229 2 INFO nova.compute.manager [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Took 12.30 seconds to build instance.#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.261 2 DEBUG oslo_concurrency.lockutils [None req-5195ffec-d136-4df0-9013-af7c23d58532 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:10:42 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2662862687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.546 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.625 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.626 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:10:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:42.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.771 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.772 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4702MB free_disk=20.8978271484375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.773 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.773 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.850 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 9a277afb-614b-4bf4-af98-8d08d450f263 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.851 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.851 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.915 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:42 np0005465987 nova_compute[230713]: 2025-10-02 12:10:42.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e175 e175: 3 total, 3 up, 3 in
Oct  2 08:10:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:10:43 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2641519561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.352 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.357 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.372 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.392 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.392 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:43.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.798 2 INFO nova.compute.manager [None req-99dcc520-2bd7-4bd5-bda0-9c69b6ac0ad7 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Pausing#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.799 2 DEBUG nova.objects.instance [None req-99dcc520-2bd7-4bd5-bda0-9c69b6ac0ad7 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'flavor' on Instance uuid 9a277afb-614b-4bf4-af98-8d08d450f263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.830 2 DEBUG nova.compute.manager [req-529dc7be-fb92-403d-97e8-bbe39e2698c4 req-928602c4-ef7c-4a05-bfe7-2f348eac2fb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Received event network-vif-plugged-178d0d5b-7796-4f82-8941-f3d45e6597df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.831 2 DEBUG oslo_concurrency.lockutils [req-529dc7be-fb92-403d-97e8-bbe39e2698c4 req-928602c4-ef7c-4a05-bfe7-2f348eac2fb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.831 2 DEBUG oslo_concurrency.lockutils [req-529dc7be-fb92-403d-97e8-bbe39e2698c4 req-928602c4-ef7c-4a05-bfe7-2f348eac2fb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.831 2 DEBUG oslo_concurrency.lockutils [req-529dc7be-fb92-403d-97e8-bbe39e2698c4 req-928602c4-ef7c-4a05-bfe7-2f348eac2fb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.832 2 DEBUG nova.compute.manager [req-529dc7be-fb92-403d-97e8-bbe39e2698c4 req-928602c4-ef7c-4a05-bfe7-2f348eac2fb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] No waiting events found dispatching network-vif-plugged-178d0d5b-7796-4f82-8941-f3d45e6597df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.832 2 WARNING nova.compute.manager [req-529dc7be-fb92-403d-97e8-bbe39e2698c4 req-928602c4-ef7c-4a05-bfe7-2f348eac2fb5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Received unexpected event network-vif-plugged-178d0d5b-7796-4f82-8941-f3d45e6597df for instance with vm_state active and task_state pausing.#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.833 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407043.8285174, 9a277afb-614b-4bf4-af98-8d08d450f263 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.833 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.836 2 DEBUG nova.compute.manager [None req-99dcc520-2bd7-4bd5-bda0-9c69b6ac0ad7 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.858 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.861 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:10:43 np0005465987 nova_compute[230713]: 2025-10-02 12:10:43.895 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct  2 08:10:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e176 e176: 3 total, 3 up, 3 in
Oct  2 08:10:44 np0005465987 nova_compute[230713]: 2025-10-02 12:10:44.392 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:44 np0005465987 nova_compute[230713]: 2025-10-02 12:10:44.393 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:44 np0005465987 nova_compute[230713]: 2025-10-02 12:10:44.393 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:10:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:10:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:44.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:10:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:45.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:45 np0005465987 podman[250671]: 2025-10-02 12:10:45.848827888 +0000 UTC m=+0.057538943 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:10:45 np0005465987 podman[250672]: 2025-10-02 12:10:45.854672218 +0000 UTC m=+0.067846394 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible)
Oct  2 08:10:45 np0005465987 podman[250670]: 2025-10-02 12:10:45.897737391 +0000 UTC m=+0.107561426 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 08:10:46 np0005465987 nova_compute[230713]: 2025-10-02 12:10:46.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:46.036 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:10:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:46.038 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:10:46 np0005465987 nova_compute[230713]: 2025-10-02 12:10:46.514 2 DEBUG nova.compute.manager [None req-a72c8b69-391d-4f77-ae9e-2eb36c6491e5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:10:46 np0005465987 nova_compute[230713]: 2025-10-02 12:10:46.554 2 INFO nova.compute.manager [None req-a72c8b69-391d-4f77-ae9e-2eb36c6491e5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] instance snapshotting
Oct  2 08:10:46 np0005465987 nova_compute[230713]: 2025-10-02 12:10:46.555 2 WARNING nova.compute.manager [None req-a72c8b69-391d-4f77-ae9e-2eb36c6491e5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] trying to snapshot a non-running instance: (state: 3 expected: 1)
Oct  2 08:10:46 np0005465987 nova_compute[230713]: 2025-10-02 12:10:46.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:46.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:46 np0005465987 nova_compute[230713]: 2025-10-02 12:10:46.806 2 INFO nova.virt.libvirt.driver [None req-a72c8b69-391d-4f77-ae9e-2eb36c6491e5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Beginning live snapshot process
Oct  2 08:10:46 np0005465987 nova_compute[230713]: 2025-10-02 12:10:46.920 2 DEBUG nova.virt.libvirt.imagebackend [None req-a72c8b69-391d-4f77-ae9e-2eb36c6491e5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct  2 08:10:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:47 np0005465987 nova_compute[230713]: 2025-10-02 12:10:47.213 2 DEBUG nova.storage.rbd_utils [None req-a72c8b69-391d-4f77-ae9e-2eb36c6491e5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] creating snapshot(9d8ef47bf3fc45ee8f682fe8978d3289) on rbd image(9a277afb-614b-4bf4-af98-8d08d450f263_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  2 08:10:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:47.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e177 e177: 3 total, 3 up, 3 in
Oct  2 08:10:48 np0005465987 nova_compute[230713]: 2025-10-02 12:10:48.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:48.040 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:10:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:48.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:49 np0005465987 nova_compute[230713]: 2025-10-02 12:10:49.017 2 DEBUG nova.storage.rbd_utils [None req-a72c8b69-391d-4f77-ae9e-2eb36c6491e5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] cloning vms/9a277afb-614b-4bf4-af98-8d08d450f263_disk@9d8ef47bf3fc45ee8f682fe8978d3289 to images/6e6f7317-b757-4c70-b900-2266125b88d4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct  2 08:10:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e178 e178: 3 total, 3 up, 3 in
Oct  2 08:10:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:49.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:49 np0005465987 nova_compute[230713]: 2025-10-02 12:10:49.685 2 DEBUG nova.storage.rbd_utils [None req-a72c8b69-391d-4f77-ae9e-2eb36c6491e5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] flattening images/6e6f7317-b757-4c70-b900-2266125b88d4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct  2 08:10:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:50.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:51.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:51 np0005465987 nova_compute[230713]: 2025-10-02 12:10:51.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:51 np0005465987 nova_compute[230713]: 2025-10-02 12:10:51.854 2 DEBUG nova.storage.rbd_utils [None req-a72c8b69-391d-4f77-ae9e-2eb36c6491e5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] removing snapshot(9d8ef47bf3fc45ee8f682fe8978d3289) on rbd image(9a277afb-614b-4bf4-af98-8d08d450f263_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct  2 08:10:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:10:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:52.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:10:53 np0005465987 nova_compute[230713]: 2025-10-02 12:10:53.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e179 e179: 3 total, 3 up, 3 in
Oct  2 08:10:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:53.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:53 np0005465987 nova_compute[230713]: 2025-10-02 12:10:53.528 2 DEBUG nova.storage.rbd_utils [None req-a72c8b69-391d-4f77-ae9e-2eb36c6491e5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] creating snapshot(snap) on rbd image(6e6f7317-b757-4c70-b900-2266125b88d4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  2 08:10:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:54.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e180 e180: 3 total, 3 up, 3 in
Oct  2 08:10:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:10:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:55.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:10:56 np0005465987 nova_compute[230713]: 2025-10-02 12:10:56.460 2 INFO nova.virt.libvirt.driver [None req-a72c8b69-391d-4f77-ae9e-2eb36c6491e5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Snapshot image upload complete
Oct  2 08:10:56 np0005465987 nova_compute[230713]: 2025-10-02 12:10:56.460 2 INFO nova.compute.manager [None req-a72c8b69-391d-4f77-ae9e-2eb36c6491e5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Took 9.90 seconds to snapshot the instance on the hypervisor.
Oct  2 08:10:56 np0005465987 nova_compute[230713]: 2025-10-02 12:10:56.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:56.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:57.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:58 np0005465987 nova_compute[230713]: 2025-10-02 12:10:58.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e181 e181: 3 total, 3 up, 3 in
Oct  2 08:10:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e182 e182: 3 total, 3 up, 3 in
Oct  2 08:10:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:10:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:58.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:10:58 np0005465987 nova_compute[230713]: 2025-10-02 12:10:58.788 2 DEBUG oslo_concurrency.lockutils [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "9a277afb-614b-4bf4-af98-8d08d450f263" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:58 np0005465987 nova_compute[230713]: 2025-10-02 12:10:58.789 2 DEBUG oslo_concurrency.lockutils [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:58 np0005465987 nova_compute[230713]: 2025-10-02 12:10:58.789 2 DEBUG oslo_concurrency.lockutils [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:58 np0005465987 nova_compute[230713]: 2025-10-02 12:10:58.789 2 DEBUG oslo_concurrency.lockutils [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:58 np0005465987 nova_compute[230713]: 2025-10-02 12:10:58.789 2 DEBUG oslo_concurrency.lockutils [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:58 np0005465987 nova_compute[230713]: 2025-10-02 12:10:58.790 2 INFO nova.compute.manager [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Terminating instance
Oct  2 08:10:58 np0005465987 nova_compute[230713]: 2025-10-02 12:10:58.791 2 DEBUG nova.compute.manager [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:10:58 np0005465987 kernel: tap178d0d5b-77 (unregistering): left promiscuous mode
Oct  2 08:10:58 np0005465987 NetworkManager[44910]: <info>  [1759407058.8389] device (tap178d0d5b-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:10:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:58Z|00134|binding|INFO|Releasing lport 178d0d5b-7796-4f82-8941-f3d45e6597df from this chassis (sb_readonly=0)
Oct  2 08:10:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:58Z|00135|binding|INFO|Setting lport 178d0d5b-7796-4f82-8941-f3d45e6597df down in Southbound
Oct  2 08:10:58 np0005465987 nova_compute[230713]: 2025-10-02 12:10:58.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:10:58Z|00136|binding|INFO|Removing iface tap178d0d5b-77 ovn-installed in OVS
Oct  2 08:10:58 np0005465987 nova_compute[230713]: 2025-10-02 12:10:58.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:58.866 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:3e:87 10.100.0.5'], port_security=['fa:16:3e:d9:3e:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9a277afb-614b-4bf4-af98-8d08d450f263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d20ae21b6d4f0abfff3bccc371ee7a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a015800-2f8b-4fd4-818b-829a4dcb7912', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9678136b-02f9-4c61-b96e-15935f11dca7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=178d0d5b-7796-4f82-8941-f3d45e6597df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:10:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:58.869 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 178d0d5b-7796-4f82-8941-f3d45e6597df in datapath 6d00de8e-203c-4e94-b60f-36ba9ccef805 unbound from our chassis
Oct  2 08:10:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:58.871 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d00de8e-203c-4e94-b60f-36ba9ccef805, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 08:10:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:58.873 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[705e82d9-1747-4389-8afe-0b85b249684f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:10:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:58.874 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 namespace which is not needed anymore
Oct  2 08:10:58 np0005465987 nova_compute[230713]: 2025-10-02 12:10:58.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:58 np0005465987 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000030.scope: Deactivated successfully.
Oct  2 08:10:58 np0005465987 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000030.scope: Consumed 3.069s CPU time.
Oct  2 08:10:58 np0005465987 systemd-machined[188335]: Machine qemu-25-instance-00000030 terminated.
Oct  2 08:10:59 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[250610]: [NOTICE]   (250614) : haproxy version is 2.8.14-c23fe91
Oct  2 08:10:59 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[250610]: [NOTICE]   (250614) : path to executable is /usr/sbin/haproxy
Oct  2 08:10:59 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[250610]: [WARNING]  (250614) : Exiting Master process...
Oct  2 08:10:59 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[250610]: [ALERT]    (250614) : Current worker (250616) exited with code 143 (Terminated)
Oct  2 08:10:59 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[250610]: [WARNING]  (250614) : All workers exited. Exiting... (0)
Oct  2 08:10:59 np0005465987 systemd[1]: libpod-537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0.scope: Deactivated successfully.
Oct  2 08:10:59 np0005465987 podman[250900]: 2025-10-02 12:10:59.036688538 +0000 UTC m=+0.052828012 container died 537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.036 2 INFO nova.virt.libvirt.driver [-] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Instance destroyed successfully.
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.036 2 DEBUG nova.objects.instance [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'resources' on Instance uuid 9a277afb-614b-4bf4-af98-8d08d450f263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.052 2 DEBUG nova.virt.libvirt.vif [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:10:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-701902188',display_name='tempest-ImagesTestJSON-server-701902188',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-701902188',id=48,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:10:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='55d20ae21b6d4f0abfff3bccc371ee7a',ramdisk_id='',reservation_id='r-tlseua0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-2116266493',owner_user_name='tempest-ImagesTestJSON-2116266493-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:10:56Z,user_data=None,user_id='0df47040f1ff4ce69a6fbdfd9eba4955',uuid=9a277afb-614b-4bf4-af98-8d08d450f263,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "178d0d5b-7796-4f82-8941-f3d45e6597df", "address": "fa:16:3e:d9:3e:87", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178d0d5b-77", "ovs_interfaceid": "178d0d5b-7796-4f82-8941-f3d45e6597df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.053 2 DEBUG nova.network.os_vif_util [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converting VIF {"id": "178d0d5b-7796-4f82-8941-f3d45e6597df", "address": "fa:16:3e:d9:3e:87", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap178d0d5b-77", "ovs_interfaceid": "178d0d5b-7796-4f82-8941-f3d45e6597df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.054 2 DEBUG nova.network.os_vif_util [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:3e:87,bridge_name='br-int',has_traffic_filtering=True,id=178d0d5b-7796-4f82-8941-f3d45e6597df,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178d0d5b-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.054 2 DEBUG os_vif [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:3e:87,bridge_name='br-int',has_traffic_filtering=True,id=178d0d5b-7796-4f82-8941-f3d45e6597df,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178d0d5b-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.057 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap178d0d5b-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.063 2 INFO os_vif [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:3e:87,bridge_name='br-int',has_traffic_filtering=True,id=178d0d5b-7796-4f82-8941-f3d45e6597df,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap178d0d5b-77')#033[00m
Oct  2 08:10:59 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0-userdata-shm.mount: Deactivated successfully.
Oct  2 08:10:59 np0005465987 systemd[1]: var-lib-containers-storage-overlay-a74b57c06dd44c1c4aa8ad5657bfdaea266c99bee0f5053169b99d9f041ffa73-merged.mount: Deactivated successfully.
Oct  2 08:10:59 np0005465987 podman[250900]: 2025-10-02 12:10:59.078375223 +0000 UTC m=+0.094514717 container cleanup 537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:10:59 np0005465987 systemd[1]: libpod-conmon-537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0.scope: Deactivated successfully.
Oct  2 08:10:59 np0005465987 podman[250956]: 2025-10-02 12:10:59.146028272 +0000 UTC m=+0.044711210 container remove 537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:10:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:59.152 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3f857625-7c49-4eda-8faf-ae57632e99a0]: (4, ('Thu Oct  2 12:10:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 (537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0)\n537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0\nThu Oct  2 12:10:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 (537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0)\n537f3c9cc9a3caa7401a1059756c7bf91b1f8ca83e4243e68c4415cbe42b25c0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:59.153 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b93201ab-af24-41c1-beb3-49026b31df31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:59.154 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d00de8e-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:59 np0005465987 kernel: tap6d00de8e-20: left promiscuous mode
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:59.187 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[eb58bc80-2353-461d-80f6-a9b8f8106030]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:59.213 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[392b95eb-cdce-47c8-b82d-1bde2cbe3935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:59.214 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b913f492-1cfa-40ee-b4e0-d18ae539a537]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:59.234 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c6baad-7a4f-4d6d-82f4-63488ac4d86d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510816, 'reachable_time': 37985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250971, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:59.237 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:10:59 np0005465987 systemd[1]: run-netns-ovnmeta\x2d6d00de8e\x2d203c\x2d4e94\x2db60f\x2d36ba9ccef805.mount: Deactivated successfully.
Oct  2 08:10:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:10:59.237 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[a12317d0-5a7d-49ed-99cc-5c9f6a5eb897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:10:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:59.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.502 2 INFO nova.virt.libvirt.driver [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Deleting instance files /var/lib/nova/instances/9a277afb-614b-4bf4-af98-8d08d450f263_del#033[00m
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.503 2 INFO nova.virt.libvirt.driver [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Deletion of /var/lib/nova/instances/9a277afb-614b-4bf4-af98-8d08d450f263_del complete#033[00m
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.583 2 INFO nova.compute.manager [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.583 2 DEBUG oslo.service.loopingcall [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.584 2 DEBUG nova.compute.manager [-] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:10:59 np0005465987 nova_compute[230713]: 2025-10-02 12:10:59.584 2 DEBUG nova.network.neutron [-] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.189 2 DEBUG nova.network.neutron [-] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.213 2 INFO nova.compute.manager [-] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Took 0.63 seconds to deallocate network for instance.#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.257 2 DEBUG oslo_concurrency.lockutils [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.258 2 DEBUG oslo_concurrency.lockutils [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.289 2 DEBUG nova.compute.manager [req-b9b3b27a-600e-4cdf-89ac-ae2046edafda req-84225081-a562-4c7d-9a14-02f9627a4957 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Received event network-vif-deleted-178d0d5b-7796-4f82-8941-f3d45e6597df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.316 2 DEBUG oslo_concurrency.processutils [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:00.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:00 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4058958109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.731 2 DEBUG oslo_concurrency.processutils [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.736 2 DEBUG nova.compute.provider_tree [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.753 2 DEBUG nova.scheduler.client.report [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.773 2 DEBUG oslo_concurrency.lockutils [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.807 2 INFO nova.scheduler.client.report [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Deleted allocations for instance 9a277afb-614b-4bf4-af98-8d08d450f263#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.921 2 DEBUG oslo_concurrency.lockutils [None req-529c3369-3454-4247-8664-44924ae85a07 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.928 2 DEBUG nova.compute.manager [req-35f30b2b-afec-4de1-a664-92fccdf097dc req-7ef061ad-41dd-4d9d-b74b-e7bb61fa11f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Received event network-vif-unplugged-178d0d5b-7796-4f82-8941-f3d45e6597df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.928 2 DEBUG oslo_concurrency.lockutils [req-35f30b2b-afec-4de1-a664-92fccdf097dc req-7ef061ad-41dd-4d9d-b74b-e7bb61fa11f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.929 2 DEBUG oslo_concurrency.lockutils [req-35f30b2b-afec-4de1-a664-92fccdf097dc req-7ef061ad-41dd-4d9d-b74b-e7bb61fa11f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.929 2 DEBUG oslo_concurrency.lockutils [req-35f30b2b-afec-4de1-a664-92fccdf097dc req-7ef061ad-41dd-4d9d-b74b-e7bb61fa11f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.929 2 DEBUG nova.compute.manager [req-35f30b2b-afec-4de1-a664-92fccdf097dc req-7ef061ad-41dd-4d9d-b74b-e7bb61fa11f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] No waiting events found dispatching network-vif-unplugged-178d0d5b-7796-4f82-8941-f3d45e6597df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.929 2 WARNING nova.compute.manager [req-35f30b2b-afec-4de1-a664-92fccdf097dc req-7ef061ad-41dd-4d9d-b74b-e7bb61fa11f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Received unexpected event network-vif-unplugged-178d0d5b-7796-4f82-8941-f3d45e6597df for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.929 2 DEBUG nova.compute.manager [req-35f30b2b-afec-4de1-a664-92fccdf097dc req-7ef061ad-41dd-4d9d-b74b-e7bb61fa11f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Received event network-vif-plugged-178d0d5b-7796-4f82-8941-f3d45e6597df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.930 2 DEBUG oslo_concurrency.lockutils [req-35f30b2b-afec-4de1-a664-92fccdf097dc req-7ef061ad-41dd-4d9d-b74b-e7bb61fa11f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.930 2 DEBUG oslo_concurrency.lockutils [req-35f30b2b-afec-4de1-a664-92fccdf097dc req-7ef061ad-41dd-4d9d-b74b-e7bb61fa11f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.930 2 DEBUG oslo_concurrency.lockutils [req-35f30b2b-afec-4de1-a664-92fccdf097dc req-7ef061ad-41dd-4d9d-b74b-e7bb61fa11f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "9a277afb-614b-4bf4-af98-8d08d450f263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.930 2 DEBUG nova.compute.manager [req-35f30b2b-afec-4de1-a664-92fccdf097dc req-7ef061ad-41dd-4d9d-b74b-e7bb61fa11f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] No waiting events found dispatching network-vif-plugged-178d0d5b-7796-4f82-8941-f3d45e6597df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:11:00 np0005465987 nova_compute[230713]: 2025-10-02 12:11:00.930 2 WARNING nova.compute.manager [req-35f30b2b-afec-4de1-a664-92fccdf097dc req-7ef061ad-41dd-4d9d-b74b-e7bb61fa11f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Received unexpected event network-vif-plugged-178d0d5b-7796-4f82-8941-f3d45e6597df for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:11:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:01.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:02 np0005465987 nova_compute[230713]: 2025-10-02 12:11:02.263 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "6332947f-08a8-4242-8a69-f3d57b8112e7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:02 np0005465987 nova_compute[230713]: 2025-10-02 12:11:02.263 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:02 np0005465987 nova_compute[230713]: 2025-10-02 12:11:02.285 2 DEBUG nova.compute.manager [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:11:02 np0005465987 nova_compute[230713]: 2025-10-02 12:11:02.400 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:02 np0005465987 nova_compute[230713]: 2025-10-02 12:11:02.401 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:02 np0005465987 nova_compute[230713]: 2025-10-02 12:11:02.405 2 DEBUG nova.virt.hardware [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:11:02 np0005465987 nova_compute[230713]: 2025-10-02 12:11:02.406 2 INFO nova.compute.claims [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:11:02 np0005465987 nova_compute[230713]: 2025-10-02 12:11:02.524 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:02.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:02 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2952046040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.005 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.012 2 DEBUG nova.compute.provider_tree [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.033 2 DEBUG nova.scheduler.client.report [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.120 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.121 2 DEBUG nova.compute.manager [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.171 2 DEBUG nova.compute.manager [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.172 2 DEBUG nova.network.neutron [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.188 2 INFO nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.202 2 DEBUG nova.compute.manager [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:11:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e183 e183: 3 total, 3 up, 3 in
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.318 2 DEBUG nova.compute.manager [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.321 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.321 2 INFO nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Creating image(s)#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.348 2 DEBUG nova.storage.rbd_utils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 6332947f-08a8-4242-8a69-f3d57b8112e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.378 2 DEBUG nova.storage.rbd_utils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 6332947f-08a8-4242-8a69-f3d57b8112e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.409 2 DEBUG nova.storage.rbd_utils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 6332947f-08a8-4242-8a69-f3d57b8112e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.413 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.475 2 DEBUG nova.policy [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0df47040f1ff4ce69a6fbdfd9eba4955', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55d20ae21b6d4f0abfff3bccc371ee7a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.477 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.477 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.478 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.478 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:03.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.500 2 DEBUG nova.storage.rbd_utils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 6332947f-08a8-4242-8a69-f3d57b8112e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.504 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6332947f-08a8-4242-8a69-f3d57b8112e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.768 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6332947f-08a8-4242-8a69-f3d57b8112e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.843 2 DEBUG nova.storage.rbd_utils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] resizing rbd image 6332947f-08a8-4242-8a69-f3d57b8112e7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.964 2 DEBUG nova.objects.instance [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'migration_context' on Instance uuid 6332947f-08a8-4242-8a69-f3d57b8112e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.992 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.993 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Ensure instance console log exists: /var/lib/nova/instances/6332947f-08a8-4242-8a69-f3d57b8112e7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.993 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.993 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:03 np0005465987 nova_compute[230713]: 2025-10-02 12:11:03.994 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:04 np0005465987 nova_compute[230713]: 2025-10-02 12:11:04.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:04 np0005465987 nova_compute[230713]: 2025-10-02 12:11:04.238 2 DEBUG nova.network.neutron [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Successfully created port: 8b314691-de9b-481a-94a9-4a7da28dc808 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:11:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:04.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:05 np0005465987 nova_compute[230713]: 2025-10-02 12:11:05.205 2 DEBUG nova.network.neutron [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Successfully updated port: 8b314691-de9b-481a-94a9-4a7da28dc808 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:11:05 np0005465987 nova_compute[230713]: 2025-10-02 12:11:05.221 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "refresh_cache-6332947f-08a8-4242-8a69-f3d57b8112e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:11:05 np0005465987 nova_compute[230713]: 2025-10-02 12:11:05.221 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquired lock "refresh_cache-6332947f-08a8-4242-8a69-f3d57b8112e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:11:05 np0005465987 nova_compute[230713]: 2025-10-02 12:11:05.222 2 DEBUG nova.network.neutron [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:11:05 np0005465987 nova_compute[230713]: 2025-10-02 12:11:05.322 2 DEBUG nova.compute.manager [req-d3fcb7d8-3105-4aa6-afb5-0bdd295e8e92 req-f8d32b13-d777-4bb1-9e6d-6d02589246b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Received event network-changed-8b314691-de9b-481a-94a9-4a7da28dc808 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:11:05 np0005465987 nova_compute[230713]: 2025-10-02 12:11:05.323 2 DEBUG nova.compute.manager [req-d3fcb7d8-3105-4aa6-afb5-0bdd295e8e92 req-f8d32b13-d777-4bb1-9e6d-6d02589246b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Refreshing instance network info cache due to event network-changed-8b314691-de9b-481a-94a9-4a7da28dc808. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:11:05 np0005465987 nova_compute[230713]: 2025-10-02 12:11:05.324 2 DEBUG oslo_concurrency.lockutils [req-d3fcb7d8-3105-4aa6-afb5-0bdd295e8e92 req-f8d32b13-d777-4bb1-9e6d-6d02589246b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6332947f-08a8-4242-8a69-f3d57b8112e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:11:05 np0005465987 nova_compute[230713]: 2025-10-02 12:11:05.437 2 DEBUG nova.network.neutron [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:11:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:05.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.449 2 DEBUG nova.network.neutron [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Updating instance_info_cache with network_info: [{"id": "8b314691-de9b-481a-94a9-4a7da28dc808", "address": "fa:16:3e:63:5f:d2", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b314691-de", "ovs_interfaceid": "8b314691-de9b-481a-94a9-4a7da28dc808", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.471 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Releasing lock "refresh_cache-6332947f-08a8-4242-8a69-f3d57b8112e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.471 2 DEBUG nova.compute.manager [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Instance network_info: |[{"id": "8b314691-de9b-481a-94a9-4a7da28dc808", "address": "fa:16:3e:63:5f:d2", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b314691-de", "ovs_interfaceid": "8b314691-de9b-481a-94a9-4a7da28dc808", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.472 2 DEBUG oslo_concurrency.lockutils [req-d3fcb7d8-3105-4aa6-afb5-0bdd295e8e92 req-f8d32b13-d777-4bb1-9e6d-6d02589246b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6332947f-08a8-4242-8a69-f3d57b8112e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.472 2 DEBUG nova.network.neutron [req-d3fcb7d8-3105-4aa6-afb5-0bdd295e8e92 req-f8d32b13-d777-4bb1-9e6d-6d02589246b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Refreshing network info cache for port 8b314691-de9b-481a-94a9-4a7da28dc808 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.474 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Start _get_guest_xml network_info=[{"id": "8b314691-de9b-481a-94a9-4a7da28dc808", "address": "fa:16:3e:63:5f:d2", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b314691-de", "ovs_interfaceid": "8b314691-de9b-481a-94a9-4a7da28dc808", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.479 2 WARNING nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.484 2 DEBUG nova.virt.libvirt.host [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.485 2 DEBUG nova.virt.libvirt.host [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.493 2 DEBUG nova.virt.libvirt.host [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.494 2 DEBUG nova.virt.libvirt.host [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.495 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.495 2 DEBUG nova.virt.hardware [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.496 2 DEBUG nova.virt.hardware [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.496 2 DEBUG nova.virt.hardware [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.496 2 DEBUG nova.virt.hardware [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.497 2 DEBUG nova.virt.hardware [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.497 2 DEBUG nova.virt.hardware [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.497 2 DEBUG nova.virt.hardware [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.498 2 DEBUG nova.virt.hardware [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.498 2 DEBUG nova.virt.hardware [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.498 2 DEBUG nova.virt.hardware [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.498 2 DEBUG nova.virt.hardware [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.501 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:06.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:11:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/612810927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.937 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.979 2 DEBUG nova.storage.rbd_utils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 6332947f-08a8-4242-8a69-f3d57b8112e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:06 np0005465987 nova_compute[230713]: 2025-10-02 12:11:06.985 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:11:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1638063544' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.427 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.429 2 DEBUG nova.virt.libvirt.vif [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:11:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1472700175',display_name='tempest-ImagesTestJSON-server-1472700175',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1472700175',id=49,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d20ae21b6d4f0abfff3bccc371ee7a',ramdisk_id='',reservation_id='r-da6ns1zn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2116266493',owner_user_name='tempest-ImagesTestJSON-2116266493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:11:03Z,user_data=None,user_id='0df47040f1ff4ce69a6fbdfd9eba4955',uuid=6332947f-08a8-4242-8a69-f3d57b8112e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b314691-de9b-481a-94a9-4a7da28dc808", "address": "fa:16:3e:63:5f:d2", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b314691-de", "ovs_interfaceid": "8b314691-de9b-481a-94a9-4a7da28dc808", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.429 2 DEBUG nova.network.os_vif_util [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converting VIF {"id": "8b314691-de9b-481a-94a9-4a7da28dc808", "address": "fa:16:3e:63:5f:d2", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b314691-de", "ovs_interfaceid": "8b314691-de9b-481a-94a9-4a7da28dc808", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.430 2 DEBUG nova.network.os_vif_util [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:5f:d2,bridge_name='br-int',has_traffic_filtering=True,id=8b314691-de9b-481a-94a9-4a7da28dc808,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b314691-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.431 2 DEBUG nova.objects.instance [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'pci_devices' on Instance uuid 6332947f-08a8-4242-8a69-f3d57b8112e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.449 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  <uuid>6332947f-08a8-4242-8a69-f3d57b8112e7</uuid>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  <name>instance-00000031</name>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <nova:name>tempest-ImagesTestJSON-server-1472700175</nova:name>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:11:06</nova:creationTime>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <nova:user uuid="0df47040f1ff4ce69a6fbdfd9eba4955">tempest-ImagesTestJSON-2116266493-project-member</nova:user>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <nova:project uuid="55d20ae21b6d4f0abfff3bccc371ee7a">tempest-ImagesTestJSON-2116266493</nova:project>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <nova:port uuid="8b314691-de9b-481a-94a9-4a7da28dc808">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <entry name="serial">6332947f-08a8-4242-8a69-f3d57b8112e7</entry>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <entry name="uuid">6332947f-08a8-4242-8a69-f3d57b8112e7</entry>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6332947f-08a8-4242-8a69-f3d57b8112e7_disk">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6332947f-08a8-4242-8a69-f3d57b8112e7_disk.config">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:63:5f:d2"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <target dev="tap8b314691-de"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/6332947f-08a8-4242-8a69-f3d57b8112e7/console.log" append="off"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:11:07 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:11:07 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:11:07 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:11:07 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.451 2 DEBUG nova.compute.manager [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Preparing to wait for external event network-vif-plugged-8b314691-de9b-481a-94a9-4a7da28dc808 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.451 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.451 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.452 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.452 2 DEBUG nova.virt.libvirt.vif [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:11:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1472700175',display_name='tempest-ImagesTestJSON-server-1472700175',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1472700175',id=49,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d20ae21b6d4f0abfff3bccc371ee7a',ramdisk_id='',reservation_id='r-da6ns1zn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2116266493',owner_user_name='tempest-ImagesTestJSON-2116266493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:11:03Z,user_data=None,user_id='0df47040f1ff4ce69a6fbdfd9eba4955',uuid=6332947f-08a8-4242-8a69-f3d57b8112e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b314691-de9b-481a-94a9-4a7da28dc808", "address": "fa:16:3e:63:5f:d2", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b314691-de", "ovs_interfaceid": "8b314691-de9b-481a-94a9-4a7da28dc808", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.453 2 DEBUG nova.network.os_vif_util [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converting VIF {"id": "8b314691-de9b-481a-94a9-4a7da28dc808", "address": "fa:16:3e:63:5f:d2", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b314691-de", "ovs_interfaceid": "8b314691-de9b-481a-94a9-4a7da28dc808", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.453 2 DEBUG nova.network.os_vif_util [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:5f:d2,bridge_name='br-int',has_traffic_filtering=True,id=8b314691-de9b-481a-94a9-4a7da28dc808,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b314691-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.453 2 DEBUG os_vif [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:5f:d2,bridge_name='br-int',has_traffic_filtering=True,id=8b314691-de9b-481a-94a9-4a7da28dc808,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b314691-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.454 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.455 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.457 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b314691-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.457 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b314691-de, col_values=(('external_ids', {'iface-id': '8b314691-de9b-481a-94a9-4a7da28dc808', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:5f:d2', 'vm-uuid': '6332947f-08a8-4242-8a69-f3d57b8112e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:07 np0005465987 NetworkManager[44910]: <info>  [1759407067.4597] manager: (tap8b314691-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.464 2 INFO os_vif [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:5f:d2,bridge_name='br-int',has_traffic_filtering=True,id=8b314691-de9b-481a-94a9-4a7da28dc808,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b314691-de')#033[00m
Oct  2 08:11:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:07.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.515 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.515 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.515 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No VIF found with MAC fa:16:3e:63:5f:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.516 2 INFO nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Using config drive#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.546 2 DEBUG nova.storage.rbd_utils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 6332947f-08a8-4242-8a69-f3d57b8112e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.986 2 INFO nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Creating config drive at /var/lib/nova/instances/6332947f-08a8-4242-8a69-f3d57b8112e7/disk.config#033[00m
Oct  2 08:11:07 np0005465987 nova_compute[230713]: 2025-10-02 12:11:07.992 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6332947f-08a8-4242-8a69-f3d57b8112e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpagswxqjq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.120 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6332947f-08a8-4242-8a69-f3d57b8112e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpagswxqjq" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.151 2 DEBUG nova.storage.rbd_utils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 6332947f-08a8-4242-8a69-f3d57b8112e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.155 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6332947f-08a8-4242-8a69-f3d57b8112e7/disk.config 6332947f-08a8-4242-8a69-f3d57b8112e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.182 2 DEBUG nova.network.neutron [req-d3fcb7d8-3105-4aa6-afb5-0bdd295e8e92 req-f8d32b13-d777-4bb1-9e6d-6d02589246b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Updated VIF entry in instance network info cache for port 8b314691-de9b-481a-94a9-4a7da28dc808. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.183 2 DEBUG nova.network.neutron [req-d3fcb7d8-3105-4aa6-afb5-0bdd295e8e92 req-f8d32b13-d777-4bb1-9e6d-6d02589246b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Updating instance_info_cache with network_info: [{"id": "8b314691-de9b-481a-94a9-4a7da28dc808", "address": "fa:16:3e:63:5f:d2", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b314691-de", "ovs_interfaceid": "8b314691-de9b-481a-94a9-4a7da28dc808", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.202 2 DEBUG oslo_concurrency.lockutils [req-d3fcb7d8-3105-4aa6-afb5-0bdd295e8e92 req-f8d32b13-d777-4bb1-9e6d-6d02589246b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6332947f-08a8-4242-8a69-f3d57b8112e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:11:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e184 e184: 3 total, 3 up, 3 in
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.342 2 DEBUG oslo_concurrency.processutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6332947f-08a8-4242-8a69-f3d57b8112e7/disk.config 6332947f-08a8-4242-8a69-f3d57b8112e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.342 2 INFO nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Deleting local config drive /var/lib/nova/instances/6332947f-08a8-4242-8a69-f3d57b8112e7/disk.config because it was imported into RBD.#033[00m
Oct  2 08:11:08 np0005465987 kernel: tap8b314691-de: entered promiscuous mode
Oct  2 08:11:08 np0005465987 NetworkManager[44910]: <info>  [1759407068.3940] manager: (tap8b314691-de): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Oct  2 08:11:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:11:08Z|00137|binding|INFO|Claiming lport 8b314691-de9b-481a-94a9-4a7da28dc808 for this chassis.
Oct  2 08:11:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:11:08Z|00138|binding|INFO|8b314691-de9b-481a-94a9-4a7da28dc808: Claiming fa:16:3e:63:5f:d2 10.100.0.9
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.403 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:5f:d2 10.100.0.9'], port_security=['fa:16:3e:63:5f:d2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6332947f-08a8-4242-8a69-f3d57b8112e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d20ae21b6d4f0abfff3bccc371ee7a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a015800-2f8b-4fd4-818b-829a4dcb7912', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9678136b-02f9-4c61-b96e-15935f11dca7, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=8b314691-de9b-481a-94a9-4a7da28dc808) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.404 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 8b314691-de9b-481a-94a9-4a7da28dc808 in datapath 6d00de8e-203c-4e94-b60f-36ba9ccef805 bound to our chassis#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.405 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d00de8e-203c-4e94-b60f-36ba9ccef805#033[00m
Oct  2 08:11:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:11:08Z|00139|binding|INFO|Setting lport 8b314691-de9b-481a-94a9-4a7da28dc808 ovn-installed in OVS
Oct  2 08:11:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:11:08Z|00140|binding|INFO|Setting lport 8b314691-de9b-481a-94a9-4a7da28dc808 up in Southbound
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.416 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a86454a2-0185-49d8-a559-287a1552da14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.417 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d00de8e-21 in ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.418 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d00de8e-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.418 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a6cd71ce-0092-4c64-9c0a-f0b76d4e0b3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.420 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5113f579-2700-438b-ac5d-f6b110c2d3d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.430 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[68edc433-5fb5-4c31-b14f-fd05a18a9545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 systemd-machined[188335]: New machine qemu-26-instance-00000031.
Oct  2 08:11:08 np0005465987 systemd[1]: Started Virtual Machine qemu-26-instance-00000031.
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.455 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd0d8ef-bfd4-4925-b771-3109d68ceeda]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 systemd-udevd[251337]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:11:08 np0005465987 NetworkManager[44910]: <info>  [1759407068.4845] device (tap8b314691-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:11:08 np0005465987 NetworkManager[44910]: <info>  [1759407068.4853] device (tap8b314691-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.489 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5debfa4b-7292-4e39-ab77-ce9215a5baf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 NetworkManager[44910]: <info>  [1759407068.4949] manager: (tap6d00de8e-20): new Veth device (/org/freedesktop/NetworkManager/Devices/87)
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.494 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6083ff3f-65c6-465e-a953-bee2aa0d24f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 systemd-udevd[251345]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:11:08 np0005465987 podman[251316]: 2025-10-02 12:11:08.499045768 +0000 UTC m=+0.067702670 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.536 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1e852bdb-533b-497b-bf8c-3ab1e2d67a5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.539 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[f4736a65-ce08-481b-9d25-b7bc5be7baa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 NetworkManager[44910]: <info>  [1759407068.5602] device (tap6d00de8e-20): carrier: link connected
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.566 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5b2a4f-2198-4e94-bd7c-f10e35d04ed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.583 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[03068553-aaca-47d9-bda6-3c456fe5b1c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d00de8e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:28:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513713, 'reachable_time': 23990, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251371, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.599 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[61cb0c25-f4ab-44ab-9893-0e03bf267fd8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:28f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 513713, 'tstamp': 513713}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251372, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.616 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cc014fa8-c576-4525-9c28-4510e641961a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d00de8e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:28:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513713, 'reachable_time': 23990, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251373, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.647 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[58495b65-e75b-423e-a383-943b4da0b491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.708 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a96c830e-2994-4570-acfc-a9c6e9324dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.710 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d00de8e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.710 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.710 2 DEBUG nova.compute.manager [req-0e7813fa-be52-45f6-a3ee-121515def434 req-0f0274f0-5d79-4c43-9c3a-71dc5e57a3d6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Received event network-vif-plugged-8b314691-de9b-481a-94a9-4a7da28dc808 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.710 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d00de8e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.711 2 DEBUG oslo_concurrency.lockutils [req-0e7813fa-be52-45f6-a3ee-121515def434 req-0f0274f0-5d79-4c43-9c3a-71dc5e57a3d6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.711 2 DEBUG oslo_concurrency.lockutils [req-0e7813fa-be52-45f6-a3ee-121515def434 req-0f0274f0-5d79-4c43-9c3a-71dc5e57a3d6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.711 2 DEBUG oslo_concurrency.lockutils [req-0e7813fa-be52-45f6-a3ee-121515def434 req-0f0274f0-5d79-4c43-9c3a-71dc5e57a3d6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.711 2 DEBUG nova.compute.manager [req-0e7813fa-be52-45f6-a3ee-121515def434 req-0f0274f0-5d79-4c43-9c3a-71dc5e57a3d6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Processing event network-vif-plugged-8b314691-de9b-481a-94a9-4a7da28dc808 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:08 np0005465987 kernel: tap6d00de8e-20: entered promiscuous mode
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:08 np0005465987 NetworkManager[44910]: <info>  [1759407068.7149] manager: (tap6d00de8e-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.717 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d00de8e-20, col_values=(('external_ids', {'iface-id': '4d0b2163-acbb-4b6a-b6d8-84f8212e1e02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:11:08Z|00141|binding|INFO|Releasing lport 4d0b2163-acbb-4b6a-b6d8-84f8212e1e02 from this chassis (sb_readonly=0)
Oct  2 08:11:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:08.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:08 np0005465987 nova_compute[230713]: 2025-10-02 12:11:08.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.738 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d00de8e-203c-4e94-b60f-36ba9ccef805.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d00de8e-203c-4e94-b60f-36ba9ccef805.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.739 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[74bb2303-aa7c-4ad9-ad3a-f603ed7388cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.740 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-6d00de8e-203c-4e94-b60f-36ba9ccef805
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/6d00de8e-203c-4e94-b60f-36ba9ccef805.pid.haproxy
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 6d00de8e-203c-4e94-b60f-36ba9ccef805
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:11:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:08.741 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'env', 'PROCESS_TAG=haproxy-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d00de8e-203c-4e94-b60f-36ba9ccef805.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:11:09 np0005465987 podman[251405]: 2025-10-02 12:11:09.108862932 +0000 UTC m=+0.049159112 container create fbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:11:09 np0005465987 systemd[1]: Started libpod-conmon-fbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464.scope.
Oct  2 08:11:09 np0005465987 podman[251405]: 2025-10-02 12:11:09.081207142 +0000 UTC m=+0.021503352 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:11:09 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:11:09 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cbdf4e82a00a906ecccba203d950d740c57f262d7705f71f3eee81e3b5b724f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:11:09 np0005465987 podman[251405]: 2025-10-02 12:11:09.202471394 +0000 UTC m=+0.142767634 container init fbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 08:11:09 np0005465987 podman[251405]: 2025-10-02 12:11:09.208472038 +0000 UTC m=+0.148768228 container start fbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:11:09 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[251420]: [NOTICE]   (251424) : New worker (251426) forked
Oct  2 08:11:09 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[251420]: [NOTICE]   (251424) : Loading success.
Oct  2 08:11:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:09.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.406 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407070.4057992, 6332947f-08a8-4242-8a69-f3d57b8112e7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.407 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] VM Started (Lifecycle Event)#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.409 2 DEBUG nova.compute.manager [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.412 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.414 2 INFO nova.virt.libvirt.driver [-] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Instance spawned successfully.#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.414 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.426 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.430 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.434 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.434 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.434 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.435 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.435 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.435 2 DEBUG nova.virt.libvirt.driver [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.462 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.462 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407070.4066656, 6332947f-08a8-4242-8a69-f3d57b8112e7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.462 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.496 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.500 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407070.4110823, 6332947f-08a8-4242-8a69-f3d57b8112e7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.501 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.511 2 INFO nova.compute.manager [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Took 7.19 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.513 2 DEBUG nova.compute.manager [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.525 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.529 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.566 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.599 2 INFO nova.compute.manager [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Took 8.26 seconds to build instance.#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.616 2 DEBUG oslo_concurrency.lockutils [None req-1f4a3c87-7ebb-493b-b2f2-e9f9c08f7d28 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:10.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.807 2 DEBUG nova.compute.manager [req-ed0a75b9-fd60-4dbb-b8a8-079edc19fd7f req-89a7f534-6291-4166-a32a-7ceb15cde5ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Received event network-vif-plugged-8b314691-de9b-481a-94a9-4a7da28dc808 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.807 2 DEBUG oslo_concurrency.lockutils [req-ed0a75b9-fd60-4dbb-b8a8-079edc19fd7f req-89a7f534-6291-4166-a32a-7ceb15cde5ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.807 2 DEBUG oslo_concurrency.lockutils [req-ed0a75b9-fd60-4dbb-b8a8-079edc19fd7f req-89a7f534-6291-4166-a32a-7ceb15cde5ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.808 2 DEBUG oslo_concurrency.lockutils [req-ed0a75b9-fd60-4dbb-b8a8-079edc19fd7f req-89a7f534-6291-4166-a32a-7ceb15cde5ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.808 2 DEBUG nova.compute.manager [req-ed0a75b9-fd60-4dbb-b8a8-079edc19fd7f req-89a7f534-6291-4166-a32a-7ceb15cde5ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] No waiting events found dispatching network-vif-plugged-8b314691-de9b-481a-94a9-4a7da28dc808 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:11:10 np0005465987 nova_compute[230713]: 2025-10-02 12:11:10.808 2 WARNING nova.compute.manager [req-ed0a75b9-fd60-4dbb-b8a8-079edc19fd7f req-89a7f534-6291-4166-a32a-7ceb15cde5ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Received unexpected event network-vif-plugged-8b314691-de9b-481a-94a9-4a7da28dc808 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:11:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:11.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:11:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:11:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:11:11 np0005465987 nova_compute[230713]: 2025-10-02 12:11:11.694 2 DEBUG oslo_concurrency.lockutils [None req-e7a91584-a587-42a1-bd76-2dbfd12c03de 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "6332947f-08a8-4242-8a69-f3d57b8112e7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:11 np0005465987 nova_compute[230713]: 2025-10-02 12:11:11.695 2 DEBUG oslo_concurrency.lockutils [None req-e7a91584-a587-42a1-bd76-2dbfd12c03de 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:11 np0005465987 nova_compute[230713]: 2025-10-02 12:11:11.695 2 DEBUG nova.compute.manager [None req-e7a91584-a587-42a1-bd76-2dbfd12c03de 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:11 np0005465987 nova_compute[230713]: 2025-10-02 12:11:11.699 2 DEBUG nova.compute.manager [None req-e7a91584-a587-42a1-bd76-2dbfd12c03de 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 08:11:11 np0005465987 nova_compute[230713]: 2025-10-02 12:11:11.700 2 DEBUG nova.objects.instance [None req-e7a91584-a587-42a1-bd76-2dbfd12c03de 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'flavor' on Instance uuid 6332947f-08a8-4242-8a69-f3d57b8112e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:11:11 np0005465987 nova_compute[230713]: 2025-10-02 12:11:11.723 2 DEBUG nova.virt.libvirt.driver [None req-e7a91584-a587-42a1-bd76-2dbfd12c03de 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:11:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:12 np0005465987 nova_compute[230713]: 2025-10-02 12:11:12.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:12.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:13 np0005465987 nova_compute[230713]: 2025-10-02 12:11:13.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:13.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:14 np0005465987 nova_compute[230713]: 2025-10-02 12:11:14.033 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407059.0326996, 9a277afb-614b-4bf4-af98-8d08d450f263 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:11:14 np0005465987 nova_compute[230713]: 2025-10-02 12:11:14.034 2 INFO nova.compute.manager [-] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] VM Stopped (Lifecycle Event)
Oct  2 08:11:14 np0005465987 nova_compute[230713]: 2025-10-02 12:11:14.051 2 DEBUG nova.compute.manager [None req-558c831f-fadc-41aa-a038-d7387e4356f4 - - - - - -] [instance: 9a277afb-614b-4bf4-af98-8d08d450f263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:11:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:14.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:15.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:16 np0005465987 podman[251634]: 2025-10-02 12:11:16.65069464 +0000 UTC m=+0.070125788 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:11:16 np0005465987 podman[251633]: 2025-10-02 12:11:16.650862744 +0000 UTC m=+0.071225387 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:11:16 np0005465987 podman[251632]: 2025-10-02 12:11:16.680477578 +0000 UTC m=+0.101918011 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:11:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:11:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:11:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:16.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:17 np0005465987 nova_compute[230713]: 2025-10-02 12:11:17.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:17.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:18 np0005465987 nova_compute[230713]: 2025-10-02 12:11:18.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:18.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:11:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:19.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:11:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:20.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:21.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:21 np0005465987 nova_compute[230713]: 2025-10-02 12:11:21.760 2 DEBUG nova.virt.libvirt.driver [None req-e7a91584-a587-42a1-bd76-2dbfd12c03de 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct  2 08:11:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:22 np0005465987 nova_compute[230713]: 2025-10-02 12:11:22.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:22.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:23 np0005465987 nova_compute[230713]: 2025-10-02 12:11:23.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:23.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:11:24Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:5f:d2 10.100.0.9
Oct  2 08:11:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:11:24Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:5f:d2 10.100.0.9
Oct  2 08:11:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:24.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:25.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:26.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:27 np0005465987 nova_compute[230713]: 2025-10-02 12:11:27.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:27.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:27.939 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:11:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:27.939 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:11:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:27.940 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:11:28 np0005465987 nova_compute[230713]: 2025-10-02 12:11:28.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:28.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:29.425 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:11:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:29.426 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:11:29 np0005465987 nova_compute[230713]: 2025-10-02 12:11:29.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:29.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:30.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:31.427 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:11:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:31.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:32 np0005465987 nova_compute[230713]: 2025-10-02 12:11:32.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:32.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:32 np0005465987 nova_compute[230713]: 2025-10-02 12:11:32.804 2 DEBUG nova.virt.libvirt.driver [None req-e7a91584-a587-42a1-bd76-2dbfd12c03de 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct  2 08:11:33 np0005465987 nova_compute[230713]: 2025-10-02 12:11:33.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:11:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:33.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:11:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:34.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:34 np0005465987 nova_compute[230713]: 2025-10-02 12:11:34.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:11:34 np0005465987 nova_compute[230713]: 2025-10-02 12:11:34.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:11:34 np0005465987 nova_compute[230713]: 2025-10-02 12:11:34.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:11:34 np0005465987 nova_compute[230713]: 2025-10-02 12:11:34.991 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-6332947f-08a8-4242-8a69-f3d57b8112e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:11:34 np0005465987 nova_compute[230713]: 2025-10-02 12:11:34.991 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-6332947f-08a8-4242-8a69-f3d57b8112e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:11:34 np0005465987 nova_compute[230713]: 2025-10-02 12:11:34.991 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 08:11:34 np0005465987 nova_compute[230713]: 2025-10-02 12:11:34.991 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6332947f-08a8-4242-8a69-f3d57b8112e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:11:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:35.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:35 np0005465987 nova_compute[230713]: 2025-10-02 12:11:35.818 2 INFO nova.virt.libvirt.driver [None req-e7a91584-a587-42a1-bd76-2dbfd12c03de 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Instance shutdown successfully after 24 seconds.
Oct  2 08:11:35 np0005465987 kernel: tap8b314691-de (unregistering): left promiscuous mode
Oct  2 08:11:35 np0005465987 NetworkManager[44910]: <info>  [1759407095.9771] device (tap8b314691-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:11:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:11:35Z|00142|binding|INFO|Releasing lport 8b314691-de9b-481a-94a9-4a7da28dc808 from this chassis (sb_readonly=0)
Oct  2 08:11:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:11:35Z|00143|binding|INFO|Setting lport 8b314691-de9b-481a-94a9-4a7da28dc808 down in Southbound
Oct  2 08:11:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:11:35Z|00144|binding|INFO|Removing iface tap8b314691-de ovn-installed in OVS
Oct  2 08:11:35 np0005465987 nova_compute[230713]: 2025-10-02 12:11:35.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:35.995 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:5f:d2 10.100.0.9'], port_security=['fa:16:3e:63:5f:d2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6332947f-08a8-4242-8a69-f3d57b8112e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d20ae21b6d4f0abfff3bccc371ee7a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a015800-2f8b-4fd4-818b-829a4dcb7912', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9678136b-02f9-4c61-b96e-15935f11dca7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=8b314691-de9b-481a-94a9-4a7da28dc808) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:11:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:35.997 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 8b314691-de9b-481a-94a9-4a7da28dc808 in datapath 6d00de8e-203c-4e94-b60f-36ba9ccef805 unbound from our chassis
Oct  2 08:11:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:35.998 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d00de8e-203c-4e94-b60f-36ba9ccef805, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 08:11:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:35.999 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6c16ba8a-5079-4e9a-9ee7-860324a2cba6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:11:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:36.000 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 namespace which is not needed anymore
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:36 np0005465987 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000031.scope: Deactivated successfully.
Oct  2 08:11:36 np0005465987 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000031.scope: Consumed 14.862s CPU time.
Oct  2 08:11:36 np0005465987 systemd-machined[188335]: Machine qemu-26-instance-00000031 terminated.
Oct  2 08:11:36 np0005465987 NetworkManager[44910]: <info>  [1759407096.2342] manager: (tap8b314691-de): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.254 2 INFO nova.virt.libvirt.driver [-] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Instance destroyed successfully.
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.255 2 DEBUG nova.objects.instance [None req-e7a91584-a587-42a1-bd76-2dbfd12c03de 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'numa_topology' on Instance uuid 6332947f-08a8-4242-8a69-f3d57b8112e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.279 2 DEBUG nova.compute.manager [None req-e7a91584-a587-42a1-bd76-2dbfd12c03de 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.290 2 DEBUG nova.compute.manager [req-ccb9eb3f-629a-47ee-8934-23f8a2a19129 req-6ac66d98-fa83-48c9-8fc6-874ef6198cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Received event network-vif-unplugged-8b314691-de9b-481a-94a9-4a7da28dc808 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.291 2 DEBUG oslo_concurrency.lockutils [req-ccb9eb3f-629a-47ee-8934-23f8a2a19129 req-6ac66d98-fa83-48c9-8fc6-874ef6198cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.291 2 DEBUG oslo_concurrency.lockutils [req-ccb9eb3f-629a-47ee-8934-23f8a2a19129 req-6ac66d98-fa83-48c9-8fc6-874ef6198cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.291 2 DEBUG oslo_concurrency.lockutils [req-ccb9eb3f-629a-47ee-8934-23f8a2a19129 req-6ac66d98-fa83-48c9-8fc6-874ef6198cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.291 2 DEBUG nova.compute.manager [req-ccb9eb3f-629a-47ee-8934-23f8a2a19129 req-6ac66d98-fa83-48c9-8fc6-874ef6198cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] No waiting events found dispatching network-vif-unplugged-8b314691-de9b-481a-94a9-4a7da28dc808 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.292 2 WARNING nova.compute.manager [req-ccb9eb3f-629a-47ee-8934-23f8a2a19129 req-6ac66d98-fa83-48c9-8fc6-874ef6198cbc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Received unexpected event network-vif-unplugged-8b314691-de9b-481a-94a9-4a7da28dc808 for instance with vm_state active and task_state powering-off.
Oct  2 08:11:36 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[251420]: [NOTICE]   (251424) : haproxy version is 2.8.14-c23fe91
Oct  2 08:11:36 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[251420]: [NOTICE]   (251424) : path to executable is /usr/sbin/haproxy
Oct  2 08:11:36 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[251420]: [WARNING]  (251424) : Exiting Master process...
Oct  2 08:11:36 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[251420]: [WARNING]  (251424) : Exiting Master process...
Oct  2 08:11:36 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[251420]: [ALERT]    (251424) : Current worker (251426) exited with code 143 (Terminated)
Oct  2 08:11:36 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[251420]: [WARNING]  (251424) : All workers exited. Exiting... (0)
Oct  2 08:11:36 np0005465987 systemd[1]: libpod-fbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464.scope: Deactivated successfully.
Oct  2 08:11:36 np0005465987 podman[251745]: 2025-10-02 12:11:36.305975472 +0000 UTC m=+0.225845555 container died fbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.333 2 DEBUG oslo_concurrency.lockutils [None req-e7a91584-a587-42a1-bd76-2dbfd12c03de 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:36 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464-userdata-shm.mount: Deactivated successfully.
Oct  2 08:11:36 np0005465987 systemd[1]: var-lib-containers-storage-overlay-6cbdf4e82a00a906ecccba203d950d740c57f262d7705f71f3eee81e3b5b724f-merged.mount: Deactivated successfully.
Oct  2 08:11:36 np0005465987 podman[251745]: 2025-10-02 12:11:36.496435954 +0000 UTC m=+0.416306037 container cleanup fbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:11:36 np0005465987 systemd[1]: libpod-conmon-fbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464.scope: Deactivated successfully.
Oct  2 08:11:36 np0005465987 podman[251786]: 2025-10-02 12:11:36.574363135 +0000 UTC m=+0.056828262 container remove fbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:11:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:36.580 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[78196c8b-5e30-4dd5-b3fd-e2e6c2dc67cf]: (4, ('Thu Oct  2 12:11:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 (fbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464)\nfbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464\nThu Oct  2 12:11:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 (fbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464)\nfbad3959bede490d635b865306525b857932aeb710917ed731359ee1631fc464\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:36.582 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a0096505-1121-4dad-8ca9-543394725e6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:36.583 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d00de8e-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:11:36 np0005465987 kernel: tap6d00de8e-20: left promiscuous mode
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:36.603 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[183f04e1-01fc-444d-abb8-80b595c0e2e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.618 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Updating instance_info_cache with network_info: [{"id": "8b314691-de9b-481a-94a9-4a7da28dc808", "address": "fa:16:3e:63:5f:d2", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b314691-de", "ovs_interfaceid": "8b314691-de9b-481a-94a9-4a7da28dc808", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:11:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:36.635 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a5563f17-eb47-4eb7-a736-9ef2c7874f85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:36.636 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd912bc-f699-4ebb-b012-4e7f5973ae91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.642 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-6332947f-08a8-4242-8a69-f3d57b8112e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:11:36 np0005465987 nova_compute[230713]: 2025-10-02 12:11:36.642 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:11:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:36.650 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[af0762af-ac58-49d4-8f05-082cceb895f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513705, 'reachable_time': 38896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251806, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:36 np0005465987 systemd[1]: run-netns-ovnmeta\x2d6d00de8e\x2d203c\x2d4e94\x2db60f\x2d36ba9ccef805.mount: Deactivated successfully.
Oct  2 08:11:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:36.656 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:11:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:11:36.657 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[1b6b24d6-4490-4615-b920-821074c2c032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:36.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:37 np0005465987 nova_compute[230713]: 2025-10-02 12:11:37.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:37.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:37 np0005465987 nova_compute[230713]: 2025-10-02 12:11:37.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:38 np0005465987 nova_compute[230713]: 2025-10-02 12:11:38.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:38 np0005465987 nova_compute[230713]: 2025-10-02 12:11:38.604 2 DEBUG nova.compute.manager [req-698d312c-c3ed-4224-bc2a-ed90052ca1f3 req-7b652a1a-4777-4c7b-8bcc-d05a6df2e701 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Received event network-vif-plugged-8b314691-de9b-481a-94a9-4a7da28dc808 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:11:38 np0005465987 nova_compute[230713]: 2025-10-02 12:11:38.605 2 DEBUG oslo_concurrency.lockutils [req-698d312c-c3ed-4224-bc2a-ed90052ca1f3 req-7b652a1a-4777-4c7b-8bcc-d05a6df2e701 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:38 np0005465987 nova_compute[230713]: 2025-10-02 12:11:38.605 2 DEBUG oslo_concurrency.lockutils [req-698d312c-c3ed-4224-bc2a-ed90052ca1f3 req-7b652a1a-4777-4c7b-8bcc-d05a6df2e701 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:38 np0005465987 nova_compute[230713]: 2025-10-02 12:11:38.606 2 DEBUG oslo_concurrency.lockutils [req-698d312c-c3ed-4224-bc2a-ed90052ca1f3 req-7b652a1a-4777-4c7b-8bcc-d05a6df2e701 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:38 np0005465987 nova_compute[230713]: 2025-10-02 12:11:38.606 2 DEBUG nova.compute.manager [req-698d312c-c3ed-4224-bc2a-ed90052ca1f3 req-7b652a1a-4777-4c7b-8bcc-d05a6df2e701 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] No waiting events found dispatching network-vif-plugged-8b314691-de9b-481a-94a9-4a7da28dc808 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:11:38 np0005465987 nova_compute[230713]: 2025-10-02 12:11:38.606 2 WARNING nova.compute.manager [req-698d312c-c3ed-4224-bc2a-ed90052ca1f3 req-7b652a1a-4777-4c7b-8bcc-d05a6df2e701 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Received unexpected event network-vif-plugged-8b314691-de9b-481a-94a9-4a7da28dc808 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:11:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:38.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:38 np0005465987 podman[251807]: 2025-10-02 12:11:38.842419227 +0000 UTC m=+0.057862071 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct  2 08:11:38 np0005465987 nova_compute[230713]: 2025-10-02 12:11:38.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:38 np0005465987 nova_compute[230713]: 2025-10-02 12:11:38.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:39 np0005465987 nova_compute[230713]: 2025-10-02 12:11:39.142 2 DEBUG nova.compute.manager [None req-276ca0ba-c461-4df0-b47a-370a6cffbff4 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:39 np0005465987 nova_compute[230713]: 2025-10-02 12:11:39.196 2 INFO nova.compute.manager [None req-276ca0ba-c461-4df0-b47a-370a6cffbff4 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] instance snapshotting#033[00m
Oct  2 08:11:39 np0005465987 nova_compute[230713]: 2025-10-02 12:11:39.196 2 WARNING nova.compute.manager [None req-276ca0ba-c461-4df0-b47a-370a6cffbff4 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Oct  2 08:11:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:39.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:39 np0005465987 nova_compute[230713]: 2025-10-02 12:11:39.662 2 INFO nova.virt.libvirt.driver [None req-276ca0ba-c461-4df0-b47a-370a6cffbff4 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Beginning cold snapshot process#033[00m
Oct  2 08:11:39 np0005465987 nova_compute[230713]: 2025-10-02 12:11:39.818 2 DEBUG nova.virt.libvirt.imagebackend [None req-276ca0ba-c461-4df0-b47a-370a6cffbff4 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:11:40 np0005465987 nova_compute[230713]: 2025-10-02 12:11:40.209 2 DEBUG nova.storage.rbd_utils [None req-276ca0ba-c461-4df0-b47a-370a6cffbff4 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] creating snapshot(0fbb69f51e0843ad833bee1ba508a710) on rbd image(6332947f-08a8-4242-8a69-f3d57b8112e7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:11:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e185 e185: 3 total, 3 up, 3 in
Oct  2 08:11:40 np0005465987 nova_compute[230713]: 2025-10-02 12:11:40.637 2 DEBUG nova.storage.rbd_utils [None req-276ca0ba-c461-4df0-b47a-370a6cffbff4 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] cloning vms/6332947f-08a8-4242-8a69-f3d57b8112e7_disk@0fbb69f51e0843ad833bee1ba508a710 to images/f396cccc-8bcd-476b-8403-dd0820708c77 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:11:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:40.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:40 np0005465987 nova_compute[230713]: 2025-10-02 12:11:40.778 2 DEBUG nova.storage.rbd_utils [None req-276ca0ba-c461-4df0-b47a-370a6cffbff4 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] flattening images/f396cccc-8bcd-476b-8403-dd0820708c77 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:11:41 np0005465987 nova_compute[230713]: 2025-10-02 12:11:41.263 2 DEBUG nova.storage.rbd_utils [None req-276ca0ba-c461-4df0-b47a-370a6cffbff4 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] removing snapshot(0fbb69f51e0843ad833bee1ba508a710) on rbd image(6332947f-08a8-4242-8a69-f3d57b8112e7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:11:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:41.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e186 e186: 3 total, 3 up, 3 in
Oct  2 08:11:41 np0005465987 nova_compute[230713]: 2025-10-02 12:11:41.710 2 DEBUG nova.storage.rbd_utils [None req-276ca0ba-c461-4df0-b47a-370a6cffbff4 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] creating snapshot(snap) on rbd image(f396cccc-8bcd-476b-8403-dd0820708c77) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:11:41 np0005465987 nova_compute[230713]: 2025-10-02 12:11:41.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:42 np0005465987 nova_compute[230713]: 2025-10-02 12:11:42.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e187 e187: 3 total, 3 up, 3 in
Oct  2 08:11:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:42.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:43 np0005465987 nova_compute[230713]: 2025-10-02 12:11:43.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:43.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:43 np0005465987 nova_compute[230713]: 2025-10-02 12:11:43.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:43 np0005465987 nova_compute[230713]: 2025-10-02 12:11:43.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:43 np0005465987 nova_compute[230713]: 2025-10-02 12:11:43.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:11:43 np0005465987 nova_compute[230713]: 2025-10-02 12:11:43.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:44.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:44 np0005465987 nova_compute[230713]: 2025-10-02 12:11:44.928 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:44 np0005465987 nova_compute[230713]: 2025-10-02 12:11:44.929 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:44 np0005465987 nova_compute[230713]: 2025-10-02 12:11:44.929 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:44 np0005465987 nova_compute[230713]: 2025-10-02 12:11:44.929 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:11:44 np0005465987 nova_compute[230713]: 2025-10-02 12:11:44.929 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:45 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1037535637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:45 np0005465987 nova_compute[230713]: 2025-10-02 12:11:45.472 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:45.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:45 np0005465987 nova_compute[230713]: 2025-10-02 12:11:45.581 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:11:45 np0005465987 nova_compute[230713]: 2025-10-02 12:11:45.581 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:11:45 np0005465987 nova_compute[230713]: 2025-10-02 12:11:45.719 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:11:45 np0005465987 nova_compute[230713]: 2025-10-02 12:11:45.720 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4726MB free_disk=20.921836853027344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:11:45 np0005465987 nova_compute[230713]: 2025-10-02 12:11:45.721 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:45 np0005465987 nova_compute[230713]: 2025-10-02 12:11:45.721 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:45 np0005465987 nova_compute[230713]: 2025-10-02 12:11:45.892 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 6332947f-08a8-4242-8a69-f3d57b8112e7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:11:45 np0005465987 nova_compute[230713]: 2025-10-02 12:11:45.892 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:11:45 np0005465987 nova_compute[230713]: 2025-10-02 12:11:45.892 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:11:46 np0005465987 nova_compute[230713]: 2025-10-02 12:11:46.062 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3852036067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:46 np0005465987 nova_compute[230713]: 2025-10-02 12:11:46.519 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:46 np0005465987 nova_compute[230713]: 2025-10-02 12:11:46.525 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:11:46 np0005465987 nova_compute[230713]: 2025-10-02 12:11:46.545 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:11:46 np0005465987 nova_compute[230713]: 2025-10-02 12:11:46.571 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:11:46 np0005465987 nova_compute[230713]: 2025-10-02 12:11:46.572 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:46.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:46 np0005465987 podman[252016]: 2025-10-02 12:11:46.852907147 +0000 UTC m=+0.064073002 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 08:11:46 np0005465987 podman[252015]: 2025-10-02 12:11:46.898555041 +0000 UTC m=+0.115092943 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct  2 08:11:46 np0005465987 podman[252017]: 2025-10-02 12:11:46.902312854 +0000 UTC m=+0.108639825 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct  2 08:11:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:47 np0005465987 nova_compute[230713]: 2025-10-02 12:11:47.328 2 INFO nova.virt.libvirt.driver [None req-276ca0ba-c461-4df0-b47a-370a6cffbff4 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Snapshot image upload complete#033[00m
Oct  2 08:11:47 np0005465987 nova_compute[230713]: 2025-10-02 12:11:47.329 2 INFO nova.compute.manager [None req-276ca0ba-c461-4df0-b47a-370a6cffbff4 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Took 8.13 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:11:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:47.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:47 np0005465987 nova_compute[230713]: 2025-10-02 12:11:47.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:48 np0005465987 nova_compute[230713]: 2025-10-02 12:11:48.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:48 np0005465987 nova_compute[230713]: 2025-10-02 12:11:48.565 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:48 np0005465987 nova_compute[230713]: 2025-10-02 12:11:48.679 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e188 e188: 3 total, 3 up, 3 in
Oct  2 08:11:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:48.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:49.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e189 e189: 3 total, 3 up, 3 in
Oct  2 08:11:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:50.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:51 np0005465987 nova_compute[230713]: 2025-10-02 12:11:51.253 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407096.252079, 6332947f-08a8-4242-8a69-f3d57b8112e7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:11:51 np0005465987 nova_compute[230713]: 2025-10-02 12:11:51.253 2 INFO nova.compute.manager [-] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:11:51 np0005465987 nova_compute[230713]: 2025-10-02 12:11:51.283 2 DEBUG nova.compute.manager [None req-2a006b26-429d-442a-b4a8-7bd1ae780641 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:51 np0005465987 nova_compute[230713]: 2025-10-02 12:11:51.286 2 DEBUG nova.compute.manager [None req-2a006b26-429d-442a-b4a8-7bd1ae780641 - - - - - -] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:11:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:51.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.701 2 DEBUG oslo_concurrency.lockutils [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "6332947f-08a8-4242-8a69-f3d57b8112e7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.702 2 DEBUG oslo_concurrency.lockutils [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.702 2 DEBUG oslo_concurrency.lockutils [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.702 2 DEBUG oslo_concurrency.lockutils [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.702 2 DEBUG oslo_concurrency.lockutils [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.703 2 INFO nova.compute.manager [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Terminating instance#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.704 2 DEBUG nova.compute.manager [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.709 2 INFO nova.virt.libvirt.driver [-] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Instance destroyed successfully.#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.709 2 DEBUG nova.objects.instance [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'resources' on Instance uuid 6332947f-08a8-4242-8a69-f3d57b8112e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.728 2 DEBUG nova.virt.libvirt.vif [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:11:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1472700175',display_name='tempest-ImagesTestJSON-server-1472700175',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1472700175',id=49,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:11:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='55d20ae21b6d4f0abfff3bccc371ee7a',ramdisk_id='',reservation_id='r-da6ns1zn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-2116266493',owner_user_name='tempest-ImagesTestJSON-2116266493-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:11:48Z,user_data=None,user_id='0df47040f1ff4ce69a6fbdfd9eba4955',uuid=6332947f-08a8-4242-8a69-f3d57b8112e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8b314691-de9b-481a-94a9-4a7da28dc808", "address": "fa:16:3e:63:5f:d2", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b314691-de", "ovs_interfaceid": "8b314691-de9b-481a-94a9-4a7da28dc808", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.728 2 DEBUG nova.network.os_vif_util [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converting VIF {"id": "8b314691-de9b-481a-94a9-4a7da28dc808", "address": "fa:16:3e:63:5f:d2", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b314691-de", "ovs_interfaceid": "8b314691-de9b-481a-94a9-4a7da28dc808", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.729 2 DEBUG nova.network.os_vif_util [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:5f:d2,bridge_name='br-int',has_traffic_filtering=True,id=8b314691-de9b-481a-94a9-4a7da28dc808,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b314691-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.730 2 DEBUG os_vif [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:5f:d2,bridge_name='br-int',has_traffic_filtering=True,id=8b314691-de9b-481a-94a9-4a7da28dc808,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b314691-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.732 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b314691-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:52 np0005465987 nova_compute[230713]: 2025-10-02 12:11:52.738 2 INFO os_vif [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:5f:d2,bridge_name='br-int',has_traffic_filtering=True,id=8b314691-de9b-481a-94a9-4a7da28dc808,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b314691-de')#033[00m
Oct  2 08:11:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:52.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:53 np0005465987 nova_compute[230713]: 2025-10-02 12:11:53.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.243957) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407113244013, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2759, "num_deletes": 518, "total_data_size": 5516023, "memory_usage": 5600464, "flush_reason": "Manual Compaction"}
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407113315456, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 3596897, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30808, "largest_seqno": 33562, "table_properties": {"data_size": 3586221, "index_size": 6338, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25900, "raw_average_key_size": 20, "raw_value_size": 3562533, "raw_average_value_size": 2755, "num_data_blocks": 274, "num_entries": 1293, "num_filter_entries": 1293, "num_deletions": 518, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406930, "oldest_key_time": 1759406930, "file_creation_time": 1759407113, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 71541 microseconds, and 13825 cpu microseconds.
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.315499) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 3596897 bytes OK
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.315518) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.329813) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.329855) EVENT_LOG_v1 {"time_micros": 1759407113329846, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.329877) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 5502936, prev total WAL file size 5502936, number of live WAL files 2.
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.331495) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3512KB)], [60(8382KB)]
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407113331553, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 12180126, "oldest_snapshot_seqno": -1}
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 5617 keys, 10139627 bytes, temperature: kUnknown
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407113456423, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 10139627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10100379, "index_size": 24098, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14085, "raw_key_size": 144423, "raw_average_key_size": 25, "raw_value_size": 9997658, "raw_average_value_size": 1779, "num_data_blocks": 968, "num_entries": 5617, "num_filter_entries": 5617, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759407113, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.456665) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10139627 bytes
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.478790) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.5 rd, 81.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 6669, records dropped: 1052 output_compression: NoCompression
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.478824) EVENT_LOG_v1 {"time_micros": 1759407113478811, "job": 36, "event": "compaction_finished", "compaction_time_micros": 124925, "compaction_time_cpu_micros": 23091, "output_level": 6, "num_output_files": 1, "total_output_size": 10139627, "num_input_records": 6669, "num_output_records": 5617, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407113480235, "job": 36, "event": "table_file_deletion", "file_number": 62}
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407113483737, "job": 36, "event": "table_file_deletion", "file_number": 60}
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.331427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.483832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.483837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.483839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.483842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:11:53.483844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:53.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:53 np0005465987 nova_compute[230713]: 2025-10-02 12:11:53.820 2 INFO nova.virt.libvirt.driver [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Deleting instance files /var/lib/nova/instances/6332947f-08a8-4242-8a69-f3d57b8112e7_del#033[00m
Oct  2 08:11:53 np0005465987 nova_compute[230713]: 2025-10-02 12:11:53.821 2 INFO nova.virt.libvirt.driver [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Deletion of /var/lib/nova/instances/6332947f-08a8-4242-8a69-f3d57b8112e7_del complete#033[00m
Oct  2 08:11:53 np0005465987 nova_compute[230713]: 2025-10-02 12:11:53.903 2 INFO nova.compute.manager [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Took 1.20 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:11:53 np0005465987 nova_compute[230713]: 2025-10-02 12:11:53.904 2 DEBUG oslo.service.loopingcall [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:11:53 np0005465987 nova_compute[230713]: 2025-10-02 12:11:53.904 2 DEBUG nova.compute.manager [-] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:11:53 np0005465987 nova_compute[230713]: 2025-10-02 12:11:53.905 2 DEBUG nova.network.neutron [-] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:11:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:54.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:55.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:55 np0005465987 nova_compute[230713]: 2025-10-02 12:11:55.903 2 DEBUG nova.network.neutron [-] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:11:55 np0005465987 nova_compute[230713]: 2025-10-02 12:11:55.935 2 INFO nova.compute.manager [-] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Took 2.03 seconds to deallocate network for instance.#033[00m
Oct  2 08:11:56 np0005465987 nova_compute[230713]: 2025-10-02 12:11:56.015 2 DEBUG oslo_concurrency.lockutils [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:56 np0005465987 nova_compute[230713]: 2025-10-02 12:11:56.015 2 DEBUG oslo_concurrency.lockutils [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:56 np0005465987 nova_compute[230713]: 2025-10-02 12:11:56.077 2 DEBUG oslo_concurrency.processutils [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:56 np0005465987 nova_compute[230713]: 2025-10-02 12:11:56.105 2 DEBUG nova.compute.manager [req-fe70ad97-0c07-4340-9e8f-324bb77be2fa req-4198fe7b-0fd1-437e-a8ea-92c4a72b356d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6332947f-08a8-4242-8a69-f3d57b8112e7] Received event network-vif-deleted-8b314691-de9b-481a-94a9-4a7da28dc808 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:11:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3287134201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:56 np0005465987 nova_compute[230713]: 2025-10-02 12:11:56.504 2 DEBUG oslo_concurrency.processutils [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:56 np0005465987 nova_compute[230713]: 2025-10-02 12:11:56.511 2 DEBUG nova.compute.provider_tree [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:11:56 np0005465987 nova_compute[230713]: 2025-10-02 12:11:56.534 2 DEBUG nova.scheduler.client.report [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:11:56 np0005465987 nova_compute[230713]: 2025-10-02 12:11:56.572 2 DEBUG oslo_concurrency.lockutils [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:56 np0005465987 nova_compute[230713]: 2025-10-02 12:11:56.650 2 INFO nova.scheduler.client.report [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Deleted allocations for instance 6332947f-08a8-4242-8a69-f3d57b8112e7#033[00m
Oct  2 08:11:56 np0005465987 nova_compute[230713]: 2025-10-02 12:11:56.757 2 DEBUG oslo_concurrency.lockutils [None req-4b3bd6c3-ed55-4347-bbb0-b92b26f5fdb5 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "6332947f-08a8-4242-8a69-f3d57b8112e7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:56.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:57.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:57 np0005465987 nova_compute[230713]: 2025-10-02 12:11:57.637 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "130a4f54-6947-473f-bedd-5a3464805fb0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:57 np0005465987 nova_compute[230713]: 2025-10-02 12:11:57.638 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:57 np0005465987 nova_compute[230713]: 2025-10-02 12:11:57.667 2 DEBUG nova.compute.manager [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:11:57 np0005465987 nova_compute[230713]: 2025-10-02 12:11:57.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:57 np0005465987 nova_compute[230713]: 2025-10-02 12:11:57.770 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:57 np0005465987 nova_compute[230713]: 2025-10-02 12:11:57.771 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:57 np0005465987 nova_compute[230713]: 2025-10-02 12:11:57.776 2 DEBUG nova.virt.hardware [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:11:57 np0005465987 nova_compute[230713]: 2025-10-02 12:11:57.777 2 INFO nova.compute.claims [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:11:57 np0005465987 nova_compute[230713]: 2025-10-02 12:11:57.930 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3524384916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.402 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.409 2 DEBUG nova.compute.provider_tree [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.426 2 DEBUG nova.scheduler.client.report [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.458 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.459 2 DEBUG nova.compute.manager [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:11:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e190 e190: 3 total, 3 up, 3 in
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.579 2 DEBUG nova.compute.manager [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.579 2 DEBUG nova.network.neutron [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.609 2 INFO nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.637 2 DEBUG nova.compute.manager [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:11:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:58.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.829 2 DEBUG nova.compute.manager [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.831 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.832 2 INFO nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Creating image(s)
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.862 2 DEBUG nova.storage.rbd_utils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 130a4f54-6947-473f-bedd-5a3464805fb0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.901 2 DEBUG nova.storage.rbd_utils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 130a4f54-6947-473f-bedd-5a3464805fb0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.934 2 DEBUG nova.storage.rbd_utils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 130a4f54-6947-473f-bedd-5a3464805fb0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:11:58 np0005465987 nova_compute[230713]: 2025-10-02 12:11:58.939 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.011 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.011 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.012 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.012 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.046 2 DEBUG nova.storage.rbd_utils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 130a4f54-6947-473f-bedd-5a3464805fb0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.051 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 130a4f54-6947-473f-bedd-5a3464805fb0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.425 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 130a4f54-6947-473f-bedd-5a3464805fb0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.520 2 DEBUG nova.storage.rbd_utils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] resizing rbd image 130a4f54-6947-473f-bedd-5a3464805fb0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:11:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:11:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:11:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:59.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.653 2 DEBUG nova.objects.instance [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'migration_context' on Instance uuid 130a4f54-6947-473f-bedd-5a3464805fb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.673 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.674 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Ensure instance console log exists: /var/lib/nova/instances/130a4f54-6947-473f-bedd-5a3464805fb0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.674 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.674 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.675 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:11:59 np0005465987 nova_compute[230713]: 2025-10-02 12:11:59.902 2 DEBUG nova.policy [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0df47040f1ff4ce69a6fbdfd9eba4955', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55d20ae21b6d4f0abfff3bccc371ee7a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:12:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:00.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:01 np0005465987 nova_compute[230713]: 2025-10-02 12:12:01.573 2 DEBUG nova.network.neutron [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Successfully created port: 91ad4a1f-b6ca-454b-8e65-5a21034a5570 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:12:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:01.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:02 np0005465987 nova_compute[230713]: 2025-10-02 12:12:02.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:12:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:02.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:03 np0005465987 nova_compute[230713]: 2025-10-02 12:12:03.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:12:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:03.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:04 np0005465987 nova_compute[230713]: 2025-10-02 12:12:04.643 2 DEBUG nova.network.neutron [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Successfully updated port: 91ad4a1f-b6ca-454b-8e65-5a21034a5570 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:12:04 np0005465987 nova_compute[230713]: 2025-10-02 12:12:04.663 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "refresh_cache-130a4f54-6947-473f-bedd-5a3464805fb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:12:04 np0005465987 nova_compute[230713]: 2025-10-02 12:12:04.663 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquired lock "refresh_cache-130a4f54-6947-473f-bedd-5a3464805fb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:12:04 np0005465987 nova_compute[230713]: 2025-10-02 12:12:04.664 2 DEBUG nova.network.neutron [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:12:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:04.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:04 np0005465987 nova_compute[230713]: 2025-10-02 12:12:04.864 2 DEBUG nova.compute.manager [req-38e780ca-3e3b-4436-94ad-36fcdfabfccd req-4357d547-7680-41c1-b303-8602fd814129 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Received event network-changed-91ad4a1f-b6ca-454b-8e65-5a21034a5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:12:04 np0005465987 nova_compute[230713]: 2025-10-02 12:12:04.864 2 DEBUG nova.compute.manager [req-38e780ca-3e3b-4436-94ad-36fcdfabfccd req-4357d547-7680-41c1-b303-8602fd814129 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Refreshing instance network info cache due to event network-changed-91ad4a1f-b6ca-454b-8e65-5a21034a5570. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:12:04 np0005465987 nova_compute[230713]: 2025-10-02 12:12:04.865 2 DEBUG oslo_concurrency.lockutils [req-38e780ca-3e3b-4436-94ad-36fcdfabfccd req-4357d547-7680-41c1-b303-8602fd814129 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-130a4f54-6947-473f-bedd-5a3464805fb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:12:05 np0005465987 nova_compute[230713]: 2025-10-02 12:12:05.021 2 DEBUG nova.network.neutron [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:12:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:05.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.647 2 DEBUG nova.network.neutron [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Updating instance_info_cache with network_info: [{"id": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "address": "fa:16:3e:7c:6c:31", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91ad4a1f-b6", "ovs_interfaceid": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.685 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Releasing lock "refresh_cache-130a4f54-6947-473f-bedd-5a3464805fb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.685 2 DEBUG nova.compute.manager [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Instance network_info: |[{"id": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "address": "fa:16:3e:7c:6c:31", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91ad4a1f-b6", "ovs_interfaceid": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.686 2 DEBUG oslo_concurrency.lockutils [req-38e780ca-3e3b-4436-94ad-36fcdfabfccd req-4357d547-7680-41c1-b303-8602fd814129 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-130a4f54-6947-473f-bedd-5a3464805fb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.687 2 DEBUG nova.network.neutron [req-38e780ca-3e3b-4436-94ad-36fcdfabfccd req-4357d547-7680-41c1-b303-8602fd814129 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Refreshing network info cache for port 91ad4a1f-b6ca-454b-8e65-5a21034a5570 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.692 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Start _get_guest_xml network_info=[{"id": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "address": "fa:16:3e:7c:6c:31", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91ad4a1f-b6", "ovs_interfaceid": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.697 2 WARNING nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.703 2 DEBUG nova.virt.libvirt.host [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.704 2 DEBUG nova.virt.libvirt.host [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.716 2 DEBUG nova.virt.libvirt.host [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.717 2 DEBUG nova.virt.libvirt.host [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.719 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.720 2 DEBUG nova.virt.hardware [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.721 2 DEBUG nova.virt.hardware [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.722 2 DEBUG nova.virt.hardware [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.722 2 DEBUG nova.virt.hardware [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.723 2 DEBUG nova.virt.hardware [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.723 2 DEBUG nova.virt.hardware [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.724 2 DEBUG nova.virt.hardware [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.724 2 DEBUG nova.virt.hardware [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.725 2 DEBUG nova.virt.hardware [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.725 2 DEBUG nova.virt.hardware [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.726 2 DEBUG nova.virt.hardware [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:12:06 np0005465987 nova_compute[230713]: 2025-10-02 12:12:06.731 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:12:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:12:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:06.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:12:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:12:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1826822819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:12:07 np0005465987 nova_compute[230713]: 2025-10-02 12:12:07.185 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:12:07 np0005465987 nova_compute[230713]: 2025-10-02 12:12:07.600 2 DEBUG nova.storage.rbd_utils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 130a4f54-6947-473f-bedd-5a3464805fb0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:12:07 np0005465987 nova_compute[230713]: 2025-10-02 12:12:07.604 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:12:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:07.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:07 np0005465987 nova_compute[230713]: 2025-10-02 12:12:07.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:12:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2447159947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.049 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.051 2 DEBUG nova.virt.libvirt.vif [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:11:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-897872820',display_name='tempest-ImagesTestJSON-server-897872820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-897872820',id=53,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d20ae21b6d4f0abfff3bccc371ee7a',ramdisk_id='',reservation_id='r-psfws3by',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2116266493',owner_user_name='tempest-ImagesTestJSON-2116266493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:11:58Z,user_data=None,user_id='0df47040f1ff4ce69a6fbdfd9eba4955',uuid=130a4f54-6947-473f-bedd-5a3464805fb0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "address": "fa:16:3e:7c:6c:31", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91ad4a1f-b6", "ovs_interfaceid": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.052 2 DEBUG nova.network.os_vif_util [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converting VIF {"id": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "address": "fa:16:3e:7c:6c:31", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91ad4a1f-b6", "ovs_interfaceid": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.053 2 DEBUG nova.network.os_vif_util [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:6c:31,bridge_name='br-int',has_traffic_filtering=True,id=91ad4a1f-b6ca-454b-8e65-5a21034a5570,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91ad4a1f-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.055 2 DEBUG nova.objects.instance [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'pci_devices' on Instance uuid 130a4f54-6947-473f-bedd-5a3464805fb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.076 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  <uuid>130a4f54-6947-473f-bedd-5a3464805fb0</uuid>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  <name>instance-00000035</name>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <nova:name>tempest-ImagesTestJSON-server-897872820</nova:name>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:12:06</nova:creationTime>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <nova:user uuid="0df47040f1ff4ce69a6fbdfd9eba4955">tempest-ImagesTestJSON-2116266493-project-member</nova:user>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <nova:project uuid="55d20ae21b6d4f0abfff3bccc371ee7a">tempest-ImagesTestJSON-2116266493</nova:project>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <nova:port uuid="91ad4a1f-b6ca-454b-8e65-5a21034a5570">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <entry name="serial">130a4f54-6947-473f-bedd-5a3464805fb0</entry>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <entry name="uuid">130a4f54-6947-473f-bedd-5a3464805fb0</entry>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/130a4f54-6947-473f-bedd-5a3464805fb0_disk">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/130a4f54-6947-473f-bedd-5a3464805fb0_disk.config">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:7c:6c:31"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <target dev="tap91ad4a1f-b6"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/130a4f54-6947-473f-bedd-5a3464805fb0/console.log" append="off"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:12:08 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:12:08 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:12:08 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:12:08 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.079 2 DEBUG nova.compute.manager [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Preparing to wait for external event network-vif-plugged-91ad4a1f-b6ca-454b-8e65-5a21034a5570 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.079 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.080 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.080 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.082 2 DEBUG nova.virt.libvirt.vif [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:11:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-897872820',display_name='tempest-ImagesTestJSON-server-897872820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-897872820',id=53,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d20ae21b6d4f0abfff3bccc371ee7a',ramdisk_id='',reservation_id='r-psfws3by',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2116266493',owner_user_name='tempest-ImagesTestJSON-2116266493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:11:58Z,user_data=None,user_id='0df47040f1ff4ce69a6fbdfd9eba4955',uuid=130a4f54-6947-473f-bedd-5a3464805fb0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "address": "fa:16:3e:7c:6c:31", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91ad4a1f-b6", "ovs_interfaceid": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.082 2 DEBUG nova.network.os_vif_util [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converting VIF {"id": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "address": "fa:16:3e:7c:6c:31", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91ad4a1f-b6", "ovs_interfaceid": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.083 2 DEBUG nova.network.os_vif_util [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:6c:31,bridge_name='br-int',has_traffic_filtering=True,id=91ad4a1f-b6ca-454b-8e65-5a21034a5570,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91ad4a1f-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.084 2 DEBUG os_vif [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:6c:31,bridge_name='br-int',has_traffic_filtering=True,id=91ad4a1f-b6ca-454b-8e65-5a21034a5570,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91ad4a1f-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.085 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.091 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91ad4a1f-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.091 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91ad4a1f-b6, col_values=(('external_ids', {'iface-id': '91ad4a1f-b6ca-454b-8e65-5a21034a5570', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:6c:31', 'vm-uuid': '130a4f54-6947-473f-bedd-5a3464805fb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:08 np0005465987 NetworkManager[44910]: <info>  [1759407128.0945] manager: (tap91ad4a1f-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.103 2 INFO os_vif [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:6c:31,bridge_name='br-int',has_traffic_filtering=True,id=91ad4a1f-b6ca-454b-8e65-5a21034a5570,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91ad4a1f-b6')#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.298 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.298 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.298 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No VIF found with MAC fa:16:3e:7c:6c:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.299 2 INFO nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Using config drive#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.325 2 DEBUG nova.storage.rbd_utils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 130a4f54-6947-473f-bedd-5a3464805fb0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:12:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:08.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.874 2 INFO nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Creating config drive at /var/lib/nova/instances/130a4f54-6947-473f-bedd-5a3464805fb0/disk.config#033[00m
Oct  2 08:12:08 np0005465987 nova_compute[230713]: 2025-10-02 12:12:08.879 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/130a4f54-6947-473f-bedd-5a3464805fb0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywguj8l5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.026 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/130a4f54-6947-473f-bedd-5a3464805fb0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywguj8l5" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.058 2 DEBUG nova.storage.rbd_utils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] rbd image 130a4f54-6947-473f-bedd-5a3464805fb0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.061 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/130a4f54-6947-473f-bedd-5a3464805fb0/disk.config 130a4f54-6947-473f-bedd-5a3464805fb0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.148 2 DEBUG nova.network.neutron [req-38e780ca-3e3b-4436-94ad-36fcdfabfccd req-4357d547-7680-41c1-b303-8602fd814129 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Updated VIF entry in instance network info cache for port 91ad4a1f-b6ca-454b-8e65-5a21034a5570. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.149 2 DEBUG nova.network.neutron [req-38e780ca-3e3b-4436-94ad-36fcdfabfccd req-4357d547-7680-41c1-b303-8602fd814129 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Updating instance_info_cache with network_info: [{"id": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "address": "fa:16:3e:7c:6c:31", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91ad4a1f-b6", "ovs_interfaceid": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.179 2 DEBUG oslo_concurrency.lockutils [req-38e780ca-3e3b-4436-94ad-36fcdfabfccd req-4357d547-7680-41c1-b303-8602fd814129 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-130a4f54-6947-473f-bedd-5a3464805fb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.454 2 DEBUG oslo_concurrency.processutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/130a4f54-6947-473f-bedd-5a3464805fb0/disk.config 130a4f54-6947-473f-bedd-5a3464805fb0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.455 2 INFO nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Deleting local config drive /var/lib/nova/instances/130a4f54-6947-473f-bedd-5a3464805fb0/disk.config because it was imported into RBD.#033[00m
Oct  2 08:12:09 np0005465987 kernel: tap91ad4a1f-b6: entered promiscuous mode
Oct  2 08:12:09 np0005465987 NetworkManager[44910]: <info>  [1759407129.5295] manager: (tap91ad4a1f-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Oct  2 08:12:09 np0005465987 ovn_controller[129172]: 2025-10-02T12:12:09Z|00145|binding|INFO|Claiming lport 91ad4a1f-b6ca-454b-8e65-5a21034a5570 for this chassis.
Oct  2 08:12:09 np0005465987 ovn_controller[129172]: 2025-10-02T12:12:09Z|00146|binding|INFO|91ad4a1f-b6ca-454b-8e65-5a21034a5570: Claiming fa:16:3e:7c:6c:31 10.100.0.6
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.551 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:6c:31 10.100.0.6'], port_security=['fa:16:3e:7c:6c:31 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '130a4f54-6947-473f-bedd-5a3464805fb0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d20ae21b6d4f0abfff3bccc371ee7a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a015800-2f8b-4fd4-818b-829a4dcb7912', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9678136b-02f9-4c61-b96e-15935f11dca7, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=91ad4a1f-b6ca-454b-8e65-5a21034a5570) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.553 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 91ad4a1f-b6ca-454b-8e65-5a21034a5570 in datapath 6d00de8e-203c-4e94-b60f-36ba9ccef805 bound to our chassis#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.554 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d00de8e-203c-4e94-b60f-36ba9ccef805#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.569 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6a69638b-188e-4345-8edf-3c3ac350c39c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.570 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d00de8e-21 in ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.573 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d00de8e-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.573 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5cfd7709-1151-4c3a-b0db-c54a185eb807]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.574 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e41c92b3-08b6-4d0d-87b3-37aa446c1a50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 ovn_controller[129172]: 2025-10-02T12:12:09Z|00147|binding|INFO|Setting lport 91ad4a1f-b6ca-454b-8e65-5a21034a5570 ovn-installed in OVS
Oct  2 08:12:09 np0005465987 ovn_controller[129172]: 2025-10-02T12:12:09Z|00148|binding|INFO|Setting lport 91ad4a1f-b6ca-454b-8e65-5a21034a5570 up in Southbound
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:09 np0005465987 systemd-machined[188335]: New machine qemu-27-instance-00000035.
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.592 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[51ecd281-05b9-4187-98c7-55deaf6e278b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 systemd[1]: Started Virtual Machine qemu-27-instance-00000035.
Oct  2 08:12:09 np0005465987 systemd-udevd[252463]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.621 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[989b9752-a557-4877-9cfb-d9b10bddc85e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 NetworkManager[44910]: <info>  [1759407129.6384] device (tap91ad4a1f-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:12:09 np0005465987 NetworkManager[44910]: <info>  [1759407129.6399] device (tap91ad4a1f-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:12:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:12:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:09.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.659 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[06229c26-1e94-42cc-9432-91063d716b93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 podman[252442]: 2025-10-02 12:12:09.664957093 +0000 UTC m=+0.103744191 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.670 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e52efca5-339c-40f4-aea2-3369bfbfcf28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 systemd-udevd[252472]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:12:09 np0005465987 NetworkManager[44910]: <info>  [1759407129.6734] manager: (tap6d00de8e-20): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.703 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1316dc-0ce6-4af9-8e8b-2879a89c07b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.707 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ecde8dc3-e4ab-4686-9d3f-155e86911394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 NetworkManager[44910]: <info>  [1759407129.7369] device (tap6d00de8e-20): carrier: link connected
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.739 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[de027050-a70b-4384-91c7-a5da7d46bce1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.759 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[38f908ef-4f12-4b00-9307-b05d0c33c07f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d00de8e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:28:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519830, 'reachable_time': 27477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252495, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.773 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[27d3fa90-a826-4170-8ebd-0e1b9604e30d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:28f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519830, 'tstamp': 519830}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252496, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.798 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba383f9-9e7f-4867-9d52-397182e96900]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d00de8e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:28:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519830, 'reachable_time': 27477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252497, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.829 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[50e5ffb3-260c-410b-82ad-7366cbc3f0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.892 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a35fd1-a245-476a-9981-66c34710303c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.893 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d00de8e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.893 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.894 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d00de8e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:09 np0005465987 NetworkManager[44910]: <info>  [1759407129.8963] manager: (tap6d00de8e-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Oct  2 08:12:09 np0005465987 kernel: tap6d00de8e-20: entered promiscuous mode
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.904 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d00de8e-20, col_values=(('external_ids', {'iface-id': '4d0b2163-acbb-4b6a-b6d8-84f8212e1e02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:09 np0005465987 ovn_controller[129172]: 2025-10-02T12:12:09Z|00149|binding|INFO|Releasing lport 4d0b2163-acbb-4b6a-b6d8-84f8212e1e02 from this chassis (sb_readonly=0)
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:09 np0005465987 nova_compute[230713]: 2025-10-02 12:12:09.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.928 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d00de8e-203c-4e94-b60f-36ba9ccef805.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d00de8e-203c-4e94-b60f-36ba9ccef805.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.929 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed69241-6825-4a11-85d5-59b00677c82c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.930 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-6d00de8e-203c-4e94-b60f-36ba9ccef805
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/6d00de8e-203c-4e94-b60f-36ba9ccef805.pid.haproxy
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 6d00de8e-203c-4e94-b60f-36ba9ccef805
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:12:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:09.931 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'env', 'PROCESS_TAG=haproxy-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d00de8e-203c-4e94-b60f-36ba9ccef805.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:12:10 np0005465987 podman[252528]: 2025-10-02 12:12:10.275146457 +0000 UTC m=+0.027509717 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:12:10 np0005465987 nova_compute[230713]: 2025-10-02 12:12:10.519 2 DEBUG nova.compute.manager [req-946b9e7e-3636-43bd-b755-ef21c1ec1281 req-ee30cc72-ee8d-41f4-9ca9-ca64e03eab9d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Received event network-vif-plugged-91ad4a1f-b6ca-454b-8e65-5a21034a5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:12:10 np0005465987 nova_compute[230713]: 2025-10-02 12:12:10.520 2 DEBUG oslo_concurrency.lockutils [req-946b9e7e-3636-43bd-b755-ef21c1ec1281 req-ee30cc72-ee8d-41f4-9ca9-ca64e03eab9d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:10 np0005465987 nova_compute[230713]: 2025-10-02 12:12:10.520 2 DEBUG oslo_concurrency.lockutils [req-946b9e7e-3636-43bd-b755-ef21c1ec1281 req-ee30cc72-ee8d-41f4-9ca9-ca64e03eab9d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:10 np0005465987 nova_compute[230713]: 2025-10-02 12:12:10.521 2 DEBUG oslo_concurrency.lockutils [req-946b9e7e-3636-43bd-b755-ef21c1ec1281 req-ee30cc72-ee8d-41f4-9ca9-ca64e03eab9d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:10 np0005465987 nova_compute[230713]: 2025-10-02 12:12:10.521 2 DEBUG nova.compute.manager [req-946b9e7e-3636-43bd-b755-ef21c1ec1281 req-ee30cc72-ee8d-41f4-9ca9-ca64e03eab9d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Processing event network-vif-plugged-91ad4a1f-b6ca-454b-8e65-5a21034a5570 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:12:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:12:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:10.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:12:10 np0005465987 podman[252528]: 2025-10-02 12:12:10.911482358 +0000 UTC m=+0.663845608 container create 3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:12:11 np0005465987 systemd[1]: Started libpod-conmon-3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e.scope.
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.136 2 DEBUG nova.compute.manager [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.137 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407131.135388, 130a4f54-6947-473f-bedd-5a3464805fb0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.137 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] VM Started (Lifecycle Event)#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.141 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.146 2 INFO nova.virt.libvirt.driver [-] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Instance spawned successfully.#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.147 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:12:11 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:12:11 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167b6de2902d17669574cfddb8fcc3063108442bc086dae51e3c82e38b53facf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.179 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.182 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.189 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.190 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.190 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.191 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.191 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.192 2 DEBUG nova.virt.libvirt.driver [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.206 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.207 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407131.1358185, 130a4f54-6947-473f-bedd-5a3464805fb0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.207 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.237 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.240 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407131.1412482, 130a4f54-6947-473f-bedd-5a3464805fb0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.240 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.268 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.271 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.280 2 INFO nova.compute.manager [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Took 12.45 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.280 2 DEBUG nova.compute.manager [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.290 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.415 2 INFO nova.compute.manager [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Took 13.67 seconds to build instance.#033[00m
Oct  2 08:12:11 np0005465987 nova_compute[230713]: 2025-10-02 12:12:11.445 2 DEBUG oslo_concurrency.lockutils [None req-e0b43ae0-d05c-4136-a371-3cd06a18748a 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:11 np0005465987 podman[252528]: 2025-10-02 12:12:11.538395812 +0000 UTC m=+1.290759062 container init 3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:12:11 np0005465987 podman[252528]: 2025-10-02 12:12:11.543256376 +0000 UTC m=+1.295619606 container start 3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 08:12:11 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[252585]: [NOTICE]   (252589) : New worker (252591) forked
Oct  2 08:12:11 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[252585]: [NOTICE]   (252589) : Loading success.
Oct  2 08:12:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:11.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:12 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:12:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:12.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:12 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:12:12 np0005465987 nova_compute[230713]: 2025-10-02 12:12:12.998 2 DEBUG nova.compute.manager [req-59cd5259-3988-42a3-af9f-a1ee909ae84d req-2ed1238b-4f3c-4764-a1e4-bb6b5f18e622 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Received event network-vif-plugged-91ad4a1f-b6ca-454b-8e65-5a21034a5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:12:13 np0005465987 nova_compute[230713]: 2025-10-02 12:12:13.000 2 DEBUG oslo_concurrency.lockutils [req-59cd5259-3988-42a3-af9f-a1ee909ae84d req-2ed1238b-4f3c-4764-a1e4-bb6b5f18e622 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:13 np0005465987 nova_compute[230713]: 2025-10-02 12:12:13.001 2 DEBUG oslo_concurrency.lockutils [req-59cd5259-3988-42a3-af9f-a1ee909ae84d req-2ed1238b-4f3c-4764-a1e4-bb6b5f18e622 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:13 np0005465987 nova_compute[230713]: 2025-10-02 12:12:13.001 2 DEBUG oslo_concurrency.lockutils [req-59cd5259-3988-42a3-af9f-a1ee909ae84d req-2ed1238b-4f3c-4764-a1e4-bb6b5f18e622 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:13 np0005465987 nova_compute[230713]: 2025-10-02 12:12:13.002 2 DEBUG nova.compute.manager [req-59cd5259-3988-42a3-af9f-a1ee909ae84d req-2ed1238b-4f3c-4764-a1e4-bb6b5f18e622 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] No waiting events found dispatching network-vif-plugged-91ad4a1f-b6ca-454b-8e65-5a21034a5570 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:12:13 np0005465987 nova_compute[230713]: 2025-10-02 12:12:13.003 2 WARNING nova.compute.manager [req-59cd5259-3988-42a3-af9f-a1ee909ae84d req-2ed1238b-4f3c-4764-a1e4-bb6b5f18e622 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Received unexpected event network-vif-plugged-91ad4a1f-b6ca-454b-8e65-5a21034a5570 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:12:13 np0005465987 nova_compute[230713]: 2025-10-02 12:12:13.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:13 np0005465987 nova_compute[230713]: 2025-10-02 12:12:13.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:13.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:14 np0005465987 nova_compute[230713]: 2025-10-02 12:12:14.454 2 DEBUG nova.objects.instance [None req-d8d4ae80-a26f-4670-b399-55b186816926 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'pci_devices' on Instance uuid 130a4f54-6947-473f-bedd-5a3464805fb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:12:14 np0005465987 nova_compute[230713]: 2025-10-02 12:12:14.484 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407134.4841337, 130a4f54-6947-473f-bedd-5a3464805fb0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:12:14 np0005465987 nova_compute[230713]: 2025-10-02 12:12:14.484 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:12:14 np0005465987 nova_compute[230713]: 2025-10-02 12:12:14.525 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:12:14 np0005465987 nova_compute[230713]: 2025-10-02 12:12:14.529 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:12:14 np0005465987 nova_compute[230713]: 2025-10-02 12:12:14.588 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  2 08:12:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:14.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:15 np0005465987 kernel: tap91ad4a1f-b6 (unregistering): left promiscuous mode
Oct  2 08:12:15 np0005465987 NetworkManager[44910]: <info>  [1759407135.1775] device (tap91ad4a1f-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:12:15 np0005465987 nova_compute[230713]: 2025-10-02 12:12:15.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:15 np0005465987 ovn_controller[129172]: 2025-10-02T12:12:15Z|00150|binding|INFO|Releasing lport 91ad4a1f-b6ca-454b-8e65-5a21034a5570 from this chassis (sb_readonly=0)
Oct  2 08:12:15 np0005465987 ovn_controller[129172]: 2025-10-02T12:12:15Z|00151|binding|INFO|Setting lport 91ad4a1f-b6ca-454b-8e65-5a21034a5570 down in Southbound
Oct  2 08:12:15 np0005465987 ovn_controller[129172]: 2025-10-02T12:12:15Z|00152|binding|INFO|Removing iface tap91ad4a1f-b6 ovn-installed in OVS
Oct  2 08:12:15 np0005465987 nova_compute[230713]: 2025-10-02 12:12:15.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:15.206 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:6c:31 10.100.0.6'], port_security=['fa:16:3e:7c:6c:31 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '130a4f54-6947-473f-bedd-5a3464805fb0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d20ae21b6d4f0abfff3bccc371ee7a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a015800-2f8b-4fd4-818b-829a4dcb7912', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9678136b-02f9-4c61-b96e-15935f11dca7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=91ad4a1f-b6ca-454b-8e65-5a21034a5570) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:12:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:15.207 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 91ad4a1f-b6ca-454b-8e65-5a21034a5570 in datapath 6d00de8e-203c-4e94-b60f-36ba9ccef805 unbound from our chassis#033[00m
Oct  2 08:12:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:15.208 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d00de8e-203c-4e94-b60f-36ba9ccef805, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:12:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:15.209 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[315c0b66-a253-4476-a803-7e0d2a5b168c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:15.210 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 namespace which is not needed anymore#033[00m
Oct  2 08:12:15 np0005465987 nova_compute[230713]: 2025-10-02 12:12:15.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:15 np0005465987 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000035.scope: Deactivated successfully.
Oct  2 08:12:15 np0005465987 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000035.scope: Consumed 4.908s CPU time.
Oct  2 08:12:15 np0005465987 systemd-machined[188335]: Machine qemu-27-instance-00000035 terminated.
Oct  2 08:12:15 np0005465987 nova_compute[230713]: 2025-10-02 12:12:15.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:15 np0005465987 nova_compute[230713]: 2025-10-02 12:12:15.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:15 np0005465987 nova_compute[230713]: 2025-10-02 12:12:15.367 2 DEBUG nova.compute.manager [None req-d8d4ae80-a26f-4670-b399-55b186816926 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:12:15 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[252585]: [NOTICE]   (252589) : haproxy version is 2.8.14-c23fe91
Oct  2 08:12:15 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[252585]: [NOTICE]   (252589) : path to executable is /usr/sbin/haproxy
Oct  2 08:12:15 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[252585]: [WARNING]  (252589) : Exiting Master process...
Oct  2 08:12:15 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[252585]: [ALERT]    (252589) : Current worker (252591) exited with code 143 (Terminated)
Oct  2 08:12:15 np0005465987 neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805[252585]: [WARNING]  (252589) : All workers exited. Exiting... (0)
Oct  2 08:12:15 np0005465987 systemd[1]: libpod-3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e.scope: Deactivated successfully.
Oct  2 08:12:15 np0005465987 podman[252628]: 2025-10-02 12:12:15.47342255 +0000 UTC m=+0.164231434 container died 3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:12:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:15.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:15 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e-userdata-shm.mount: Deactivated successfully.
Oct  2 08:12:15 np0005465987 systemd[1]: var-lib-containers-storage-overlay-167b6de2902d17669574cfddb8fcc3063108442bc086dae51e3c82e38b53facf-merged.mount: Deactivated successfully.
Oct  2 08:12:15 np0005465987 podman[252628]: 2025-10-02 12:12:15.939581487 +0000 UTC m=+0.630390371 container cleanup 3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:12:15 np0005465987 systemd[1]: libpod-conmon-3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e.scope: Deactivated successfully.
Oct  2 08:12:15 np0005465987 nova_compute[230713]: 2025-10-02 12:12:15.985 2 DEBUG nova.compute.manager [req-e699fb1e-539b-4c7a-9f86-6ea6bad7d4ab req-2b99da14-d28d-41fc-a3c7-a3c06d921705 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Received event network-vif-unplugged-91ad4a1f-b6ca-454b-8e65-5a21034a5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:12:15 np0005465987 nova_compute[230713]: 2025-10-02 12:12:15.985 2 DEBUG oslo_concurrency.lockutils [req-e699fb1e-539b-4c7a-9f86-6ea6bad7d4ab req-2b99da14-d28d-41fc-a3c7-a3c06d921705 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:15 np0005465987 nova_compute[230713]: 2025-10-02 12:12:15.986 2 DEBUG oslo_concurrency.lockutils [req-e699fb1e-539b-4c7a-9f86-6ea6bad7d4ab req-2b99da14-d28d-41fc-a3c7-a3c06d921705 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:15 np0005465987 nova_compute[230713]: 2025-10-02 12:12:15.986 2 DEBUG oslo_concurrency.lockutils [req-e699fb1e-539b-4c7a-9f86-6ea6bad7d4ab req-2b99da14-d28d-41fc-a3c7-a3c06d921705 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:15 np0005465987 nova_compute[230713]: 2025-10-02 12:12:15.986 2 DEBUG nova.compute.manager [req-e699fb1e-539b-4c7a-9f86-6ea6bad7d4ab req-2b99da14-d28d-41fc-a3c7-a3c06d921705 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] No waiting events found dispatching network-vif-unplugged-91ad4a1f-b6ca-454b-8e65-5a21034a5570 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:12:15 np0005465987 nova_compute[230713]: 2025-10-02 12:12:15.986 2 WARNING nova.compute.manager [req-e699fb1e-539b-4c7a-9f86-6ea6bad7d4ab req-2b99da14-d28d-41fc-a3c7-a3c06d921705 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Received unexpected event network-vif-unplugged-91ad4a1f-b6ca-454b-8e65-5a21034a5570 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 08:12:16 np0005465987 podman[252669]: 2025-10-02 12:12:16.141815502 +0000 UTC m=+0.170038842 container remove 3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:12:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:16.148 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[401b19a9-c66b-4d38-be91-9d2651d36899]: (4, ('Thu Oct  2 12:12:15 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 (3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e)\n3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e\nThu Oct  2 12:12:15 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 (3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e)\n3ed9b72b595a0212a464ca5890ba5b36495c7ea9c6285a0e732097a7c445996e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:16.150 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a7e544-5bdb-4a26-8d53-922f2996bd60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:16.151 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d00de8e-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:16 np0005465987 kernel: tap6d00de8e-20: left promiscuous mode
Oct  2 08:12:16 np0005465987 nova_compute[230713]: 2025-10-02 12:12:16.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:16 np0005465987 nova_compute[230713]: 2025-10-02 12:12:16.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:16.174 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e29557e3-7d77-4b00-9746-9b6b68f98c1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:16.198 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b6d7c0-750e-4c17-bfdd-3c41a0eec3c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:16.199 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6744c323-5c2d-4920-a4de-394ffc4e9c7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:16.221 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0fec32ee-701d-4acc-bf90-c91136e8f341]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519822, 'reachable_time': 15653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252687, 'error': None, 'target': 'ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:16 np0005465987 systemd[1]: run-netns-ovnmeta\x2d6d00de8e\x2d203c\x2d4e94\x2db60f\x2d36ba9ccef805.mount: Deactivated successfully.
Oct  2 08:12:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:16.226 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d00de8e-203c-4e94-b60f-36ba9ccef805 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:12:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:16.226 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[3b031c93-8748-4b6e-8e3f-3ac7bd4d46e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:12:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:16.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:16 np0005465987 podman[252712]: 2025-10-02 12:12:16.976896204 +0000 UTC m=+0.071389482 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:12:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:17.046 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:12:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:17.046 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:12:17 np0005465987 nova_compute[230713]: 2025-10-02 12:12:17.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:17 np0005465987 podman[252757]: 2025-10-02 12:12:17.069520339 +0000 UTC m=+0.066181789 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:12:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:17 np0005465987 podman[252755]: 2025-10-02 12:12:17.113297992 +0000 UTC m=+0.103485435 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct  2 08:12:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:17.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:18 np0005465987 nova_compute[230713]: 2025-10-02 12:12:18.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:18 np0005465987 nova_compute[230713]: 2025-10-02 12:12:18.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:18 np0005465987 nova_compute[230713]: 2025-10-02 12:12:18.237 2 DEBUG nova.compute.manager [req-d367ed26-3a91-4913-bf80-063fe777a83d req-492929f1-def5-48ac-b2d6-7c4c0378a405 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Received event network-vif-plugged-91ad4a1f-b6ca-454b-8e65-5a21034a5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:12:18 np0005465987 nova_compute[230713]: 2025-10-02 12:12:18.237 2 DEBUG oslo_concurrency.lockutils [req-d367ed26-3a91-4913-bf80-063fe777a83d req-492929f1-def5-48ac-b2d6-7c4c0378a405 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:18 np0005465987 nova_compute[230713]: 2025-10-02 12:12:18.237 2 DEBUG oslo_concurrency.lockutils [req-d367ed26-3a91-4913-bf80-063fe777a83d req-492929f1-def5-48ac-b2d6-7c4c0378a405 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:18 np0005465987 nova_compute[230713]: 2025-10-02 12:12:18.237 2 DEBUG oslo_concurrency.lockutils [req-d367ed26-3a91-4913-bf80-063fe777a83d req-492929f1-def5-48ac-b2d6-7c4c0378a405 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:18 np0005465987 nova_compute[230713]: 2025-10-02 12:12:18.238 2 DEBUG nova.compute.manager [req-d367ed26-3a91-4913-bf80-063fe777a83d req-492929f1-def5-48ac-b2d6-7c4c0378a405 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] No waiting events found dispatching network-vif-plugged-91ad4a1f-b6ca-454b-8e65-5a21034a5570 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:12:18 np0005465987 nova_compute[230713]: 2025-10-02 12:12:18.238 2 WARNING nova.compute.manager [req-d367ed26-3a91-4913-bf80-063fe777a83d req-492929f1-def5-48ac-b2d6-7c4c0378a405 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Received unexpected event network-vif-plugged-91ad4a1f-b6ca-454b-8e65-5a21034a5570 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 08:12:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:18.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:18 np0005465987 nova_compute[230713]: 2025-10-02 12:12:18.926 2 DEBUG nova.compute.manager [None req-82763cac-6a7d-4c4c-9934-393f56d62c5c 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:12:19 np0005465987 nova_compute[230713]: 2025-10-02 12:12:19.005 2 INFO nova.compute.manager [None req-82763cac-6a7d-4c4c-9934-393f56d62c5c 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] instance snapshotting#033[00m
Oct  2 08:12:19 np0005465987 nova_compute[230713]: 2025-10-02 12:12:19.006 2 WARNING nova.compute.manager [None req-82763cac-6a7d-4c4c-9934-393f56d62c5c 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Oct  2 08:12:19 np0005465987 nova_compute[230713]: 2025-10-02 12:12:19.550 2 INFO nova.virt.libvirt.driver [None req-82763cac-6a7d-4c4c-9934-393f56d62c5c 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Beginning cold snapshot process#033[00m
Oct  2 08:12:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:19.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:19 np0005465987 nova_compute[230713]: 2025-10-02 12:12:19.755 2 DEBUG nova.virt.libvirt.imagebackend [None req-82763cac-6a7d-4c4c-9934-393f56d62c5c 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:12:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:12:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:12:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:12:20 np0005465987 nova_compute[230713]: 2025-10-02 12:12:20.136 2 DEBUG nova.storage.rbd_utils [None req-82763cac-6a7d-4c4c-9934-393f56d62c5c 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] creating snapshot(46e2acf5de6c406a872c9f2009934a4e) on rbd image(130a4f54-6947-473f-bedd-5a3464805fb0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:12:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:20.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:21 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:12:21 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:12:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:21.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e191 e191: 3 total, 3 up, 3 in
Oct  2 08:12:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:22 np0005465987 nova_compute[230713]: 2025-10-02 12:12:22.133 2 DEBUG nova.storage.rbd_utils [None req-82763cac-6a7d-4c4c-9934-393f56d62c5c 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] cloning vms/130a4f54-6947-473f-bedd-5a3464805fb0_disk@46e2acf5de6c406a872c9f2009934a4e to images/2008c515-5405-4e2a-af36-731c08b54231 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:12:22 np0005465987 nova_compute[230713]: 2025-10-02 12:12:22.788 2 DEBUG nova.storage.rbd_utils [None req-82763cac-6a7d-4c4c-9934-393f56d62c5c 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] flattening images/2008c515-5405-4e2a-af36-731c08b54231 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:12:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:22.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:23.049 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:23 np0005465987 nova_compute[230713]: 2025-10-02 12:12:23.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:23 np0005465987 nova_compute[230713]: 2025-10-02 12:12:23.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:23 np0005465987 nova_compute[230713]: 2025-10-02 12:12:23.624 2 DEBUG nova.storage.rbd_utils [None req-82763cac-6a7d-4c4c-9934-393f56d62c5c 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] removing snapshot(46e2acf5de6c406a872c9f2009934a4e) on rbd image(130a4f54-6947-473f-bedd-5a3464805fb0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:12:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:23.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e192 e192: 3 total, 3 up, 3 in
Oct  2 08:12:24 np0005465987 nova_compute[230713]: 2025-10-02 12:12:24.098 2 DEBUG nova.storage.rbd_utils [None req-82763cac-6a7d-4c4c-9934-393f56d62c5c 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] creating snapshot(snap) on rbd image(2008c515-5405-4e2a-af36-731c08b54231) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:12:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:24.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e193 e193: 3 total, 3 up, 3 in
Oct  2 08:12:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:12:25 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/552808507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:12:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:25.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:26 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:12:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:12:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:26.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:12:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:27.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:27.940 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:27.941 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:27.941 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:12:28 np0005465987 nova_compute[230713]: 2025-10-02 12:12:28.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:28 np0005465987 nova_compute[230713]: 2025-10-02 12:12:28.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:28.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:28 np0005465987 nova_compute[230713]: 2025-10-02 12:12:28.912 2 INFO nova.virt.libvirt.driver [None req-82763cac-6a7d-4c4c-9934-393f56d62c5c 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Snapshot image upload complete#033[00m
Oct  2 08:12:28 np0005465987 nova_compute[230713]: 2025-10-02 12:12:28.913 2 INFO nova.compute.manager [None req-82763cac-6a7d-4c4c-9934-393f56d62c5c 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Took 9.91 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:12:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:29.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:30 np0005465987 nova_compute[230713]: 2025-10-02 12:12:30.368 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407135.366967, 130a4f54-6947-473f-bedd-5a3464805fb0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:12:30 np0005465987 nova_compute[230713]: 2025-10-02 12:12:30.368 2 INFO nova.compute.manager [-] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:12:30 np0005465987 nova_compute[230713]: 2025-10-02 12:12:30.394 2 DEBUG nova.compute.manager [None req-1ef0dcab-0015-4bba-bf38-55ab96b6099f - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:12:30 np0005465987 nova_compute[230713]: 2025-10-02 12:12:30.397 2 DEBUG nova.compute.manager [None req-1ef0dcab-0015-4bba-bf38-55ab96b6099f - - - - - -] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: suspended, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:12:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e194 e194: 3 total, 3 up, 3 in
Oct  2 08:12:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:30.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.667 2 DEBUG oslo_concurrency.lockutils [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "130a4f54-6947-473f-bedd-5a3464805fb0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.668 2 DEBUG oslo_concurrency.lockutils [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.668 2 DEBUG oslo_concurrency.lockutils [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.668 2 DEBUG oslo_concurrency.lockutils [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.669 2 DEBUG oslo_concurrency.lockutils [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.669 2 INFO nova.compute.manager [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Terminating instance#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.670 2 DEBUG nova.compute.manager [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:12:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:31.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.677 2 INFO nova.virt.libvirt.driver [-] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Instance destroyed successfully.#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.677 2 DEBUG nova.objects.instance [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lazy-loading 'resources' on Instance uuid 130a4f54-6947-473f-bedd-5a3464805fb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.692 2 DEBUG nova.virt.libvirt.vif [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:11:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-897872820',display_name='tempest-ImagesTestJSON-server-897872820',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-897872820',id=53,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:12:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='55d20ae21b6d4f0abfff3bccc371ee7a',ramdisk_id='',reservation_id='r-psfws3by',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-2116266493',owner_user_name='tempest-ImagesTestJSON-2116266493-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:12:28Z,user_data=None,user_id='0df47040f1ff4ce69a6fbdfd9eba4955',uuid=130a4f54-6947-473f-bedd-5a3464805fb0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "address": "fa:16:3e:7c:6c:31", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91ad4a1f-b6", "ovs_interfaceid": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.693 2 DEBUG nova.network.os_vif_util [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converting VIF {"id": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "address": "fa:16:3e:7c:6c:31", "network": {"id": "6d00de8e-203c-4e94-b60f-36ba9ccef805", "bridge": "br-int", "label": "tempest-ImagesTestJSON-833660691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d20ae21b6d4f0abfff3bccc371ee7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91ad4a1f-b6", "ovs_interfaceid": "91ad4a1f-b6ca-454b-8e65-5a21034a5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.693 2 DEBUG nova.network.os_vif_util [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:6c:31,bridge_name='br-int',has_traffic_filtering=True,id=91ad4a1f-b6ca-454b-8e65-5a21034a5570,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91ad4a1f-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.694 2 DEBUG os_vif [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:6c:31,bridge_name='br-int',has_traffic_filtering=True,id=91ad4a1f-b6ca-454b-8e65-5a21034a5570,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91ad4a1f-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.696 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91ad4a1f-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:31 np0005465987 nova_compute[230713]: 2025-10-02 12:12:31.701 2 INFO os_vif [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:6c:31,bridge_name='br-int',has_traffic_filtering=True,id=91ad4a1f-b6ca-454b-8e65-5a21034a5570,network=Network(6d00de8e-203c-4e94-b60f-36ba9ccef805),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91ad4a1f-b6')#033[00m
Oct  2 08:12:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:32 np0005465987 nova_compute[230713]: 2025-10-02 12:12:32.188 2 INFO nova.virt.libvirt.driver [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Deleting instance files /var/lib/nova/instances/130a4f54-6947-473f-bedd-5a3464805fb0_del#033[00m
Oct  2 08:12:32 np0005465987 nova_compute[230713]: 2025-10-02 12:12:32.189 2 INFO nova.virt.libvirt.driver [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Deletion of /var/lib/nova/instances/130a4f54-6947-473f-bedd-5a3464805fb0_del complete#033[00m
Oct  2 08:12:32 np0005465987 nova_compute[230713]: 2025-10-02 12:12:32.271 2 INFO nova.compute.manager [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:12:32 np0005465987 nova_compute[230713]: 2025-10-02 12:12:32.271 2 DEBUG oslo.service.loopingcall [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:12:32 np0005465987 nova_compute[230713]: 2025-10-02 12:12:32.272 2 DEBUG nova.compute.manager [-] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:12:32 np0005465987 nova_compute[230713]: 2025-10-02 12:12:32.272 2 DEBUG nova.network.neutron [-] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:12:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:32.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:33 np0005465987 nova_compute[230713]: 2025-10-02 12:12:33.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e195 e195: 3 total, 3 up, 3 in
Oct  2 08:12:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:33.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:34 np0005465987 nova_compute[230713]: 2025-10-02 12:12:34.353 2 DEBUG nova.network.neutron [-] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:12:34 np0005465987 nova_compute[230713]: 2025-10-02 12:12:34.570 2 INFO nova.compute.manager [-] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Took 2.30 seconds to deallocate network for instance.#033[00m
Oct  2 08:12:34 np0005465987 nova_compute[230713]: 2025-10-02 12:12:34.655 2 DEBUG oslo_concurrency.lockutils [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:34 np0005465987 nova_compute[230713]: 2025-10-02 12:12:34.655 2 DEBUG oslo_concurrency.lockutils [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:34 np0005465987 nova_compute[230713]: 2025-10-02 12:12:34.712 2 DEBUG oslo_concurrency.processutils [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:12:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:34.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:12:35 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3094586200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:12:35 np0005465987 nova_compute[230713]: 2025-10-02 12:12:35.135 2 DEBUG oslo_concurrency.processutils [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:12:35 np0005465987 nova_compute[230713]: 2025-10-02 12:12:35.144 2 DEBUG nova.compute.provider_tree [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:12:35 np0005465987 nova_compute[230713]: 2025-10-02 12:12:35.169 2 DEBUG nova.scheduler.client.report [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:12:35 np0005465987 nova_compute[230713]: 2025-10-02 12:12:35.209 2 DEBUG oslo_concurrency.lockutils [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:35 np0005465987 nova_compute[230713]: 2025-10-02 12:12:35.261 2 INFO nova.scheduler.client.report [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Deleted allocations for instance 130a4f54-6947-473f-bedd-5a3464805fb0#033[00m
Oct  2 08:12:35 np0005465987 nova_compute[230713]: 2025-10-02 12:12:35.421 2 DEBUG oslo_concurrency.lockutils [None req-22bc59f6-aa87-4b5b-9d4d-b17aa5144a12 0df47040f1ff4ce69a6fbdfd9eba4955 55d20ae21b6d4f0abfff3bccc371ee7a - - default default] Lock "130a4f54-6947-473f-bedd-5a3464805fb0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:35.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:35 np0005465987 nova_compute[230713]: 2025-10-02 12:12:35.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:35 np0005465987 nova_compute[230713]: 2025-10-02 12:12:35.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:12:35 np0005465987 nova_compute[230713]: 2025-10-02 12:12:35.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:12:35 np0005465987 nova_compute[230713]: 2025-10-02 12:12:35.984 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:12:36 np0005465987 nova_compute[230713]: 2025-10-02 12:12:36.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:36.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:36 np0005465987 nova_compute[230713]: 2025-10-02 12:12:36.943 2 DEBUG nova.compute.manager [req-443b5155-c447-4702-a2a5-173fa7b65328 req-dcdaf051-f978-4a37-8d0c-92459efc41a7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 130a4f54-6947-473f-bedd-5a3464805fb0] Received event network-vif-deleted-91ad4a1f-b6ca-454b-8e65-5a21034a5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:12:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:37.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:38 np0005465987 nova_compute[230713]: 2025-10-02 12:12:38.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e196 e196: 3 total, 3 up, 3 in
Oct  2 08:12:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:38.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:38 np0005465987 nova_compute[230713]: 2025-10-02 12:12:38.977 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:12:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:39.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:12:39 np0005465987 podman[253119]: 2025-10-02 12:12:39.826285198 +0000 UTC m=+0.045655025 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 08:12:39 np0005465987 nova_compute[230713]: 2025-10-02 12:12:39.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:39 np0005465987 nova_compute[230713]: 2025-10-02 12:12:39.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:12:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:40.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:12:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:41.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:41 np0005465987 nova_compute[230713]: 2025-10-02 12:12:41.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:42.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:43 np0005465987 nova_compute[230713]: 2025-10-02 12:12:43.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:43.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:43 np0005465987 nova_compute[230713]: 2025-10-02 12:12:43.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:44.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:44 np0005465987 nova_compute[230713]: 2025-10-02 12:12:44.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:45 np0005465987 nova_compute[230713]: 2025-10-02 12:12:45.117 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:45 np0005465987 nova_compute[230713]: 2025-10-02 12:12:45.118 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:45 np0005465987 nova_compute[230713]: 2025-10-02 12:12:45.119 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:45 np0005465987 nova_compute[230713]: 2025-10-02 12:12:45.119 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:12:45 np0005465987 nova_compute[230713]: 2025-10-02 12:12:45.120 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:12:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:12:45 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1393102552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:12:45 np0005465987 nova_compute[230713]: 2025-10-02 12:12:45.608 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:12:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:12:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:45.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:12:45 np0005465987 nova_compute[230713]: 2025-10-02 12:12:45.822 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:12:45 np0005465987 nova_compute[230713]: 2025-10-02 12:12:45.823 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4709MB free_disk=20.922027587890625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:12:45 np0005465987 nova_compute[230713]: 2025-10-02 12:12:45.823 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:45 np0005465987 nova_compute[230713]: 2025-10-02 12:12:45.824 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:45 np0005465987 nova_compute[230713]: 2025-10-02 12:12:45.942 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:12:45 np0005465987 nova_compute[230713]: 2025-10-02 12:12:45.942 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:12:46 np0005465987 nova_compute[230713]: 2025-10-02 12:12:46.014 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:12:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:12:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2819117286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:12:46 np0005465987 nova_compute[230713]: 2025-10-02 12:12:46.447 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:12:46 np0005465987 nova_compute[230713]: 2025-10-02 12:12:46.454 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:12:46 np0005465987 nova_compute[230713]: 2025-10-02 12:12:46.503 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:12:46 np0005465987 nova_compute[230713]: 2025-10-02 12:12:46.552 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:12:46 np0005465987 nova_compute[230713]: 2025-10-02 12:12:46.553 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:46 np0005465987 nova_compute[230713]: 2025-10-02 12:12:46.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:46.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:47 np0005465987 nova_compute[230713]: 2025-10-02 12:12:47.554 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:47 np0005465987 nova_compute[230713]: 2025-10-02 12:12:47.555 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:47 np0005465987 nova_compute[230713]: 2025-10-02 12:12:47.555 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:47 np0005465987 nova_compute[230713]: 2025-10-02 12:12:47.556 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:12:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:12:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:47.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:12:47 np0005465987 podman[253184]: 2025-10-02 12:12:47.867535915 +0000 UTC m=+0.074207239 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:12:47 np0005465987 podman[253185]: 2025-10-02 12:12:47.87244968 +0000 UTC m=+0.072055140 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:12:47 np0005465987 podman[253183]: 2025-10-02 12:12:47.908332526 +0000 UTC m=+0.112075610 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:12:48 np0005465987 nova_compute[230713]: 2025-10-02 12:12:48.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:48.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:12:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:49.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:12:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:50.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:51 np0005465987 nova_compute[230713]: 2025-10-02 12:12:51.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:51.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:52.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:53 np0005465987 nova_compute[230713]: 2025-10-02 12:12:53.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:12:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:53.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:12:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:54.457 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:12:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:54.457 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:12:54 np0005465987 nova_compute[230713]: 2025-10-02 12:12:54.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:54.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:12:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 20K writes, 80K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 20K writes, 6498 syncs, 3.13 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8624 writes, 32K keys, 8624 commit groups, 1.0 writes per commit group, ingest: 35.76 MB, 0.06 MB/s#012Interval WAL: 8624 writes, 3394 syncs, 2.54 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:12:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:12:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1688446545' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:12:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:12:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1688446545' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:12:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:55.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e197 e197: 3 total, 3 up, 3 in
Oct  2 08:12:56 np0005465987 nova_compute[230713]: 2025-10-02 12:12:56.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:56.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e198 e198: 3 total, 3 up, 3 in
Oct  2 08:12:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:12:57.459 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:57.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e199 e199: 3 total, 3 up, 3 in
Oct  2 08:12:58 np0005465987 nova_compute[230713]: 2025-10-02 12:12:58.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:58.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:59 np0005465987 nova_compute[230713]: 2025-10-02 12:12:59.634 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Acquiring lock "eaca80c4-d12a-4524-8733-32e74e062dd6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:59 np0005465987 nova_compute[230713]: 2025-10-02 12:12:59.634 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:59 np0005465987 nova_compute[230713]: 2025-10-02 12:12:59.670 2 DEBUG nova.compute.manager [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:12:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:12:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:59.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:59 np0005465987 nova_compute[230713]: 2025-10-02 12:12:59.759 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:59 np0005465987 nova_compute[230713]: 2025-10-02 12:12:59.760 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:59 np0005465987 nova_compute[230713]: 2025-10-02 12:12:59.765 2 DEBUG nova.virt.hardware [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:12:59 np0005465987 nova_compute[230713]: 2025-10-02 12:12:59.766 2 INFO nova.compute.claims [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:12:59 np0005465987 nova_compute[230713]: 2025-10-02 12:12:59.948 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:13:00 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3142733455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.403 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.410 2 DEBUG nova.compute.provider_tree [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.429 2 DEBUG nova.scheduler.client.report [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.457 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.458 2 DEBUG nova.compute.manager [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.552 2 DEBUG nova.compute.manager [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.553 2 DEBUG nova.network.neutron [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.599 2 INFO nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.625 2 DEBUG nova.compute.manager [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.729 2 DEBUG nova.compute.manager [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.730 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.730 2 INFO nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Creating image(s)#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.759 2 DEBUG nova.storage.rbd_utils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] rbd image eaca80c4-d12a-4524-8733-32e74e062dd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.793 2 DEBUG nova.storage.rbd_utils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] rbd image eaca80c4-d12a-4524-8733-32e74e062dd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.826 2 DEBUG nova.storage.rbd_utils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] rbd image eaca80c4-d12a-4524-8733-32e74e062dd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.833 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:13:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:00.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.903 2 DEBUG nova.policy [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea7aba6956024282ad73d716eded27ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b6a848d3ce964c8a830dc094b1fabda3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.908 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.908 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.909 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.910 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.943 2 DEBUG nova.storage.rbd_utils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] rbd image eaca80c4-d12a-4524-8733-32e74e062dd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:00 np0005465987 nova_compute[230713]: 2025-10-02 12:13:00.948 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 eaca80c4-d12a-4524-8733-32e74e062dd6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:01 np0005465987 nova_compute[230713]: 2025-10-02 12:13:01.249 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 eaca80c4-d12a-4524-8733-32e74e062dd6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:01 np0005465987 nova_compute[230713]: 2025-10-02 12:13:01.327 2 DEBUG nova.storage.rbd_utils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] resizing rbd image eaca80c4-d12a-4524-8733-32e74e062dd6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:13:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:01.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:01 np0005465987 nova_compute[230713]: 2025-10-02 12:13:01.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:01 np0005465987 nova_compute[230713]: 2025-10-02 12:13:01.790 2 DEBUG nova.objects.instance [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lazy-loading 'migration_context' on Instance uuid eaca80c4-d12a-4524-8733-32e74e062dd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:13:01 np0005465987 nova_compute[230713]: 2025-10-02 12:13:01.810 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:13:01 np0005465987 nova_compute[230713]: 2025-10-02 12:13:01.811 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Ensure instance console log exists: /var/lib/nova/instances/eaca80c4-d12a-4524-8733-32e74e062dd6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:13:01 np0005465987 nova_compute[230713]: 2025-10-02 12:13:01.811 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:01 np0005465987 nova_compute[230713]: 2025-10-02 12:13:01.812 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:01 np0005465987 nova_compute[230713]: 2025-10-02 12:13:01.812 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:13:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:02.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:13:03 np0005465987 nova_compute[230713]: 2025-10-02 12:13:03.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e200 e200: 3 total, 3 up, 3 in
Oct  2 08:13:03 np0005465987 nova_compute[230713]: 2025-10-02 12:13:03.718 2 DEBUG nova.network.neutron [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Successfully created port: 77e05704-3f59-4efd-94a2-c2636c9bcf39 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:13:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:03.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:04.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:05 np0005465987 nova_compute[230713]: 2025-10-02 12:13:05.186 2 DEBUG nova.network.neutron [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Successfully updated port: 77e05704-3f59-4efd-94a2-c2636c9bcf39 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:13:05 np0005465987 nova_compute[230713]: 2025-10-02 12:13:05.211 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Acquiring lock "refresh_cache-eaca80c4-d12a-4524-8733-32e74e062dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:13:05 np0005465987 nova_compute[230713]: 2025-10-02 12:13:05.212 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Acquired lock "refresh_cache-eaca80c4-d12a-4524-8733-32e74e062dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:13:05 np0005465987 nova_compute[230713]: 2025-10-02 12:13:05.212 2 DEBUG nova.network.neutron [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:13:05 np0005465987 nova_compute[230713]: 2025-10-02 12:13:05.502 2 DEBUG nova.compute.manager [req-b87d55cc-1038-4a3c-b267-019a374906b3 req-1d820bb7-808d-424b-8e47-56f7fcaca03c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Received event network-changed-77e05704-3f59-4efd-94a2-c2636c9bcf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:13:05 np0005465987 nova_compute[230713]: 2025-10-02 12:13:05.503 2 DEBUG nova.compute.manager [req-b87d55cc-1038-4a3c-b267-019a374906b3 req-1d820bb7-808d-424b-8e47-56f7fcaca03c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Refreshing instance network info cache due to event network-changed-77e05704-3f59-4efd-94a2-c2636c9bcf39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:13:05 np0005465987 nova_compute[230713]: 2025-10-02 12:13:05.503 2 DEBUG oslo_concurrency.lockutils [req-b87d55cc-1038-4a3c-b267-019a374906b3 req-1d820bb7-808d-424b-8e47-56f7fcaca03c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-eaca80c4-d12a-4524-8733-32e74e062dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:13:05 np0005465987 nova_compute[230713]: 2025-10-02 12:13:05.695 2 DEBUG nova.network.neutron [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:13:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:13:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:05.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:13:06 np0005465987 nova_compute[230713]: 2025-10-02 12:13:06.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:06.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.454 2 DEBUG nova.network.neutron [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Updating instance_info_cache with network_info: [{"id": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "address": "fa:16:3e:72:73:45", "network": {"id": "406cfa48-d21f-4499-b743-9679013fa8f8", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1809387169-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a848d3ce964c8a830dc094b1fabda3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e05704-3f", "ovs_interfaceid": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.504 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Releasing lock "refresh_cache-eaca80c4-d12a-4524-8733-32e74e062dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.505 2 DEBUG nova.compute.manager [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Instance network_info: |[{"id": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "address": "fa:16:3e:72:73:45", "network": {"id": "406cfa48-d21f-4499-b743-9679013fa8f8", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1809387169-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a848d3ce964c8a830dc094b1fabda3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e05704-3f", "ovs_interfaceid": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.505 2 DEBUG oslo_concurrency.lockutils [req-b87d55cc-1038-4a3c-b267-019a374906b3 req-1d820bb7-808d-424b-8e47-56f7fcaca03c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-eaca80c4-d12a-4524-8733-32e74e062dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.506 2 DEBUG nova.network.neutron [req-b87d55cc-1038-4a3c-b267-019a374906b3 req-1d820bb7-808d-424b-8e47-56f7fcaca03c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Refreshing network info cache for port 77e05704-3f59-4efd-94a2-c2636c9bcf39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.511 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Start _get_guest_xml network_info=[{"id": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "address": "fa:16:3e:72:73:45", "network": {"id": "406cfa48-d21f-4499-b743-9679013fa8f8", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1809387169-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a848d3ce964c8a830dc094b1fabda3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e05704-3f", "ovs_interfaceid": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.519 2 WARNING nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.526 2 DEBUG nova.virt.libvirt.host [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.527 2 DEBUG nova.virt.libvirt.host [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.532 2 DEBUG nova.virt.libvirt.host [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.533 2 DEBUG nova.virt.libvirt.host [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.535 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.535 2 DEBUG nova.virt.hardware [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.536 2 DEBUG nova.virt.hardware [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.537 2 DEBUG nova.virt.hardware [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.537 2 DEBUG nova.virt.hardware [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.538 2 DEBUG nova.virt.hardware [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.538 2 DEBUG nova.virt.hardware [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.539 2 DEBUG nova.virt.hardware [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.539 2 DEBUG nova.virt.hardware [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.540 2 DEBUG nova.virt.hardware [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.540 2 DEBUG nova.virt.hardware [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.540 2 DEBUG nova.virt.hardware [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.546 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:13:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:07.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:13:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:13:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3968065184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:13:07 np0005465987 nova_compute[230713]: 2025-10-02 12:13:07.991 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.017 2 DEBUG nova.storage.rbd_utils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] rbd image eaca80c4-d12a-4524-8733-32e74e062dd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.022 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:13:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3230038949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.514 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.516 2 DEBUG nova.virt.libvirt.vif [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:12:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=56,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBFihbxAjweHdajHgV7gFNpUayrs027OFrTN+QnpkkKeSFN3vmKwe8jU6x/xmmr52UmwPTTGUjrtTBDKf7dNO2YlgpiMp+JI2/DpRJDwi/07BC63zRuQvCsUDWXklq5stw==',key_name='tempest-keypair-187724139',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6a848d3ce964c8a830dc094b1fabda3',ramdisk_id='',reservation_id='r-92wehvzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-221039657',owner_user_name='tempest-ServersV294TestFqdnHostnames-221039657-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea7aba6956024282ad73d716eded27ce',uuid=eaca80c4-d12a-4524-8733-32e74e062dd6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "address": "fa:16:3e:72:73:45", "network": {"id": "406cfa48-d21f-4499-b743-9679013fa8f8", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1809387169-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a848d3ce964c8a830dc094b1fabda3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e05704-3f", "ovs_interfaceid": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.516 2 DEBUG nova.network.os_vif_util [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Converting VIF {"id": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "address": "fa:16:3e:72:73:45", "network": {"id": "406cfa48-d21f-4499-b743-9679013fa8f8", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1809387169-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a848d3ce964c8a830dc094b1fabda3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e05704-3f", "ovs_interfaceid": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.517 2 DEBUG nova.network.os_vif_util [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:73:45,bridge_name='br-int',has_traffic_filtering=True,id=77e05704-3f59-4efd-94a2-c2636c9bcf39,network=Network(406cfa48-d21f-4499-b743-9679013fa8f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e05704-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.518 2 DEBUG nova.objects.instance [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lazy-loading 'pci_devices' on Instance uuid eaca80c4-d12a-4524-8733-32e74e062dd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.539 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  <uuid>eaca80c4-d12a-4524-8733-32e74e062dd6</uuid>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  <name>instance-00000038</name>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <nova:name>guest-instance-1</nova:name>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:13:07</nova:creationTime>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <nova:user uuid="ea7aba6956024282ad73d716eded27ce">tempest-ServersV294TestFqdnHostnames-221039657-project-member</nova:user>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <nova:project uuid="b6a848d3ce964c8a830dc094b1fabda3">tempest-ServersV294TestFqdnHostnames-221039657</nova:project>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <nova:port uuid="77e05704-3f59-4efd-94a2-c2636c9bcf39">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <entry name="serial">eaca80c4-d12a-4524-8733-32e74e062dd6</entry>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <entry name="uuid">eaca80c4-d12a-4524-8733-32e74e062dd6</entry>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/eaca80c4-d12a-4524-8733-32e74e062dd6_disk">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/eaca80c4-d12a-4524-8733-32e74e062dd6_disk.config">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:72:73:45"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <target dev="tap77e05704-3f"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/eaca80c4-d12a-4524-8733-32e74e062dd6/console.log" append="off"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:13:08 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:13:08 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:13:08 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:13:08 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.541 2 DEBUG nova.compute.manager [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Preparing to wait for external event network-vif-plugged-77e05704-3f59-4efd-94a2-c2636c9bcf39 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.541 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Acquiring lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.542 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.542 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.543 2 DEBUG nova.virt.libvirt.vif [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:12:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=56,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBFihbxAjweHdajHgV7gFNpUayrs027OFrTN+QnpkkKeSFN3vmKwe8jU6x/xmmr52UmwPTTGUjrtTBDKf7dNO2YlgpiMp+JI2/DpRJDwi/07BC63zRuQvCsUDWXklq5stw==',key_name='tempest-keypair-187724139',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6a848d3ce964c8a830dc094b1fabda3',ramdisk_id='',reservation_id='r-92wehvzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-221039657',owner_user_name='tempest-ServersV294TestFqdnHostnames-221039657-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea7aba6956024282ad73d716eded27ce',uuid=eaca80c4-d12a-4524-8733-32e74e062dd6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "address": "fa:16:3e:72:73:45", "network": {"id": "406cfa48-d21f-4499-b743-9679013fa8f8", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1809387169-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a848d3ce964c8a830dc094b1fabda3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e05704-3f", "ovs_interfaceid": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.543 2 DEBUG nova.network.os_vif_util [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Converting VIF {"id": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "address": "fa:16:3e:72:73:45", "network": {"id": "406cfa48-d21f-4499-b743-9679013fa8f8", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1809387169-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a848d3ce964c8a830dc094b1fabda3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e05704-3f", "ovs_interfaceid": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.544 2 DEBUG nova.network.os_vif_util [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:73:45,bridge_name='br-int',has_traffic_filtering=True,id=77e05704-3f59-4efd-94a2-c2636c9bcf39,network=Network(406cfa48-d21f-4499-b743-9679013fa8f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e05704-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.545 2 DEBUG os_vif [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:73:45,bridge_name='br-int',has_traffic_filtering=True,id=77e05704-3f59-4efd-94a2-c2636c9bcf39,network=Network(406cfa48-d21f-4499-b743-9679013fa8f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e05704-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.546 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.546 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77e05704-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap77e05704-3f, col_values=(('external_ids', {'iface-id': '77e05704-3f59-4efd-94a2-c2636c9bcf39', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:73:45', 'vm-uuid': 'eaca80c4-d12a-4524-8733-32e74e062dd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:08 np0005465987 NetworkManager[44910]: <info>  [1759407188.5541] manager: (tap77e05704-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.563 2 INFO os_vif [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:73:45,bridge_name='br-int',has_traffic_filtering=True,id=77e05704-3f59-4efd-94a2-c2636c9bcf39,network=Network(406cfa48-d21f-4499-b743-9679013fa8f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e05704-3f')#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.663 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.664 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.664 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] No VIF found with MAC fa:16:3e:72:73:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.665 2 INFO nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Using config drive#033[00m
Oct  2 08:13:08 np0005465987 nova_compute[230713]: 2025-10-02 12:13:08.705 2 DEBUG nova.storage.rbd_utils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] rbd image eaca80c4-d12a-4524-8733-32e74e062dd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:08.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:09 np0005465987 nova_compute[230713]: 2025-10-02 12:13:09.488 2 INFO nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Creating config drive at /var/lib/nova/instances/eaca80c4-d12a-4524-8733-32e74e062dd6/disk.config#033[00m
Oct  2 08:13:09 np0005465987 nova_compute[230713]: 2025-10-02 12:13:09.494 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eaca80c4-d12a-4524-8733-32e74e062dd6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp10iaem2h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:09 np0005465987 nova_compute[230713]: 2025-10-02 12:13:09.628 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eaca80c4-d12a-4524-8733-32e74e062dd6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp10iaem2h" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:09 np0005465987 nova_compute[230713]: 2025-10-02 12:13:09.660 2 DEBUG nova.storage.rbd_utils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] rbd image eaca80c4-d12a-4524-8733-32e74e062dd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:09 np0005465987 nova_compute[230713]: 2025-10-02 12:13:09.664 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eaca80c4-d12a-4524-8733-32e74e062dd6/disk.config eaca80c4-d12a-4524-8733-32e74e062dd6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:09.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:09 np0005465987 nova_compute[230713]: 2025-10-02 12:13:09.893 2 DEBUG oslo_concurrency.processutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eaca80c4-d12a-4524-8733-32e74e062dd6/disk.config eaca80c4-d12a-4524-8733-32e74e062dd6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:09 np0005465987 nova_compute[230713]: 2025-10-02 12:13:09.893 2 INFO nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Deleting local config drive /var/lib/nova/instances/eaca80c4-d12a-4524-8733-32e74e062dd6/disk.config because it was imported into RBD.#033[00m
Oct  2 08:13:09 np0005465987 kernel: tap77e05704-3f: entered promiscuous mode
Oct  2 08:13:09 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:09Z|00153|binding|INFO|Claiming lport 77e05704-3f59-4efd-94a2-c2636c9bcf39 for this chassis.
Oct  2 08:13:09 np0005465987 nova_compute[230713]: 2025-10-02 12:13:09.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:09 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:09Z|00154|binding|INFO|77e05704-3f59-4efd-94a2-c2636c9bcf39: Claiming fa:16:3e:72:73:45 10.100.0.10
Oct  2 08:13:09 np0005465987 NetworkManager[44910]: <info>  [1759407189.9633] manager: (tap77e05704-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Oct  2 08:13:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:09.973 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:73:45 10.100.0.10'], port_security=['fa:16:3e:72:73:45 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'eaca80c4-d12a-4524-8733-32e74e062dd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-406cfa48-d21f-4499-b743-9679013fa8f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6a848d3ce964c8a830dc094b1fabda3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05fdca40-1cb6-464c-aa5a-8851ff465206', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=475aee86-91de-476d-985b-041fc0833aae, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=77e05704-3f59-4efd-94a2-c2636c9bcf39) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:13:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:09.975 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 77e05704-3f59-4efd-94a2-c2636c9bcf39 in datapath 406cfa48-d21f-4499-b743-9679013fa8f8 bound to our chassis#033[00m
Oct  2 08:13:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:09.978 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 406cfa48-d21f-4499-b743-9679013fa8f8#033[00m
Oct  2 08:13:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:09.991 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8948ac8f-c75c-401e-b150-e618370c37bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:09.993 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap406cfa48-d1 in ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:13:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:09.995 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap406cfa48-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:13:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:09.995 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[337bacd0-b91f-49dd-afee-c8b22ce5d63f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:09.996 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7043813c-eeeb-461f-934d-6a6faf2e1d6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 systemd-udevd[253575]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:13:10 np0005465987 systemd-machined[188335]: New machine qemu-28-instance-00000038.
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.011 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed51364-7780-4a13-ba04-427f62bf5902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.016 2 DEBUG nova.network.neutron [req-b87d55cc-1038-4a3c-b267-019a374906b3 req-1d820bb7-808d-424b-8e47-56f7fcaca03c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Updated VIF entry in instance network info cache for port 77e05704-3f59-4efd-94a2-c2636c9bcf39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.016 2 DEBUG nova.network.neutron [req-b87d55cc-1038-4a3c-b267-019a374906b3 req-1d820bb7-808d-424b-8e47-56f7fcaca03c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Updating instance_info_cache with network_info: [{"id": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "address": "fa:16:3e:72:73:45", "network": {"id": "406cfa48-d21f-4499-b743-9679013fa8f8", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1809387169-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a848d3ce964c8a830dc094b1fabda3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e05704-3f", "ovs_interfaceid": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:10 np0005465987 NetworkManager[44910]: <info>  [1759407190.0288] device (tap77e05704-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:13:10 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:10Z|00155|binding|INFO|Setting lport 77e05704-3f59-4efd-94a2-c2636c9bcf39 ovn-installed in OVS
Oct  2 08:13:10 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:10Z|00156|binding|INFO|Setting lport 77e05704-3f59-4efd-94a2-c2636c9bcf39 up in Southbound
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:10 np0005465987 NetworkManager[44910]: <info>  [1759407190.0334] device (tap77e05704-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:13:10 np0005465987 systemd[1]: Started Virtual Machine qemu-28-instance-00000038.
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.037 2 DEBUG oslo_concurrency.lockutils [req-b87d55cc-1038-4a3c-b267-019a374906b3 req-1d820bb7-808d-424b-8e47-56f7fcaca03c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-eaca80c4-d12a-4524-8733-32e74e062dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.041 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1d43cb-e663-4e1d-8aa8-e52f21ec3bed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.080 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9526e423-e576-4a23-ab40-fec2881b5d52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 systemd-udevd[253583]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:13:10 np0005465987 NetworkManager[44910]: <info>  [1759407190.0900] manager: (tap406cfa48-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/96)
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.090 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[00fe4815-5a55-4550-be06-0a92d1ca4157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 podman[253563]: 2025-10-02 12:13:10.104077674 +0000 UTC m=+0.115299489 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.134 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[be6e40d2-41c9-4cd0-902d-301780767ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.140 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[561ba506-9d7a-4ba3-8ca7-e809c5876213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 NetworkManager[44910]: <info>  [1759407190.1680] device (tap406cfa48-d0): carrier: link connected
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.175 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e10d8a44-3531-40f2-9452-9038635fa873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.190 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b78280e8-baad-4224-8682-c5aa3abe4e13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap406cfa48-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:3c:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525874, 'reachable_time': 36960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253617, 'error': None, 'target': 'ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.206 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d4da34-b5a2-4687-93a4-c00250023ab2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:3ce6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525874, 'tstamp': 525874}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253618, 'error': None, 'target': 'ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.222 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba5398a-cd9e-4e97-a677-790b410c3c4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap406cfa48-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:3c:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525874, 'reachable_time': 36960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253619, 'error': None, 'target': 'ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.248 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe13e5d-657b-4cd5-b1ae-25ec5644dfe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.303 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[97fe2aa7-709b-4620-8867-910f5acda6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.304 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap406cfa48-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.305 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.305 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap406cfa48-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:10 np0005465987 NetworkManager[44910]: <info>  [1759407190.3080] manager: (tap406cfa48-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Oct  2 08:13:10 np0005465987 kernel: tap406cfa48-d0: entered promiscuous mode
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:10 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:10Z|00157|binding|INFO|Releasing lport cf00b7d7-10e7-415f-8b02-fe529a085051 from this chassis (sb_readonly=0)
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.312 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap406cfa48-d0, col_values=(('external_ids', {'iface-id': 'cf00b7d7-10e7-415f-8b02-fe529a085051'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.329 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/406cfa48-d21f-4499-b743-9679013fa8f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/406cfa48-d21f-4499-b743-9679013fa8f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.329 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8fd550-ac94-4a07-aa20-b316e4280bba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.330 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-406cfa48-d21f-4499-b743-9679013fa8f8
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/406cfa48-d21f-4499-b743-9679013fa8f8.pid.haproxy
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 406cfa48-d21f-4499-b743-9679013fa8f8
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:10.332 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8', 'env', 'PROCESS_TAG=haproxy-406cfa48-d21f-4499-b743-9679013fa8f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/406cfa48-d21f-4499-b743-9679013fa8f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.380 2 DEBUG nova.compute.manager [req-1f7d7736-928d-42ff-9171-36a8581590c9 req-b385da3c-de5e-43b5-8fe6-801d1269550a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Received event network-vif-plugged-77e05704-3f59-4efd-94a2-c2636c9bcf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.381 2 DEBUG oslo_concurrency.lockutils [req-1f7d7736-928d-42ff-9171-36a8581590c9 req-b385da3c-de5e-43b5-8fe6-801d1269550a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.381 2 DEBUG oslo_concurrency.lockutils [req-1f7d7736-928d-42ff-9171-36a8581590c9 req-b385da3c-de5e-43b5-8fe6-801d1269550a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.381 2 DEBUG oslo_concurrency.lockutils [req-1f7d7736-928d-42ff-9171-36a8581590c9 req-b385da3c-de5e-43b5-8fe6-801d1269550a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.381 2 DEBUG nova.compute.manager [req-1f7d7736-928d-42ff-9171-36a8581590c9 req-b385da3c-de5e-43b5-8fe6-801d1269550a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Processing event network-vif-plugged-77e05704-3f59-4efd-94a2-c2636c9bcf39 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:13:10 np0005465987 podman[253693]: 2025-10-02 12:13:10.699425719 +0000 UTC m=+0.034107988 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:13:10 np0005465987 podman[253693]: 2025-10-02 12:13:10.833114152 +0000 UTC m=+0.167796421 container create 86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.879 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407190.879061, eaca80c4-d12a-4524-8733-32e74e062dd6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.879 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] VM Started (Lifecycle Event)#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.881 2 DEBUG nova.compute.manager [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.884 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.887 2 INFO nova.virt.libvirt.driver [-] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Instance spawned successfully.#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.887 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.903 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:13:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:13:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:10.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.907 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.914 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:13:10 np0005465987 systemd[1]: Started libpod-conmon-86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc.scope.
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.914 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.915 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.915 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.915 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.916 2 DEBUG nova.virt.libvirt.driver [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:13:10 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.961 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.962 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407190.8791857, eaca80c4-d12a-4524-8733-32e74e062dd6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:13:10 np0005465987 nova_compute[230713]: 2025-10-02 12:13:10.962 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:13:10 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84c7bef2686dbf6b1383550cbd851402ed22d7b3f0f914fe87ebf3e234c841a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:13:11 np0005465987 nova_compute[230713]: 2025-10-02 12:13:11.019 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:13:11 np0005465987 nova_compute[230713]: 2025-10-02 12:13:11.022 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407190.8833892, eaca80c4-d12a-4524-8733-32e74e062dd6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:13:11 np0005465987 nova_compute[230713]: 2025-10-02 12:13:11.022 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:13:11 np0005465987 podman[253693]: 2025-10-02 12:13:11.023906614 +0000 UTC m=+0.358588853 container init 86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:13:11 np0005465987 podman[253693]: 2025-10-02 12:13:11.030535947 +0000 UTC m=+0.365218186 container start 86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 08:13:11 np0005465987 nova_compute[230713]: 2025-10-02 12:13:11.034 2 INFO nova.compute.manager [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Took 10.30 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:13:11 np0005465987 nova_compute[230713]: 2025-10-02 12:13:11.034 2 DEBUG nova.compute.manager [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:13:11 np0005465987 nova_compute[230713]: 2025-10-02 12:13:11.044 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:13:11 np0005465987 nova_compute[230713]: 2025-10-02 12:13:11.049 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:13:11 np0005465987 neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8[253707]: [NOTICE]   (253711) : New worker (253713) forked
Oct  2 08:13:11 np0005465987 neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8[253707]: [NOTICE]   (253711) : Loading success.
Oct  2 08:13:11 np0005465987 nova_compute[230713]: 2025-10-02 12:13:11.070 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:13:11 np0005465987 nova_compute[230713]: 2025-10-02 12:13:11.103 2 INFO nova.compute.manager [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Took 11.37 seconds to build instance.#033[00m
Oct  2 08:13:11 np0005465987 nova_compute[230713]: 2025-10-02 12:13:11.120 2 DEBUG oslo_concurrency.lockutils [None req-58239a8e-2111-4ffd-b158-1ebef0737f65 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:11.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:12 np0005465987 nova_compute[230713]: 2025-10-02 12:13:12.485 2 DEBUG nova.compute.manager [req-0dd6bc0c-02af-49e5-ad38-eb67a4e8d62b req-6b7f0742-7aad-46d8-ad53-4571d763daed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Received event network-vif-plugged-77e05704-3f59-4efd-94a2-c2636c9bcf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:13:12 np0005465987 nova_compute[230713]: 2025-10-02 12:13:12.485 2 DEBUG oslo_concurrency.lockutils [req-0dd6bc0c-02af-49e5-ad38-eb67a4e8d62b req-6b7f0742-7aad-46d8-ad53-4571d763daed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:12 np0005465987 nova_compute[230713]: 2025-10-02 12:13:12.485 2 DEBUG oslo_concurrency.lockutils [req-0dd6bc0c-02af-49e5-ad38-eb67a4e8d62b req-6b7f0742-7aad-46d8-ad53-4571d763daed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:12 np0005465987 nova_compute[230713]: 2025-10-02 12:13:12.485 2 DEBUG oslo_concurrency.lockutils [req-0dd6bc0c-02af-49e5-ad38-eb67a4e8d62b req-6b7f0742-7aad-46d8-ad53-4571d763daed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:12 np0005465987 nova_compute[230713]: 2025-10-02 12:13:12.486 2 DEBUG nova.compute.manager [req-0dd6bc0c-02af-49e5-ad38-eb67a4e8d62b req-6b7f0742-7aad-46d8-ad53-4571d763daed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] No waiting events found dispatching network-vif-plugged-77e05704-3f59-4efd-94a2-c2636c9bcf39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:13:12 np0005465987 nova_compute[230713]: 2025-10-02 12:13:12.486 2 WARNING nova.compute.manager [req-0dd6bc0c-02af-49e5-ad38-eb67a4e8d62b req-6b7f0742-7aad-46d8-ad53-4571d763daed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Received unexpected event network-vif-plugged-77e05704-3f59-4efd-94a2-c2636c9bcf39 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:13:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:12.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:13 np0005465987 nova_compute[230713]: 2025-10-02 12:13:13.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:13 np0005465987 nova_compute[230713]: 2025-10-02 12:13:13.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:13:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:13.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:13:14 np0005465987 NetworkManager[44910]: <info>  [1759407194.1745] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Oct  2 08:13:14 np0005465987 NetworkManager[44910]: <info>  [1759407194.1754] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Oct  2 08:13:14 np0005465987 nova_compute[230713]: 2025-10-02 12:13:14.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:14 np0005465987 nova_compute[230713]: 2025-10-02 12:13:14.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:14Z|00158|binding|INFO|Releasing lport cf00b7d7-10e7-415f-8b02-fe529a085051 from this chassis (sb_readonly=0)
Oct  2 08:13:14 np0005465987 nova_compute[230713]: 2025-10-02 12:13:14.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:14.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:15 np0005465987 nova_compute[230713]: 2025-10-02 12:13:15.200 2 DEBUG nova.compute.manager [req-37d1e404-a0b0-4c0c-b6bf-0645db82b030 req-a25d162f-0246-482a-a396-be6829408639 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Received event network-changed-77e05704-3f59-4efd-94a2-c2636c9bcf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:13:15 np0005465987 nova_compute[230713]: 2025-10-02 12:13:15.202 2 DEBUG nova.compute.manager [req-37d1e404-a0b0-4c0c-b6bf-0645db82b030 req-a25d162f-0246-482a-a396-be6829408639 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Refreshing instance network info cache due to event network-changed-77e05704-3f59-4efd-94a2-c2636c9bcf39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:13:15 np0005465987 nova_compute[230713]: 2025-10-02 12:13:15.202 2 DEBUG oslo_concurrency.lockutils [req-37d1e404-a0b0-4c0c-b6bf-0645db82b030 req-a25d162f-0246-482a-a396-be6829408639 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-eaca80c4-d12a-4524-8733-32e74e062dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:13:15 np0005465987 nova_compute[230713]: 2025-10-02 12:13:15.203 2 DEBUG oslo_concurrency.lockutils [req-37d1e404-a0b0-4c0c-b6bf-0645db82b030 req-a25d162f-0246-482a-a396-be6829408639 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-eaca80c4-d12a-4524-8733-32e74e062dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:13:15 np0005465987 nova_compute[230713]: 2025-10-02 12:13:15.204 2 DEBUG nova.network.neutron [req-37d1e404-a0b0-4c0c-b6bf-0645db82b030 req-a25d162f-0246-482a-a396-be6829408639 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Refreshing network info cache for port 77e05704-3f59-4efd-94a2-c2636c9bcf39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:13:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:15.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:15 np0005465987 nova_compute[230713]: 2025-10-02 12:13:15.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:16 np0005465987 nova_compute[230713]: 2025-10-02 12:13:16.622 2 DEBUG nova.network.neutron [req-37d1e404-a0b0-4c0c-b6bf-0645db82b030 req-a25d162f-0246-482a-a396-be6829408639 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Updated VIF entry in instance network info cache for port 77e05704-3f59-4efd-94a2-c2636c9bcf39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:13:16 np0005465987 nova_compute[230713]: 2025-10-02 12:13:16.623 2 DEBUG nova.network.neutron [req-37d1e404-a0b0-4c0c-b6bf-0645db82b030 req-a25d162f-0246-482a-a396-be6829408639 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Updating instance_info_cache with network_info: [{"id": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "address": "fa:16:3e:72:73:45", "network": {"id": "406cfa48-d21f-4499-b743-9679013fa8f8", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1809387169-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a848d3ce964c8a830dc094b1fabda3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e05704-3f", "ovs_interfaceid": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:13:16 np0005465987 nova_compute[230713]: 2025-10-02 12:13:16.643 2 DEBUG oslo_concurrency.lockutils [req-37d1e404-a0b0-4c0c-b6bf-0645db82b030 req-a25d162f-0246-482a-a396-be6829408639 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-eaca80c4-d12a-4524-8733-32e74e062dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:13:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:13:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:16.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:13:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:17.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:18 np0005465987 nova_compute[230713]: 2025-10-02 12:13:18.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:18 np0005465987 nova_compute[230713]: 2025-10-02 12:13:18.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:18 np0005465987 podman[253725]: 2025-10-02 12:13:18.854611838 +0000 UTC m=+0.066095458 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:13:18 np0005465987 podman[253724]: 2025-10-02 12:13:18.855472331 +0000 UTC m=+0.069178552 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:13:18 np0005465987 podman[253723]: 2025-10-02 12:13:18.878540985 +0000 UTC m=+0.095266809 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:13:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:18.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e201 e201: 3 total, 3 up, 3 in
Oct  2 08:13:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:19.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:13:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:20.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:13:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:21Z|00159|binding|INFO|Releasing lport cf00b7d7-10e7-415f-8b02-fe529a085051 from this chassis (sb_readonly=0)
Oct  2 08:13:21 np0005465987 nova_compute[230713]: 2025-10-02 12:13:21.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:21.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:22.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:23 np0005465987 nova_compute[230713]: 2025-10-02 12:13:23.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:23 np0005465987 nova_compute[230713]: 2025-10-02 12:13:23.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:23Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:73:45 10.100.0.10
Oct  2 08:13:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:23Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:73:45 10.100.0.10
Oct  2 08:13:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:23.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:24.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:25.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:26.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:27.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:27.940 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:27.941 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:27.942 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:13:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:13:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:13:28 np0005465987 nova_compute[230713]: 2025-10-02 12:13:28.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:28 np0005465987 nova_compute[230713]: 2025-10-02 12:13:28.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e202 e202: 3 total, 3 up, 3 in
Oct  2 08:13:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:13:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:28.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:13:29 np0005465987 nova_compute[230713]: 2025-10-02 12:13:29.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:29.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:13:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:30.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:13:31 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:31Z|00160|binding|INFO|Releasing lport cf00b7d7-10e7-415f-8b02-fe529a085051 from this chassis (sb_readonly=0)
Oct  2 08:13:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:31.206 138432 DEBUG eventlet.wsgi.server [-] (138432) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct  2 08:13:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:31.208 138432 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Oct  2 08:13:31 np0005465987 ovn_metadata_agent[138320]: Accept: */*#015
Oct  2 08:13:31 np0005465987 ovn_metadata_agent[138320]: Connection: close#015
Oct  2 08:13:31 np0005465987 ovn_metadata_agent[138320]: Content-Type: text/plain#015
Oct  2 08:13:31 np0005465987 ovn_metadata_agent[138320]: Host: 169.254.169.254#015
Oct  2 08:13:31 np0005465987 ovn_metadata_agent[138320]: User-Agent: curl/7.84.0#015
Oct  2 08:13:31 np0005465987 ovn_metadata_agent[138320]: X-Forwarded-For: 10.100.0.10#015
Oct  2 08:13:31 np0005465987 ovn_metadata_agent[138320]: X-Ovn-Network-Id: 406cfa48-d21f-4499-b743-9679013fa8f8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct  2 08:13:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:13:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6764 writes, 34K keys, 6764 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 6764 writes, 6764 syncs, 1.00 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1766 writes, 8595 keys, 1766 commit groups, 1.0 writes per commit group, ingest: 16.84 MB, 0.03 MB/s#012Interval WAL: 1766 writes, 1766 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     73.1      0.57              0.12        18    0.032       0      0       0.0       0.0#012  L6      1/0    9.67 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5    101.1     83.0      1.77              0.41        17    0.104     85K   9987       0.0       0.0#012 Sum      1/0    9.67 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.5     76.5     80.6      2.33              0.53        35    0.067     85K   9987       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0    105.5    107.8      0.46              0.12         8    0.058     24K   3113       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    101.1     83.0      1.77              0.41        17    0.104     85K   9987       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     73.3      0.57              0.12        17    0.033       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.041, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.18 GB write, 0.08 MB/s write, 0.17 GB read, 0.07 MB/s read, 2.3 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9644871f0#2 capacity: 304.00 MB usage: 19.69 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000268 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1146,19.01 MB,6.25492%) FilterBlock(35,244.80 KB,0.078638%) IndexBlock(35,448.58 KB,0.1441%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 08:13:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:31.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:32.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:33 np0005465987 nova_compute[230713]: 2025-10-02 12:13:33.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:33 np0005465987 nova_compute[230713]: 2025-10-02 12:13:33.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:33.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:33.918 138432 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct  2 08:13:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:33.919 138432 INFO eventlet.wsgi.server [-] 10.100.0.10,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1671 time: 2.7113032#033[00m
Oct  2 08:13:33 np0005465987 haproxy-metadata-proxy-406cfa48-d21f-4499-b743-9679013fa8f8[253713]: 10.100.0.10:59024 [02/Oct/2025:12:13:31.205] listener listener/metadata 0/0/0/2713/2713 200 1655 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.173 2 DEBUG oslo_concurrency.lockutils [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Acquiring lock "eaca80c4-d12a-4524-8733-32e74e062dd6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.174 2 DEBUG oslo_concurrency.lockutils [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.175 2 DEBUG oslo_concurrency.lockutils [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Acquiring lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.175 2 DEBUG oslo_concurrency.lockutils [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.176 2 DEBUG oslo_concurrency.lockutils [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.177 2 INFO nova.compute.manager [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Terminating instance#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.178 2 DEBUG nova.compute.manager [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:34 np0005465987 kernel: tap77e05704-3f (unregistering): left promiscuous mode
Oct  2 08:13:34 np0005465987 NetworkManager[44910]: <info>  [1759407214.4637] device (tap77e05704-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:34Z|00161|binding|INFO|Releasing lport 77e05704-3f59-4efd-94a2-c2636c9bcf39 from this chassis (sb_readonly=0)
Oct  2 08:13:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:34Z|00162|binding|INFO|Setting lport 77e05704-3f59-4efd-94a2-c2636c9bcf39 down in Southbound
Oct  2 08:13:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:34Z|00163|binding|INFO|Removing iface tap77e05704-3f ovn-installed in OVS
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:34.481 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:73:45 10.100.0.10'], port_security=['fa:16:3e:72:73:45 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'eaca80c4-d12a-4524-8733-32e74e062dd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-406cfa48-d21f-4499-b743-9679013fa8f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6a848d3ce964c8a830dc094b1fabda3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05fdca40-1cb6-464c-aa5a-8851ff465206', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=475aee86-91de-476d-985b-041fc0833aae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=77e05704-3f59-4efd-94a2-c2636c9bcf39) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:13:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:34.482 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 77e05704-3f59-4efd-94a2-c2636c9bcf39 in datapath 406cfa48-d21f-4499-b743-9679013fa8f8 unbound from our chassis#033[00m
Oct  2 08:13:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:34.483 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 406cfa48-d21f-4499-b743-9679013fa8f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:13:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:34.486 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5e7f93-d2e8-48d3-83c6-3776168c5286]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:34.486 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8 namespace which is not needed anymore#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:34 np0005465987 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000038.scope: Deactivated successfully.
Oct  2 08:13:34 np0005465987 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000038.scope: Consumed 14.076s CPU time.
Oct  2 08:13:34 np0005465987 systemd-machined[188335]: Machine qemu-28-instance-00000038 terminated.
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.613 2 INFO nova.virt.libvirt.driver [-] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Instance destroyed successfully.#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.613 2 DEBUG nova.objects.instance [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lazy-loading 'resources' on Instance uuid eaca80c4-d12a-4524-8733-32e74e062dd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.631 2 DEBUG nova.virt.libvirt.vif [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:12:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=56,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBFihbxAjweHdajHgV7gFNpUayrs027OFrTN+QnpkkKeSFN3vmKwe8jU6x/xmmr52UmwPTTGUjrtTBDKf7dNO2YlgpiMp+JI2/DpRJDwi/07BC63zRuQvCsUDWXklq5stw==',key_name='tempest-keypair-187724139',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b6a848d3ce964c8a830dc094b1fabda3',ramdisk_id='',reservation_id='r-92wehvzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-221039657',owner_user_name='tempest-ServersV294TestFqdnHostnames-221039657-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea7aba6956024282ad73d716eded27ce',uuid=eaca80c4-d12a-4524-8733-32e74e062dd6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "address": "fa:16:3e:72:73:45", "network": {"id": "406cfa48-d21f-4499-b743-9679013fa8f8", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1809387169-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a848d3ce964c8a830dc094b1fabda3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e05704-3f", "ovs_interfaceid": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.631 2 DEBUG nova.network.os_vif_util [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Converting VIF {"id": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "address": "fa:16:3e:72:73:45", "network": {"id": "406cfa48-d21f-4499-b743-9679013fa8f8", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1809387169-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6a848d3ce964c8a830dc094b1fabda3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e05704-3f", "ovs_interfaceid": "77e05704-3f59-4efd-94a2-c2636c9bcf39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.632 2 DEBUG nova.network.os_vif_util [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:73:45,bridge_name='br-int',has_traffic_filtering=True,id=77e05704-3f59-4efd-94a2-c2636c9bcf39,network=Network(406cfa48-d21f-4499-b743-9679013fa8f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e05704-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.632 2 DEBUG os_vif [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:73:45,bridge_name='br-int',has_traffic_filtering=True,id=77e05704-3f59-4efd-94a2-c2636c9bcf39,network=Network(406cfa48-d21f-4499-b743-9679013fa8f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e05704-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.634 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77e05704-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:34 np0005465987 nova_compute[230713]: 2025-10-02 12:13:34.639 2 INFO os_vif [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:73:45,bridge_name='br-int',has_traffic_filtering=True,id=77e05704-3f59-4efd-94a2-c2636c9bcf39,network=Network(406cfa48-d21f-4499-b743-9679013fa8f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e05704-3f')#033[00m
Oct  2 08:13:34 np0005465987 neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8[253707]: [NOTICE]   (253711) : haproxy version is 2.8.14-c23fe91
Oct  2 08:13:34 np0005465987 neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8[253707]: [NOTICE]   (253711) : path to executable is /usr/sbin/haproxy
Oct  2 08:13:34 np0005465987 neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8[253707]: [WARNING]  (253711) : Exiting Master process...
Oct  2 08:13:34 np0005465987 neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8[253707]: [ALERT]    (253711) : Current worker (253713) exited with code 143 (Terminated)
Oct  2 08:13:34 np0005465987 neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8[253707]: [WARNING]  (253711) : All workers exited. Exiting... (0)
Oct  2 08:13:34 np0005465987 systemd[1]: libpod-86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc.scope: Deactivated successfully.
Oct  2 08:13:34 np0005465987 podman[253937]: 2025-10-02 12:13:34.750550028 +0000 UTC m=+0.179900203 container died 86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:13:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:34.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:35.133 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:13:35 np0005465987 nova_compute[230713]: 2025-10-02 12:13:35.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:35 np0005465987 nova_compute[230713]: 2025-10-02 12:13:35.226 2 DEBUG nova.compute.manager [req-f998bd72-4e9c-4003-9675-49342cd8f83d req-7f60d459-da88-4924-a6d8-1cba38e90269 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Received event network-vif-unplugged-77e05704-3f59-4efd-94a2-c2636c9bcf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:13:35 np0005465987 nova_compute[230713]: 2025-10-02 12:13:35.226 2 DEBUG oslo_concurrency.lockutils [req-f998bd72-4e9c-4003-9675-49342cd8f83d req-7f60d459-da88-4924-a6d8-1cba38e90269 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:35 np0005465987 nova_compute[230713]: 2025-10-02 12:13:35.226 2 DEBUG oslo_concurrency.lockutils [req-f998bd72-4e9c-4003-9675-49342cd8f83d req-7f60d459-da88-4924-a6d8-1cba38e90269 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:35 np0005465987 nova_compute[230713]: 2025-10-02 12:13:35.227 2 DEBUG oslo_concurrency.lockutils [req-f998bd72-4e9c-4003-9675-49342cd8f83d req-7f60d459-da88-4924-a6d8-1cba38e90269 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:35 np0005465987 nova_compute[230713]: 2025-10-02 12:13:35.227 2 DEBUG nova.compute.manager [req-f998bd72-4e9c-4003-9675-49342cd8f83d req-7f60d459-da88-4924-a6d8-1cba38e90269 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] No waiting events found dispatching network-vif-unplugged-77e05704-3f59-4efd-94a2-c2636c9bcf39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:13:35 np0005465987 nova_compute[230713]: 2025-10-02 12:13:35.227 2 DEBUG nova.compute.manager [req-f998bd72-4e9c-4003-9675-49342cd8f83d req-7f60d459-da88-4924-a6d8-1cba38e90269 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Received event network-vif-unplugged-77e05704-3f59-4efd-94a2-c2636c9bcf39 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:13:35 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc-userdata-shm.mount: Deactivated successfully.
Oct  2 08:13:35 np0005465987 systemd[1]: var-lib-containers-storage-overlay-f84c7bef2686dbf6b1383550cbd851402ed22d7b3f0f914fe87ebf3e234c841a-merged.mount: Deactivated successfully.
Oct  2 08:13:35 np0005465987 podman[253937]: 2025-10-02 12:13:35.660875277 +0000 UTC m=+1.090225452 container cleanup 86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:13:35 np0005465987 systemd[1]: libpod-conmon-86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc.scope: Deactivated successfully.
Oct  2 08:13:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:13:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:35.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:13:36 np0005465987 podman[253994]: 2025-10-02 12:13:36.150113389 +0000 UTC m=+0.469135731 container remove 86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:13:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:36.156 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1651a17a-407a-4dbd-adab-5fa84208695c]: (4, ('Thu Oct  2 12:13:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8 (86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc)\n86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc\nThu Oct  2 12:13:35 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8 (86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc)\n86b3b2a345bf294d438659471b6c24b0f32e53ecbeb7c38f25c3c8cdf223eefc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:36.157 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[59644d97-744e-4879-97b2-bc43873ac37c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:36.158 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap406cfa48-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:36 np0005465987 nova_compute[230713]: 2025-10-02 12:13:36.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:36 np0005465987 kernel: tap406cfa48-d0: left promiscuous mode
Oct  2 08:13:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:36.164 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5752c3-ecf4-44e3-9ceb-63a52022e93e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:36 np0005465987 nova_compute[230713]: 2025-10-02 12:13:36.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:36.195 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a196b99e-8216-4271-bd8f-5afd7ccda1d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:36.196 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6d3d92-c2b5-480e-9fc0-8e0267f5769b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:36.211 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[53ff2267-d3a7-41cb-a56c-93aa795fe4a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525864, 'reachable_time': 22921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254011, 'error': None, 'target': 'ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:36.214 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-406cfa48-d21f-4499-b743-9679013fa8f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:13:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:36.214 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[04a0e13b-3f5d-46b0-88cb-7ca26099a949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:36.215 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:13:36 np0005465987 systemd[1]: run-netns-ovnmeta\x2d406cfa48\x2dd21f\x2d4499\x2db743\x2d9679013fa8f8.mount: Deactivated successfully.
Oct  2 08:13:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:13:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:36.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:13:36 np0005465987 nova_compute[230713]: 2025-10-02 12:13:36.967 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:13:36 np0005465987 nova_compute[230713]: 2025-10-02 12:13:36.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:13:36 np0005465987 nova_compute[230713]: 2025-10-02 12:13:36.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:13:36 np0005465987 nova_compute[230713]: 2025-10-02 12:13:36.989 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 08:13:36 np0005465987 nova_compute[230713]: 2025-10-02 12:13:36.990 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:13:37 np0005465987 nova_compute[230713]: 2025-10-02 12:13:37.317 2 DEBUG nova.compute.manager [req-7e20100c-692b-4686-8af6-9ef272b4075e req-de4156e7-1ed5-4182-bf1c-6ff78b6e1c0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Received event network-vif-plugged-77e05704-3f59-4efd-94a2-c2636c9bcf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:13:37 np0005465987 nova_compute[230713]: 2025-10-02 12:13:37.318 2 DEBUG oslo_concurrency.lockutils [req-7e20100c-692b-4686-8af6-9ef272b4075e req-de4156e7-1ed5-4182-bf1c-6ff78b6e1c0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:37 np0005465987 nova_compute[230713]: 2025-10-02 12:13:37.318 2 DEBUG oslo_concurrency.lockutils [req-7e20100c-692b-4686-8af6-9ef272b4075e req-de4156e7-1ed5-4182-bf1c-6ff78b6e1c0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:37 np0005465987 nova_compute[230713]: 2025-10-02 12:13:37.318 2 DEBUG oslo_concurrency.lockutils [req-7e20100c-692b-4686-8af6-9ef272b4075e req-de4156e7-1ed5-4182-bf1c-6ff78b6e1c0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:37 np0005465987 nova_compute[230713]: 2025-10-02 12:13:37.318 2 DEBUG nova.compute.manager [req-7e20100c-692b-4686-8af6-9ef272b4075e req-de4156e7-1ed5-4182-bf1c-6ff78b6e1c0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] No waiting events found dispatching network-vif-plugged-77e05704-3f59-4efd-94a2-c2636c9bcf39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:13:37 np0005465987 nova_compute[230713]: 2025-10-02 12:13:37.318 2 WARNING nova.compute.manager [req-7e20100c-692b-4686-8af6-9ef272b4075e req-de4156e7-1ed5-4182-bf1c-6ff78b6e1c0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Received unexpected event network-vif-plugged-77e05704-3f59-4efd-94a2-c2636c9bcf39 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:13:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:37.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:38 np0005465987 nova_compute[230713]: 2025-10-02 12:13:38.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e203 e203: 3 total, 3 up, 3 in
Oct  2 08:13:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:38.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:13:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:13:38 np0005465987 nova_compute[230713]: 2025-10-02 12:13:38.981 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:13:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:39.217 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:39 np0005465987 nova_compute[230713]: 2025-10-02 12:13:39.577 2 INFO nova.virt.libvirt.driver [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Deleting instance files /var/lib/nova/instances/eaca80c4-d12a-4524-8733-32e74e062dd6_del#033[00m
Oct  2 08:13:39 np0005465987 nova_compute[230713]: 2025-10-02 12:13:39.577 2 INFO nova.virt.libvirt.driver [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Deletion of /var/lib/nova/instances/eaca80c4-d12a-4524-8733-32e74e062dd6_del complete#033[00m
Oct  2 08:13:39 np0005465987 nova_compute[230713]: 2025-10-02 12:13:39.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:39.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:39 np0005465987 nova_compute[230713]: 2025-10-02 12:13:39.805 2 INFO nova.compute.manager [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Took 5.63 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:13:39 np0005465987 nova_compute[230713]: 2025-10-02 12:13:39.805 2 DEBUG oslo.service.loopingcall [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:13:39 np0005465987 nova_compute[230713]: 2025-10-02 12:13:39.806 2 DEBUG nova.compute.manager [-] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:13:39 np0005465987 nova_compute[230713]: 2025-10-02 12:13:39.806 2 DEBUG nova.network.neutron [-] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:13:40 np0005465987 podman[254064]: 2025-10-02 12:13:40.878914984 +0000 UTC m=+0.090292262 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct  2 08:13:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:40.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:40 np0005465987 nova_compute[230713]: 2025-10-02 12:13:40.952 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:40 np0005465987 nova_compute[230713]: 2025-10-02 12:13:40.952 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:40 np0005465987 nova_compute[230713]: 2025-10-02 12:13:40.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:13:40 np0005465987 nova_compute[230713]: 2025-10-02 12:13:40.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:13:40 np0005465987 nova_compute[230713]: 2025-10-02 12:13:40.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:13:41 np0005465987 nova_compute[230713]: 2025-10-02 12:13:40.999 2 DEBUG nova.compute.manager [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:13:41 np0005465987 nova_compute[230713]: 2025-10-02 12:13:41.334 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:41 np0005465987 nova_compute[230713]: 2025-10-02 12:13:41.335 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:41 np0005465987 nova_compute[230713]: 2025-10-02 12:13:41.345 2 DEBUG nova.virt.hardware [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:13:41 np0005465987 nova_compute[230713]: 2025-10-02 12:13:41.345 2 INFO nova.compute.claims [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:13:41 np0005465987 nova_compute[230713]: 2025-10-02 12:13:41.763 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:41.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:41 np0005465987 nova_compute[230713]: 2025-10-02 12:13:41.891 2 DEBUG nova.network.neutron [-] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:13:41 np0005465987 nova_compute[230713]: 2025-10-02 12:13:41.929 2 INFO nova.compute.manager [-] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Took 2.12 seconds to deallocate network for instance.#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:41.999 2 DEBUG oslo_concurrency.lockutils [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.002 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.003 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.024 2 DEBUG nova.compute.manager [req-b57b22b4-6b53-4d1b-a81b-a307743fdf50 req-5175ebbf-a7e1-482b-8829-b6a4630314ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Received event network-vif-deleted-77e05704-3f59-4efd-94a2-c2636c9bcf39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:13:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:13:42 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/284285891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.241 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.249 2 DEBUG nova.compute.provider_tree [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.267 2 DEBUG nova.scheduler.client.report [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.288 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.289 2 DEBUG nova.compute.manager [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.292 2 DEBUG oslo_concurrency.lockutils [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.343 2 DEBUG nova.compute.manager [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.343 2 DEBUG nova.network.neutron [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.376 2 INFO nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.392 2 DEBUG oslo_concurrency.processutils [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.421 2 DEBUG nova.compute.manager [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:13:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.539 2 DEBUG nova.compute.manager [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.540 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.540 2 INFO nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Creating image(s)#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.586 2 DEBUG nova.storage.rbd_utils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.724 2 DEBUG nova.storage.rbd_utils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.755 2 DEBUG nova.storage.rbd_utils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.760 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.829 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.830 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.831 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.831 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.858 2 DEBUG nova.storage.rbd_utils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.862 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:42.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:42 np0005465987 nova_compute[230713]: 2025-10-02 12:13:42.992 2 DEBUG nova.policy [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e627e098f2b465d97099fee8f489b71', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:13:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:13:43 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1124511012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:13:43 np0005465987 nova_compute[230713]: 2025-10-02 12:13:43.056 2 DEBUG oslo_concurrency.processutils [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:43 np0005465987 nova_compute[230713]: 2025-10-02 12:13:43.063 2 DEBUG nova.compute.provider_tree [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:13:43 np0005465987 nova_compute[230713]: 2025-10-02 12:13:43.082 2 DEBUG nova.scheduler.client.report [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:13:43 np0005465987 nova_compute[230713]: 2025-10-02 12:13:43.128 2 DEBUG oslo_concurrency.lockutils [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:43 np0005465987 nova_compute[230713]: 2025-10-02 12:13:43.176 2 INFO nova.scheduler.client.report [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Deleted allocations for instance eaca80c4-d12a-4524-8733-32e74e062dd6#033[00m
Oct  2 08:13:43 np0005465987 nova_compute[230713]: 2025-10-02 12:13:43.255 2 DEBUG oslo_concurrency.lockutils [None req-8ed53ebc-c528-49a7-838d-6b58995cd7c6 ea7aba6956024282ad73d716eded27ce b6a848d3ce964c8a830dc094b1fabda3 - - default default] Lock "eaca80c4-d12a-4524-8733-32e74e062dd6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:43 np0005465987 nova_compute[230713]: 2025-10-02 12:13:43.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:43 np0005465987 nova_compute[230713]: 2025-10-02 12:13:43.409 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:43 np0005465987 nova_compute[230713]: 2025-10-02 12:13:43.518 2 DEBUG nova.storage.rbd_utils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] resizing rbd image a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:13:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:43.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e204 e204: 3 total, 3 up, 3 in
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.007 2 DEBUG nova.objects.instance [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'migration_context' on Instance uuid a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.011 2 DEBUG nova.network.neutron [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Successfully created port: 5a322446-ffe8-4ee8-940b-a265c4add0f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.027 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.028 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Ensure instance console log exists: /var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.029 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.029 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.030 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.811 2 DEBUG nova.network.neutron [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Successfully updated port: 5a322446-ffe8-4ee8-940b-a265c4add0f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.828 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.829 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquired lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.829 2 DEBUG nova.network.neutron [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:13:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:44.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.979 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.999 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.999 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:44 np0005465987 nova_compute[230713]: 2025-10-02 12:13:44.999 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:45 np0005465987 nova_compute[230713]: 2025-10-02 12:13:45.000 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:13:45 np0005465987 nova_compute[230713]: 2025-10-02 12:13:45.000 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:45 np0005465987 nova_compute[230713]: 2025-10-02 12:13:45.026 2 DEBUG nova.network.neutron [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:13:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:13:45 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/428096603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:13:45 np0005465987 nova_compute[230713]: 2025-10-02 12:13:45.430 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:45 np0005465987 nova_compute[230713]: 2025-10-02 12:13:45.609 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:13:45 np0005465987 nova_compute[230713]: 2025-10-02 12:13:45.610 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4644MB free_disk=20.935977935791016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:13:45 np0005465987 nova_compute[230713]: 2025-10-02 12:13:45.610 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:45 np0005465987 nova_compute[230713]: 2025-10-02 12:13:45.610 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:45 np0005465987 nova_compute[230713]: 2025-10-02 12:13:45.668 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:13:45 np0005465987 nova_compute[230713]: 2025-10-02 12:13:45.668 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:13:45 np0005465987 nova_compute[230713]: 2025-10-02 12:13:45.668 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:13:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e205 e205: 3 total, 3 up, 3 in
Oct  2 08:13:45 np0005465987 nova_compute[230713]: 2025-10-02 12:13:45.717 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:13:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:45.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.098 2 DEBUG nova.network.neutron [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Updating instance_info_cache with network_info: [{"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.137 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Releasing lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.138 2 DEBUG nova.compute.manager [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Instance network_info: |[{"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:13:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:13:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2718261622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.143 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Start _get_guest_xml network_info=[{"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.149 2 WARNING nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.158 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.161 2 DEBUG nova.virt.libvirt.host [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.162 2 DEBUG nova.virt.libvirt.host [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.169 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.173 2 DEBUG nova.virt.libvirt.host [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.173 2 DEBUG nova.virt.libvirt.host [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.175 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.176 2 DEBUG nova.virt.hardware [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.177 2 DEBUG nova.virt.hardware [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.177 2 DEBUG nova.virt.hardware [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.178 2 DEBUG nova.virt.hardware [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.178 2 DEBUG nova.virt.hardware [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.179 2 DEBUG nova.virt.hardware [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.179 2 DEBUG nova.virt.hardware [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.180 2 DEBUG nova.virt.hardware [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.180 2 DEBUG nova.virt.hardware [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.181 2 DEBUG nova.virt.hardware [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.181 2 DEBUG nova.virt.hardware [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.187 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.222 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.248 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.248 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.249 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.249 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.266 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.283 2 DEBUG nova.compute.manager [req-f3d36ddc-63c3-4a3b-972f-5d50caa464f0 req-9d1dc649-28a6-4d11-8524-f28bfd495bd2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-changed-5a322446-ffe8-4ee8-940b-a265c4add0f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.284 2 DEBUG nova.compute.manager [req-f3d36ddc-63c3-4a3b-972f-5d50caa464f0 req-9d1dc649-28a6-4d11-8524-f28bfd495bd2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Refreshing instance network info cache due to event network-changed-5a322446-ffe8-4ee8-940b-a265c4add0f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.285 2 DEBUG oslo_concurrency.lockutils [req-f3d36ddc-63c3-4a3b-972f-5d50caa464f0 req-9d1dc649-28a6-4d11-8524-f28bfd495bd2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.285 2 DEBUG oslo_concurrency.lockutils [req-f3d36ddc-63c3-4a3b-972f-5d50caa464f0 req-9d1dc649-28a6-4d11-8524-f28bfd495bd2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.285 2 DEBUG nova.network.neutron [req-f3d36ddc-63c3-4a3b-972f-5d50caa464f0 req-9d1dc649-28a6-4d11-8524-f28bfd495bd2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Refreshing network info cache for port 5a322446-ffe8-4ee8-940b-a265c4add0f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:13:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:13:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3643459010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.652 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.683 2 DEBUG nova.storage.rbd_utils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:46 np0005465987 nova_compute[230713]: 2025-10-02 12:13:46.687 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:46.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:13:47 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1876584961' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.160 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.162 2 DEBUG nova.virt.libvirt.vif [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:13:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2044102604',display_name='tempest-AttachInterfacesTestJSON-server-2044102604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2044102604',id=60,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGq65yZfmOYpel/emEQoQQx0FBLog3JLLvkY4o7KQ0CgiXWgH6I3t+7gjDjz3bdhLXvIOzEF9Ezr94mCGFReUqyBjojD8vmhtn2SCqndZ3QqnYchxSROlf2w4PX9dqK8Q==',key_name='tempest-keypair-792643536',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-5vwrm0vd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.162 2 DEBUG nova.network.os_vif_util [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.163 2 DEBUG nova.network.os_vif_util [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:04:b5,bridge_name='br-int',has_traffic_filtering=True,id=5a322446-ffe8-4ee8-940b-a265c4add0f4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a322446-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.164 2 DEBUG nova.objects.instance [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'pci_devices' on Instance uuid a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.178 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  <uuid>a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</uuid>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  <name>instance-0000003c</name>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <nova:name>tempest-AttachInterfacesTestJSON-server-2044102604</nova:name>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:13:46</nova:creationTime>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <nova:port uuid="5a322446-ffe8-4ee8-940b-a265c4add0f4">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <entry name="serial">a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</entry>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <entry name="uuid">a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</entry>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk.config">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:18:04:b5"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <target dev="tap5a322446-ff"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/console.log" append="off"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:13:47 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:13:47 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:13:47 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:13:47 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.179 2 DEBUG nova.compute.manager [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Preparing to wait for external event network-vif-plugged-5a322446-ffe8-4ee8-940b-a265c4add0f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.179 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.179 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.180 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.180 2 DEBUG nova.virt.libvirt.vif [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:13:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2044102604',display_name='tempest-AttachInterfacesTestJSON-server-2044102604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2044102604',id=60,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGq65yZfmOYpel/emEQoQQx0FBLog3JLLvkY4o7KQ0CgiXWgH6I3t+7gjDjz3bdhLXvIOzEF9Ezr94mCGFReUqyBjojD8vmhtn2SCqndZ3QqnYchxSROlf2w4PX9dqK8Q==',key_name='tempest-keypair-792643536',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-5vwrm0vd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.181 2 DEBUG nova.network.os_vif_util [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.181 2 DEBUG nova.network.os_vif_util [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:04:b5,bridge_name='br-int',has_traffic_filtering=True,id=5a322446-ffe8-4ee8-940b-a265c4add0f4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a322446-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.181 2 DEBUG os_vif [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:04:b5,bridge_name='br-int',has_traffic_filtering=True,id=5a322446-ffe8-4ee8-940b-a265c4add0f4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a322446-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.182 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.183 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a322446-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a322446-ff, col_values=(('external_ids', {'iface-id': '5a322446-ffe8-4ee8-940b-a265c4add0f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:04:b5', 'vm-uuid': 'a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:47 np0005465987 NetworkManager[44910]: <info>  [1759407227.1886] manager: (tap5a322446-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.194 2 INFO os_vif [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:04:b5,bridge_name='br-int',has_traffic_filtering=True,id=5a322446-ffe8-4ee8-940b-a265c4add0f4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a322446-ff')#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.253 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.253 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.295 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.295 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.295 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:18:04:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.296 2 INFO nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Using config drive#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.358 2 DEBUG nova.storage.rbd_utils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:47.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.790 2 DEBUG nova.network.neutron [req-f3d36ddc-63c3-4a3b-972f-5d50caa464f0 req-9d1dc649-28a6-4d11-8524-f28bfd495bd2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Updated VIF entry in instance network info cache for port 5a322446-ffe8-4ee8-940b-a265c4add0f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.791 2 DEBUG nova.network.neutron [req-f3d36ddc-63c3-4a3b-972f-5d50caa464f0 req-9d1dc649-28a6-4d11-8524-f28bfd495bd2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Updating instance_info_cache with network_info: [{"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.809 2 DEBUG oslo_concurrency.lockutils [req-f3d36ddc-63c3-4a3b-972f-5d50caa464f0 req-9d1dc649-28a6-4d11-8524-f28bfd495bd2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:13:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e206 e206: 3 total, 3 up, 3 in
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:13:47 np0005465987 nova_compute[230713]: 2025-10-02 12:13:47.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:13:48 np0005465987 nova_compute[230713]: 2025-10-02 12:13:48.041 2 INFO nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Creating config drive at /var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/disk.config#033[00m
Oct  2 08:13:48 np0005465987 nova_compute[230713]: 2025-10-02 12:13:48.047 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppt2qu3ow execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:48 np0005465987 nova_compute[230713]: 2025-10-02 12:13:48.180 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppt2qu3ow" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:48 np0005465987 nova_compute[230713]: 2025-10-02 12:13:48.212 2 DEBUG nova.storage.rbd_utils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:48 np0005465987 nova_compute[230713]: 2025-10-02 12:13:48.215 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/disk.config a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:48 np0005465987 nova_compute[230713]: 2025-10-02 12:13:48.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:13:48 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/912660896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:13:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:13:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:48.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:13:49 np0005465987 nova_compute[230713]: 2025-10-02 12:13:49.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:49 np0005465987 nova_compute[230713]: 2025-10-02 12:13:49.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:49 np0005465987 nova_compute[230713]: 2025-10-02 12:13:49.612 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407214.6117558, eaca80c4-d12a-4524-8733-32e74e062dd6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:13:49 np0005465987 nova_compute[230713]: 2025-10-02 12:13:49.613 2 INFO nova.compute.manager [-] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:13:49 np0005465987 nova_compute[230713]: 2025-10-02 12:13:49.637 2 DEBUG nova.compute.manager [None req-79db370d-cb0c-425d-ab49-29644b5bb4cc - - - - - -] [instance: eaca80c4-d12a-4524-8733-32e74e062dd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:13:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:49.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:49 np0005465987 podman[254463]: 2025-10-02 12:13:49.859482777 +0000 UTC m=+0.059352872 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:13:49 np0005465987 podman[254462]: 2025-10-02 12:13:49.874621403 +0000 UTC m=+0.081400817 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:13:49 np0005465987 podman[254464]: 2025-10-02 12:13:49.89961967 +0000 UTC m=+0.095205627 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct  2 08:13:50 np0005465987 nova_compute[230713]: 2025-10-02 12:13:50.462 2 DEBUG oslo_concurrency.processutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/disk.config a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:50 np0005465987 nova_compute[230713]: 2025-10-02 12:13:50.463 2 INFO nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Deleting local config drive /var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/disk.config because it was imported into RBD.#033[00m
Oct  2 08:13:50 np0005465987 kernel: tap5a322446-ff: entered promiscuous mode
Oct  2 08:13:50 np0005465987 NetworkManager[44910]: <info>  [1759407230.5085] manager: (tap5a322446-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Oct  2 08:13:50 np0005465987 nova_compute[230713]: 2025-10-02 12:13:50.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:50 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:50Z|00164|binding|INFO|Claiming lport 5a322446-ffe8-4ee8-940b-a265c4add0f4 for this chassis.
Oct  2 08:13:50 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:50Z|00165|binding|INFO|5a322446-ffe8-4ee8-940b-a265c4add0f4: Claiming fa:16:3e:18:04:b5 10.100.0.14
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.517 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:04:b5 10.100.0.14'], port_security=['fa:16:3e:18:04:b5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a0707d41-ef02-4f9c-9022-7062693896e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=5a322446-ffe8-4ee8-940b-a265c4add0f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.518 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 5a322446-ffe8-4ee8-940b-a265c4add0f4 in datapath ee114210-598c-482f-83c7-26c3363a45c4 bound to our chassis#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.519 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee114210-598c-482f-83c7-26c3363a45c4#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.531 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab236c4-1838-424f-91cc-9bf01abdcc1d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.532 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee114210-51 in ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.534 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee114210-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.534 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a244cc-b6db-4316-a477-6ae44d366a6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.535 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2623181d-8da9-4521-96f9-f1b76bffaa16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 systemd-udevd[254540]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:13:50 np0005465987 systemd-machined[188335]: New machine qemu-29-instance-0000003c.
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.552 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[60db3c41-1278-4849-aef1-189e5b084572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 NetworkManager[44910]: <info>  [1759407230.5536] device (tap5a322446-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:13:50 np0005465987 NetworkManager[44910]: <info>  [1759407230.5545] device (tap5a322446-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:13:50 np0005465987 systemd[1]: Started Virtual Machine qemu-29-instance-0000003c.
Oct  2 08:13:50 np0005465987 nova_compute[230713]: 2025-10-02 12:13:50.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.578 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e3af46a1-5edd-4736-8c8c-8ae4a72239d7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 nova_compute[230713]: 2025-10-02 12:13:50.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:50 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:50Z|00166|binding|INFO|Setting lport 5a322446-ffe8-4ee8-940b-a265c4add0f4 ovn-installed in OVS
Oct  2 08:13:50 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:50Z|00167|binding|INFO|Setting lport 5a322446-ffe8-4ee8-940b-a265c4add0f4 up in Southbound
Oct  2 08:13:50 np0005465987 nova_compute[230713]: 2025-10-02 12:13:50.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.607 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[b79f488a-ea8f-4d46-a239-1aab8fd8182b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.611 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9c61e6ac-d80d-4f06-a83c-ea078371ca12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 NetworkManager[44910]: <info>  [1759407230.6120] manager: (tapee114210-50): new Veth device (/org/freedesktop/NetworkManager/Devices/102)
Oct  2 08:13:50 np0005465987 systemd-udevd[254544]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.655 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[97d19ddc-dcbb-486b-8f76-2711b48de632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.658 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e1eb4da1-e872-4dc6-bc90-10c13035a543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 NetworkManager[44910]: <info>  [1759407230.6811] device (tapee114210-50): carrier: link connected
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.688 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[b5465adf-675f-4220-864b-3762c6409029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.705 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2efd150a-e7e8-47b6-b908-048c7e56c1ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529925, 'reachable_time': 20821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254573, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.718 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[97cdc9bb-09f2-46c5-a267-fc0f13147118]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:8699'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529925, 'tstamp': 529925}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254574, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.734 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c933308d-615e-4c5a-ab67-998d4eafd424]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529925, 'reachable_time': 20821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254575, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.761 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbdda8b-7b5a-493d-adf5-51b2d744a05a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.821 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f075e737-b4a6-4f9d-85df-e6b354abd5da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.822 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.822 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.823 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee114210-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:50 np0005465987 nova_compute[230713]: 2025-10-02 12:13:50.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:50 np0005465987 NetworkManager[44910]: <info>  [1759407230.8253] manager: (tapee114210-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Oct  2 08:13:50 np0005465987 kernel: tapee114210-50: entered promiscuous mode
Oct  2 08:13:50 np0005465987 nova_compute[230713]: 2025-10-02 12:13:50.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.828 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee114210-50, col_values=(('external_ids', {'iface-id': '544b20f9-e292-46bc-a770-180c318d534e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:13:50 np0005465987 nova_compute[230713]: 2025-10-02 12:13:50.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:50 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:50Z|00168|binding|INFO|Releasing lport 544b20f9-e292-46bc-a770-180c318d534e from this chassis (sb_readonly=0)
Oct  2 08:13:50 np0005465987 nova_compute[230713]: 2025-10-02 12:13:50.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.850 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee114210-598c-482f-83c7-26c3363a45c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee114210-598c-482f-83c7-26c3363a45c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.852 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[649bdaa4-754c-4640-87fc-7f83d90533d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.852 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-ee114210-598c-482f-83c7-26c3363a45c4
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/ee114210-598c-482f-83c7-26c3363a45c4.pid.haproxy
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID ee114210-598c-482f-83c7-26c3363a45c4
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:13:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:13:50.853 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'env', 'PROCESS_TAG=haproxy-ee114210-598c-482f-83c7-26c3363a45c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee114210-598c-482f-83c7-26c3363a45c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:13:50 np0005465987 nova_compute[230713]: 2025-10-02 12:13:50.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:13:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:50.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:51 np0005465987 podman[254607]: 2025-10-02 12:13:51.179824611 +0000 UTC m=+0.022869540 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:13:51 np0005465987 podman[254607]: 2025-10-02 12:13:51.525255891 +0000 UTC m=+0.368300770 container create 053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 08:13:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:51.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:51 np0005465987 systemd[1]: Started libpod-conmon-053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a.scope.
Oct  2 08:13:51 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:13:51 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928cf7601918c9d9bbb082878eaeb62b354fe4740ed361a51f30c51edb8493a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:13:51 np0005465987 nova_compute[230713]: 2025-10-02 12:13:51.951 2 DEBUG nova.compute.manager [req-124faedd-05f7-45fa-9c0f-342bf32620b6 req-0f438cd8-5309-4e70-bcba-717fc2b213b3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-vif-plugged-5a322446-ffe8-4ee8-940b-a265c4add0f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:13:51 np0005465987 nova_compute[230713]: 2025-10-02 12:13:51.952 2 DEBUG oslo_concurrency.lockutils [req-124faedd-05f7-45fa-9c0f-342bf32620b6 req-0f438cd8-5309-4e70-bcba-717fc2b213b3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:51 np0005465987 nova_compute[230713]: 2025-10-02 12:13:51.953 2 DEBUG oslo_concurrency.lockutils [req-124faedd-05f7-45fa-9c0f-342bf32620b6 req-0f438cd8-5309-4e70-bcba-717fc2b213b3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:51 np0005465987 nova_compute[230713]: 2025-10-02 12:13:51.953 2 DEBUG oslo_concurrency.lockutils [req-124faedd-05f7-45fa-9c0f-342bf32620b6 req-0f438cd8-5309-4e70-bcba-717fc2b213b3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:51 np0005465987 nova_compute[230713]: 2025-10-02 12:13:51.954 2 DEBUG nova.compute.manager [req-124faedd-05f7-45fa-9c0f-342bf32620b6 req-0f438cd8-5309-4e70-bcba-717fc2b213b3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Processing event network-vif-plugged-5a322446-ffe8-4ee8-940b-a265c4add0f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:13:52 np0005465987 podman[254607]: 2025-10-02 12:13:52.085872193 +0000 UTC m=+0.928917082 container init 053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:13:52 np0005465987 podman[254607]: 2025-10-02 12:13:52.093942745 +0000 UTC m=+0.936987634 container start 053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 08:13:52 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[254658]: [NOTICE]   (254668) : New worker (254670) forked
Oct  2 08:13:52 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[254658]: [NOTICE]   (254668) : Loading success.
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.584 2 DEBUG nova.compute.manager [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.585 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407232.5842757, a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.585 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] VM Started (Lifecycle Event)#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.589 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.592 2 INFO nova.virt.libvirt.driver [-] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Instance spawned successfully.#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.592 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.608 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.615 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.619 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.620 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.620 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.621 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.621 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.622 2 DEBUG nova.virt.libvirt.driver [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.646 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.647 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407232.584451, a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.647 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.677 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.680 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407232.5883825, a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.681 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.687 2 INFO nova.compute.manager [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Took 10.15 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.688 2 DEBUG nova.compute.manager [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.699 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.702 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.752 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.785 2 INFO nova.compute.manager [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Took 11.68 seconds to build instance.#033[00m
Oct  2 08:13:52 np0005465987 nova_compute[230713]: 2025-10-02 12:13:52.801 2 DEBUG oslo_concurrency.lockutils [None req-07d71836-faf6-429d-8bc0-2d0250f17efb 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:52.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:53 np0005465987 nova_compute[230713]: 2025-10-02 12:13:53.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:53.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e207 e207: 3 total, 3 up, 3 in
Oct  2 08:13:54 np0005465987 nova_compute[230713]: 2025-10-02 12:13:54.243 2 DEBUG nova.compute.manager [req-a0f92a62-0c5f-4637-af60-924d46e6b8a5 req-43b02e3e-5eb6-494f-99cd-1d4c694f57cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-vif-plugged-5a322446-ffe8-4ee8-940b-a265c4add0f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:13:54 np0005465987 nova_compute[230713]: 2025-10-02 12:13:54.243 2 DEBUG oslo_concurrency.lockutils [req-a0f92a62-0c5f-4637-af60-924d46e6b8a5 req-43b02e3e-5eb6-494f-99cd-1d4c694f57cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:54 np0005465987 nova_compute[230713]: 2025-10-02 12:13:54.245 2 DEBUG oslo_concurrency.lockutils [req-a0f92a62-0c5f-4637-af60-924d46e6b8a5 req-43b02e3e-5eb6-494f-99cd-1d4c694f57cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:54 np0005465987 nova_compute[230713]: 2025-10-02 12:13:54.245 2 DEBUG oslo_concurrency.lockutils [req-a0f92a62-0c5f-4637-af60-924d46e6b8a5 req-43b02e3e-5eb6-494f-99cd-1d4c694f57cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:54 np0005465987 nova_compute[230713]: 2025-10-02 12:13:54.245 2 DEBUG nova.compute.manager [req-a0f92a62-0c5f-4637-af60-924d46e6b8a5 req-43b02e3e-5eb6-494f-99cd-1d4c694f57cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] No waiting events found dispatching network-vif-plugged-5a322446-ffe8-4ee8-940b-a265c4add0f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:13:54 np0005465987 nova_compute[230713]: 2025-10-02 12:13:54.245 2 WARNING nova.compute.manager [req-a0f92a62-0c5f-4637-af60-924d46e6b8a5 req-43b02e3e-5eb6-494f-99cd-1d4c694f57cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received unexpected event network-vif-plugged-5a322446-ffe8-4ee8-940b-a265c4add0f4 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:13:54 np0005465987 NetworkManager[44910]: <info>  [1759407234.8086] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Oct  2 08:13:54 np0005465987 NetworkManager[44910]: <info>  [1759407234.8095] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Oct  2 08:13:54 np0005465987 nova_compute[230713]: 2025-10-02 12:13:54.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:54 np0005465987 nova_compute[230713]: 2025-10-02 12:13:54.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:54 np0005465987 ovn_controller[129172]: 2025-10-02T12:13:54Z|00169|binding|INFO|Releasing lport 544b20f9-e292-46bc-a770-180c318d534e from this chassis (sb_readonly=0)
Oct  2 08:13:54 np0005465987 nova_compute[230713]: 2025-10-02 12:13:54.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:54.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:55 np0005465987 nova_compute[230713]: 2025-10-02 12:13:55.227 2 DEBUG nova.compute.manager [req-7015efcb-eb3c-4948-9a65-7303a39d7b52 req-d17b47ea-5ccd-419a-9ae4-13554c242abc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-changed-5a322446-ffe8-4ee8-940b-a265c4add0f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:13:55 np0005465987 nova_compute[230713]: 2025-10-02 12:13:55.228 2 DEBUG nova.compute.manager [req-7015efcb-eb3c-4948-9a65-7303a39d7b52 req-d17b47ea-5ccd-419a-9ae4-13554c242abc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Refreshing instance network info cache due to event network-changed-5a322446-ffe8-4ee8-940b-a265c4add0f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:13:55 np0005465987 nova_compute[230713]: 2025-10-02 12:13:55.229 2 DEBUG oslo_concurrency.lockutils [req-7015efcb-eb3c-4948-9a65-7303a39d7b52 req-d17b47ea-5ccd-419a-9ae4-13554c242abc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:13:55 np0005465987 nova_compute[230713]: 2025-10-02 12:13:55.229 2 DEBUG oslo_concurrency.lockutils [req-7015efcb-eb3c-4948-9a65-7303a39d7b52 req-d17b47ea-5ccd-419a-9ae4-13554c242abc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:13:55 np0005465987 nova_compute[230713]: 2025-10-02 12:13:55.230 2 DEBUG nova.network.neutron [req-7015efcb-eb3c-4948-9a65-7303a39d7b52 req-d17b47ea-5ccd-419a-9ae4-13554c242abc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Refreshing network info cache for port 5a322446-ffe8-4ee8-940b-a265c4add0f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:13:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:13:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:55.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:13:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:56.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:57 np0005465987 nova_compute[230713]: 2025-10-02 12:13:57.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:57 np0005465987 nova_compute[230713]: 2025-10-02 12:13:57.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:57 np0005465987 nova_compute[230713]: 2025-10-02 12:13:57.250 2 DEBUG nova.network.neutron [req-7015efcb-eb3c-4948-9a65-7303a39d7b52 req-d17b47ea-5ccd-419a-9ae4-13554c242abc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Updated VIF entry in instance network info cache for port 5a322446-ffe8-4ee8-940b-a265c4add0f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:13:57 np0005465987 nova_compute[230713]: 2025-10-02 12:13:57.251 2 DEBUG nova.network.neutron [req-7015efcb-eb3c-4948-9a65-7303a39d7b52 req-d17b47ea-5ccd-419a-9ae4-13554c242abc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Updating instance_info_cache with network_info: [{"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:13:57 np0005465987 nova_compute[230713]: 2025-10-02 12:13:57.290 2 DEBUG oslo_concurrency.lockutils [req-7015efcb-eb3c-4948-9a65-7303a39d7b52 req-d17b47ea-5ccd-419a-9ae4-13554c242abc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:13:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:57.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:13:58 np0005465987 nova_compute[230713]: 2025-10-02 12:13:58.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e208 e208: 3 total, 3 up, 3 in
Oct  2 08:13:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:13:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:58.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:13:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:13:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:59.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:00 np0005465987 nova_compute[230713]: 2025-10-02 12:14:00.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:00 np0005465987 nova_compute[230713]: 2025-10-02 12:14:00.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:00.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:01.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:02 np0005465987 nova_compute[230713]: 2025-10-02 12:14:02.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:14:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:02.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:14:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:14:03 np0005465987 nova_compute[230713]: 2025-10-02 12:14:03.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:03 np0005465987 nova_compute[230713]: 2025-10-02 12:14:03.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:14:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:03.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:14:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:14:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:04.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:14:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:05Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:18:04:b5 10.100.0.14
Oct  2 08:14:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:05Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:18:04:b5 10.100.0.14
Oct  2 08:14:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:05Z|00170|binding|INFO|Releasing lport 544b20f9-e292-46bc-a770-180c318d534e from this chassis (sb_readonly=0)
Oct  2 08:14:05 np0005465987 nova_compute[230713]: 2025-10-02 12:14:05.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:05.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:06.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:07 np0005465987 nova_compute[230713]: 2025-10-02 12:14:07.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:07.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:14:08 np0005465987 nova_compute[230713]: 2025-10-02 12:14:08.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:08.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:14:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:09.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:14:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:10.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:14:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:11.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:14:11 np0005465987 podman[254681]: 2025-10-02 12:14:11.835565256 +0000 UTC m=+0.054433396 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:14:12 np0005465987 nova_compute[230713]: 2025-10-02 12:14:12.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:12 np0005465987 nova_compute[230713]: 2025-10-02 12:14:12.934 2 DEBUG oslo_concurrency.lockutils [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "interface-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:12 np0005465987 nova_compute[230713]: 2025-10-02 12:14:12.934 2 DEBUG oslo_concurrency.lockutils [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:12 np0005465987 nova_compute[230713]: 2025-10-02 12:14:12.935 2 DEBUG nova.objects.instance [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'flavor' on Instance uuid a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:14:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:12.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:14:13 np0005465987 nova_compute[230713]: 2025-10-02 12:14:13.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:13 np0005465987 nova_compute[230713]: 2025-10-02 12:14:13.364 2 DEBUG nova.objects.instance [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'pci_requests' on Instance uuid a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:14:13 np0005465987 nova_compute[230713]: 2025-10-02 12:14:13.377 2 DEBUG nova.network.neutron [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:14:13 np0005465987 nova_compute[230713]: 2025-10-02 12:14:13.589 2 DEBUG nova.policy [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e627e098f2b465d97099fee8f489b71', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:14:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:13.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:14 np0005465987 nova_compute[230713]: 2025-10-02 12:14:14.712 2 DEBUG nova.network.neutron [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Successfully created port: 28514145-9436-47ed-9f6b-75697583e758 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:14:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:14.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:15 np0005465987 nova_compute[230713]: 2025-10-02 12:14:15.030 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:14:15 np0005465987 nova_compute[230713]: 2025-10-02 12:14:15.073 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Triggering sync for uuid a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:14:15 np0005465987 nova_compute[230713]: 2025-10-02 12:14:15.074 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:15 np0005465987 nova_compute[230713]: 2025-10-02 12:14:15.074 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:15 np0005465987 nova_compute[230713]: 2025-10-02 12:14:15.117 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:15 np0005465987 nova_compute[230713]: 2025-10-02 12:14:15.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:15.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:16 np0005465987 nova_compute[230713]: 2025-10-02 12:14:16.302 2 DEBUG nova.network.neutron [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Successfully updated port: 28514145-9436-47ed-9f6b-75697583e758 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:14:16 np0005465987 nova_compute[230713]: 2025-10-02 12:14:16.328 2 DEBUG oslo_concurrency.lockutils [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:14:16 np0005465987 nova_compute[230713]: 2025-10-02 12:14:16.328 2 DEBUG oslo_concurrency.lockutils [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquired lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:14:16 np0005465987 nova_compute[230713]: 2025-10-02 12:14:16.328 2 DEBUG nova.network.neutron [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:14:16 np0005465987 nova_compute[230713]: 2025-10-02 12:14:16.847 2 WARNING nova.network.neutron [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] ee114210-598c-482f-83c7-26c3363a45c4 already exists in list: networks containing: ['ee114210-598c-482f-83c7-26c3363a45c4']. ignoring it#033[00m
Oct  2 08:14:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:16.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:17 np0005465987 nova_compute[230713]: 2025-10-02 12:14:17.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:17.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:14:18 np0005465987 nova_compute[230713]: 2025-10-02 12:14:18.026 2 DEBUG nova.compute.manager [req-ac88944b-9147-4904-9a15-8c5840f71d9c req-b8c1c679-dfbc-4c67-879b-5fe5c07f4769 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-changed-28514145-9436-47ed-9f6b-75697583e758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:18 np0005465987 nova_compute[230713]: 2025-10-02 12:14:18.027 2 DEBUG nova.compute.manager [req-ac88944b-9147-4904-9a15-8c5840f71d9c req-b8c1c679-dfbc-4c67-879b-5fe5c07f4769 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Refreshing instance network info cache due to event network-changed-28514145-9436-47ed-9f6b-75697583e758. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:14:18 np0005465987 nova_compute[230713]: 2025-10-02 12:14:18.027 2 DEBUG oslo_concurrency.lockutils [req-ac88944b-9147-4904-9a15-8c5840f71d9c req-b8c1c679-dfbc-4c67-879b-5fe5c07f4769 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:14:18 np0005465987 nova_compute[230713]: 2025-10-02 12:14:18.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:19.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.029 2 DEBUG nova.network.neutron [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Updating instance_info_cache with network_info: [{"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.052 2 DEBUG oslo_concurrency.lockutils [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Releasing lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.053 2 DEBUG oslo_concurrency.lockutils [req-ac88944b-9147-4904-9a15-8c5840f71d9c req-b8c1c679-dfbc-4c67-879b-5fe5c07f4769 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.053 2 DEBUG nova.network.neutron [req-ac88944b-9147-4904-9a15-8c5840f71d9c req-b8c1c679-dfbc-4c67-879b-5fe5c07f4769 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Refreshing network info cache for port 28514145-9436-47ed-9f6b-75697583e758 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.056 2 DEBUG nova.virt.libvirt.vif [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2044102604',display_name='tempest-AttachInterfacesTestJSON-server-2044102604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2044102604',id=60,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGq65yZfmOYpel/emEQoQQx0FBLog3JLLvkY4o7KQ0CgiXWgH6I3t+7gjDjz3bdhLXvIOzEF9Ezr94mCGFReUqyBjojD8vmhtn2SCqndZ3QqnYchxSROlf2w4PX9dqK8Q==',key_name='tempest-keypair-792643536',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-5vwrm0vd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.056 2 DEBUG nova.network.os_vif_util [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.057 2 DEBUG nova.network.os_vif_util [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:92:9b,bridge_name='br-int',has_traffic_filtering=True,id=28514145-9436-47ed-9f6b-75697583e758,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28514145-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.057 2 DEBUG os_vif [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:92:9b,bridge_name='br-int',has_traffic_filtering=True,id=28514145-9436-47ed-9f6b-75697583e758,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28514145-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.061 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28514145-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.061 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28514145-94, col_values=(('external_ids', {'iface-id': '28514145-9436-47ed-9f6b-75697583e758', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:92:9b', 'vm-uuid': 'a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:19 np0005465987 NetworkManager[44910]: <info>  [1759407259.1054] manager: (tap28514145-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.113 2 INFO os_vif [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:92:9b,bridge_name='br-int',has_traffic_filtering=True,id=28514145-9436-47ed-9f6b-75697583e758,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28514145-94')#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.113 2 DEBUG nova.virt.libvirt.vif [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2044102604',display_name='tempest-AttachInterfacesTestJSON-server-2044102604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2044102604',id=60,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGq65yZfmOYpel/emEQoQQx0FBLog3JLLvkY4o7KQ0CgiXWgH6I3t+7gjDjz3bdhLXvIOzEF9Ezr94mCGFReUqyBjojD8vmhtn2SCqndZ3QqnYchxSROlf2w4PX9dqK8Q==',key_name='tempest-keypair-792643536',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-5vwrm0vd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.114 2 DEBUG nova.network.os_vif_util [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.114 2 DEBUG nova.network.os_vif_util [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:92:9b,bridge_name='br-int',has_traffic_filtering=True,id=28514145-9436-47ed-9f6b-75697583e758,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28514145-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.116 2 DEBUG nova.virt.libvirt.guest [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] attach device xml: <interface type="ethernet">
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:f7:92:9b"/>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  <target dev="tap28514145-94"/>
Oct  2 08:14:19 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:14:19 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:14:19 np0005465987 kernel: tap28514145-94: entered promiscuous mode
Oct  2 08:14:19 np0005465987 NetworkManager[44910]: <info>  [1759407259.1306] manager: (tap28514145-94): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:19Z|00171|binding|INFO|Claiming lport 28514145-9436-47ed-9f6b-75697583e758 for this chassis.
Oct  2 08:14:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:19Z|00172|binding|INFO|28514145-9436-47ed-9f6b-75697583e758: Claiming fa:16:3e:f7:92:9b 10.100.0.13
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.140 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:92:9b 10.100.0.13'], port_security=['fa:16:3e:f7:92:9b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20d10bb5-2bfc-4d22-a5af-27378413539d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=28514145-9436-47ed-9f6b-75697583e758) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.142 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 28514145-9436-47ed-9f6b-75697583e758 in datapath ee114210-598c-482f-83c7-26c3363a45c4 bound to our chassis#033[00m
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.143 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee114210-598c-482f-83c7-26c3363a45c4#033[00m
Oct  2 08:14:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:19Z|00173|binding|INFO|Setting lport 28514145-9436-47ed-9f6b-75697583e758 ovn-installed in OVS
Oct  2 08:14:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:19Z|00174|binding|INFO|Setting lport 28514145-9436-47ed-9f6b-75697583e758 up in Southbound
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:19 np0005465987 systemd-udevd[254708]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.161 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[af96861d-52fa-48ff-b9d3-9f338098679d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:19 np0005465987 NetworkManager[44910]: <info>  [1759407259.1818] device (tap28514145-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:14:19 np0005465987 NetworkManager[44910]: <info>  [1759407259.1825] device (tap28514145-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.191 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[66a246b9-ce3c-4185-91e3-1fe185d155c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.194 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[c2752837-adc1-4c22-8fab-87c9e3e01d7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.218 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[24916ac9-616a-4766-88de-ba50c03d9426]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.236 2 DEBUG nova.virt.libvirt.driver [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.236 2 DEBUG nova.virt.libvirt.driver [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.236 2 DEBUG nova.virt.libvirt.driver [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:18:04:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.236 2 DEBUG nova.virt.libvirt.driver [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:f7:92:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.236 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fa754b73-cb4a-4b23-b4f2-1dc6a4ca94e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529925, 'reachable_time': 20821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254715, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.251 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[16baafa6-238d-4716-b447-2c5c9b265d0e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529936, 'tstamp': 529936}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254716, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529939, 'tstamp': 529939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254716, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.253 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.256 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee114210-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.256 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.257 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee114210-50, col_values=(('external_ids', {'iface-id': '544b20f9-e292-46bc-a770-180c318d534e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:19.257 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.265 2 DEBUG nova.virt.libvirt.guest [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2044102604</nova:name>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:14:19</nova:creationTime>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:14:19 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:    <nova:port uuid="5a322446-ffe8-4ee8-940b-a265c4add0f4">
Oct  2 08:14:19 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:    <nova:port uuid="28514145-9436-47ed-9f6b-75697583e758">
Oct  2 08:14:19 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:14:19 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:14:19 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:14:19 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.320 2 DEBUG oslo_concurrency.lockutils [None req-935334c5-66f0-4160-bf87-0b6b6c095d51 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.482 2 DEBUG nova.compute.manager [req-60928716-6638-4b5b-98cd-0e1167071b09 req-a7e6df43-9266-470b-af78-3ab15f93280c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-vif-plugged-28514145-9436-47ed-9f6b-75697583e758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.483 2 DEBUG oslo_concurrency.lockutils [req-60928716-6638-4b5b-98cd-0e1167071b09 req-a7e6df43-9266-470b-af78-3ab15f93280c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.484 2 DEBUG oslo_concurrency.lockutils [req-60928716-6638-4b5b-98cd-0e1167071b09 req-a7e6df43-9266-470b-af78-3ab15f93280c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.484 2 DEBUG oslo_concurrency.lockutils [req-60928716-6638-4b5b-98cd-0e1167071b09 req-a7e6df43-9266-470b-af78-3ab15f93280c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.484 2 DEBUG nova.compute.manager [req-60928716-6638-4b5b-98cd-0e1167071b09 req-a7e6df43-9266-470b-af78-3ab15f93280c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] No waiting events found dispatching network-vif-plugged-28514145-9436-47ed-9f6b-75697583e758 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:14:19 np0005465987 nova_compute[230713]: 2025-10-02 12:14:19.485 2 WARNING nova.compute.manager [req-60928716-6638-4b5b-98cd-0e1167071b09 req-a7e6df43-9266-470b-af78-3ab15f93280c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received unexpected event network-vif-plugged-28514145-9436-47ed-9f6b-75697583e758 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:14:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:19.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:20 np0005465987 nova_compute[230713]: 2025-10-02 12:14:20.692 2 DEBUG nova.network.neutron [req-ac88944b-9147-4904-9a15-8c5840f71d9c req-b8c1c679-dfbc-4c67-879b-5fe5c07f4769 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Updated VIF entry in instance network info cache for port 28514145-9436-47ed-9f6b-75697583e758. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:14:20 np0005465987 nova_compute[230713]: 2025-10-02 12:14:20.693 2 DEBUG nova.network.neutron [req-ac88944b-9147-4904-9a15-8c5840f71d9c req-b8c1c679-dfbc-4c67-879b-5fe5c07f4769 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Updating instance_info_cache with network_info: [{"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:14:20 np0005465987 nova_compute[230713]: 2025-10-02 12:14:20.722 2 DEBUG oslo_concurrency.lockutils [req-ac88944b-9147-4904-9a15-8c5840f71d9c req-b8c1c679-dfbc-4c67-879b-5fe5c07f4769 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:14:20 np0005465987 podman[254719]: 2025-10-02 12:14:20.880806724 +0000 UTC m=+0.070166229 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:14:20 np0005465987 podman[254718]: 2025-10-02 12:14:20.901564734 +0000 UTC m=+0.102213469 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:14:20 np0005465987 podman[254717]: 2025-10-02 12:14:20.908534046 +0000 UTC m=+0.118670932 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:14:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:14:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:21.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.214 2 DEBUG oslo_concurrency.lockutils [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "interface-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-28514145-9436-47ed-9f6b-75697583e758" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.215 2 DEBUG oslo_concurrency.lockutils [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-28514145-9436-47ed-9f6b-75697583e758" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.236 2 DEBUG nova.objects.instance [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'flavor' on Instance uuid a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.266 2 DEBUG nova.virt.libvirt.vif [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2044102604',display_name='tempest-AttachInterfacesTestJSON-server-2044102604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2044102604',id=60,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGq65yZfmOYpel/emEQoQQx0FBLog3JLLvkY4o7KQ0CgiXWgH6I3t+7gjDjz3bdhLXvIOzEF9Ezr94mCGFReUqyBjojD8vmhtn2SCqndZ3QqnYchxSROlf2w4PX9dqK8Q==',key_name='tempest-keypair-792643536',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-5vwrm0vd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.266 2 DEBUG nova.network.os_vif_util [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.267 2 DEBUG nova.network.os_vif_util [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:92:9b,bridge_name='br-int',has_traffic_filtering=True,id=28514145-9436-47ed-9f6b-75697583e758,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28514145-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.271 2 DEBUG nova.virt.libvirt.guest [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f7:92:9b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap28514145-94"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.274 2 DEBUG nova.virt.libvirt.guest [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f7:92:9b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap28514145-94"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.278 2 DEBUG nova.virt.libvirt.driver [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Attempting to detach device tap28514145-94 from instance a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.279 2 DEBUG nova.virt.libvirt.guest [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:f7:92:9b"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <target dev="tap28514145-94"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:14:21 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.296 2 DEBUG nova.virt.libvirt.guest [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f7:92:9b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap28514145-94"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.300 2 DEBUG nova.virt.libvirt.guest [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f7:92:9b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap28514145-94"/></interface>not found in domain: <domain type='kvm' id='29'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <name>instance-0000003c</name>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <uuid>a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</uuid>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2044102604</nova:name>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:14:19</nova:creationTime>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:port uuid="5a322446-ffe8-4ee8-940b-a265c4add0f4">
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:port uuid="28514145-9436-47ed-9f6b-75697583e758">
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:14:21 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <entry name='serial'>a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</entry>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <entry name='uuid'>a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</entry>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk' index='2'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk.config' index='1'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:18:04:b5'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target dev='tap5a322446-ff'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:f7:92:9b'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target dev='tap28514145-94'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='net1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/console.log' append='off'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/console.log' append='off'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c471,c501</label>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c471,c501</imagelabel>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:14:21 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:14:21 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.303 2 INFO nova.virt.libvirt.driver [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully detached device tap28514145-94 from instance a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 from the persistent domain config.#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.303 2 DEBUG nova.virt.libvirt.driver [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] (1/8): Attempting to detach device tap28514145-94 with device alias net1 from instance a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.304 2 DEBUG nova.virt.libvirt.guest [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:f7:92:9b"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <target dev="tap28514145-94"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:14:21 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:14:21 np0005465987 kernel: tap28514145-94 (unregistering): left promiscuous mode
Oct  2 08:14:21 np0005465987 NetworkManager[44910]: <info>  [1759407261.4014] device (tap28514145-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:14:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:21Z|00175|binding|INFO|Releasing lport 28514145-9436-47ed-9f6b-75697583e758 from this chassis (sb_readonly=0)
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:21Z|00176|binding|INFO|Setting lport 28514145-9436-47ed-9f6b-75697583e758 down in Southbound
Oct  2 08:14:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:21Z|00177|binding|INFO|Removing iface tap28514145-94 ovn-installed in OVS
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.418 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:92:9b 10.100.0.13'], port_security=['fa:16:3e:f7:92:9b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20d10bb5-2bfc-4d22-a5af-27378413539d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=28514145-9436-47ed-9f6b-75697583e758) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.419 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 28514145-9436-47ed-9f6b-75697583e758 in datapath ee114210-598c-482f-83c7-26c3363a45c4 unbound from our chassis#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.421 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee114210-598c-482f-83c7-26c3363a45c4#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.431 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Received event <DeviceRemovedEvent: 1759407261.4317846, a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.434 2 DEBUG nova.virt.libvirt.driver [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Start waiting for the detach event from libvirt for device tap28514145-94 with device alias net1 for instance a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.434 2 DEBUG nova.virt.libvirt.guest [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f7:92:9b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap28514145-94"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.437 2 DEBUG nova.virt.libvirt.guest [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f7:92:9b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap28514145-94"/></interface>not found in domain: <domain type='kvm' id='29'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <name>instance-0000003c</name>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <uuid>a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</uuid>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2044102604</nova:name>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:14:19</nova:creationTime>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:port uuid="5a322446-ffe8-4ee8-940b-a265c4add0f4">
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:port uuid="28514145-9436-47ed-9f6b-75697583e758">
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:14:21 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <entry name='serial'>a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</entry>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <entry name='uuid'>a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</entry>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk' index='2'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk.config' index='1'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:18:04:b5'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target dev='tap5a322446-ff'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/console.log' append='off'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/console.log' append='off'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c471,c501</label>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c471,c501</imagelabel>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:14:21 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:14:21 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.438 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e60d3e50-3828-4db8-9f2d-363d1257f7ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.439 2 INFO nova.virt.libvirt.driver [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully detached device tap28514145-94 from instance a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 from the live domain config.#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.439 2 DEBUG nova.virt.libvirt.vif [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2044102604',display_name='tempest-AttachInterfacesTestJSON-server-2044102604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2044102604',id=60,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGq65yZfmOYpel/emEQoQQx0FBLog3JLLvkY4o7KQ0CgiXWgH6I3t+7gjDjz3bdhLXvIOzEF9Ezr94mCGFReUqyBjojD8vmhtn2SCqndZ3QqnYchxSROlf2w4PX9dqK8Q==',key_name='tempest-keypair-792643536',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-5vwrm0vd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.440 2 DEBUG nova.network.os_vif_util [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.440 2 DEBUG nova.network.os_vif_util [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:92:9b,bridge_name='br-int',has_traffic_filtering=True,id=28514145-9436-47ed-9f6b-75697583e758,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28514145-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.440 2 DEBUG os_vif [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:92:9b,bridge_name='br-int',has_traffic_filtering=True,id=28514145-9436-47ed-9f6b-75697583e758,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28514145-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.442 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28514145-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.446 2 INFO os_vif [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:92:9b,bridge_name='br-int',has_traffic_filtering=True,id=28514145-9436-47ed-9f6b-75697583e758,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28514145-94')#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.446 2 DEBUG nova.virt.libvirt.guest [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2044102604</nova:name>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:14:21</nova:creationTime>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    <nova:port uuid="5a322446-ffe8-4ee8-940b-a265c4add0f4">
Oct  2 08:14:21 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:14:21 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:14:21 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:14:21 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.476 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[84fc55ae-fc68-4317-8b6b-cab69aa19229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.479 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc4404b-6ca1-4303-9cb0-e672fd026cff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.516 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[bfeeeaef-f856-4971-b0f5-9b15bbd233a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.534 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e7145057-5f4d-4487-b8b0-7dfac5f1732b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529925, 'reachable_time': 20821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254791, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.551 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[785076fd-0da4-442f-bd96-25500b8a1647]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529936, 'tstamp': 529936}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254792, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529939, 'tstamp': 529939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254792, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.553 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:21 np0005465987 nova_compute[230713]: 2025-10-02 12:14:21.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.555 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee114210-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.556 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.556 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee114210-50, col_values=(('external_ids', {'iface-id': '544b20f9-e292-46bc-a770-180c318d534e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:21.556 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:14:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:21.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:22 np0005465987 nova_compute[230713]: 2025-10-02 12:14:22.339 2 DEBUG nova.compute.manager [req-d36bd0c4-5d66-4a4f-a1cf-c82c53562400 req-a878c257-6372-4685-9a32-e4f4969e1aed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-vif-plugged-28514145-9436-47ed-9f6b-75697583e758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:22 np0005465987 nova_compute[230713]: 2025-10-02 12:14:22.341 2 DEBUG oslo_concurrency.lockutils [req-d36bd0c4-5d66-4a4f-a1cf-c82c53562400 req-a878c257-6372-4685-9a32-e4f4969e1aed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:22 np0005465987 nova_compute[230713]: 2025-10-02 12:14:22.341 2 DEBUG oslo_concurrency.lockutils [req-d36bd0c4-5d66-4a4f-a1cf-c82c53562400 req-a878c257-6372-4685-9a32-e4f4969e1aed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:22 np0005465987 nova_compute[230713]: 2025-10-02 12:14:22.342 2 DEBUG oslo_concurrency.lockutils [req-d36bd0c4-5d66-4a4f-a1cf-c82c53562400 req-a878c257-6372-4685-9a32-e4f4969e1aed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:22 np0005465987 nova_compute[230713]: 2025-10-02 12:14:22.342 2 DEBUG nova.compute.manager [req-d36bd0c4-5d66-4a4f-a1cf-c82c53562400 req-a878c257-6372-4685-9a32-e4f4969e1aed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] No waiting events found dispatching network-vif-plugged-28514145-9436-47ed-9f6b-75697583e758 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:14:22 np0005465987 nova_compute[230713]: 2025-10-02 12:14:22.342 2 WARNING nova.compute.manager [req-d36bd0c4-5d66-4a4f-a1cf-c82c53562400 req-a878c257-6372-4685-9a32-e4f4969e1aed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received unexpected event network-vif-plugged-28514145-9436-47ed-9f6b-75697583e758 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:14:22 np0005465987 nova_compute[230713]: 2025-10-02 12:14:22.916 2 DEBUG oslo_concurrency.lockutils [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:14:22 np0005465987 nova_compute[230713]: 2025-10-02 12:14:22.917 2 DEBUG oslo_concurrency.lockutils [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquired lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:14:22 np0005465987 nova_compute[230713]: 2025-10-02 12:14:22.917 2 DEBUG nova.network.neutron [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:14:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:23.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.288 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Acquiring lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.289 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.310 2 DEBUG nova.compute.manager [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.415 2 DEBUG nova.compute.manager [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-vif-deleted-28514145-9436-47ed-9f6b-75697583e758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.415 2 INFO nova.compute.manager [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Neutron deleted interface 28514145-9436-47ed-9f6b-75697583e758; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.415 2 DEBUG nova.network.neutron [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Updating instance_info_cache with network_info: [{"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.454 2 DEBUG nova.objects.instance [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lazy-loading 'system_metadata' on Instance uuid a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.505 2 DEBUG nova.objects.instance [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lazy-loading 'flavor' on Instance uuid a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.526 2 DEBUG nova.virt.libvirt.vif [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2044102604',display_name='tempest-AttachInterfacesTestJSON-server-2044102604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2044102604',id=60,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGq65yZfmOYpel/emEQoQQx0FBLog3JLLvkY4o7KQ0CgiXWgH6I3t+7gjDjz3bdhLXvIOzEF9Ezr94mCGFReUqyBjojD8vmhtn2SCqndZ3QqnYchxSROlf2w4PX9dqK8Q==',key_name='tempest-keypair-792643536',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-5vwrm0vd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.527 2 DEBUG nova.network.os_vif_util [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converting VIF {"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.527 2 DEBUG nova.network.os_vif_util [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:92:9b,bridge_name='br-int',has_traffic_filtering=True,id=28514145-9436-47ed-9f6b-75697583e758,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28514145-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.530 2 DEBUG nova.virt.libvirt.guest [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f7:92:9b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap28514145-94"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.532 2 DEBUG nova.virt.libvirt.guest [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f7:92:9b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap28514145-94"/></interface>not found in domain: <domain type='kvm' id='29'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <name>instance-0000003c</name>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <uuid>a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</uuid>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2044102604</nova:name>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:14:21</nova:creationTime>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:port uuid="5a322446-ffe8-4ee8-940b-a265c4add0f4">
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:14:23 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <entry name='serial'>a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</entry>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <entry name='uuid'>a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</entry>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk' index='2'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk.config' index='1'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:18:04:b5'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target dev='tap5a322446-ff'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/console.log' append='off'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/console.log' append='off'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c471,c501</label>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c471,c501</imagelabel>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:14:23 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:14:23 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.533 2 DEBUG nova.virt.libvirt.guest [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f7:92:9b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap28514145-94"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.537 2 DEBUG nova.virt.libvirt.guest [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f7:92:9b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap28514145-94"/></interface>not found in domain: <domain type='kvm' id='29'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <name>instance-0000003c</name>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <uuid>a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</uuid>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2044102604</nova:name>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:14:21</nova:creationTime>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:port uuid="5a322446-ffe8-4ee8-940b-a265c4add0f4">
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:14:23 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <entry name='serial'>a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</entry>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <entry name='uuid'>a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8</entry>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk' index='2'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_disk.config' index='1'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:18:04:b5'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target dev='tap5a322446-ff'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/console.log' append='off'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8/console.log' append='off'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c471,c501</label>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c471,c501</imagelabel>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:14:23 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:14:23 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.538 2 WARNING nova.virt.libvirt.driver [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Detaching interface fa:16:3e:f7:92:9b failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap28514145-94' not found.
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.538 2 DEBUG nova.virt.libvirt.vif [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2044102604',display_name='tempest-AttachInterfacesTestJSON-server-2044102604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2044102604',id=60,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGq65yZfmOYpel/emEQoQQx0FBLog3JLLvkY4o7KQ0CgiXWgH6I3t+7gjDjz3bdhLXvIOzEF9Ezr94mCGFReUqyBjojD8vmhtn2SCqndZ3QqnYchxSROlf2w4PX9dqK8Q==',key_name='tempest-keypair-792643536',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-5vwrm0vd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.538 2 DEBUG nova.network.os_vif_util [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converting VIF {"id": "28514145-9436-47ed-9f6b-75697583e758", "address": "fa:16:3e:f7:92:9b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28514145-94", "ovs_interfaceid": "28514145-9436-47ed-9f6b-75697583e758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.539 2 DEBUG nova.network.os_vif_util [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:92:9b,bridge_name='br-int',has_traffic_filtering=True,id=28514145-9436-47ed-9f6b-75697583e758,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28514145-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.539 2 DEBUG os_vif [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:92:9b,bridge_name='br-int',has_traffic_filtering=True,id=28514145-9436-47ed-9f6b-75697583e758,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28514145-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28514145-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.542 2 INFO os_vif [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:92:9b,bridge_name='br-int',has_traffic_filtering=True,id=28514145-9436-47ed-9f6b-75697583e758,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28514145-94')
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.542 2 DEBUG nova.virt.libvirt.guest [req-01b9260b-4b7b-4b31-b140-832f95a3c6c0 req-b14d6a17-d4db-4272-bbda-ea36f2d18ffa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2044102604</nova:name>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:14:23</nova:creationTime>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    <nova:port uuid="5a322446-ffe8-4ee8-940b-a265c4add0f4">
Oct  2 08:14:23 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:14:23 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:14:23 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:14:23 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.622 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.622 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.631 2 DEBUG nova.virt.hardware [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.632 2 INFO nova.compute.claims [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.685 2 DEBUG oslo_concurrency.lockutils [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.686 2 DEBUG oslo_concurrency.lockutils [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.687 2 DEBUG oslo_concurrency.lockutils [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.687 2 DEBUG oslo_concurrency.lockutils [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.688 2 DEBUG oslo_concurrency.lockutils [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.689 2 INFO nova.compute.manager [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Terminating instance#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.690 2 DEBUG nova.compute.manager [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.773 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:23.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:23 np0005465987 kernel: tap5a322446-ff (unregistering): left promiscuous mode
Oct  2 08:14:23 np0005465987 NetworkManager[44910]: <info>  [1759407263.9238] device (tap5a322446-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:23Z|00178|binding|INFO|Releasing lport 5a322446-ffe8-4ee8-940b-a265c4add0f4 from this chassis (sb_readonly=0)
Oct  2 08:14:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:23Z|00179|binding|INFO|Setting lport 5a322446-ffe8-4ee8-940b-a265c4add0f4 down in Southbound
Oct  2 08:14:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:23Z|00180|binding|INFO|Removing iface tap5a322446-ff ovn-installed in OVS
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:23.946 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:04:b5 10.100.0.14'], port_security=['fa:16:3e:18:04:b5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a0707d41-ef02-4f9c-9022-7062693896e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=5a322446-ffe8-4ee8-940b-a265c4add0f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:14:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:23.948 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 5a322446-ffe8-4ee8-940b-a265c4add0f4 in datapath ee114210-598c-482f-83c7-26c3363a45c4 unbound from our chassis#033[00m
Oct  2 08:14:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:23.950 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee114210-598c-482f-83c7-26c3363a45c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:14:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:23.951 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa80e46-0449-432c-819f-7aba77abfb00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:23.951 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 namespace which is not needed anymore#033[00m
Oct  2 08:14:23 np0005465987 nova_compute[230713]: 2025-10-02 12:14:23.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:23 np0005465987 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Oct  2 08:14:23 np0005465987 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000003c.scope: Consumed 15.626s CPU time.
Oct  2 08:14:24 np0005465987 systemd-machined[188335]: Machine qemu-29-instance-0000003c terminated.
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.136 2 INFO nova.virt.libvirt.driver [-] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Instance destroyed successfully.#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.137 2 DEBUG nova.objects.instance [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'resources' on Instance uuid a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:14:24 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[254658]: [NOTICE]   (254668) : haproxy version is 2.8.14-c23fe91
Oct  2 08:14:24 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[254658]: [NOTICE]   (254668) : path to executable is /usr/sbin/haproxy
Oct  2 08:14:24 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[254658]: [WARNING]  (254668) : Exiting Master process...
Oct  2 08:14:24 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[254658]: [WARNING]  (254668) : Exiting Master process...
Oct  2 08:14:24 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[254658]: [ALERT]    (254668) : Current worker (254670) exited with code 143 (Terminated)
Oct  2 08:14:24 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[254658]: [WARNING]  (254668) : All workers exited. Exiting... (0)
Oct  2 08:14:24 np0005465987 systemd[1]: libpod-053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a.scope: Deactivated successfully.
Oct  2 08:14:24 np0005465987 podman[254839]: 2025-10-02 12:14:24.162554142 +0000 UTC m=+0.103183416 container died 053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.163 2 DEBUG nova.virt.libvirt.vif [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2044102604',display_name='tempest-AttachInterfacesTestJSON-server-2044102604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2044102604',id=60,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGq65yZfmOYpel/emEQoQQx0FBLog3JLLvkY4o7KQ0CgiXWgH6I3t+7gjDjz3bdhLXvIOzEF9Ezr94mCGFReUqyBjojD8vmhtn2SCqndZ3QqnYchxSROlf2w4PX9dqK8Q==',key_name='tempest-keypair-792643536',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-5vwrm0vd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.164 2 DEBUG nova.network.os_vif_util [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.165 2 DEBUG nova.network.os_vif_util [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:04:b5,bridge_name='br-int',has_traffic_filtering=True,id=5a322446-ffe8-4ee8-940b-a265c4add0f4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a322446-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.165 2 DEBUG os_vif [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:04:b5,bridge_name='br-int',has_traffic_filtering=True,id=5a322446-ffe8-4ee8-940b-a265c4add0f4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a322446-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.167 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a322446-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.172 2 INFO os_vif [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:04:b5,bridge_name='br-int',has_traffic_filtering=True,id=5a322446-ffe8-4ee8-940b-a265c4add0f4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a322446-ff')#033[00m
Oct  2 08:14:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:14:24 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3758610623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.207 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.211 2 DEBUG nova.compute.provider_tree [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.228 2 DEBUG nova.scheduler.client.report [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:14:24 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a-userdata-shm.mount: Deactivated successfully.
Oct  2 08:14:24 np0005465987 systemd[1]: var-lib-containers-storage-overlay-928cf7601918c9d9bbb082878eaeb62b354fe4740ed361a51f30c51edb8493a9-merged.mount: Deactivated successfully.
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.311 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.312 2 DEBUG nova.compute.manager [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.381 2 DEBUG nova.compute.manager [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.381 2 DEBUG nova.network.neutron [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.402 2 INFO nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.420 2 DEBUG nova.compute.manager [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:14:24 np0005465987 podman[254839]: 2025-10-02 12:14:24.462575294 +0000 UTC m=+0.403204558 container cleanup 053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.465 2 DEBUG nova.compute.manager [req-8a090c26-0f0d-41f1-9ead-45fce46d961b req-f094d9ad-4541-4e8b-874c-db81c993f9c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-vif-unplugged-28514145-9436-47ed-9f6b-75697583e758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.466 2 DEBUG oslo_concurrency.lockutils [req-8a090c26-0f0d-41f1-9ead-45fce46d961b req-f094d9ad-4541-4e8b-874c-db81c993f9c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.466 2 DEBUG oslo_concurrency.lockutils [req-8a090c26-0f0d-41f1-9ead-45fce46d961b req-f094d9ad-4541-4e8b-874c-db81c993f9c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.466 2 DEBUG oslo_concurrency.lockutils [req-8a090c26-0f0d-41f1-9ead-45fce46d961b req-f094d9ad-4541-4e8b-874c-db81c993f9c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.467 2 DEBUG nova.compute.manager [req-8a090c26-0f0d-41f1-9ead-45fce46d961b req-f094d9ad-4541-4e8b-874c-db81c993f9c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] No waiting events found dispatching network-vif-unplugged-28514145-9436-47ed-9f6b-75697583e758 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.467 2 DEBUG nova.compute.manager [req-8a090c26-0f0d-41f1-9ead-45fce46d961b req-f094d9ad-4541-4e8b-874c-db81c993f9c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-vif-unplugged-28514145-9436-47ed-9f6b-75697583e758 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.467 2 DEBUG nova.compute.manager [req-8a090c26-0f0d-41f1-9ead-45fce46d961b req-f094d9ad-4541-4e8b-874c-db81c993f9c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-vif-plugged-28514145-9436-47ed-9f6b-75697583e758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.467 2 DEBUG oslo_concurrency.lockutils [req-8a090c26-0f0d-41f1-9ead-45fce46d961b req-f094d9ad-4541-4e8b-874c-db81c993f9c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.467 2 DEBUG oslo_concurrency.lockutils [req-8a090c26-0f0d-41f1-9ead-45fce46d961b req-f094d9ad-4541-4e8b-874c-db81c993f9c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.468 2 DEBUG oslo_concurrency.lockutils [req-8a090c26-0f0d-41f1-9ead-45fce46d961b req-f094d9ad-4541-4e8b-874c-db81c993f9c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.468 2 DEBUG nova.compute.manager [req-8a090c26-0f0d-41f1-9ead-45fce46d961b req-f094d9ad-4541-4e8b-874c-db81c993f9c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] No waiting events found dispatching network-vif-plugged-28514145-9436-47ed-9f6b-75697583e758 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.468 2 WARNING nova.compute.manager [req-8a090c26-0f0d-41f1-9ead-45fce46d961b req-f094d9ad-4541-4e8b-874c-db81c993f9c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received unexpected event network-vif-plugged-28514145-9436-47ed-9f6b-75697583e758 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:14:24 np0005465987 systemd[1]: libpod-conmon-053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a.scope: Deactivated successfully.
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.547 2 DEBUG nova.policy [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72c74994085d4fc697ddd4acddfa7a11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3204d74f349d47fda3152d9d7fbea43e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.563 2 DEBUG nova.compute.manager [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.564 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.564 2 INFO nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Creating image(s)#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.588 2 DEBUG nova.storage.rbd_utils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] rbd image dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.610 2 DEBUG nova.storage.rbd_utils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] rbd image dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.633 2 DEBUG nova.storage.rbd_utils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] rbd image dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.637 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.659 2 INFO nova.network.neutron [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Port 28514145-9436-47ed-9f6b-75697583e758 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.660 2 DEBUG nova.network.neutron [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Updating instance_info_cache with network_info: [{"id": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "address": "fa:16:3e:18:04:b5", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a322446-ff", "ovs_interfaceid": "5a322446-ffe8-4ee8-940b-a265c4add0f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.690 2 DEBUG oslo_concurrency.lockutils [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Releasing lock "refresh_cache-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.698 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.699 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.699 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.700 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:24 np0005465987 podman[254901]: 2025-10-02 12:14:24.700427089 +0000 UTC m=+0.218883554 container remove 053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:14:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:24.708 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[47e5f4f7-d261-4019-aedd-dac23bdb889c]: (4, ('Thu Oct  2 12:14:24 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 (053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a)\n053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a\nThu Oct  2 12:14:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 (053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a)\n053f6e5b0b75ea1d7401945df0858a48fba066967e704fce854b3033d81e310a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:24.710 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[70f5215f-f34f-490a-8609-be12b7dc40d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:24.712 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:24 np0005465987 kernel: tapee114210-50: left promiscuous mode
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.729 2 DEBUG nova.storage.rbd_utils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] rbd image dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:24.741 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[67c348ff-0652-40e8-81fc-44be3292173a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.740 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:24.775 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[679a0a1b-53bb-4fc0-8cea-37f13551ef7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:24.776 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[53619453-542f-404a-85b5-eb85d121bb91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:24.797 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ecfb04-49b7-401c-8c2e-12d98967fff3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529917, 'reachable_time': 38883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254994, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:24.799 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:14:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:24.799 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5931cf-053f-4ca2-8bf5-312b656bc670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:24 np0005465987 systemd[1]: run-netns-ovnmeta\x2dee114210\x2d598c\x2d482f\x2d83c7\x2d26c3363a45c4.mount: Deactivated successfully.
Oct  2 08:14:24 np0005465987 nova_compute[230713]: 2025-10-02 12:14:24.833 2 DEBUG oslo_concurrency.lockutils [None req-562644f0-fab4-432b-8a39-be962d2ef05a 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-28514145-9436-47ed-9f6b-75697583e758" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:25.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.582 2 DEBUG nova.network.neutron [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Successfully created port: ee26d18a-454e-4f7a-9c50-26dda65d11de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.609 2 DEBUG nova.compute.manager [req-3f6862ac-0e03-4b74-8e94-6378c38d4408 req-70656c7f-4f16-4f0d-b4d6-bf9a6867d137 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-vif-unplugged-5a322446-ffe8-4ee8-940b-a265c4add0f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.609 2 DEBUG oslo_concurrency.lockutils [req-3f6862ac-0e03-4b74-8e94-6378c38d4408 req-70656c7f-4f16-4f0d-b4d6-bf9a6867d137 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.610 2 DEBUG oslo_concurrency.lockutils [req-3f6862ac-0e03-4b74-8e94-6378c38d4408 req-70656c7f-4f16-4f0d-b4d6-bf9a6867d137 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.610 2 DEBUG oslo_concurrency.lockutils [req-3f6862ac-0e03-4b74-8e94-6378c38d4408 req-70656c7f-4f16-4f0d-b4d6-bf9a6867d137 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.611 2 DEBUG nova.compute.manager [req-3f6862ac-0e03-4b74-8e94-6378c38d4408 req-70656c7f-4f16-4f0d-b4d6-bf9a6867d137 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] No waiting events found dispatching network-vif-unplugged-5a322446-ffe8-4ee8-940b-a265c4add0f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.611 2 DEBUG nova.compute.manager [req-3f6862ac-0e03-4b74-8e94-6378c38d4408 req-70656c7f-4f16-4f0d-b4d6-bf9a6867d137 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-vif-unplugged-5a322446-ffe8-4ee8-940b-a265c4add0f4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.611 2 DEBUG nova.compute.manager [req-3f6862ac-0e03-4b74-8e94-6378c38d4408 req-70656c7f-4f16-4f0d-b4d6-bf9a6867d137 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-vif-plugged-5a322446-ffe8-4ee8-940b-a265c4add0f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.612 2 DEBUG oslo_concurrency.lockutils [req-3f6862ac-0e03-4b74-8e94-6378c38d4408 req-70656c7f-4f16-4f0d-b4d6-bf9a6867d137 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.612 2 DEBUG oslo_concurrency.lockutils [req-3f6862ac-0e03-4b74-8e94-6378c38d4408 req-70656c7f-4f16-4f0d-b4d6-bf9a6867d137 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.613 2 DEBUG oslo_concurrency.lockutils [req-3f6862ac-0e03-4b74-8e94-6378c38d4408 req-70656c7f-4f16-4f0d-b4d6-bf9a6867d137 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.613 2 DEBUG nova.compute.manager [req-3f6862ac-0e03-4b74-8e94-6378c38d4408 req-70656c7f-4f16-4f0d-b4d6-bf9a6867d137 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] No waiting events found dispatching network-vif-plugged-5a322446-ffe8-4ee8-940b-a265c4add0f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.613 2 WARNING nova.compute.manager [req-3f6862ac-0e03-4b74-8e94-6378c38d4408 req-70656c7f-4f16-4f0d-b4d6-bf9a6867d137 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received unexpected event network-vif-plugged-5a322446-ffe8-4ee8-940b-a265c4add0f4 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.615 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.874s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.699 2 DEBUG nova.storage.rbd_utils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] resizing rbd image dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:14:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:25.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.850 2 DEBUG nova.objects.instance [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lazy-loading 'migration_context' on Instance uuid dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.866 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.867 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Ensure instance console log exists: /var/lib/nova/instances/dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.867 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.867 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:25 np0005465987 nova_compute[230713]: 2025-10-02 12:14:25.867 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.138 2 INFO nova.virt.libvirt.driver [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Deleting instance files /var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_del#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.139 2 INFO nova.virt.libvirt.driver [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Deletion of /var/lib/nova/instances/a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8_del complete#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.217 2 INFO nova.compute.manager [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Took 2.53 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.218 2 DEBUG oslo.service.loopingcall [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.218 2 DEBUG nova.compute.manager [-] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.219 2 DEBUG nova.network.neutron [-] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.474 2 DEBUG nova.network.neutron [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Successfully updated port: ee26d18a-454e-4f7a-9c50-26dda65d11de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.494 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Acquiring lock "refresh_cache-dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.495 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Acquired lock "refresh_cache-dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.495 2 DEBUG nova.network.neutron [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.552 2 DEBUG nova.compute.manager [req-929386b8-595b-4243-940a-3442560ffaa4 req-929ebfc7-6741-48f9-9efa-ecaa7728aba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Received event network-changed-ee26d18a-454e-4f7a-9c50-26dda65d11de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.552 2 DEBUG nova.compute.manager [req-929386b8-595b-4243-940a-3442560ffaa4 req-929ebfc7-6741-48f9-9efa-ecaa7728aba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Refreshing instance network info cache due to event network-changed-ee26d18a-454e-4f7a-9c50-26dda65d11de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.553 2 DEBUG oslo_concurrency.lockutils [req-929386b8-595b-4243-940a-3442560ffaa4 req-929ebfc7-6741-48f9-9efa-ecaa7728aba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:14:26 np0005465987 nova_compute[230713]: 2025-10-02 12:14:26.660 2 DEBUG nova.network.neutron [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:14:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:14:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:27.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.424 2 DEBUG nova.network.neutron [-] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.439 2 INFO nova.compute.manager [-] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Took 1.22 seconds to deallocate network for instance.#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.513 2 DEBUG oslo_concurrency.lockutils [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.513 2 DEBUG oslo_concurrency.lockutils [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:27.520 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:14:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:27.521 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.593 2 DEBUG oslo_concurrency.processutils [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.659 2 DEBUG nova.network.neutron [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Updating instance_info_cache with network_info: [{"id": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "address": "fa:16:3e:f7:70:e5", "network": {"id": "34395819-7251-4d97-acea-2b98c07c277f", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1049268166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3204d74f349d47fda3152d9d7fbea43e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee26d18a-45", "ovs_interfaceid": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.681 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Releasing lock "refresh_cache-dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.681 2 DEBUG nova.compute.manager [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Instance network_info: |[{"id": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "address": "fa:16:3e:f7:70:e5", "network": {"id": "34395819-7251-4d97-acea-2b98c07c277f", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1049268166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3204d74f349d47fda3152d9d7fbea43e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee26d18a-45", "ovs_interfaceid": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.683 2 DEBUG oslo_concurrency.lockutils [req-929386b8-595b-4243-940a-3442560ffaa4 req-929ebfc7-6741-48f9-9efa-ecaa7728aba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.683 2 DEBUG nova.network.neutron [req-929386b8-595b-4243-940a-3442560ffaa4 req-929ebfc7-6741-48f9-9efa-ecaa7728aba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Refreshing network info cache for port ee26d18a-454e-4f7a-9c50-26dda65d11de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.687 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Start _get_guest_xml network_info=[{"id": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "address": "fa:16:3e:f7:70:e5", "network": {"id": "34395819-7251-4d97-acea-2b98c07c277f", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1049268166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3204d74f349d47fda3152d9d7fbea43e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee26d18a-45", "ovs_interfaceid": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.691 2 WARNING nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.698 2 DEBUG nova.compute.manager [req-754f5527-cd6d-4a82-b3de-f95a9586c385 req-f06060a3-01ac-4fa9-b0d9-d59c0b01721d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Received event network-vif-deleted-5a322446-ffe8-4ee8-940b-a265c4add0f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.699 2 DEBUG nova.virt.libvirt.host [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.700 2 DEBUG nova.virt.libvirt.host [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.709 2 DEBUG nova.virt.libvirt.host [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.710 2 DEBUG nova.virt.libvirt.host [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.711 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.712 2 DEBUG nova.virt.hardware [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.712 2 DEBUG nova.virt.hardware [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.712 2 DEBUG nova.virt.hardware [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.713 2 DEBUG nova.virt.hardware [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.713 2 DEBUG nova.virt.hardware [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.713 2 DEBUG nova.virt.hardware [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.714 2 DEBUG nova.virt.hardware [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.714 2 DEBUG nova.virt.hardware [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.714 2 DEBUG nova.virt.hardware [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.714 2 DEBUG nova.virt.hardware [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.715 2 DEBUG nova.virt.hardware [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:14:27 np0005465987 nova_compute[230713]: 2025-10-02 12:14:27.719 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:27.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:27.941 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:27.942 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:27.942 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:14:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3309648743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:14:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.026 2 DEBUG oslo_concurrency.processutils [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.032 2 DEBUG nova.compute.provider_tree [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.048 2 DEBUG nova.scheduler.client.report [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.067 2 DEBUG oslo_concurrency.lockutils [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.096 2 INFO nova.scheduler.client.report [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Deleted allocations for instance a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8#033[00m
Oct  2 08:14:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:14:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3712668963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.157 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.181 2 DEBUG nova.storage.rbd_utils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] rbd image dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.185 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.214 2 DEBUG oslo_concurrency.lockutils [None req-a40f01cb-1171-4832-84b8-49312ca3f1ae 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:14:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3614499518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.621 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.622 2 DEBUG nova.virt.libvirt.vif [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1680809607',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1680809607',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1680809607',id=63,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3204d74f349d47fda3152d9d7fbea43e',ramdisk_id='',reservation_id='r-l61btnv2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1213912851',owner_use
r_name='tempest-ImagesOneServerNegativeTestJSON-1213912851-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:24Z,user_data=None,user_id='72c74994085d4fc697ddd4acddfa7a11',uuid=dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "address": "fa:16:3e:f7:70:e5", "network": {"id": "34395819-7251-4d97-acea-2b98c07c277f", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1049268166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3204d74f349d47fda3152d9d7fbea43e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee26d18a-45", "ovs_interfaceid": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.623 2 DEBUG nova.network.os_vif_util [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Converting VIF {"id": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "address": "fa:16:3e:f7:70:e5", "network": {"id": "34395819-7251-4d97-acea-2b98c07c277f", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1049268166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3204d74f349d47fda3152d9d7fbea43e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee26d18a-45", "ovs_interfaceid": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.623 2 DEBUG nova.network.os_vif_util [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:70:e5,bridge_name='br-int',has_traffic_filtering=True,id=ee26d18a-454e-4f7a-9c50-26dda65d11de,network=Network(34395819-7251-4d97-acea-2b98c07c277f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee26d18a-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.624 2 DEBUG nova.objects.instance [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lazy-loading 'pci_devices' on Instance uuid dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.639 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  <uuid>dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96</uuid>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  <name>instance-0000003f</name>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1680809607</nova:name>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:14:27</nova:creationTime>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <nova:user uuid="72c74994085d4fc697ddd4acddfa7a11">tempest-ImagesOneServerNegativeTestJSON-1213912851-project-member</nova:user>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <nova:project uuid="3204d74f349d47fda3152d9d7fbea43e">tempest-ImagesOneServerNegativeTestJSON-1213912851</nova:project>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <nova:port uuid="ee26d18a-454e-4f7a-9c50-26dda65d11de">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <entry name="serial">dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96</entry>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <entry name="uuid">dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96</entry>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk.config">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:f7:70:e5"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <target dev="tapee26d18a-45"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96/console.log" append="off"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:14:28 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:14:28 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:14:28 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:14:28 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.640 2 DEBUG nova.compute.manager [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Preparing to wait for external event network-vif-plugged-ee26d18a-454e-4f7a-9c50-26dda65d11de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.641 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Acquiring lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.641 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.641 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.642 2 DEBUG nova.virt.libvirt.vif [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1680809607',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1680809607',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1680809607',id=63,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3204d74f349d47fda3152d9d7fbea43e',ramdisk_id='',reservation_id='r-l61btnv2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1213912851'
,owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1213912851-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:24Z,user_data=None,user_id='72c74994085d4fc697ddd4acddfa7a11',uuid=dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "address": "fa:16:3e:f7:70:e5", "network": {"id": "34395819-7251-4d97-acea-2b98c07c277f", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1049268166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3204d74f349d47fda3152d9d7fbea43e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee26d18a-45", "ovs_interfaceid": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.642 2 DEBUG nova.network.os_vif_util [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Converting VIF {"id": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "address": "fa:16:3e:f7:70:e5", "network": {"id": "34395819-7251-4d97-acea-2b98c07c277f", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1049268166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3204d74f349d47fda3152d9d7fbea43e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee26d18a-45", "ovs_interfaceid": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.643 2 DEBUG nova.network.os_vif_util [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:70:e5,bridge_name='br-int',has_traffic_filtering=True,id=ee26d18a-454e-4f7a-9c50-26dda65d11de,network=Network(34395819-7251-4d97-acea-2b98c07c277f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee26d18a-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.643 2 DEBUG os_vif [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:70:e5,bridge_name='br-int',has_traffic_filtering=True,id=ee26d18a-454e-4f7a-9c50-26dda65d11de,network=Network(34395819-7251-4d97-acea-2b98c07c277f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee26d18a-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.644 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.644 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee26d18a-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee26d18a-45, col_values=(('external_ids', {'iface-id': 'ee26d18a-454e-4f7a-9c50-26dda65d11de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:70:e5', 'vm-uuid': 'dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:28 np0005465987 NetworkManager[44910]: <info>  [1759407268.6495] manager: (tapee26d18a-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.656 2 INFO os_vif [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:70:e5,bridge_name='br-int',has_traffic_filtering=True,id=ee26d18a-454e-4f7a-9c50-26dda65d11de,network=Network(34395819-7251-4d97-acea-2b98c07c277f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee26d18a-45')#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.726 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.727 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.727 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] No VIF found with MAC fa:16:3e:f7:70:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.728 2 INFO nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Using config drive#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.856 2 DEBUG nova.storage.rbd_utils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] rbd image dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.877 2 DEBUG nova.network.neutron [req-929386b8-595b-4243-940a-3442560ffaa4 req-929ebfc7-6741-48f9-9efa-ecaa7728aba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Updated VIF entry in instance network info cache for port ee26d18a-454e-4f7a-9c50-26dda65d11de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.877 2 DEBUG nova.network.neutron [req-929386b8-595b-4243-940a-3442560ffaa4 req-929ebfc7-6741-48f9-9efa-ecaa7728aba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Updating instance_info_cache with network_info: [{"id": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "address": "fa:16:3e:f7:70:e5", "network": {"id": "34395819-7251-4d97-acea-2b98c07c277f", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1049268166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3204d74f349d47fda3152d9d7fbea43e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee26d18a-45", "ovs_interfaceid": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:14:28 np0005465987 nova_compute[230713]: 2025-10-02 12:14:28.931 2 DEBUG oslo_concurrency.lockutils [req-929386b8-595b-4243-940a-3442560ffaa4 req-929ebfc7-6741-48f9-9efa-ecaa7728aba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:14:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:29.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:29 np0005465987 nova_compute[230713]: 2025-10-02 12:14:29.646 2 INFO nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Creating config drive at /var/lib/nova/instances/dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96/disk.config#033[00m
Oct  2 08:14:29 np0005465987 nova_compute[230713]: 2025-10-02 12:14:29.652 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl9_f7zm1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:29 np0005465987 nova_compute[230713]: 2025-10-02 12:14:29.786 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl9_f7zm1" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:29.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:29 np0005465987 nova_compute[230713]: 2025-10-02 12:14:29.934 2 DEBUG nova.storage.rbd_utils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] rbd image dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:29 np0005465987 nova_compute[230713]: 2025-10-02 12:14:29.939 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96/disk.config dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.288 2 DEBUG oslo_concurrency.processutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96/disk.config dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.289 2 INFO nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Deleting local config drive /var/lib/nova/instances/dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96/disk.config because it was imported into RBD.#033[00m
Oct  2 08:14:30 np0005465987 NetworkManager[44910]: <info>  [1759407270.3379] manager: (tapee26d18a-45): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Oct  2 08:14:30 np0005465987 kernel: tapee26d18a-45: entered promiscuous mode
Oct  2 08:14:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:30Z|00181|binding|INFO|Claiming lport ee26d18a-454e-4f7a-9c50-26dda65d11de for this chassis.
Oct  2 08:14:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:30Z|00182|binding|INFO|ee26d18a-454e-4f7a-9c50-26dda65d11de: Claiming fa:16:3e:f7:70:e5 10.100.0.6
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.349 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:70:e5 10.100.0.6'], port_security=['fa:16:3e:f7:70:e5 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34395819-7251-4d97-acea-2b98c07c277f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3204d74f349d47fda3152d9d7fbea43e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e7f67ea6-7556-4acf-963b-57a7d344e510', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c7b3003-1bab-46e9-ac89-20b4b6aed5f5, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ee26d18a-454e-4f7a-9c50-26dda65d11de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.350 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ee26d18a-454e-4f7a-9c50-26dda65d11de in datapath 34395819-7251-4d97-acea-2b98c07c277f bound to our chassis#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.351 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34395819-7251-4d97-acea-2b98c07c277f#033[00m
Oct  2 08:14:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:30Z|00183|binding|INFO|Setting lport ee26d18a-454e-4f7a-9c50-26dda65d11de ovn-installed in OVS
Oct  2 08:14:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:30Z|00184|binding|INFO|Setting lport ee26d18a-454e-4f7a-9c50-26dda65d11de up in Southbound
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.362 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3d8b24-04ef-446c-ac35-d5778495e82a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.363 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34395819-71 in ovnmeta-34395819-7251-4d97-acea-2b98c07c277f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:14:30 np0005465987 systemd-udevd[255242]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.365 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34395819-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.365 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce7f305-9b8c-4b41-92c1-2cc5576fdcfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.366 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7f6ff8-027d-467c-8837-4539d5591210]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.376 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3ee0da-a064-4e79-a274-3bd129dd1576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 NetworkManager[44910]: <info>  [1759407270.3776] device (tapee26d18a-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:14:30 np0005465987 NetworkManager[44910]: <info>  [1759407270.3785] device (tapee26d18a-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:14:30 np0005465987 systemd-machined[188335]: New machine qemu-30-instance-0000003f.
Oct  2 08:14:30 np0005465987 systemd[1]: Started Virtual Machine qemu-30-instance-0000003f.
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.399 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[99c141f5-a60b-4287-b8b1-b449059ca640]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.426 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6d94f33a-699b-4026-805e-7b79a9799534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.432 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[584098cc-ff31-43a6-a3e7-932388bb1d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 NetworkManager[44910]: <info>  [1759407270.4331] manager: (tap34395819-70): new Veth device (/org/freedesktop/NetworkManager/Devices/110)
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.467 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1c71a8ad-e4b2-4f15-a93a-824059115e70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.470 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3940e2-0d86-4569-8b72-8829c0aee666]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 NetworkManager[44910]: <info>  [1759407270.4915] device (tap34395819-70): carrier: link connected
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.498 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e8cf865d-71e3-4aa0-83cb-5045d2b49a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.516 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e11998dd-cc1a-488f-b1e0-79db353b8674]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34395819-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:6c:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533906, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255276, 'error': None, 'target': 'ovnmeta-34395819-7251-4d97-acea-2b98c07c277f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.533 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3eeec98e-472f-4891-840b-16ac2c88fd0d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:6c0c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533906, 'tstamp': 533906}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255277, 'error': None, 'target': 'ovnmeta-34395819-7251-4d97-acea-2b98c07c277f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.552 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[709df703-e1cc-4fcd-9945-7d6757970627]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34395819-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:6c:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533906, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255278, 'error': None, 'target': 'ovnmeta-34395819-7251-4d97-acea-2b98c07c277f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.564 2 DEBUG nova.compute.manager [req-fca44033-45f3-4a6d-8f2e-779bf5e40b79 req-3b576156-2e93-4e51-9f62-a331f845cb05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Received event network-vif-plugged-ee26d18a-454e-4f7a-9c50-26dda65d11de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.565 2 DEBUG oslo_concurrency.lockutils [req-fca44033-45f3-4a6d-8f2e-779bf5e40b79 req-3b576156-2e93-4e51-9f62-a331f845cb05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.565 2 DEBUG oslo_concurrency.lockutils [req-fca44033-45f3-4a6d-8f2e-779bf5e40b79 req-3b576156-2e93-4e51-9f62-a331f845cb05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.566 2 DEBUG oslo_concurrency.lockutils [req-fca44033-45f3-4a6d-8f2e-779bf5e40b79 req-3b576156-2e93-4e51-9f62-a331f845cb05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.566 2 DEBUG nova.compute.manager [req-fca44033-45f3-4a6d-8f2e-779bf5e40b79 req-3b576156-2e93-4e51-9f62-a331f845cb05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Processing event network-vif-plugged-ee26d18a-454e-4f7a-9c50-26dda65d11de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.591 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8ecdc7d5-fd41-4cd5-a228-e006d0e9507f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.646 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[616e2470-358b-4f56-9fb2-168241fff14f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.647 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34395819-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.648 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.649 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34395819-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:30 np0005465987 NetworkManager[44910]: <info>  [1759407270.6522] manager: (tap34395819-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Oct  2 08:14:30 np0005465987 kernel: tap34395819-70: entered promiscuous mode
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.657 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34395819-70, col_values=(('external_ids', {'iface-id': '595d0160-be54-4c2f-8674-117a5a5028e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:30Z|00185|binding|INFO|Releasing lport 595d0160-be54-4c2f-8674-117a5a5028e1 from this chassis (sb_readonly=0)
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.663 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34395819-7251-4d97-acea-2b98c07c277f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34395819-7251-4d97-acea-2b98c07c277f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.664 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e1705534-882c-438f-bd5a-4448cb6acc62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.665 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-34395819-7251-4d97-acea-2b98c07c277f
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/34395819-7251-4d97-acea-2b98c07c277f.pid.haproxy
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 34395819-7251-4d97-acea-2b98c07c277f
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:14:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:30.666 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34395819-7251-4d97-acea-2b98c07c277f', 'env', 'PROCESS_TAG=haproxy-34395819-7251-4d97-acea-2b98c07c277f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34395819-7251-4d97-acea-2b98c07c277f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:14:30 np0005465987 nova_compute[230713]: 2025-10-02 12:14:30.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:31.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:31 np0005465987 podman[255308]: 2025-10-02 12:14:30.971794883 +0000 UTC m=+0.023492676 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:14:31 np0005465987 podman[255308]: 2025-10-02 12:14:31.512621561 +0000 UTC m=+0.564319334 container create c144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:14:31 np0005465987 systemd[1]: Started libpod-conmon-c144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8.scope.
Oct  2 08:14:31 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:14:31 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc1ecba775765f369d5f7311f20f4ec956847481473422c210d37d3a40e891b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.762 2 DEBUG nova.compute.manager [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.763 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407271.7615116, dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.763 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] VM Started (Lifecycle Event)#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.770 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.774 2 INFO nova.virt.libvirt.driver [-] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Instance spawned successfully.#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.775 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.781 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.785 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.797 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.798 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.799 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.799 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.800 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.800 2 DEBUG nova.virt.libvirt.driver [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:14:31 np0005465987 podman[255308]: 2025-10-02 12:14:31.804592292 +0000 UTC m=+0.856290075 container init c144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.806 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.806 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407271.7626858, dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.807 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:14:31 np0005465987 podman[255308]: 2025-10-02 12:14:31.811875972 +0000 UTC m=+0.863573735 container start c144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:14:31 np0005465987 neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f[255365]: [NOTICE]   (255369) : New worker (255371) forked
Oct  2 08:14:31 np0005465987 neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f[255365]: [NOTICE]   (255369) : Loading success.
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.841 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.845 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407271.7683146, dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.846 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:14:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:31.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.871 2 INFO nova.compute.manager [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Took 7.31 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.872 2 DEBUG nova.compute.manager [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.875 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.882 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.923 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.957 2 INFO nova.compute.manager [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Took 8.56 seconds to build instance.#033[00m
Oct  2 08:14:31 np0005465987 nova_compute[230713]: 2025-10-02 12:14:31.979 2 DEBUG oslo_concurrency.lockutils [None req-9f26eff3-fcec-4e22-b869-1b60721546f5 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:33.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:14:33 np0005465987 nova_compute[230713]: 2025-10-02 12:14:33.316 2 DEBUG nova.compute.manager [req-c5977933-deca-45d2-9c35-3e8d635e81d2 req-bf479453-9020-4ee8-aa03-26c9a69ac300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Received event network-vif-plugged-ee26d18a-454e-4f7a-9c50-26dda65d11de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:33 np0005465987 nova_compute[230713]: 2025-10-02 12:14:33.316 2 DEBUG oslo_concurrency.lockutils [req-c5977933-deca-45d2-9c35-3e8d635e81d2 req-bf479453-9020-4ee8-aa03-26c9a69ac300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:33 np0005465987 nova_compute[230713]: 2025-10-02 12:14:33.317 2 DEBUG oslo_concurrency.lockutils [req-c5977933-deca-45d2-9c35-3e8d635e81d2 req-bf479453-9020-4ee8-aa03-26c9a69ac300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:33 np0005465987 nova_compute[230713]: 2025-10-02 12:14:33.317 2 DEBUG oslo_concurrency.lockutils [req-c5977933-deca-45d2-9c35-3e8d635e81d2 req-bf479453-9020-4ee8-aa03-26c9a69ac300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:33 np0005465987 nova_compute[230713]: 2025-10-02 12:14:33.317 2 DEBUG nova.compute.manager [req-c5977933-deca-45d2-9c35-3e8d635e81d2 req-bf479453-9020-4ee8-aa03-26c9a69ac300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] No waiting events found dispatching network-vif-plugged-ee26d18a-454e-4f7a-9c50-26dda65d11de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:14:33 np0005465987 nova_compute[230713]: 2025-10-02 12:14:33.318 2 WARNING nova.compute.manager [req-c5977933-deca-45d2-9c35-3e8d635e81d2 req-bf479453-9020-4ee8-aa03-26c9a69ac300 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Received unexpected event network-vif-plugged-ee26d18a-454e-4f7a-9c50-26dda65d11de for instance with vm_state active and task_state None.#033[00m
Oct  2 08:14:33 np0005465987 nova_compute[230713]: 2025-10-02 12:14:33.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:33 np0005465987 nova_compute[230713]: 2025-10-02 12:14:33.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:33.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:35.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:35 np0005465987 nova_compute[230713]: 2025-10-02 12:14:35.075 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:35 np0005465987 nova_compute[230713]: 2025-10-02 12:14:35.075 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:35 np0005465987 nova_compute[230713]: 2025-10-02 12:14:35.164 2 DEBUG nova.compute.manager [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:14:35 np0005465987 nova_compute[230713]: 2025-10-02 12:14:35.365 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:35 np0005465987 nova_compute[230713]: 2025-10-02 12:14:35.365 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:35 np0005465987 nova_compute[230713]: 2025-10-02 12:14:35.373 2 DEBUG nova.virt.hardware [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:14:35 np0005465987 nova_compute[230713]: 2025-10-02 12:14:35.373 2 INFO nova.compute.claims [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:14:35 np0005465987 nova_compute[230713]: 2025-10-02 12:14:35.713 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:35.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:14:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2864317009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:14:36 np0005465987 nova_compute[230713]: 2025-10-02 12:14:36.158 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:36 np0005465987 nova_compute[230713]: 2025-10-02 12:14:36.163 2 DEBUG nova.compute.provider_tree [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:14:36 np0005465987 nova_compute[230713]: 2025-10-02 12:14:36.194 2 DEBUG nova.scheduler.client.report [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:14:36 np0005465987 nova_compute[230713]: 2025-10-02 12:14:36.385 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:36 np0005465987 nova_compute[230713]: 2025-10-02 12:14:36.386 2 DEBUG nova.compute.manager [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:14:36 np0005465987 nova_compute[230713]: 2025-10-02 12:14:36.573 2 DEBUG nova.compute.manager [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:14:36 np0005465987 nova_compute[230713]: 2025-10-02 12:14:36.573 2 DEBUG nova.network.neutron [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:14:36 np0005465987 nova_compute[230713]: 2025-10-02 12:14:36.707 2 INFO nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:14:36 np0005465987 nova_compute[230713]: 2025-10-02 12:14:36.740 2 DEBUG nova.compute.manager [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:14:36 np0005465987 nova_compute[230713]: 2025-10-02 12:14:36.824 2 DEBUG nova.policy [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e627e098f2b465d97099fee8f489b71', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:14:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:37.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:37 np0005465987 nova_compute[230713]: 2025-10-02 12:14:37.201 2 DEBUG nova.compute.manager [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:14:37 np0005465987 nova_compute[230713]: 2025-10-02 12:14:37.202 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:14:37 np0005465987 nova_compute[230713]: 2025-10-02 12:14:37.203 2 INFO nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Creating image(s)#033[00m
Oct  2 08:14:37 np0005465987 nova_compute[230713]: 2025-10-02 12:14:37.230 2 DEBUG nova.storage.rbd_utils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:37 np0005465987 nova_compute[230713]: 2025-10-02 12:14:37.256 2 DEBUG nova.storage.rbd_utils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:37 np0005465987 nova_compute[230713]: 2025-10-02 12:14:37.281 2 DEBUG nova.storage.rbd_utils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:37 np0005465987 nova_compute[230713]: 2025-10-02 12:14:37.284 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:37 np0005465987 nova_compute[230713]: 2025-10-02 12:14:37.361 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:37 np0005465987 nova_compute[230713]: 2025-10-02 12:14:37.362 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:37 np0005465987 nova_compute[230713]: 2025-10-02 12:14:37.363 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:37 np0005465987 nova_compute[230713]: 2025-10-02 12:14:37.363 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:37 np0005465987 nova_compute[230713]: 2025-10-02 12:14:37.387 2 DEBUG nova.storage.rbd_utils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:37 np0005465987 nova_compute[230713]: 2025-10-02 12:14:37.391 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:37.524 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:37.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:14:38 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  2 08:14:38 np0005465987 nova_compute[230713]: 2025-10-02 12:14:38.110 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.718s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:38 np0005465987 nova_compute[230713]: 2025-10-02 12:14:38.232 2 DEBUG nova.storage.rbd_utils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] resizing rbd image a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:14:38 np0005465987 nova_compute[230713]: 2025-10-02 12:14:38.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:38 np0005465987 nova_compute[230713]: 2025-10-02 12:14:38.367 2 DEBUG nova.objects.instance [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'migration_context' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:14:38 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct  2 08:14:38 np0005465987 nova_compute[230713]: 2025-10-02 12:14:38.417 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:14:38 np0005465987 nova_compute[230713]: 2025-10-02 12:14:38.418 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Ensure instance console log exists: /var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:14:38 np0005465987 nova_compute[230713]: 2025-10-02 12:14:38.418 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:38 np0005465987 nova_compute[230713]: 2025-10-02 12:14:38.419 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:38 np0005465987 nova_compute[230713]: 2025-10-02 12:14:38.419 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:38 np0005465987 nova_compute[230713]: 2025-10-02 12:14:38.624 2 DEBUG nova.network.neutron [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Successfully created port: 8fad7e34-752f-4cb6-96a5-b506478720d3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:14:38 np0005465987 nova_compute[230713]: 2025-10-02 12:14:38.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.009 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.009 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.010 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.022 2 DEBUG nova.compute.manager [None req-3df48f03-d24d-44c5-92ae-a4ac5a4f3855 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:14:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:39.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.037 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.038 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.038 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.039 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.039 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.095 2 INFO nova.compute.manager [None req-3df48f03-d24d-44c5-92ae-a4ac5a4f3855 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] instance snapshotting#033[00m
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:39.102640) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407279102774, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2223, "num_deletes": 258, "total_data_size": 4926773, "memory_usage": 5018816, "flush_reason": "Manual Compaction"}
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.134 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407264.1337297, a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.135 2 INFO nova.compute.manager [-] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.263 2 DEBUG nova.compute.manager [None req-ce888865-01a4-4421-89d8-2186115ad70c - - - - - -] [instance: a9a4741a-0d16-4d7a-8c2b-6143d5b3d4e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.446 2 INFO nova.virt.libvirt.driver [None req-3df48f03-d24d-44c5-92ae-a4ac5a4f3855 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Beginning live snapshot process#033[00m
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407279481088, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 2055007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33567, "largest_seqno": 35785, "table_properties": {"data_size": 2047946, "index_size": 3815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 18673, "raw_average_key_size": 21, "raw_value_size": 2032262, "raw_average_value_size": 2352, "num_data_blocks": 166, "num_entries": 864, "num_filter_entries": 864, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407114, "oldest_key_time": 1759407114, "file_creation_time": 1759407279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 378483 microseconds, and 7016 cpu microseconds.
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:39.481144) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 2055007 bytes OK
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:39.481167) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:39.589200) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:39.589252) EVENT_LOG_v1 {"time_micros": 1759407279589240, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:39.589280) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 4916858, prev total WAL file size 4984171, number of live WAL files 2.
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:39.591336) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303035' seq:72057594037927935, type:22 .. '6D6772737461740031323537' seq:0, type:0; will stop at (end)
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(2006KB)], [63(9901KB)]
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407279591376, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 12194634, "oldest_snapshot_seqno": -1}
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.814 2 DEBUG nova.network.neutron [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Successfully updated port: 8fad7e34-752f-4cb6-96a5-b506478720d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.824 2 DEBUG nova.virt.libvirt.imagebackend [None req-3df48f03-d24d-44c5-92ae-a4ac5a4f3855 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.835 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.835 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquired lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.836 2 DEBUG nova.network.neutron [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:14:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:39.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.868 2 DEBUG nova.compute.manager [req-44bfa05f-fe6b-454a-b0c2-4beeb557072a req-b1886abd-cd2e-4a00-aa34-e2738a255ec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-changed-8fad7e34-752f-4cb6-96a5-b506478720d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.869 2 DEBUG nova.compute.manager [req-44bfa05f-fe6b-454a-b0c2-4beeb557072a req-b1886abd-cd2e-4a00-aa34-e2738a255ec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Refreshing instance network info cache due to event network-changed-8fad7e34-752f-4cb6-96a5-b506478720d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:14:39 np0005465987 nova_compute[230713]: 2025-10-02 12:14:39.869 2 DEBUG oslo_concurrency.lockutils [req-44bfa05f-fe6b-454a-b0c2-4beeb557072a req-b1886abd-cd2e-4a00-aa34-e2738a255ec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6025 keys, 9496293 bytes, temperature: kUnknown
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407279920671, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 9496293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9456398, "index_size": 23721, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 153374, "raw_average_key_size": 25, "raw_value_size": 9348623, "raw_average_value_size": 1551, "num_data_blocks": 957, "num_entries": 6025, "num_filter_entries": 6025, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759407279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:14:39 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:39.920914) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9496293 bytes
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:40.049507) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 37.0 rd, 28.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.7 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(10.6) write-amplify(4.6) OK, records in: 6481, records dropped: 456 output_compression: NoCompression
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:40.049575) EVENT_LOG_v1 {"time_micros": 1759407280049548, "job": 38, "event": "compaction_finished", "compaction_time_micros": 329392, "compaction_time_cpu_micros": 31187, "output_level": 6, "num_output_files": 1, "total_output_size": 9496293, "num_input_records": 6481, "num_output_records": 6025, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407280050621, "job": 38, "event": "table_file_deletion", "file_number": 65}
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407280054502, "job": 38, "event": "table_file_deletion", "file_number": 63}
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:39.591244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:40.054622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:40.054628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:40.054630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:40.054632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:40.054633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:14:40 np0005465987 nova_compute[230713]: 2025-10-02 12:14:40.100 2 DEBUG nova.storage.rbd_utils [None req-3df48f03-d24d-44c5-92ae-a4ac5a4f3855 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] creating snapshot(d24d7ce0a22745d5a6525a4f201e1ddb) on rbd image(dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:14:40 np0005465987 nova_compute[230713]: 2025-10-02 12:14:40.149 2 DEBUG nova.network.neutron [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:14:40 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:14:40 np0005465987 nova_compute[230713]: 2025-10-02 12:14:40.901 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Updating instance_info_cache with network_info: [{"id": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "address": "fa:16:3e:f7:70:e5", "network": {"id": "34395819-7251-4d97-acea-2b98c07c277f", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1049268166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3204d74f349d47fda3152d9d7fbea43e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee26d18a-45", "ovs_interfaceid": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:14:40 np0005465987 nova_compute[230713]: 2025-10-02 12:14:40.943 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:14:40 np0005465987 nova_compute[230713]: 2025-10-02 12:14:40.944 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:14:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:41.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.101 2 DEBUG nova.network.neutron [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.182 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Releasing lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.183 2 DEBUG nova.compute.manager [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Instance network_info: |[{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.184 2 DEBUG oslo_concurrency.lockutils [req-44bfa05f-fe6b-454a-b0c2-4beeb557072a req-b1886abd-cd2e-4a00-aa34-e2738a255ec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.184 2 DEBUG nova.network.neutron [req-44bfa05f-fe6b-454a-b0c2-4beeb557072a req-b1886abd-cd2e-4a00-aa34-e2738a255ec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Refreshing network info cache for port 8fad7e34-752f-4cb6-96a5-b506478720d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.187 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Start _get_guest_xml network_info=[{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.190 2 WARNING nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.194 2 DEBUG nova.virt.libvirt.host [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.195 2 DEBUG nova.virt.libvirt.host [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.198 2 DEBUG nova.virt.libvirt.host [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.198 2 DEBUG nova.virt.libvirt.host [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.199 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.200 2 DEBUG nova.virt.hardware [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.200 2 DEBUG nova.virt.hardware [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.201 2 DEBUG nova.virt.hardware [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.201 2 DEBUG nova.virt.hardware [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.201 2 DEBUG nova.virt.hardware [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.201 2 DEBUG nova.virt.hardware [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.202 2 DEBUG nova.virt.hardware [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.202 2 DEBUG nova.virt.hardware [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.202 2 DEBUG nova.virt.hardware [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.203 2 DEBUG nova.virt.hardware [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.203 2 DEBUG nova.virt.hardware [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:14:41 np0005465987 nova_compute[230713]: 2025-10-02 12:14:41.232 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e209 e209: 3 total, 3 up, 3 in
Oct  2 08:14:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:41.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:14:42 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/409931445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:14:42 np0005465987 nova_compute[230713]: 2025-10-02 12:14:42.089 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.857s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:42 np0005465987 nova_compute[230713]: 2025-10-02 12:14:42.117 2 DEBUG nova.storage.rbd_utils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:42 np0005465987 nova_compute[230713]: 2025-10-02 12:14:42.122 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:42 np0005465987 nova_compute[230713]: 2025-10-02 12:14:42.741 2 DEBUG nova.storage.rbd_utils [None req-3df48f03-d24d-44c5-92ae-a4ac5a4f3855 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] cloning vms/dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk@d24d7ce0a22745d5a6525a4f201e1ddb to images/7f3fdab7-d2d5-41be-9174-42e3a9caee7b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:14:42 np0005465987 podman[255938]: 2025-10-02 12:14:42.869646313 +0000 UTC m=+0.076045400 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 08:14:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:14:43 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1650383396' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:14:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:14:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:43.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.083 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.084 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.963s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.085 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.086 2 DEBUG nova.virt.libvirt.vif [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.087 2 DEBUG nova.network.os_vif_util [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.088 2 DEBUG nova.network.os_vif_util [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:0c:3b,bridge_name='br-int',has_traffic_filtering=True,id=8fad7e34-752f-4cb6-96a5-b506478720d3,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fad7e34-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.090 2 DEBUG nova.objects.instance [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'pci_devices' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.092 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.112 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  <uuid>a7e79d88-7a39-4a1a-94de-77b469dd67e2</uuid>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  <name>instance-00000040</name>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1826636369</nova:name>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:14:41</nova:creationTime>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <nova:port uuid="8fad7e34-752f-4cb6-96a5-b506478720d3">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <entry name="serial">a7e79d88-7a39-4a1a-94de-77b469dd67e2</entry>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <entry name="uuid">a7e79d88-7a39-4a1a-94de-77b469dd67e2</entry>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk.config">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:53:0c:3b"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <target dev="tap8fad7e34-75"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/console.log" append="off"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:14:43 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:14:43 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:14:43 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:14:43 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.118 2 DEBUG nova.compute.manager [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Preparing to wait for external event network-vif-plugged-8fad7e34-752f-4cb6-96a5-b506478720d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.119 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.119 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.120 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.120 2 DEBUG nova.virt.libvirt.vif [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.121 2 DEBUG nova.network.os_vif_util [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.122 2 DEBUG nova.network.os_vif_util [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:0c:3b,bridge_name='br-int',has_traffic_filtering=True,id=8fad7e34-752f-4cb6-96a5-b506478720d3,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fad7e34-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.122 2 DEBUG os_vif [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:0c:3b,bridge_name='br-int',has_traffic_filtering=True,id=8fad7e34-752f-4cb6-96a5-b506478720d3,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fad7e34-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.124 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.124 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8fad7e34-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8fad7e34-75, col_values=(('external_ids', {'iface-id': '8fad7e34-752f-4cb6-96a5-b506478720d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:0c:3b', 'vm-uuid': 'a7e79d88-7a39-4a1a-94de-77b469dd67e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:43 np0005465987 NetworkManager[44910]: <info>  [1759407283.1307] manager: (tap8fad7e34-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.137 2 INFO os_vif [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:0c:3b,bridge_name='br-int',has_traffic_filtering=True,id=8fad7e34-752f-4cb6-96a5-b506478720d3,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fad7e34-75')#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.315 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.316 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.317 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:53:0c:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.317 2 INFO nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Using config drive#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.345 2 DEBUG nova.storage.rbd_utils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.803 2 DEBUG nova.network.neutron [req-44bfa05f-fe6b-454a-b0c2-4beeb557072a req-b1886abd-cd2e-4a00-aa34-e2738a255ec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updated VIF entry in instance network info cache for port 8fad7e34-752f-4cb6-96a5-b506478720d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.804 2 DEBUG nova.network.neutron [req-44bfa05f-fe6b-454a-b0c2-4beeb557072a req-b1886abd-cd2e-4a00-aa34-e2738a255ec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:14:43 np0005465987 nova_compute[230713]: 2025-10-02 12:14:43.836 2 DEBUG oslo_concurrency.lockutils [req-44bfa05f-fe6b-454a-b0c2-4beeb557072a req-b1886abd-cd2e-4a00-aa34-e2738a255ec9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:14:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:43.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:44 np0005465987 nova_compute[230713]: 2025-10-02 12:14:44.043 2 INFO nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Creating config drive at /var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/disk.config#033[00m
Oct  2 08:14:44 np0005465987 nova_compute[230713]: 2025-10-02 12:14:44.050 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw8k2vrca execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:44 np0005465987 nova_compute[230713]: 2025-10-02 12:14:44.186 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw8k2vrca" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:44 np0005465987 nova_compute[230713]: 2025-10-02 12:14:44.341 2 DEBUG nova.storage.rbd_utils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:14:44 np0005465987 nova_compute[230713]: 2025-10-02 12:14:44.345 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/disk.config a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:44 np0005465987 nova_compute[230713]: 2025-10-02 12:14:44.506 2 DEBUG nova.storage.rbd_utils [None req-3df48f03-d24d-44c5-92ae-a4ac5a4f3855 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] flattening images/7f3fdab7-d2d5-41be-9174-42e3a9caee7b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:14:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:45.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:45.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:46 np0005465987 nova_compute[230713]: 2025-10-02 12:14:46.262 2 DEBUG oslo_concurrency.processutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/disk.config a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.917s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:46 np0005465987 nova_compute[230713]: 2025-10-02 12:14:46.263 2 INFO nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Deleting local config drive /var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/disk.config because it was imported into RBD.#033[00m
Oct  2 08:14:46 np0005465987 kernel: tap8fad7e34-75: entered promiscuous mode
Oct  2 08:14:46 np0005465987 NetworkManager[44910]: <info>  [1759407286.3414] manager: (tap8fad7e34-75): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Oct  2 08:14:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:46Z|00186|binding|INFO|Claiming lport 8fad7e34-752f-4cb6-96a5-b506478720d3 for this chassis.
Oct  2 08:14:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:46Z|00187|binding|INFO|8fad7e34-752f-4cb6-96a5-b506478720d3: Claiming fa:16:3e:53:0c:3b 10.100.0.13
Oct  2 08:14:46 np0005465987 nova_compute[230713]: 2025-10-02 12:14:46.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:46Z|00188|binding|INFO|Setting lport 8fad7e34-752f-4cb6-96a5-b506478720d3 ovn-installed in OVS
Oct  2 08:14:46 np0005465987 nova_compute[230713]: 2025-10-02 12:14:46.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:46 np0005465987 nova_compute[230713]: 2025-10-02 12:14:46.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:46 np0005465987 systemd-udevd[256078]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:14:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:46Z|00189|binding|INFO|Setting lport 8fad7e34-752f-4cb6-96a5-b506478720d3 up in Southbound
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.387 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:0c:3b 10.100.0.13'], port_security=['fa:16:3e:53:0c:3b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a7e79d88-7a39-4a1a-94de-77b469dd67e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f727668f-4f0a-4e45-a88b-61e109830874', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=8fad7e34-752f-4cb6-96a5-b506478720d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.389 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 8fad7e34-752f-4cb6-96a5-b506478720d3 in datapath ee114210-598c-482f-83c7-26c3363a45c4 bound to our chassis#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.392 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee114210-598c-482f-83c7-26c3363a45c4#033[00m
Oct  2 08:14:46 np0005465987 NetworkManager[44910]: <info>  [1759407286.3995] device (tap8fad7e34-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:14:46 np0005465987 NetworkManager[44910]: <info>  [1759407286.4004] device (tap8fad7e34-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:14:46 np0005465987 systemd-machined[188335]: New machine qemu-31-instance-00000040.
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.408 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4772e9-b174-40a2-bc8a-62965ed12247]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.409 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee114210-51 in ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.411 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee114210-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.412 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[383fa095-5259-4a5d-b69c-c4699c79a7c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.412 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa84485-6e4a-4fd2-b18d-4b0938911035]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 systemd[1]: Started Virtual Machine qemu-31-instance-00000040.
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.424 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[ad32c762-54a1-40a7-9975-3f66bd55f6bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.451 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[34b2755c-c6c7-44f5-a55f-251b8cf68498]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.478 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e98f61f1-f594-410d-b785-4ed7c338926e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.482 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3048b4c8-a983-4eba-b897-63c846e24dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 NetworkManager[44910]: <info>  [1759407286.4839] manager: (tapee114210-50): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Oct  2 08:14:46 np0005465987 systemd-udevd[256084]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.520 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[95832a07-9f5a-4247-afc9-7b0a3d5315f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.524 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[2287fe28-558e-4a8c-b2d9-f5ca4c1da6c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 NetworkManager[44910]: <info>  [1759407286.5425] device (tapee114210-50): carrier: link connected
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.548 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[fa04005a-5315-45af-8c37-69ffa240813d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.564 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c60b8ba5-25a9-450b-a0b3-9f96d393401b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535511, 'reachable_time': 23320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256114, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.579 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e55dfc46-f054-4d84-9a5a-e9922df2e82a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:8699'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535511, 'tstamp': 535511}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256115, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.595 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b78f335e-cf80-4db4-aa94-bde9927f57e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535511, 'reachable_time': 23320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256116, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.619 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7d36b472-3799-41f3-bf4e-98ae1a1970f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.671 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f53f651d-811a-4c38-a19c-9509e6cda52e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.673 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.674 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.674 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee114210-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:46 np0005465987 nova_compute[230713]: 2025-10-02 12:14:46.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:46 np0005465987 NetworkManager[44910]: <info>  [1759407286.6769] manager: (tapee114210-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Oct  2 08:14:46 np0005465987 kernel: tapee114210-50: entered promiscuous mode
Oct  2 08:14:46 np0005465987 nova_compute[230713]: 2025-10-02 12:14:46.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.679 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee114210-50, col_values=(('external_ids', {'iface-id': '544b20f9-e292-46bc-a770-180c318d534e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:14:46 np0005465987 nova_compute[230713]: 2025-10-02 12:14:46.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:46 np0005465987 nova_compute[230713]: 2025-10-02 12:14:46.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:46Z|00190|binding|INFO|Releasing lport 544b20f9-e292-46bc-a770-180c318d534e from this chassis (sb_readonly=0)
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.687 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee114210-598c-482f-83c7-26c3363a45c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee114210-598c-482f-83c7-26c3363a45c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.694 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3414e3-daa6-4834-8d20-dac973ce456f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.695 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-ee114210-598c-482f-83c7-26c3363a45c4
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/ee114210-598c-482f-83c7-26c3363a45c4.pid.haproxy
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID ee114210-598c-482f-83c7-26c3363a45c4
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:14:46 np0005465987 nova_compute[230713]: 2025-10-02 12:14:46.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:14:46.696 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'env', 'PROCESS_TAG=haproxy-ee114210-598c-482f-83c7-26c3363a45c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee114210-598c-482f-83c7-26c3363a45c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:14:46 np0005465987 nova_compute[230713]: 2025-10-02 12:14:46.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:14:46 np0005465987 nova_compute[230713]: 2025-10-02 12:14:46.967 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.028 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.029 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.029 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.029 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.030 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:47.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.109 2 DEBUG nova.compute.manager [req-5941a0b6-b341-4a1b-9764-b2852dbeb8f7 req-20e473a6-a302-4a5a-9384-1b49366f4dc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-plugged-8fad7e34-752f-4cb6-96a5-b506478720d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.110 2 DEBUG oslo_concurrency.lockutils [req-5941a0b6-b341-4a1b-9764-b2852dbeb8f7 req-20e473a6-a302-4a5a-9384-1b49366f4dc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.111 2 DEBUG oslo_concurrency.lockutils [req-5941a0b6-b341-4a1b-9764-b2852dbeb8f7 req-20e473a6-a302-4a5a-9384-1b49366f4dc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.111 2 DEBUG oslo_concurrency.lockutils [req-5941a0b6-b341-4a1b-9764-b2852dbeb8f7 req-20e473a6-a302-4a5a-9384-1b49366f4dc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.111 2 DEBUG nova.compute.manager [req-5941a0b6-b341-4a1b-9764-b2852dbeb8f7 req-20e473a6-a302-4a5a-9384-1b49366f4dc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Processing event network-vif-plugged-8fad7e34-752f-4cb6-96a5-b506478720d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:14:47 np0005465987 podman[256190]: 2025-10-02 12:14:47.06422127 +0000 UTC m=+0.025936183 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.300 2 DEBUG nova.storage.rbd_utils [None req-3df48f03-d24d-44c5-92ae-a4ac5a4f3855 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] removing snapshot(d24d7ce0a22745d5a6525a4f201e1ddb) on rbd image(dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.325 2 DEBUG nova.compute.manager [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.327 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407287.3260498, a7e79d88-7a39-4a1a-94de-77b469dd67e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.327 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] VM Started (Lifecycle Event)#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.331 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.335 2 INFO nova.virt.libvirt.driver [-] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Instance spawned successfully.#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.335 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.367 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.373 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.376 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.376 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.377 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.377 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.378 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.378 2 DEBUG nova.virt.libvirt.driver [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:14:47 np0005465987 podman[256190]: 2025-10-02 12:14:47.401579959 +0000 UTC m=+0.363295052 container create 702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.409 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.409 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407287.326138, a7e79d88-7a39-4a1a-94de-77b469dd67e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.410 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.450 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.454 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407287.3308923, a7e79d88-7a39-4a1a-94de-77b469dd67e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.455 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:14:47 np0005465987 systemd[1]: Started libpod-conmon-702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2.scope.
Oct  2 08:14:47 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:14:47 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6162db7e1e082b1da0c0290e9eca1c8ada325a04f9a61437812b156055394005/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.499 2 INFO nova.compute.manager [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Took 10.30 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.500 2 DEBUG nova.compute.manager [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:14:47 np0005465987 podman[256190]: 2025-10-02 12:14:47.521301168 +0000 UTC m=+0.483016091 container init 702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:14:47 np0005465987 podman[256190]: 2025-10-02 12:14:47.527453607 +0000 UTC m=+0.489168500 container start 702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.534 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.539 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:14:47 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[256243]: [NOTICE]   (256247) : New worker (256249) forked
Oct  2 08:14:47 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[256243]: [NOTICE]   (256247) : Loading success.
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.571 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.583 2 INFO nova.compute.manager [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Took 12.24 seconds to build instance.#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.619 2 DEBUG oslo_concurrency.lockutils [None req-8c4ff58b-005a-4c00-b114-19cc4253a680 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.696 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.696 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.699 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.700 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:14:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:47.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.877 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.878 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4399MB free_disk=20.913002014160156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.879 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.879 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.964 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.965 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance a7e79d88-7a39-4a1a-94de-77b469dd67e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.965 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.965 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:14:47 np0005465987 nova_compute[230713]: 2025-10-02 12:14:47.989 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:14:48 np0005465987 nova_compute[230713]: 2025-10-02 12:14:48.017 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:14:48 np0005465987 nova_compute[230713]: 2025-10-02 12:14:48.018 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:14:48 np0005465987 nova_compute[230713]: 2025-10-02 12:14:48.036 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:14:48 np0005465987 nova_compute[230713]: 2025-10-02 12:14:48.069 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:14:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:14:48 np0005465987 nova_compute[230713]: 2025-10-02 12:14:48.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:48 np0005465987 nova_compute[230713]: 2025-10-02 12:14:48.149 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:14:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e210 e210: 3 total, 3 up, 3 in
Oct  2 08:14:48 np0005465987 nova_compute[230713]: 2025-10-02 12:14:48.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:48Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:70:e5 10.100.0.6
Oct  2 08:14:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:48Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:70:e5 10.100.0.6
Oct  2 08:14:48 np0005465987 nova_compute[230713]: 2025-10-02 12:14:48.794 2 DEBUG nova.storage.rbd_utils [None req-3df48f03-d24d-44c5-92ae-a4ac5a4f3855 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] creating snapshot(snap) on rbd image(7f3fdab7-d2d5-41be-9174-42e3a9caee7b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:14:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:49.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:49 np0005465987 nova_compute[230713]: 2025-10-02 12:14:49.238 2 DEBUG nova.compute.manager [req-ea69a514-f2c6-4417-a9b6-97f6345bff9d req-3d5c1d40-6716-40e3-a212-c95908418b7b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-plugged-8fad7e34-752f-4cb6-96a5-b506478720d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:49 np0005465987 nova_compute[230713]: 2025-10-02 12:14:49.238 2 DEBUG oslo_concurrency.lockutils [req-ea69a514-f2c6-4417-a9b6-97f6345bff9d req-3d5c1d40-6716-40e3-a212-c95908418b7b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:14:49 np0005465987 nova_compute[230713]: 2025-10-02 12:14:49.239 2 DEBUG oslo_concurrency.lockutils [req-ea69a514-f2c6-4417-a9b6-97f6345bff9d req-3d5c1d40-6716-40e3-a212-c95908418b7b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:14:49 np0005465987 nova_compute[230713]: 2025-10-02 12:14:49.239 2 DEBUG oslo_concurrency.lockutils [req-ea69a514-f2c6-4417-a9b6-97f6345bff9d req-3d5c1d40-6716-40e3-a212-c95908418b7b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:49 np0005465987 nova_compute[230713]: 2025-10-02 12:14:49.239 2 DEBUG nova.compute.manager [req-ea69a514-f2c6-4417-a9b6-97f6345bff9d req-3d5c1d40-6716-40e3-a212-c95908418b7b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] No waiting events found dispatching network-vif-plugged-8fad7e34-752f-4cb6-96a5-b506478720d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:14:49 np0005465987 nova_compute[230713]: 2025-10-02 12:14:49.240 2 WARNING nova.compute.manager [req-ea69a514-f2c6-4417-a9b6-97f6345bff9d req-3d5c1d40-6716-40e3-a212-c95908418b7b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received unexpected event network-vif-plugged-8fad7e34-752f-4cb6-96a5-b506478720d3 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:14:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:14:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/520319809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:14:49 np0005465987 nova_compute[230713]: 2025-10-02 12:14:49.320 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:14:49 np0005465987 nova_compute[230713]: 2025-10-02 12:14:49.326 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:14:49 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:14:49 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:14:49 np0005465987 nova_compute[230713]: 2025-10-02 12:14:49.355 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:14:49 np0005465987 nova_compute[230713]: 2025-10-02 12:14:49.399 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:14:49 np0005465987 nova_compute[230713]: 2025-10-02 12:14:49.400 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:14:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:49.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e211 e211: 3 total, 3 up, 3 in
Oct  2 08:14:50 np0005465987 nova_compute[230713]: 2025-10-02 12:14:50.398 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:14:50 np0005465987 nova_compute[230713]: 2025-10-02 12:14:50.400 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:14:50 np0005465987 nova_compute[230713]: 2025-10-02 12:14:50.400 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:14:50 np0005465987 nova_compute[230713]: 2025-10-02 12:14:50.401 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:14:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:51.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:51 np0005465987 podman[256352]: 2025-10-02 12:14:51.839974955 +0000 UTC m=+0.053075549 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 08:14:51 np0005465987 podman[256353]: 2025-10-02 12:14:51.856067227 +0000 UTC m=+0.065274354 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:14:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:14:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:51.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:14:51 np0005465987 podman[256351]: 2025-10-02 12:14:51.902006069 +0000 UTC m=+0.108696707 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:14:52 np0005465987 nova_compute[230713]: 2025-10-02 12:14:52.379 2 DEBUG nova.compute.manager [req-04239f33-8e21-49f0-b412-9709843baafc req-149b0b69-3d59-41a6-9165-e87eed771703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-changed-8fad7e34-752f-4cb6-96a5-b506478720d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:14:52 np0005465987 nova_compute[230713]: 2025-10-02 12:14:52.381 2 DEBUG nova.compute.manager [req-04239f33-8e21-49f0-b412-9709843baafc req-149b0b69-3d59-41a6-9165-e87eed771703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Refreshing instance network info cache due to event network-changed-8fad7e34-752f-4cb6-96a5-b506478720d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:14:52 np0005465987 nova_compute[230713]: 2025-10-02 12:14:52.382 2 DEBUG oslo_concurrency.lockutils [req-04239f33-8e21-49f0-b412-9709843baafc req-149b0b69-3d59-41a6-9165-e87eed771703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:14:52 np0005465987 nova_compute[230713]: 2025-10-02 12:14:52.383 2 DEBUG oslo_concurrency.lockutils [req-04239f33-8e21-49f0-b412-9709843baafc req-149b0b69-3d59-41a6-9165-e87eed771703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:14:52 np0005465987 nova_compute[230713]: 2025-10-02 12:14:52.383 2 DEBUG nova.network.neutron [req-04239f33-8e21-49f0-b412-9709843baafc req-149b0b69-3d59-41a6-9165-e87eed771703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Refreshing network info cache for port 8fad7e34-752f-4cb6-96a5-b506478720d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:14:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:14:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:53.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver [None req-3df48f03-d24d-44c5-92ae-a4ac5a4f3855 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 7f3fdab7-d2d5-41be-9174-42e3a9caee7b could not be found.
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 7f3fdab7-d2d5-41be-9174-42e3a9caee7b
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver 
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver 
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 7f3fdab7-d2d5-41be-9174-42e3a9caee7b could not be found.
Oct  2 08:14:53 np0005465987 nova_compute[230713]: 2025-10-02 12:14:53.393 2 ERROR nova.virt.libvirt.driver #033[00m
Oct  2 08:14:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:14:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:53.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:14:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:55.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:55 np0005465987 nova_compute[230713]: 2025-10-02 12:14:55.239 2 DEBUG nova.storage.rbd_utils [None req-3df48f03-d24d-44c5-92ae-a4ac5a4f3855 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] removing snapshot(snap) on rbd image(7f3fdab7-d2d5-41be-9174-42e3a9caee7b) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:14:55 np0005465987 nova_compute[230713]: 2025-10-02 12:14:55.573 2 DEBUG nova.network.neutron [req-04239f33-8e21-49f0-b412-9709843baafc req-149b0b69-3d59-41a6-9165-e87eed771703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updated VIF entry in instance network info cache for port 8fad7e34-752f-4cb6-96a5-b506478720d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:14:55 np0005465987 nova_compute[230713]: 2025-10-02 12:14:55.574 2 DEBUG nova.network.neutron [req-04239f33-8e21-49f0-b412-9709843baafc req-149b0b69-3d59-41a6-9165-e87eed771703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:14:55 np0005465987 nova_compute[230713]: 2025-10-02 12:14:55.617 2 DEBUG oslo_concurrency.lockutils [req-04239f33-8e21-49f0-b412-9709843baafc req-149b0b69-3d59-41a6-9165-e87eed771703 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:14:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:14:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:55.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:14:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e212 e212: 3 total, 3 up, 3 in
Oct  2 08:14:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:57.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:57.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:58Z|00191|binding|INFO|Releasing lport 595d0160-be54-4c2f-8674-117a5a5028e1 from this chassis (sb_readonly=0)
Oct  2 08:14:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:14:58Z|00192|binding|INFO|Releasing lport 544b20f9-e292-46bc-a770-180c318d534e from this chassis (sb_readonly=0)
Oct  2 08:14:58 np0005465987 nova_compute[230713]: 2025-10-02 12:14:58.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:58 np0005465987 nova_compute[230713]: 2025-10-02 12:14:58.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:14:58 np0005465987 nova_compute[230713]: 2025-10-02 12:14:58.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:14:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:14:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:59.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e213 e213: 3 total, 3 up, 3 in
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.345747) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407299345789, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 558, "num_deletes": 252, "total_data_size": 804438, "memory_usage": 815864, "flush_reason": "Manual Compaction"}
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407299387933, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 520019, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35790, "largest_seqno": 36343, "table_properties": {"data_size": 517003, "index_size": 988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7610, "raw_average_key_size": 19, "raw_value_size": 510696, "raw_average_value_size": 1336, "num_data_blocks": 43, "num_entries": 382, "num_filter_entries": 382, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407279, "oldest_key_time": 1759407279, "file_creation_time": 1759407299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 42227 microseconds, and 2980 cpu microseconds.
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.387974) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 520019 bytes OK
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.387993) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.403920) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.403968) EVENT_LOG_v1 {"time_micros": 1759407299403955, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.403997) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 801120, prev total WAL file size 801120, number of live WAL files 2.
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.404824) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(507KB)], [66(9273KB)]
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407299404867, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 10016312, "oldest_snapshot_seqno": -1}
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 5886 keys, 8185003 bytes, temperature: kUnknown
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407299587855, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 8185003, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8147172, "index_size": 22041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 151320, "raw_average_key_size": 25, "raw_value_size": 8042869, "raw_average_value_size": 1366, "num_data_blocks": 880, "num_entries": 5886, "num_filter_entries": 5886, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759407299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.588143) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8185003 bytes
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.600454) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 54.7 rd, 44.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.1 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(35.0) write-amplify(15.7) OK, records in: 6407, records dropped: 521 output_compression: NoCompression
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.600497) EVENT_LOG_v1 {"time_micros": 1759407299600481, "job": 40, "event": "compaction_finished", "compaction_time_micros": 183108, "compaction_time_cpu_micros": 36528, "output_level": 6, "num_output_files": 1, "total_output_size": 8185003, "num_input_records": 6407, "num_output_records": 5886, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407299600786, "job": 40, "event": "table_file_deletion", "file_number": 68}
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407299602569, "job": 40, "event": "table_file_deletion", "file_number": 66}
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.404755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.602629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.602635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.602637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.602639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:14:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:14:59.602641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:14:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:14:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:14:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:59.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:15:00 np0005465987 nova_compute[230713]: 2025-10-02 12:15:00.769 2 WARNING nova.compute.manager [None req-3df48f03-d24d-44c5-92ae-a4ac5a4f3855 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Image not found during snapshot: nova.exception.ImageNotFound: Image 7f3fdab7-d2d5-41be-9174-42e3a9caee7b could not be found.#033[00m
Oct  2 08:15:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:01.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:01.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:03.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:03 np0005465987 nova_compute[230713]: 2025-10-02 12:15:03.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:03 np0005465987 nova_compute[230713]: 2025-10-02 12:15:03.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e214 e214: 3 total, 3 up, 3 in
Oct  2 08:15:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:03.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.206 2 DEBUG oslo_concurrency.lockutils [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Acquiring lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.207 2 DEBUG oslo_concurrency.lockutils [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.208 2 DEBUG oslo_concurrency.lockutils [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Acquiring lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.208 2 DEBUG oslo_concurrency.lockutils [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.209 2 DEBUG oslo_concurrency.lockutils [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.210 2 INFO nova.compute.manager [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Terminating instance#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.210 2 DEBUG nova.compute.manager [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:15:04 np0005465987 kernel: tapee26d18a-45 (unregistering): left promiscuous mode
Oct  2 08:15:04 np0005465987 NetworkManager[44910]: <info>  [1759407304.3457] device (tapee26d18a-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:04 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:04Z|00193|binding|INFO|Releasing lport ee26d18a-454e-4f7a-9c50-26dda65d11de from this chassis (sb_readonly=0)
Oct  2 08:15:04 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:04Z|00194|binding|INFO|Setting lport ee26d18a-454e-4f7a-9c50-26dda65d11de down in Southbound
Oct  2 08:15:04 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:04Z|00195|binding|INFO|Removing iface tapee26d18a-45 ovn-installed in OVS
Oct  2 08:15:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:04.365 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:70:e5 10.100.0.6'], port_security=['fa:16:3e:f7:70:e5 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34395819-7251-4d97-acea-2b98c07c277f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3204d74f349d47fda3152d9d7fbea43e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e7f67ea6-7556-4acf-963b-57a7d344e510', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c7b3003-1bab-46e9-ac89-20b4b6aed5f5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ee26d18a-454e-4f7a-9c50-26dda65d11de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:15:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:04.367 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ee26d18a-454e-4f7a-9c50-26dda65d11de in datapath 34395819-7251-4d97-acea-2b98c07c277f unbound from our chassis#033[00m
Oct  2 08:15:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:04.369 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34395819-7251-4d97-acea-2b98c07c277f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:15:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:04.371 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ff44ef2c-32f8-4085-89a9-ccf7f4c45ec5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:04.371 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34395819-7251-4d97-acea-2b98c07c277f namespace which is not needed anymore#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:04 np0005465987 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Oct  2 08:15:04 np0005465987 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000003f.scope: Consumed 15.686s CPU time.
Oct  2 08:15:04 np0005465987 systemd-machined[188335]: Machine qemu-30-instance-0000003f terminated.
Oct  2 08:15:04 np0005465987 neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f[255365]: [NOTICE]   (255369) : haproxy version is 2.8.14-c23fe91
Oct  2 08:15:04 np0005465987 neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f[255365]: [NOTICE]   (255369) : path to executable is /usr/sbin/haproxy
Oct  2 08:15:04 np0005465987 neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f[255365]: [WARNING]  (255369) : Exiting Master process...
Oct  2 08:15:04 np0005465987 neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f[255365]: [ALERT]    (255369) : Current worker (255371) exited with code 143 (Terminated)
Oct  2 08:15:04 np0005465987 neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f[255365]: [WARNING]  (255369) : All workers exited. Exiting... (0)
Oct  2 08:15:04 np0005465987 systemd[1]: libpod-c144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8.scope: Deactivated successfully.
Oct  2 08:15:04 np0005465987 conmon[255365]: conmon c144c458ea8f7267372e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8.scope/container/memory.events
Oct  2 08:15:04 np0005465987 podman[256471]: 2025-10-02 12:15:04.568356001 +0000 UTC m=+0.098507667 container died c144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.645 2 INFO nova.virt.libvirt.driver [-] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Instance destroyed successfully.#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.645 2 DEBUG nova.objects.instance [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lazy-loading 'resources' on Instance uuid dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.670 2 DEBUG nova.virt.libvirt.vif [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1680809607',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1680809607',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1680809607',id=63,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3204d74f349d47fda3152d9d7fbea43e',ramdisk_id='',reservation_id='r-l61btnv2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1213912851',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1213912851-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:15:00Z,user_data=None,user_id='72c74994085d4fc697ddd4acddfa7a11',uuid=dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "address": "fa:16:3e:f7:70:e5", "network": {"id": "34395819-7251-4d97-acea-2b98c07c277f", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1049268166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3204d74f349d47fda3152d9d7fbea43e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee26d18a-45", "ovs_interfaceid": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.670 2 DEBUG nova.network.os_vif_util [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Converting VIF {"id": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "address": "fa:16:3e:f7:70:e5", "network": {"id": "34395819-7251-4d97-acea-2b98c07c277f", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1049268166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3204d74f349d47fda3152d9d7fbea43e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee26d18a-45", "ovs_interfaceid": "ee26d18a-454e-4f7a-9c50-26dda65d11de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.671 2 DEBUG nova.network.os_vif_util [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:70:e5,bridge_name='br-int',has_traffic_filtering=True,id=ee26d18a-454e-4f7a-9c50-26dda65d11de,network=Network(34395819-7251-4d97-acea-2b98c07c277f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee26d18a-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.671 2 DEBUG os_vif [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:70:e5,bridge_name='br-int',has_traffic_filtering=True,id=ee26d18a-454e-4f7a-9c50-26dda65d11de,network=Network(34395819-7251-4d97-acea-2b98c07c277f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee26d18a-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.673 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee26d18a-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.677 2 INFO os_vif [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:70:e5,bridge_name='br-int',has_traffic_filtering=True,id=ee26d18a-454e-4f7a-9c50-26dda65d11de,network=Network(34395819-7251-4d97-acea-2b98c07c277f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee26d18a-45')#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.778 2 DEBUG nova.compute.manager [req-1adc7a0d-1c97-4f7f-8238-f090eff43117 req-9a29ec0e-1199-470a-91f7-030c0aafa33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Received event network-vif-unplugged-ee26d18a-454e-4f7a-9c50-26dda65d11de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.779 2 DEBUG oslo_concurrency.lockutils [req-1adc7a0d-1c97-4f7f-8238-f090eff43117 req-9a29ec0e-1199-470a-91f7-030c0aafa33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.779 2 DEBUG oslo_concurrency.lockutils [req-1adc7a0d-1c97-4f7f-8238-f090eff43117 req-9a29ec0e-1199-470a-91f7-030c0aafa33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.779 2 DEBUG oslo_concurrency.lockutils [req-1adc7a0d-1c97-4f7f-8238-f090eff43117 req-9a29ec0e-1199-470a-91f7-030c0aafa33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.780 2 DEBUG nova.compute.manager [req-1adc7a0d-1c97-4f7f-8238-f090eff43117 req-9a29ec0e-1199-470a-91f7-030c0aafa33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] No waiting events found dispatching network-vif-unplugged-ee26d18a-454e-4f7a-9c50-26dda65d11de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:04 np0005465987 nova_compute[230713]: 2025-10-02 12:15:04.780 2 DEBUG nova.compute.manager [req-1adc7a0d-1c97-4f7f-8238-f090eff43117 req-9a29ec0e-1199-470a-91f7-030c0aafa33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Received event network-vif-unplugged-ee26d18a-454e-4f7a-9c50-26dda65d11de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:15:05 np0005465987 systemd[1]: var-lib-containers-storage-overlay-7cc1ecba775765f369d5f7311f20f4ec956847481473422c210d37d3a40e891b-merged.mount: Deactivated successfully.
Oct  2 08:15:05 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8-userdata-shm.mount: Deactivated successfully.
Oct  2 08:15:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:05.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:05 np0005465987 podman[256471]: 2025-10-02 12:15:05.390401055 +0000 UTC m=+0.920552721 container cleanup c144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:15:05 np0005465987 systemd[1]: libpod-conmon-c144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8.scope: Deactivated successfully.
Oct  2 08:15:05 np0005465987 podman[256529]: 2025-10-02 12:15:05.494143195 +0000 UTC m=+0.078755165 container remove c144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:15:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:05.501 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0159dc4c-c1ab-4df9-b5df-943a40ce76a0]: (4, ('Thu Oct  2 12:15:04 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f (c144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8)\nc144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8\nThu Oct  2 12:15:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-34395819-7251-4d97-acea-2b98c07c277f (c144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8)\nc144c458ea8f7267372ec477727f0b90954b560c156f94d6ae28ba80f15f94c8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:05.504 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e49632a9-5af9-44b8-86bd-d2d7bbe3fd2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:05.506 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34395819-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:05 np0005465987 nova_compute[230713]: 2025-10-02 12:15:05.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:05 np0005465987 kernel: tap34395819-70: left promiscuous mode
Oct  2 08:15:05 np0005465987 nova_compute[230713]: 2025-10-02 12:15:05.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:05.529 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[17d61a88-c681-4ec6-822a-b154af1b609f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:05.555 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a6135c5f-0df1-40a4-af84-750a5d24aef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:05.556 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2be42304-7a1e-480f-b400-fd3d917afc02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:05.573 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a51ec345-004d-4041-a483-ea8c8054ef47]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533899, 'reachable_time': 35623, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256544, 'error': None, 'target': 'ovnmeta-34395819-7251-4d97-acea-2b98c07c277f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:05.576 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34395819-7251-4d97-acea-2b98c07c277f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:15:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:05.576 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[de4fbfca-70d4-4dfd-861f-57fd1c3c6680]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:05 np0005465987 systemd[1]: run-netns-ovnmeta\x2d34395819\x2d7251\x2d4d97\x2dacea\x2d2b98c07c277f.mount: Deactivated successfully.
Oct  2 08:15:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:05Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:0c:3b 10.100.0.13
Oct  2 08:15:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:05Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:0c:3b 10.100.0.13
Oct  2 08:15:05 np0005465987 nova_compute[230713]: 2025-10-02 12:15:05.837 2 INFO nova.virt.libvirt.driver [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Deleting instance files /var/lib/nova/instances/dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_del#033[00m
Oct  2 08:15:05 np0005465987 nova_compute[230713]: 2025-10-02 12:15:05.838 2 INFO nova.virt.libvirt.driver [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Deletion of /var/lib/nova/instances/dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96_del complete#033[00m
Oct  2 08:15:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:05.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:06 np0005465987 nova_compute[230713]: 2025-10-02 12:15:06.411 2 INFO nova.compute.manager [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Took 2.20 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:15:06 np0005465987 nova_compute[230713]: 2025-10-02 12:15:06.412 2 DEBUG oslo.service.loopingcall [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:15:06 np0005465987 nova_compute[230713]: 2025-10-02 12:15:06.412 2 DEBUG nova.compute.manager [-] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:15:06 np0005465987 nova_compute[230713]: 2025-10-02 12:15:06.413 2 DEBUG nova.network.neutron [-] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:15:06 np0005465987 nova_compute[230713]: 2025-10-02 12:15:06.949 2 DEBUG nova.compute.manager [req-47d8306e-1182-4cac-a58b-26a17a027868 req-b49ab1dc-e058-4e1c-a9d5-a45d1245a7d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Received event network-vif-plugged-ee26d18a-454e-4f7a-9c50-26dda65d11de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:06 np0005465987 nova_compute[230713]: 2025-10-02 12:15:06.949 2 DEBUG oslo_concurrency.lockutils [req-47d8306e-1182-4cac-a58b-26a17a027868 req-b49ab1dc-e058-4e1c-a9d5-a45d1245a7d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:06 np0005465987 nova_compute[230713]: 2025-10-02 12:15:06.949 2 DEBUG oslo_concurrency.lockutils [req-47d8306e-1182-4cac-a58b-26a17a027868 req-b49ab1dc-e058-4e1c-a9d5-a45d1245a7d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:06 np0005465987 nova_compute[230713]: 2025-10-02 12:15:06.950 2 DEBUG oslo_concurrency.lockutils [req-47d8306e-1182-4cac-a58b-26a17a027868 req-b49ab1dc-e058-4e1c-a9d5-a45d1245a7d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:06 np0005465987 nova_compute[230713]: 2025-10-02 12:15:06.950 2 DEBUG nova.compute.manager [req-47d8306e-1182-4cac-a58b-26a17a027868 req-b49ab1dc-e058-4e1c-a9d5-a45d1245a7d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] No waiting events found dispatching network-vif-plugged-ee26d18a-454e-4f7a-9c50-26dda65d11de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:06 np0005465987 nova_compute[230713]: 2025-10-02 12:15:06.950 2 WARNING nova.compute.manager [req-47d8306e-1182-4cac-a58b-26a17a027868 req-b49ab1dc-e058-4e1c-a9d5-a45d1245a7d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Received unexpected event network-vif-plugged-ee26d18a-454e-4f7a-9c50-26dda65d11de for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:15:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:07.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:07.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:08 np0005465987 nova_compute[230713]: 2025-10-02 12:15:08.290 2 DEBUG nova.network.neutron [-] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:08 np0005465987 nova_compute[230713]: 2025-10-02 12:15:08.309 2 INFO nova.compute.manager [-] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Took 1.90 seconds to deallocate network for instance.#033[00m
Oct  2 08:15:08 np0005465987 nova_compute[230713]: 2025-10-02 12:15:08.359 2 DEBUG oslo_concurrency.lockutils [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:08 np0005465987 nova_compute[230713]: 2025-10-02 12:15:08.360 2 DEBUG oslo_concurrency.lockutils [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:08 np0005465987 nova_compute[230713]: 2025-10-02 12:15:08.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:08 np0005465987 nova_compute[230713]: 2025-10-02 12:15:08.432 2 DEBUG oslo_concurrency.processutils [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:08 np0005465987 nova_compute[230713]: 2025-10-02 12:15:08.729 2 DEBUG nova.compute.manager [req-11f05969-1e04-4142-b629-cfec0365b0fc req-e952da83-f687-4a8c-b0f4-d3dc3567146b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Received event network-vif-deleted-ee26d18a-454e-4f7a-9c50-26dda65d11de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1171195345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:08 np0005465987 nova_compute[230713]: 2025-10-02 12:15:08.902 2 DEBUG oslo_concurrency.processutils [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:08 np0005465987 nova_compute[230713]: 2025-10-02 12:15:08.908 2 DEBUG nova.compute.provider_tree [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:15:08 np0005465987 nova_compute[230713]: 2025-10-02 12:15:08.933 2 DEBUG nova.scheduler.client.report [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:15:08 np0005465987 nova_compute[230713]: 2025-10-02 12:15:08.967 2 DEBUG oslo_concurrency.lockutils [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:08 np0005465987 nova_compute[230713]: 2025-10-02 12:15:08.998 2 INFO nova.scheduler.client.report [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Deleted allocations for instance dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96#033[00m
Oct  2 08:15:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:09.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:09 np0005465987 nova_compute[230713]: 2025-10-02 12:15:09.082 2 DEBUG oslo_concurrency.lockutils [None req-71f94fca-cce3-415a-b927-f974f8d7551e 72c74994085d4fc697ddd4acddfa7a11 3204d74f349d47fda3152d9d7fbea43e - - default default] Lock "dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:09 np0005465987 nova_compute[230713]: 2025-10-02 12:15:09.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:09.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:10 np0005465987 nova_compute[230713]: 2025-10-02 12:15:10.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:11.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:11.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000054s ======
Oct  2 08:15:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:13.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct  2 08:15:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:13 np0005465987 nova_compute[230713]: 2025-10-02 12:15:13.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:13 np0005465987 podman[256568]: 2025-10-02 12:15:13.850884179 +0000 UTC m=+0.068481292 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:15:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:13.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:14 np0005465987 nova_compute[230713]: 2025-10-02 12:15:14.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:14 np0005465987 nova_compute[230713]: 2025-10-02 12:15:14.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:15.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:15:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:15.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:15:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:15:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:17.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:15:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:17.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:18 np0005465987 nova_compute[230713]: 2025-10-02 12:15:18.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:19.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:19 np0005465987 nova_compute[230713]: 2025-10-02 12:15:19.644 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407304.642826, dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:19 np0005465987 nova_compute[230713]: 2025-10-02 12:15:19.644 2 INFO nova.compute.manager [-] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:15:19 np0005465987 nova_compute[230713]: 2025-10-02 12:15:19.665 2 DEBUG nova.compute.manager [None req-d0e1fc6a-22b1-4673-a549-ab8509f1eaa8 - - - - - -] [instance: dfaa589c-eb92-4e0f-b1ad-7b4004f0cb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:19 np0005465987 nova_compute[230713]: 2025-10-02 12:15:19.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:19.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:21.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:21.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:22 np0005465987 podman[256589]: 2025-10-02 12:15:22.846610619 +0000 UTC m=+0.053125501 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:15:22 np0005465987 podman[256588]: 2025-10-02 12:15:22.851367639 +0000 UTC m=+0.060373169 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:15:22 np0005465987 podman[256587]: 2025-10-02 12:15:22.861493127 +0000 UTC m=+0.076036570 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 08:15:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:23.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:23 np0005465987 nova_compute[230713]: 2025-10-02 12:15:23.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:23.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:24Z|00196|binding|INFO|Releasing lport 544b20f9-e292-46bc-a770-180c318d534e from this chassis (sb_readonly=0)
Oct  2 08:15:24 np0005465987 nova_compute[230713]: 2025-10-02 12:15:24.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:24 np0005465987 nova_compute[230713]: 2025-10-02 12:15:24.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:25.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:25 np0005465987 nova_compute[230713]: 2025-10-02 12:15:25.205 2 DEBUG oslo_concurrency.lockutils [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "interface-a7e79d88-7a39-4a1a-94de-77b469dd67e2-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:25 np0005465987 nova_compute[230713]: 2025-10-02 12:15:25.206 2 DEBUG oslo_concurrency.lockutils [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-a7e79d88-7a39-4a1a-94de-77b469dd67e2-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:25 np0005465987 nova_compute[230713]: 2025-10-02 12:15:25.206 2 DEBUG nova.objects.instance [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'flavor' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:25 np0005465987 nova_compute[230713]: 2025-10-02 12:15:25.233 2 DEBUG nova.objects.instance [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'pci_requests' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:25 np0005465987 nova_compute[230713]: 2025-10-02 12:15:25.253 2 DEBUG nova.network.neutron [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:15:25 np0005465987 nova_compute[230713]: 2025-10-02 12:15:25.819 2 DEBUG nova.policy [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e627e098f2b465d97099fee8f489b71', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:15:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:25.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:26 np0005465987 nova_compute[230713]: 2025-10-02 12:15:26.829 2 DEBUG nova.network.neutron [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Successfully created port: 07033da6-8cf1-4845-b0f0-525229bc100d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:15:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:27.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:27.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:27.942 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:27.943 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:27.943 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:28 np0005465987 nova_compute[230713]: 2025-10-02 12:15:28.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:28.754 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:15:28 np0005465987 nova_compute[230713]: 2025-10-02 12:15:28.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:28.755 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:15:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:15:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:29.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:15:29 np0005465987 nova_compute[230713]: 2025-10-02 12:15:29.426 2 DEBUG nova.network.neutron [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Successfully updated port: 07033da6-8cf1-4845-b0f0-525229bc100d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:15:29 np0005465987 nova_compute[230713]: 2025-10-02 12:15:29.451 2 DEBUG oslo_concurrency.lockutils [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:29 np0005465987 nova_compute[230713]: 2025-10-02 12:15:29.452 2 DEBUG oslo_concurrency.lockutils [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquired lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:15:29 np0005465987 nova_compute[230713]: 2025-10-02 12:15:29.452 2 DEBUG nova.network.neutron [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:15:29 np0005465987 nova_compute[230713]: 2025-10-02 12:15:29.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:29.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:30 np0005465987 nova_compute[230713]: 2025-10-02 12:15:30.924 2 WARNING nova.network.neutron [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] ee114210-598c-482f-83c7-26c3363a45c4 already exists in list: networks containing: ['ee114210-598c-482f-83c7-26c3363a45c4']. ignoring it#033[00m
Oct  2 08:15:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:15:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:31.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:15:31 np0005465987 nova_compute[230713]: 2025-10-02 12:15:31.292 2 DEBUG nova.compute.manager [req-55234c1e-2f7d-4414-9182-9a9c682ddaab req-344cb8be-c68e-4968-bd7c-f71107c7c97e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-changed-07033da6-8cf1-4845-b0f0-525229bc100d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:31 np0005465987 nova_compute[230713]: 2025-10-02 12:15:31.293 2 DEBUG nova.compute.manager [req-55234c1e-2f7d-4414-9182-9a9c682ddaab req-344cb8be-c68e-4968-bd7c-f71107c7c97e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Refreshing instance network info cache due to event network-changed-07033da6-8cf1-4845-b0f0-525229bc100d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:15:31 np0005465987 nova_compute[230713]: 2025-10-02 12:15:31.293 2 DEBUG oslo_concurrency.lockutils [req-55234c1e-2f7d-4414-9182-9a9c682ddaab req-344cb8be-c68e-4968-bd7c-f71107c7c97e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:31.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:33.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:33 np0005465987 nova_compute[230713]: 2025-10-02 12:15:33.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:33.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.342 2 DEBUG nova.network.neutron [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.365 2 DEBUG oslo_concurrency.lockutils [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Releasing lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.366 2 DEBUG oslo_concurrency.lockutils [req-55234c1e-2f7d-4414-9182-9a9c682ddaab req-344cb8be-c68e-4968-bd7c-f71107c7c97e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.367 2 DEBUG nova.network.neutron [req-55234c1e-2f7d-4414-9182-9a9c682ddaab req-344cb8be-c68e-4968-bd7c-f71107c7c97e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Refreshing network info cache for port 07033da6-8cf1-4845-b0f0-525229bc100d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.370 2 DEBUG nova.virt.libvirt.vif [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.371 2 DEBUG nova.network.os_vif_util [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.371 2 DEBUG nova.network.os_vif_util [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:f3:f7,bridge_name='br-int',has_traffic_filtering=True,id=07033da6-8cf1-4845-b0f0-525229bc100d,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07033da6-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.372 2 DEBUG os_vif [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:f3:f7,bridge_name='br-int',has_traffic_filtering=True,id=07033da6-8cf1-4845-b0f0-525229bc100d,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07033da6-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07033da6-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07033da6-8c, col_values=(('external_ids', {'iface-id': '07033da6-8cf1-4845-b0f0-525229bc100d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:f3:f7', 'vm-uuid': 'a7e79d88-7a39-4a1a-94de-77b469dd67e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:34 np0005465987 NetworkManager[44910]: <info>  [1759407334.3797] manager: (tap07033da6-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.386 2 INFO os_vif [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:f3:f7,bridge_name='br-int',has_traffic_filtering=True,id=07033da6-8cf1-4845-b0f0-525229bc100d,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07033da6-8c')#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.387 2 DEBUG nova.virt.libvirt.vif [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.387 2 DEBUG nova.network.os_vif_util [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.388 2 DEBUG nova.network.os_vif_util [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:f3:f7,bridge_name='br-int',has_traffic_filtering=True,id=07033da6-8cf1-4845-b0f0-525229bc100d,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07033da6-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.391 2 DEBUG nova.virt.libvirt.guest [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] attach device xml: <interface type="ethernet">
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:86:f3:f7"/>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  <target dev="tap07033da6-8c"/>
Oct  2 08:15:34 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:15:34 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:15:34 np0005465987 kernel: tap07033da6-8c: entered promiscuous mode
Oct  2 08:15:34 np0005465987 NetworkManager[44910]: <info>  [1759407334.4076] manager: (tap07033da6-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Oct  2 08:15:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:34Z|00197|binding|INFO|Claiming lport 07033da6-8cf1-4845-b0f0-525229bc100d for this chassis.
Oct  2 08:15:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:34Z|00198|binding|INFO|07033da6-8cf1-4845-b0f0-525229bc100d: Claiming fa:16:3e:86:f3:f7 10.100.0.8
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:34Z|00199|binding|INFO|Setting lport 07033da6-8cf1-4845-b0f0-525229bc100d ovn-installed in OVS
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:34Z|00200|binding|INFO|Setting lport 07033da6-8cf1-4845-b0f0-525229bc100d up in Southbound
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.431 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:f3:f7 10.100.0.8'], port_security=['fa:16:3e:86:f3:f7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a7e79d88-7a39-4a1a-94de-77b469dd67e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20d10bb5-2bfc-4d22-a5af-27378413539d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=07033da6-8cf1-4845-b0f0-525229bc100d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.432 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 07033da6-8cf1-4845-b0f0-525229bc100d in datapath ee114210-598c-482f-83c7-26c3363a45c4 bound to our chassis#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.433 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee114210-598c-482f-83c7-26c3363a45c4#033[00m
Oct  2 08:15:34 np0005465987 systemd-udevd[256660]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.451 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e41e86-30ea-4f52-b387-86edfdf32510]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:34 np0005465987 NetworkManager[44910]: <info>  [1759407334.4555] device (tap07033da6-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:15:34 np0005465987 NetworkManager[44910]: <info>  [1759407334.4564] device (tap07033da6-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.482 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e468241f-6633-4329-b072-05ccb7a8e8a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.485 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[7f92f6a9-b7b4-4fc1-84e5-b827805e5180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.509 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6c2181-d811-4708-86be-4060ff7b6c1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.524 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[badb3a30-3036-4451-b699-b4ec40f2a13c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535511, 'reachable_time': 23320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256668, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.537 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a9773124-49a1-4ca8-b061-ca78a2719434]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535521, 'tstamp': 535521}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256669, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535523, 'tstamp': 535523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256669, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.539 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.585 2 DEBUG nova.virt.libvirt.driver [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.584 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee114210-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.584 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.584 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee114210-50, col_values=(('external_ids', {'iface-id': '544b20f9-e292-46bc-a770-180c318d534e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:34.585 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.585 2 DEBUG nova.virt.libvirt.driver [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.585 2 DEBUG nova.virt.libvirt.driver [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:53:0c:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.585 2 DEBUG nova.virt.libvirt.driver [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:86:f3:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.613 2 DEBUG nova.virt.libvirt.guest [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1826636369</nova:name>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:15:34</nova:creationTime>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:15:34 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:    <nova:port uuid="8fad7e34-752f-4cb6-96a5-b506478720d3">
Oct  2 08:15:34 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:    <nova:port uuid="07033da6-8cf1-4845-b0f0-525229bc100d">
Oct  2 08:15:34 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:15:34 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:15:34 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:15:34 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:15:34 np0005465987 nova_compute[230713]: 2025-10-02 12:15:34.648 2 DEBUG oslo_concurrency.lockutils [None req-2b349610-2eea-4f9b-ad15-df1da7c4cf9d 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-a7e79d88-7a39-4a1a-94de-77b469dd67e2-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e215 e215: 3 total, 3 up, 3 in
Oct  2 08:15:35 np0005465987 nova_compute[230713]: 2025-10-02 12:15:35.042 2 DEBUG nova.compute.manager [req-97b673ee-15d6-4e29-9ee1-6c631aefb737 req-d260f0cd-d692-4108-abec-929f2a852564 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-plugged-07033da6-8cf1-4845-b0f0-525229bc100d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:35 np0005465987 nova_compute[230713]: 2025-10-02 12:15:35.042 2 DEBUG oslo_concurrency.lockutils [req-97b673ee-15d6-4e29-9ee1-6c631aefb737 req-d260f0cd-d692-4108-abec-929f2a852564 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:35 np0005465987 nova_compute[230713]: 2025-10-02 12:15:35.042 2 DEBUG oslo_concurrency.lockutils [req-97b673ee-15d6-4e29-9ee1-6c631aefb737 req-d260f0cd-d692-4108-abec-929f2a852564 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:35 np0005465987 nova_compute[230713]: 2025-10-02 12:15:35.043 2 DEBUG oslo_concurrency.lockutils [req-97b673ee-15d6-4e29-9ee1-6c631aefb737 req-d260f0cd-d692-4108-abec-929f2a852564 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:35 np0005465987 nova_compute[230713]: 2025-10-02 12:15:35.043 2 DEBUG nova.compute.manager [req-97b673ee-15d6-4e29-9ee1-6c631aefb737 req-d260f0cd-d692-4108-abec-929f2a852564 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] No waiting events found dispatching network-vif-plugged-07033da6-8cf1-4845-b0f0-525229bc100d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:35 np0005465987 nova_compute[230713]: 2025-10-02 12:15:35.043 2 WARNING nova.compute.manager [req-97b673ee-15d6-4e29-9ee1-6c631aefb737 req-d260f0cd-d692-4108-abec-929f2a852564 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received unexpected event network-vif-plugged-07033da6-8cf1-4845-b0f0-525229bc100d for instance with vm_state active and task_state None.#033[00m
Oct  2 08:15:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:35.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:35Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:f3:f7 10.100.0.8
Oct  2 08:15:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:35Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:f3:f7 10.100.0.8
Oct  2 08:15:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:35.757 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:35 np0005465987 nova_compute[230713]: 2025-10-02 12:15:35.789 2 DEBUG oslo_concurrency.lockutils [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "interface-a7e79d88-7a39-4a1a-94de-77b469dd67e2-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:35 np0005465987 nova_compute[230713]: 2025-10-02 12:15:35.789 2 DEBUG oslo_concurrency.lockutils [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-a7e79d88-7a39-4a1a-94de-77b469dd67e2-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:35 np0005465987 nova_compute[230713]: 2025-10-02 12:15:35.790 2 DEBUG nova.objects.instance [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'flavor' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:35.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:36 np0005465987 nova_compute[230713]: 2025-10-02 12:15:36.509 2 DEBUG nova.objects.instance [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'pci_requests' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:36 np0005465987 nova_compute[230713]: 2025-10-02 12:15:36.511 2 DEBUG nova.network.neutron [req-55234c1e-2f7d-4414-9182-9a9c682ddaab req-344cb8be-c68e-4968-bd7c-f71107c7c97e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updated VIF entry in instance network info cache for port 07033da6-8cf1-4845-b0f0-525229bc100d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:15:36 np0005465987 nova_compute[230713]: 2025-10-02 12:15:36.512 2 DEBUG nova.network.neutron [req-55234c1e-2f7d-4414-9182-9a9c682ddaab req-344cb8be-c68e-4968-bd7c-f71107c7c97e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:36 np0005465987 nova_compute[230713]: 2025-10-02 12:15:36.531 2 DEBUG nova.network.neutron [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:15:36 np0005465987 nova_compute[230713]: 2025-10-02 12:15:36.537 2 DEBUG oslo_concurrency.lockutils [req-55234c1e-2f7d-4414-9182-9a9c682ddaab req-344cb8be-c68e-4968-bd7c-f71107c7c97e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:15:37 np0005465987 nova_compute[230713]: 2025-10-02 12:15:37.030 2 DEBUG nova.policy [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e627e098f2b465d97099fee8f489b71', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:15:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:37.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:37 np0005465987 nova_compute[230713]: 2025-10-02 12:15:37.150 2 DEBUG nova.compute.manager [req-79d775ad-12bb-4c81-b113-7a1da8ed14d1 req-627c8f71-4968-4ff5-8e5b-8a4caabb813d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-plugged-07033da6-8cf1-4845-b0f0-525229bc100d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:37 np0005465987 nova_compute[230713]: 2025-10-02 12:15:37.151 2 DEBUG oslo_concurrency.lockutils [req-79d775ad-12bb-4c81-b113-7a1da8ed14d1 req-627c8f71-4968-4ff5-8e5b-8a4caabb813d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:37 np0005465987 nova_compute[230713]: 2025-10-02 12:15:37.152 2 DEBUG oslo_concurrency.lockutils [req-79d775ad-12bb-4c81-b113-7a1da8ed14d1 req-627c8f71-4968-4ff5-8e5b-8a4caabb813d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:37 np0005465987 nova_compute[230713]: 2025-10-02 12:15:37.152 2 DEBUG oslo_concurrency.lockutils [req-79d775ad-12bb-4c81-b113-7a1da8ed14d1 req-627c8f71-4968-4ff5-8e5b-8a4caabb813d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:37 np0005465987 nova_compute[230713]: 2025-10-02 12:15:37.152 2 DEBUG nova.compute.manager [req-79d775ad-12bb-4c81-b113-7a1da8ed14d1 req-627c8f71-4968-4ff5-8e5b-8a4caabb813d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] No waiting events found dispatching network-vif-plugged-07033da6-8cf1-4845-b0f0-525229bc100d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:37 np0005465987 nova_compute[230713]: 2025-10-02 12:15:37.152 2 WARNING nova.compute.manager [req-79d775ad-12bb-4c81-b113-7a1da8ed14d1 req-627c8f71-4968-4ff5-8e5b-8a4caabb813d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received unexpected event network-vif-plugged-07033da6-8cf1-4845-b0f0-525229bc100d for instance with vm_state active and task_state None.#033[00m
Oct  2 08:15:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e216 e216: 3 total, 3 up, 3 in
Oct  2 08:15:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:37.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:38 np0005465987 nova_compute[230713]: 2025-10-02 12:15:38.225 2 DEBUG nova.network.neutron [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Successfully created port: e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:15:38 np0005465987 nova_compute[230713]: 2025-10-02 12:15:38.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:38 np0005465987 nova_compute[230713]: 2025-10-02 12:15:38.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:39.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:39 np0005465987 nova_compute[230713]: 2025-10-02 12:15:39.209 2 DEBUG nova.network.neutron [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Successfully updated port: e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:15:39 np0005465987 nova_compute[230713]: 2025-10-02 12:15:39.227 2 DEBUG oslo_concurrency.lockutils [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:39 np0005465987 nova_compute[230713]: 2025-10-02 12:15:39.227 2 DEBUG oslo_concurrency.lockutils [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquired lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:15:39 np0005465987 nova_compute[230713]: 2025-10-02 12:15:39.227 2 DEBUG nova.network.neutron [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:15:39 np0005465987 nova_compute[230713]: 2025-10-02 12:15:39.328 2 DEBUG nova.compute.manager [req-2c0b330f-af1c-47c8-baa9-99c9f37794cc req-67b0e215-8236-4853-8914-56150de57d10 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-changed-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:39 np0005465987 nova_compute[230713]: 2025-10-02 12:15:39.328 2 DEBUG nova.compute.manager [req-2c0b330f-af1c-47c8-baa9-99c9f37794cc req-67b0e215-8236-4853-8914-56150de57d10 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Refreshing instance network info cache due to event network-changed-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:15:39 np0005465987 nova_compute[230713]: 2025-10-02 12:15:39.328 2 DEBUG oslo_concurrency.lockutils [req-2c0b330f-af1c-47c8-baa9-99c9f37794cc req-67b0e215-8236-4853-8914-56150de57d10 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:39 np0005465987 nova_compute[230713]: 2025-10-02 12:15:39.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:39 np0005465987 nova_compute[230713]: 2025-10-02 12:15:39.410 2 WARNING nova.network.neutron [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] ee114210-598c-482f-83c7-26c3363a45c4 already exists in list: networks containing: ['ee114210-598c-482f-83c7-26c3363a45c4']. ignoring it#033[00m
Oct  2 08:15:39 np0005465987 nova_compute[230713]: 2025-10-02 12:15:39.410 2 WARNING nova.network.neutron [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] ee114210-598c-482f-83c7-26c3363a45c4 already exists in list: networks containing: ['ee114210-598c-482f-83c7-26c3363a45c4']. ignoring it#033[00m
Oct  2 08:15:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e217 e217: 3 total, 3 up, 3 in
Oct  2 08:15:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:39.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:39 np0005465987 nova_compute[230713]: 2025-10-02 12:15:39.967 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:39 np0005465987 nova_compute[230713]: 2025-10-02 12:15:39.968 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:15:39 np0005465987 nova_compute[230713]: 2025-10-02 12:15:39.968 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:15:40 np0005465987 nova_compute[230713]: 2025-10-02 12:15:40.252 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:41.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:41.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e218 e218: 3 total, 3 up, 3 in
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.699 2 DEBUG nova.network.neutron [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.749 2 DEBUG oslo_concurrency.lockutils [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Releasing lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.750 2 DEBUG oslo_concurrency.lockutils [req-2c0b330f-af1c-47c8-baa9-99c9f37794cc req-67b0e215-8236-4853-8914-56150de57d10 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.750 2 DEBUG nova.network.neutron [req-2c0b330f-af1c-47c8-baa9-99c9f37794cc req-67b0e215-8236-4853-8914-56150de57d10 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Refreshing network info cache for port e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.753 2 DEBUG nova.virt.libvirt.vif [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.753 2 DEBUG nova.network.os_vif_util [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.754 2 DEBUG nova.network.os_vif_util [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:ee:eb,bridge_name='br-int',has_traffic_filtering=True,id=e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5e52b7f-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.754 2 DEBUG os_vif [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:ee:eb,bridge_name='br-int',has_traffic_filtering=True,id=e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5e52b7f-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.757 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5e52b7f-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.758 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape5e52b7f-1b, col_values=(('external_ids', {'iface-id': 'e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:ee:eb', 'vm-uuid': 'a7e79d88-7a39-4a1a-94de-77b469dd67e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:42 np0005465987 NetworkManager[44910]: <info>  [1759407342.7603] manager: (tape5e52b7f-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.766 2 INFO os_vif [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:ee:eb,bridge_name='br-int',has_traffic_filtering=True,id=e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5e52b7f-1b')#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.767 2 DEBUG nova.virt.libvirt.vif [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.767 2 DEBUG nova.network.os_vif_util [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.768 2 DEBUG nova.network.os_vif_util [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:ee:eb,bridge_name='br-int',has_traffic_filtering=True,id=e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5e52b7f-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.772 2 DEBUG nova.virt.libvirt.guest [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] attach device xml: <interface type="ethernet">
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:77:ee:eb"/>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  <target dev="tape5e52b7f-1b"/>
Oct  2 08:15:42 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:15:42 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:15:42 np0005465987 NetworkManager[44910]: <info>  [1759407342.7836] manager: (tape5e52b7f-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Oct  2 08:15:42 np0005465987 kernel: tape5e52b7f-1b: entered promiscuous mode
Oct  2 08:15:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:42Z|00201|binding|INFO|Claiming lport e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 for this chassis.
Oct  2 08:15:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:42Z|00202|binding|INFO|e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1: Claiming fa:16:3e:77:ee:eb 10.100.0.11
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.799 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:ee:eb 10.100.0.11'], port_security=['fa:16:3e:77:ee:eb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a7e79d88-7a39-4a1a-94de-77b469dd67e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20d10bb5-2bfc-4d22-a5af-27378413539d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.800 138325 INFO neutron.agent.ovn.metadata.agent [-] Port e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 in datapath ee114210-598c-482f-83c7-26c3363a45c4 bound to our chassis#033[00m
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.802 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee114210-598c-482f-83c7-26c3363a45c4#033[00m
Oct  2 08:15:42 np0005465987 systemd-udevd[256678]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:15:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:42Z|00203|binding|INFO|Setting lport e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 ovn-installed in OVS
Oct  2 08:15:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:42Z|00204|binding|INFO|Setting lport e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 up in Southbound
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.827 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[037fa210-3add-4fc2-ae33-44f2d0b6578b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:42 np0005465987 NetworkManager[44910]: <info>  [1759407342.8313] device (tape5e52b7f-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:15:42 np0005465987 NetworkManager[44910]: <info>  [1759407342.8321] device (tape5e52b7f-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.856 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[75bb8a52-7636-4255-a66a-a81db65207d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.859 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[bb42d6e4-8ad9-4489-9164-268143b47e24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.891 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[409ba595-71b5-4cee-8557-5cdf2dab5e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.908 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1451458b-23b0-4fcd-a8a3-6475341f54b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535511, 'reachable_time': 23320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256685, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.925 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[41c43156-6f05-4013-8ef1-de30c880ffc9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535521, 'tstamp': 535521}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256686, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535523, 'tstamp': 535523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256686, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.926 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.929 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee114210-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.930 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.930 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee114210-50, col_values=(('external_ids', {'iface-id': '544b20f9-e292-46bc-a770-180c318d534e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:15:42.931 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.930 2 DEBUG nova.virt.libvirt.driver [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.931 2 DEBUG nova.virt.libvirt.driver [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.931 2 DEBUG nova.virt.libvirt.driver [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:53:0c:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.931 2 DEBUG nova.virt.libvirt.driver [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:86:f3:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.931 2 DEBUG nova.virt.libvirt.driver [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:77:ee:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.966 2 DEBUG nova.virt.libvirt.guest [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1826636369</nova:name>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:15:42</nova:creationTime>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:15:42 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:    <nova:port uuid="8fad7e34-752f-4cb6-96a5-b506478720d3">
Oct  2 08:15:42 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:    <nova:port uuid="07033da6-8cf1-4845-b0f0-525229bc100d">
Oct  2 08:15:42 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:    <nova:port uuid="e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1">
Oct  2 08:15:42 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:15:42 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:15:42 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:15:42 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:15:42 np0005465987 nova_compute[230713]: 2025-10-02 12:15:42.997 2 DEBUG oslo_concurrency.lockutils [None req-a0cd8c9d-63d0-453a-b33a-1e5f8ada583e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-a7e79d88-7a39-4a1a-94de-77b469dd67e2-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:43.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:43 np0005465987 nova_compute[230713]: 2025-10-02 12:15:43.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e219 e219: 3 total, 3 up, 3 in
Oct  2 08:15:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:43.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:44 np0005465987 nova_compute[230713]: 2025-10-02 12:15:44.587 2 DEBUG nova.network.neutron [req-2c0b330f-af1c-47c8-baa9-99c9f37794cc req-67b0e215-8236-4853-8914-56150de57d10 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updated VIF entry in instance network info cache for port e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:15:44 np0005465987 nova_compute[230713]: 2025-10-02 12:15:44.587 2 DEBUG nova.network.neutron [req-2c0b330f-af1c-47c8-baa9-99c9f37794cc req-67b0e215-8236-4853-8914-56150de57d10 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:44 np0005465987 nova_compute[230713]: 2025-10-02 12:15:44.604 2 DEBUG oslo_concurrency.lockutils [req-2c0b330f-af1c-47c8-baa9-99c9f37794cc req-67b0e215-8236-4853-8914-56150de57d10 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:15:44 np0005465987 nova_compute[230713]: 2025-10-02 12:15:44.605 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:15:44 np0005465987 nova_compute[230713]: 2025-10-02 12:15:44.605 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:15:44 np0005465987 nova_compute[230713]: 2025-10-02 12:15:44.605 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:44 np0005465987 podman[256687]: 2025-10-02 12:15:44.85079985 +0000 UTC m=+0.065687145 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:15:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:44Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:ee:eb 10.100.0.11
Oct  2 08:15:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:44Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:ee:eb 10.100.0.11
Oct  2 08:15:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:45.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:15:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:45.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:15:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:15:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:47.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:15:47 np0005465987 nova_compute[230713]: 2025-10-02 12:15:47.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:47.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:48 np0005465987 nova_compute[230713]: 2025-10-02 12:15:48.379 2 DEBUG nova.compute.manager [req-213e607c-6ff6-4c12-89e5-9551889d4f5e req-c7e06a18-845a-44c6-9c51-6d0e52da416f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-plugged-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:48 np0005465987 nova_compute[230713]: 2025-10-02 12:15:48.380 2 DEBUG oslo_concurrency.lockutils [req-213e607c-6ff6-4c12-89e5-9551889d4f5e req-c7e06a18-845a-44c6-9c51-6d0e52da416f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:48 np0005465987 nova_compute[230713]: 2025-10-02 12:15:48.380 2 DEBUG oslo_concurrency.lockutils [req-213e607c-6ff6-4c12-89e5-9551889d4f5e req-c7e06a18-845a-44c6-9c51-6d0e52da416f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:48 np0005465987 nova_compute[230713]: 2025-10-02 12:15:48.380 2 DEBUG oslo_concurrency.lockutils [req-213e607c-6ff6-4c12-89e5-9551889d4f5e req-c7e06a18-845a-44c6-9c51-6d0e52da416f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:48 np0005465987 nova_compute[230713]: 2025-10-02 12:15:48.381 2 DEBUG nova.compute.manager [req-213e607c-6ff6-4c12-89e5-9551889d4f5e req-c7e06a18-845a-44c6-9c51-6d0e52da416f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] No waiting events found dispatching network-vif-plugged-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:48 np0005465987 nova_compute[230713]: 2025-10-02 12:15:48.381 2 WARNING nova.compute.manager [req-213e607c-6ff6-4c12-89e5-9551889d4f5e req-c7e06a18-845a-44c6-9c51-6d0e52da416f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received unexpected event network-vif-plugged-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:15:48 np0005465987 nova_compute[230713]: 2025-10-02 12:15:48.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e220 e220: 3 total, 3 up, 3 in
Oct  2 08:15:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:49.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:49 np0005465987 nova_compute[230713]: 2025-10-02 12:15:49.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:49 np0005465987 nova_compute[230713]: 2025-10-02 12:15:49.732 2 DEBUG oslo_concurrency.lockutils [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "interface-a7e79d88-7a39-4a1a-94de-77b469dd67e2-92dcc2e6-3d83-4983-aa84-91b164c9dac4" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:49 np0005465987 nova_compute[230713]: 2025-10-02 12:15:49.732 2 DEBUG oslo_concurrency.lockutils [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-a7e79d88-7a39-4a1a-94de-77b469dd67e2-92dcc2e6-3d83-4983-aa84-91b164c9dac4" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:49 np0005465987 nova_compute[230713]: 2025-10-02 12:15:49.733 2 DEBUG nova.objects.instance [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'flavor' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:15:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:49.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.484 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.500 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.501 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.501 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.502 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.502 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.503 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.503 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.504 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.504 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.504 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.583 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.583 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.584 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.584 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.585 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.622 2 DEBUG nova.compute.manager [req-f319d67f-5e8f-4baa-941e-ca6d4b41c26e req-1b4b9ae9-1e90-438b-bdb2-435e9822756a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-plugged-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.623 2 DEBUG oslo_concurrency.lockutils [req-f319d67f-5e8f-4baa-941e-ca6d4b41c26e req-1b4b9ae9-1e90-438b-bdb2-435e9822756a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.624 2 DEBUG oslo_concurrency.lockutils [req-f319d67f-5e8f-4baa-941e-ca6d4b41c26e req-1b4b9ae9-1e90-438b-bdb2-435e9822756a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.624 2 DEBUG oslo_concurrency.lockutils [req-f319d67f-5e8f-4baa-941e-ca6d4b41c26e req-1b4b9ae9-1e90-438b-bdb2-435e9822756a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.625 2 DEBUG nova.compute.manager [req-f319d67f-5e8f-4baa-941e-ca6d4b41c26e req-1b4b9ae9-1e90-438b-bdb2-435e9822756a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] No waiting events found dispatching network-vif-plugged-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:50 np0005465987 nova_compute[230713]: 2025-10-02 12:15:50.625 2 WARNING nova.compute.manager [req-f319d67f-5e8f-4baa-941e-ca6d4b41c26e req-1b4b9ae9-1e90-438b-bdb2-435e9822756a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received unexpected event network-vif-plugged-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:15:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:15:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:15:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:15:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:15:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3937590854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:51 np0005465987 nova_compute[230713]: 2025-10-02 12:15:51.141 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:15:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:51.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:15:51 np0005465987 nova_compute[230713]: 2025-10-02 12:15:51.211 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:51 np0005465987 nova_compute[230713]: 2025-10-02 12:15:51.212 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:51 np0005465987 nova_compute[230713]: 2025-10-02 12:15:51.400 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:15:51 np0005465987 nova_compute[230713]: 2025-10-02 12:15:51.401 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4499MB free_disk=20.942665100097656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:15:51 np0005465987 nova_compute[230713]: 2025-10-02 12:15:51.401 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:51 np0005465987 nova_compute[230713]: 2025-10-02 12:15:51.402 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:51 np0005465987 nova_compute[230713]: 2025-10-02 12:15:51.501 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance a7e79d88-7a39-4a1a-94de-77b469dd67e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:15:51 np0005465987 nova_compute[230713]: 2025-10-02 12:15:51.502 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:15:51 np0005465987 nova_compute[230713]: 2025-10-02 12:15:51.502 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:15:51 np0005465987 nova_compute[230713]: 2025-10-02 12:15:51.539 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:15:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:15:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:15:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:15:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:51.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/225370352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:52 np0005465987 nova_compute[230713]: 2025-10-02 12:15:52.006 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:52 np0005465987 nova_compute[230713]: 2025-10-02 12:15:52.012 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:15:52 np0005465987 nova_compute[230713]: 2025-10-02 12:15:52.035 2 DEBUG nova.objects.instance [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'pci_requests' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:52 np0005465987 nova_compute[230713]: 2025-10-02 12:15:52.037 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:15:52 np0005465987 nova_compute[230713]: 2025-10-02 12:15:52.057 2 DEBUG nova.network.neutron [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:15:52 np0005465987 nova_compute[230713]: 2025-10-02 12:15:52.061 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:15:52 np0005465987 nova_compute[230713]: 2025-10-02 12:15:52.062 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:52 np0005465987 nova_compute[230713]: 2025-10-02 12:15:52.642 2 DEBUG nova.policy [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e627e098f2b465d97099fee8f489b71', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:15:52 np0005465987 nova_compute[230713]: 2025-10-02 12:15:52.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:53.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:53 np0005465987 nova_compute[230713]: 2025-10-02 12:15:53.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:53 np0005465987 ovn_controller[129172]: 2025-10-02T12:15:53Z|00205|binding|INFO|Releasing lport 544b20f9-e292-46bc-a770-180c318d534e from this chassis (sb_readonly=0)
Oct  2 08:15:53 np0005465987 nova_compute[230713]: 2025-10-02 12:15:53.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:53 np0005465987 podman[256882]: 2025-10-02 12:15:53.903429621 +0000 UTC m=+0.091601798 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Oct  2 08:15:53 np0005465987 podman[256883]: 2025-10-02 12:15:53.923845361 +0000 UTC m=+0.114014513 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:15:53 np0005465987 podman[256881]: 2025-10-02 12:15:53.939506332 +0000 UTC m=+0.137117588 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:15:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:15:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:53.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:15:54 np0005465987 nova_compute[230713]: 2025-10-02 12:15:54.021 2 DEBUG nova.network.neutron [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Successfully updated port: 92dcc2e6-3d83-4983-aa84-91b164c9dac4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:15:54 np0005465987 nova_compute[230713]: 2025-10-02 12:15:54.051 2 DEBUG oslo_concurrency.lockutils [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:54 np0005465987 nova_compute[230713]: 2025-10-02 12:15:54.053 2 DEBUG oslo_concurrency.lockutils [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquired lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:15:54 np0005465987 nova_compute[230713]: 2025-10-02 12:15:54.053 2 DEBUG nova.network.neutron [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:15:54 np0005465987 nova_compute[230713]: 2025-10-02 12:15:54.203 2 DEBUG nova.compute.manager [req-9f310a11-1691-4eae-89c3-71a41f28b837 req-d8a017c4-437d-4f3b-ad6d-22b711bdd7f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-changed-92dcc2e6-3d83-4983-aa84-91b164c9dac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:54 np0005465987 nova_compute[230713]: 2025-10-02 12:15:54.204 2 DEBUG nova.compute.manager [req-9f310a11-1691-4eae-89c3-71a41f28b837 req-d8a017c4-437d-4f3b-ad6d-22b711bdd7f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Refreshing instance network info cache due to event network-changed-92dcc2e6-3d83-4983-aa84-91b164c9dac4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:15:54 np0005465987 nova_compute[230713]: 2025-10-02 12:15:54.204 2 DEBUG oslo_concurrency.lockutils [req-9f310a11-1691-4eae-89c3-71a41f28b837 req-d8a017c4-437d-4f3b-ad6d-22b711bdd7f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:54 np0005465987 nova_compute[230713]: 2025-10-02 12:15:54.357 2 WARNING nova.network.neutron [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] ee114210-598c-482f-83c7-26c3363a45c4 already exists in list: networks containing: ['ee114210-598c-482f-83c7-26c3363a45c4']. ignoring it#033[00m
Oct  2 08:15:54 np0005465987 nova_compute[230713]: 2025-10-02 12:15:54.358 2 WARNING nova.network.neutron [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] ee114210-598c-482f-83c7-26c3363a45c4 already exists in list: networks containing: ['ee114210-598c-482f-83c7-26c3363a45c4']. ignoring it#033[00m
Oct  2 08:15:54 np0005465987 nova_compute[230713]: 2025-10-02 12:15:54.358 2 WARNING nova.network.neutron [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] ee114210-598c-482f-83c7-26c3363a45c4 already exists in list: networks containing: ['ee114210-598c-482f-83c7-26c3363a45c4']. ignoring it#033[00m
Oct  2 08:15:55 np0005465987 nova_compute[230713]: 2025-10-02 12:15:55.053 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:15:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/413423865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:15:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:15:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/413423865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:15:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:15:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:55.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:15:55 np0005465987 nova_compute[230713]: 2025-10-02 12:15:55.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:15:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:55.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:15:56 np0005465987 nova_compute[230713]: 2025-10-02 12:15:56.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:57.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:57 np0005465987 nova_compute[230713]: 2025-10-02 12:15:57.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:57.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:58 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:15:58 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:15:58 np0005465987 nova_compute[230713]: 2025-10-02 12:15:58.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:59.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:15:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:15:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:59.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:15:59 np0005465987 nova_compute[230713]: 2025-10-02 12:15:59.988 2 DEBUG nova.network.neutron [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.014 2 DEBUG oslo_concurrency.lockutils [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Releasing lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.016 2 DEBUG oslo_concurrency.lockutils [req-9f310a11-1691-4eae-89c3-71a41f28b837 req-d8a017c4-437d-4f3b-ad6d-22b711bdd7f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.017 2 DEBUG nova.network.neutron [req-9f310a11-1691-4eae-89c3-71a41f28b837 req-d8a017c4-437d-4f3b-ad6d-22b711bdd7f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Refreshing network info cache for port 92dcc2e6-3d83-4983-aa84-91b164c9dac4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.022 2 DEBUG nova.virt.libvirt.vif [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.022 2 DEBUG nova.network.os_vif_util [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.024 2 DEBUG nova.network.os_vif_util [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=92dcc2e6-3d83-4983-aa84-91b164c9dac4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap92dcc2e6-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.024 2 DEBUG os_vif [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=92dcc2e6-3d83-4983-aa84-91b164c9dac4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap92dcc2e6-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.026 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92dcc2e6-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap92dcc2e6-3d, col_values=(('external_ids', {'iface-id': '92dcc2e6-3d83-4983-aa84-91b164c9dac4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:2a:b6', 'vm-uuid': 'a7e79d88-7a39-4a1a-94de-77b469dd67e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:00 np0005465987 NetworkManager[44910]: <info>  [1759407360.0359] manager: (tap92dcc2e6-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.066 2 INFO os_vif [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=92dcc2e6-3d83-4983-aa84-91b164c9dac4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap92dcc2e6-3d')#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.067 2 DEBUG nova.virt.libvirt.vif [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.068 2 DEBUG nova.network.os_vif_util [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.069 2 DEBUG nova.network.os_vif_util [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=92dcc2e6-3d83-4983-aa84-91b164c9dac4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap92dcc2e6-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.073 2 DEBUG nova.virt.libvirt.guest [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] attach device xml: <interface type="ethernet">
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:b5:2a:b6"/>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  <target dev="tap92dcc2e6-3d"/>
Oct  2 08:16:00 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:16:00 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:16:00 np0005465987 NetworkManager[44910]: <info>  [1759407360.0864] manager: (tap92dcc2e6-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Oct  2 08:16:00 np0005465987 kernel: tap92dcc2e6-3d: entered promiscuous mode
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:00Z|00206|binding|INFO|Claiming lport 92dcc2e6-3d83-4983-aa84-91b164c9dac4 for this chassis.
Oct  2 08:16:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:00Z|00207|binding|INFO|92dcc2e6-3d83-4983-aa84-91b164c9dac4: Claiming fa:16:3e:b5:2a:b6 10.100.0.14
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.098 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:2a:b6 10.100.0.14'], port_security=['fa:16:3e:b5:2a:b6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-129720265', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a7e79d88-7a39-4a1a-94de-77b469dd67e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-129720265', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20d10bb5-2bfc-4d22-a5af-27378413539d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=92dcc2e6-3d83-4983-aa84-91b164c9dac4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.100 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 92dcc2e6-3d83-4983-aa84-91b164c9dac4 in datapath ee114210-598c-482f-83c7-26c3363a45c4 bound to our chassis#033[00m
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.101 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee114210-598c-482f-83c7-26c3363a45c4#033[00m
Oct  2 08:16:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:00Z|00208|binding|INFO|Setting lport 92dcc2e6-3d83-4983-aa84-91b164c9dac4 ovn-installed in OVS
Oct  2 08:16:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:00Z|00209|binding|INFO|Setting lport 92dcc2e6-3d83-4983-aa84-91b164c9dac4 up in Southbound
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.115 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[07d31d03-173e-44b2-9222-7be73214cb66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:00 np0005465987 systemd-udevd[256998]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:16:00 np0005465987 NetworkManager[44910]: <info>  [1759407360.1334] device (tap92dcc2e6-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:16:00 np0005465987 NetworkManager[44910]: <info>  [1759407360.1342] device (tap92dcc2e6-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.156 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[55f60181-4e0b-4ee9-a330-015a8d078c03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.161 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[31de10cd-d292-4b2e-8c04-4ed4b36c0c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.182 2 DEBUG nova.virt.libvirt.driver [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.182 2 DEBUG nova.virt.libvirt.driver [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.182 2 DEBUG nova.virt.libvirt.driver [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:53:0c:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.182 2 DEBUG nova.virt.libvirt.driver [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:86:f3:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.183 2 DEBUG nova.virt.libvirt.driver [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:77:ee:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.183 2 DEBUG nova.virt.libvirt.driver [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:b5:2a:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.199 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[f489040c-80b3-4c3a-90db-861f18b2d371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.212 2 DEBUG nova.virt.libvirt.guest [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1826636369</nova:name>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:16:00</nova:creationTime>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    <nova:port uuid="8fad7e34-752f-4cb6-96a5-b506478720d3">
Oct  2 08:16:00 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    <nova:port uuid="07033da6-8cf1-4845-b0f0-525229bc100d">
Oct  2 08:16:00 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    <nova:port uuid="e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1">
Oct  2 08:16:00 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    <nova:port uuid="92dcc2e6-3d83-4983-aa84-91b164c9dac4">
Oct  2 08:16:00 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:00 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:16:00 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:16:00 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.227 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[af26465b-2fbc-4bbf-b5ad-aa1503f02361]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535511, 'reachable_time': 21730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257005, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.237 2 DEBUG oslo_concurrency.lockutils [None req-79c9cd81-99d1-4980-b982-b75250761a45 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-a7e79d88-7a39-4a1a-94de-77b469dd67e2-92dcc2e6-3d83-4983-aa84-91b164c9dac4" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.242 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[462e5c14-ebd4-4aed-9b29-c39012b3438c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535521, 'tstamp': 535521}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257006, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535523, 'tstamp': 535523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257006, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.244 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.249 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee114210-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.249 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.249 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee114210-50, col_values=(('external_ids', {'iface-id': '544b20f9-e292-46bc-a770-180c318d534e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:00.250 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.497 2 DEBUG nova.compute.manager [req-5fcbd427-3f03-4d0c-95ad-3d1a8d5b3e36 req-1be6de1f-48dd-4de2-9b36-a4abfee675b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-plugged-92dcc2e6-3d83-4983-aa84-91b164c9dac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.498 2 DEBUG oslo_concurrency.lockutils [req-5fcbd427-3f03-4d0c-95ad-3d1a8d5b3e36 req-1be6de1f-48dd-4de2-9b36-a4abfee675b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.499 2 DEBUG oslo_concurrency.lockutils [req-5fcbd427-3f03-4d0c-95ad-3d1a8d5b3e36 req-1be6de1f-48dd-4de2-9b36-a4abfee675b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.499 2 DEBUG oslo_concurrency.lockutils [req-5fcbd427-3f03-4d0c-95ad-3d1a8d5b3e36 req-1be6de1f-48dd-4de2-9b36-a4abfee675b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.500 2 DEBUG nova.compute.manager [req-5fcbd427-3f03-4d0c-95ad-3d1a8d5b3e36 req-1be6de1f-48dd-4de2-9b36-a4abfee675b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] No waiting events found dispatching network-vif-plugged-92dcc2e6-3d83-4983-aa84-91b164c9dac4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:16:00 np0005465987 nova_compute[230713]: 2025-10-02 12:16:00.500 2 WARNING nova.compute.manager [req-5fcbd427-3f03-4d0c-95ad-3d1a8d5b3e36 req-1be6de1f-48dd-4de2-9b36-a4abfee675b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received unexpected event network-vif-plugged-92dcc2e6-3d83-4983-aa84-91b164c9dac4 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:16:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:01.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:01 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:01Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:2a:b6 10.100.0.14
Oct  2 08:16:01 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:01Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:2a:b6 10.100.0.14
Oct  2 08:16:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:01.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.631 2 DEBUG nova.compute.manager [req-eeeea906-89ef-4a4d-b581-9f277c8cef63 req-9c0d398e-9aa7-4a33-bbaf-67e819a89c4b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-plugged-92dcc2e6-3d83-4983-aa84-91b164c9dac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.632 2 DEBUG oslo_concurrency.lockutils [req-eeeea906-89ef-4a4d-b581-9f277c8cef63 req-9c0d398e-9aa7-4a33-bbaf-67e819a89c4b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.632 2 DEBUG oslo_concurrency.lockutils [req-eeeea906-89ef-4a4d-b581-9f277c8cef63 req-9c0d398e-9aa7-4a33-bbaf-67e819a89c4b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.632 2 DEBUG oslo_concurrency.lockutils [req-eeeea906-89ef-4a4d-b581-9f277c8cef63 req-9c0d398e-9aa7-4a33-bbaf-67e819a89c4b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.632 2 DEBUG nova.compute.manager [req-eeeea906-89ef-4a4d-b581-9f277c8cef63 req-9c0d398e-9aa7-4a33-bbaf-67e819a89c4b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] No waiting events found dispatching network-vif-plugged-92dcc2e6-3d83-4983-aa84-91b164c9dac4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.633 2 WARNING nova.compute.manager [req-eeeea906-89ef-4a4d-b581-9f277c8cef63 req-9c0d398e-9aa7-4a33-bbaf-67e819a89c4b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received unexpected event network-vif-plugged-92dcc2e6-3d83-4983-aa84-91b164c9dac4 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.658 2 DEBUG nova.network.neutron [req-9f310a11-1691-4eae-89c3-71a41f28b837 req-d8a017c4-437d-4f3b-ad6d-22b711bdd7f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updated VIF entry in instance network info cache for port 92dcc2e6-3d83-4983-aa84-91b164c9dac4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.659 2 DEBUG nova.network.neutron [req-9f310a11-1691-4eae-89c3-71a41f28b837 req-d8a017c4-437d-4f3b-ad6d-22b711bdd7f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.681 2 DEBUG oslo_concurrency.lockutils [req-9f310a11-1691-4eae-89c3-71a41f28b837 req-d8a017c4-437d-4f3b-ad6d-22b711bdd7f1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.830 2 DEBUG oslo_concurrency.lockutils [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "interface-a7e79d88-7a39-4a1a-94de-77b469dd67e2-07033da6-8cf1-4845-b0f0-525229bc100d" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.830 2 DEBUG oslo_concurrency.lockutils [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-a7e79d88-7a39-4a1a-94de-77b469dd67e2-07033da6-8cf1-4845-b0f0-525229bc100d" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.854 2 DEBUG nova.objects.instance [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'flavor' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.926 2 DEBUG nova.virt.libvirt.vif [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.926 2 DEBUG nova.network.os_vif_util [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.927 2 DEBUG nova.network.os_vif_util [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:f3:f7,bridge_name='br-int',has_traffic_filtering=True,id=07033da6-8cf1-4845-b0f0-525229bc100d,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07033da6-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.930 2 DEBUG nova.virt.libvirt.guest [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:86:f3:f7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07033da6-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.932 2 DEBUG nova.virt.libvirt.guest [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:86:f3:f7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07033da6-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.934 2 DEBUG nova.virt.libvirt.driver [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Attempting to detach device tap07033da6-8c from instance a7e79d88-7a39-4a1a-94de-77b469dd67e2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.934 2 DEBUG nova.virt.libvirt.guest [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:86:f3:f7"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <target dev="tap07033da6-8c"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:16:02 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.960 2 DEBUG nova.virt.libvirt.guest [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:86:f3:f7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07033da6-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.964 2 DEBUG nova.virt.libvirt.guest [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:86:f3:f7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07033da6-8c"/></interface>not found in domain: <domain type='kvm' id='31'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <name>instance-00000040</name>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <uuid>a7e79d88-7a39-4a1a-94de-77b469dd67e2</uuid>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1826636369</nova:name>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:16:00</nova:creationTime>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <nova:port uuid="8fad7e34-752f-4cb6-96a5-b506478720d3">
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <nova:port uuid="07033da6-8cf1-4845-b0f0-525229bc100d">
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <nova:port uuid="e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1">
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <nova:port uuid="92dcc2e6-3d83-4983-aa84-91b164c9dac4">
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:16:02 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <entry name='serial'>a7e79d88-7a39-4a1a-94de-77b469dd67e2</entry>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <entry name='uuid'>a7e79d88-7a39-4a1a-94de-77b469dd67e2</entry>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk' index='2'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk.config' index='1'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:53:0c:3b'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target dev='tap8fad7e34-75'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:86:f3:f7'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target dev='tap07033da6-8c'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='net1'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:77:ee:eb'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target dev='tape5e52b7f-1b'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='net2'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:b5:2a:b6'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target dev='tap92dcc2e6-3d'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='net3'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/console.log' append='off'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/console.log' append='off'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c402,c504</label>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c402,c504</imagelabel>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:16:02 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:16:02 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.964 2 INFO nova.virt.libvirt.driver [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully detached device tap07033da6-8c from instance a7e79d88-7a39-4a1a-94de-77b469dd67e2 from the persistent domain config.#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.964 2 DEBUG nova.virt.libvirt.driver [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] (1/8): Attempting to detach device tap07033da6-8c with device alias net1 from instance a7e79d88-7a39-4a1a-94de-77b469dd67e2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:16:02 np0005465987 nova_compute[230713]: 2025-10-02 12:16:02.965 2 DEBUG nova.virt.libvirt.guest [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:86:f3:f7"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:16:02 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <target dev="tap07033da6-8c"/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:16:03 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:16:03 np0005465987 kernel: tap07033da6-8c (unregistering): left promiscuous mode
Oct  2 08:16:03 np0005465987 NetworkManager[44910]: <info>  [1759407363.0192] device (tap07033da6-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:03 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:03Z|00210|binding|INFO|Releasing lport 07033da6-8cf1-4845-b0f0-525229bc100d from this chassis (sb_readonly=0)
Oct  2 08:16:03 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:03Z|00211|binding|INFO|Setting lport 07033da6-8cf1-4845-b0f0-525229bc100d down in Southbound
Oct  2 08:16:03 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:03Z|00212|binding|INFO|Removing iface tap07033da6-8c ovn-installed in OVS
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.037 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:f3:f7 10.100.0.8'], port_security=['fa:16:3e:86:f3:f7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a7e79d88-7a39-4a1a-94de-77b469dd67e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20d10bb5-2bfc-4d22-a5af-27378413539d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=07033da6-8cf1-4845-b0f0-525229bc100d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.039 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 07033da6-8cf1-4845-b0f0-525229bc100d in datapath ee114210-598c-482f-83c7-26c3363a45c4 unbound from our chassis#033[00m
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.040 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee114210-598c-482f-83c7-26c3363a45c4#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.044 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Received event <DeviceRemovedEvent: 1759407363.044162, a7e79d88-7a39-4a1a-94de-77b469dd67e2 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.051 2 DEBUG nova.virt.libvirt.driver [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Start waiting for the detach event from libvirt for device tap07033da6-8c with device alias net1 for instance a7e79d88-7a39-4a1a-94de-77b469dd67e2 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.051 2 DEBUG nova.virt.libvirt.guest [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:86:f3:f7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07033da6-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.055 2 DEBUG nova.virt.libvirt.guest [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:86:f3:f7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07033da6-8c"/></interface>not found in domain: <domain type='kvm' id='31'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <name>instance-00000040</name>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <uuid>a7e79d88-7a39-4a1a-94de-77b469dd67e2</uuid>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1826636369</nova:name>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:16:00</nova:creationTime>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:port uuid="8fad7e34-752f-4cb6-96a5-b506478720d3">
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:port uuid="07033da6-8cf1-4845-b0f0-525229bc100d">
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:port uuid="e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1">
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:port uuid="92dcc2e6-3d83-4983-aa84-91b164c9dac4">
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:16:03 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <entry name='serial'>a7e79d88-7a39-4a1a-94de-77b469dd67e2</entry>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <entry name='uuid'>a7e79d88-7a39-4a1a-94de-77b469dd67e2</entry>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk' index='2'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk.config' index='1'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:53:0c:3b'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target dev='tap8fad7e34-75'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:77:ee:eb'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target dev='tape5e52b7f-1b'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='net2'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:b5:2a:b6'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target dev='tap92dcc2e6-3d'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='net3'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/console.log' append='off'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/console.log' append='off'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c402,c504</label>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c402,c504</imagelabel>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:16:03 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:16:03 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.055 2 INFO nova.virt.libvirt.driver [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully detached device tap07033da6-8c from instance a7e79d88-7a39-4a1a-94de-77b469dd67e2 from the live domain config.
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.056 2 DEBUG nova.virt.libvirt.vif [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.056 2 DEBUG nova.network.os_vif_util [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.057 2 DEBUG nova.network.os_vif_util [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:f3:f7,bridge_name='br-int',has_traffic_filtering=True,id=07033da6-8cf1-4845-b0f0-525229bc100d,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07033da6-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.057 2 DEBUG os_vif [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:f3:f7,bridge_name='br-int',has_traffic_filtering=True,id=07033da6-8cf1-4845-b0f0-525229bc100d,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07033da6-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.059 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07033da6-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.060 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[930d46e1-d27f-496c-be83-8b1d2ac982ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.070 2 INFO os_vif [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:f3:f7,bridge_name='br-int',has_traffic_filtering=True,id=07033da6-8cf1-4845-b0f0-525229bc100d,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07033da6-8c')#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.070 2 DEBUG nova.virt.libvirt.guest [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1826636369</nova:name>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:16:03</nova:creationTime>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:port uuid="8fad7e34-752f-4cb6-96a5-b506478720d3">
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:port uuid="e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1">
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    <nova:port uuid="92dcc2e6-3d83-4983-aa84-91b164c9dac4">
Oct  2 08:16:03 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:03 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:16:03 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:16:03 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.100 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa7e85a-61c2-47db-9e38-1d430f4ff2ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.105 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[d97fc29c-31fb-4548-8d28-41a4c7de9fdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.142 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a5935de9-a7a3-4dae-8843-c1200ac5d621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:03.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.164 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e95086cc-fcdb-4354-beaf-90b999a35b04]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535511, 'reachable_time': 21730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257016, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.178 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[74e3ea0c-af98-4397-84b1-e6a7163ee4e2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535521, 'tstamp': 535521}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257017, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535523, 'tstamp': 535523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257017, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.181 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.184 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee114210-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.184 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.185 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee114210-50, col_values=(('external_ids', {'iface-id': '544b20f9-e292-46bc-a770-180c318d534e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:03.185 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.303 2 DEBUG nova.compute.manager [req-9cbbeaa1-9d1c-43ce-9d71-0de146a5eb52 req-1308d186-bf36-4c2d-ab1c-16ee0416387a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-unplugged-07033da6-8cf1-4845-b0f0-525229bc100d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.303 2 DEBUG oslo_concurrency.lockutils [req-9cbbeaa1-9d1c-43ce-9d71-0de146a5eb52 req-1308d186-bf36-4c2d-ab1c-16ee0416387a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.304 2 DEBUG oslo_concurrency.lockutils [req-9cbbeaa1-9d1c-43ce-9d71-0de146a5eb52 req-1308d186-bf36-4c2d-ab1c-16ee0416387a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.304 2 DEBUG oslo_concurrency.lockutils [req-9cbbeaa1-9d1c-43ce-9d71-0de146a5eb52 req-1308d186-bf36-4c2d-ab1c-16ee0416387a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.304 2 DEBUG nova.compute.manager [req-9cbbeaa1-9d1c-43ce-9d71-0de146a5eb52 req-1308d186-bf36-4c2d-ab1c-16ee0416387a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] No waiting events found dispatching network-vif-unplugged-07033da6-8cf1-4845-b0f0-525229bc100d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.304 2 WARNING nova.compute.manager [req-9cbbeaa1-9d1c-43ce-9d71-0de146a5eb52 req-1308d186-bf36-4c2d-ab1c-16ee0416387a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received unexpected event network-vif-unplugged-07033da6-8cf1-4845-b0f0-525229bc100d for instance with vm_state active and task_state None.#033[00m
Oct  2 08:16:03 np0005465987 nova_compute[230713]: 2025-10-02 12:16:03.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:16:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:03.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.020 2 DEBUG oslo_concurrency.lockutils [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.021 2 DEBUG oslo_concurrency.lockutils [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquired lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.021 2 DEBUG nova.network.neutron [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.096 2 DEBUG nova.compute.manager [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-deleted-07033da6-8cf1-4845-b0f0-525229bc100d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.096 2 INFO nova.compute.manager [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Neutron deleted interface 07033da6-8cf1-4845-b0f0-525229bc100d; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.096 2 DEBUG nova.network.neutron [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.143 2 DEBUG nova.objects.instance [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lazy-loading 'system_metadata' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.189 2 DEBUG nova.objects.instance [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lazy-loading 'flavor' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.217 2 DEBUG nova.virt.libvirt.vif [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.218 2 DEBUG nova.network.os_vif_util [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converting VIF {"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.219 2 DEBUG nova.network.os_vif_util [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:f3:f7,bridge_name='br-int',has_traffic_filtering=True,id=07033da6-8cf1-4845-b0f0-525229bc100d,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07033da6-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.223 2 DEBUG nova.virt.libvirt.guest [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:86:f3:f7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07033da6-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.228 2 DEBUG nova.virt.libvirt.guest [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:86:f3:f7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07033da6-8c"/></interface>not found in domain: <domain type='kvm' id='31'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <name>instance-00000040</name>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <uuid>a7e79d88-7a39-4a1a-94de-77b469dd67e2</uuid>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1826636369</nova:name>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:16:03</nova:creationTime>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:port uuid="8fad7e34-752f-4cb6-96a5-b506478720d3">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:port uuid="e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:port uuid="92dcc2e6-3d83-4983-aa84-91b164c9dac4">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:16:04 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <entry name='serial'>a7e79d88-7a39-4a1a-94de-77b469dd67e2</entry>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <entry name='uuid'>a7e79d88-7a39-4a1a-94de-77b469dd67e2</entry>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk' index='2'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk.config' index='1'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:53:0c:3b'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target dev='tap8fad7e34-75'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:77:ee:eb'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target dev='tape5e52b7f-1b'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='net2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:b5:2a:b6'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target dev='tap92dcc2e6-3d'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='net3'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/console.log' append='off'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/console.log' append='off'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c402,c504</label>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c402,c504</imagelabel>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:16:04 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:16:04 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.228 2 DEBUG nova.virt.libvirt.guest [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:86:f3:f7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07033da6-8c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.232 2 DEBUG nova.virt.libvirt.guest [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:86:f3:f7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07033da6-8c"/></interface>not found in domain: <domain type='kvm' id='31'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <name>instance-00000040</name>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <uuid>a7e79d88-7a39-4a1a-94de-77b469dd67e2</uuid>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1826636369</nova:name>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:16:03</nova:creationTime>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:port uuid="8fad7e34-752f-4cb6-96a5-b506478720d3">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:port uuid="e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:port uuid="92dcc2e6-3d83-4983-aa84-91b164c9dac4">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:16:04 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <entry name='serial'>a7e79d88-7a39-4a1a-94de-77b469dd67e2</entry>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <entry name='uuid'>a7e79d88-7a39-4a1a-94de-77b469dd67e2</entry>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk' index='2'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk.config' index='1'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:53:0c:3b'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target dev='tap8fad7e34-75'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:77:ee:eb'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target dev='tape5e52b7f-1b'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='net2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:b5:2a:b6'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target dev='tap92dcc2e6-3d'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='net3'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/console.log' append='off'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/console.log' append='off'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c402,c504</label>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c402,c504</imagelabel>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:16:04 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:16:04 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.232 2 WARNING nova.virt.libvirt.driver [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Detaching interface fa:16:3e:86:f3:f7 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap07033da6-8c' not found.#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.233 2 DEBUG nova.virt.libvirt.vif [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.233 2 DEBUG nova.network.os_vif_util [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converting VIF {"id": "07033da6-8cf1-4845-b0f0-525229bc100d", "address": "fa:16:3e:86:f3:f7", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07033da6-8c", "ovs_interfaceid": "07033da6-8cf1-4845-b0f0-525229bc100d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.234 2 DEBUG nova.network.os_vif_util [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:f3:f7,bridge_name='br-int',has_traffic_filtering=True,id=07033da6-8cf1-4845-b0f0-525229bc100d,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07033da6-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.234 2 DEBUG os_vif [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:f3:f7,bridge_name='br-int',has_traffic_filtering=True,id=07033da6-8cf1-4845-b0f0-525229bc100d,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07033da6-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07033da6-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.238 2 INFO os_vif [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:f3:f7,bridge_name='br-int',has_traffic_filtering=True,id=07033da6-8cf1-4845-b0f0-525229bc100d,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07033da6-8c')#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.239 2 DEBUG nova.virt.libvirt.guest [req-e42b30a9-9e7d-4384-ba94-957c1bb398cf req-4e796ac2-3c5b-4f68-a7e6-8699448294cc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1826636369</nova:name>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:16:04</nova:creationTime>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:port uuid="8fad7e34-752f-4cb6-96a5-b506478720d3">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:port uuid="e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    <nova:port uuid="92dcc2e6-3d83-4983-aa84-91b164c9dac4">
Oct  2 08:16:04 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:16:04 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:16:04 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:16:04 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:16:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:04.893 138325 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 226b615d-3bdc-4eeb-bfb7-6c05bfa608c5 with type ""#033[00m
Oct  2 08:16:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:04.895 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:2a:b6 10.100.0.14'], port_security=['fa:16:3e:b5:2a:b6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-129720265', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a7e79d88-7a39-4a1a-94de-77b469dd67e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-129720265', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20d10bb5-2bfc-4d22-a5af-27378413539d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=92dcc2e6-3d83-4983-aa84-91b164c9dac4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:16:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:04.898 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 92dcc2e6-3d83-4983-aa84-91b164c9dac4 in datapath ee114210-598c-482f-83c7-26c3363a45c4 unbound from our chassis#033[00m
Oct  2 08:16:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:04.901 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee114210-598c-482f-83c7-26c3363a45c4#033[00m
Oct  2 08:16:04 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:04Z|00213|binding|INFO|Removing iface tap92dcc2e6-3d ovn-installed in OVS
Oct  2 08:16:04 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:04Z|00214|binding|INFO|Removing lport 92dcc2e6-3d83-4983-aa84-91b164c9dac4 ovn-installed in OVS
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:04.916 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b6415f-73b7-4ff8-adb5-bdb31651013f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:04 np0005465987 nova_compute[230713]: 2025-10-02 12:16:04.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:04.944 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[093547f9-9191-4f22-bc07-2adbe965c2f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:04.948 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[04276ec8-26fc-41ea-9120-98d1bd2010ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:04.979 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6a48f361-0438-499f-9675-7637c37351ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.000 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9fa34b-ef35-408c-b28f-142ad0aa6092]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535511, 'reachable_time': 21730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257023, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.018 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[355f909e-b5ec-481b-9209-414872f5e440]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535521, 'tstamp': 535521}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257024, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535523, 'tstamp': 535523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257024, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.019 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.022 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee114210-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.023 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.023 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee114210-50, col_values=(('external_ids', {'iface-id': '544b20f9-e292-46bc-a770-180c318d534e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.023 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.098 2 DEBUG nova.compute.manager [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-deleted-92dcc2e6-3d83-4983-aa84-91b164c9dac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.098 2 INFO nova.compute.manager [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Neutron deleted interface 92dcc2e6-3d83-4983-aa84-91b164c9dac4; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.098 2 DEBUG nova.network.neutron [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.146 2 DEBUG nova.objects.instance [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lazy-loading 'system_metadata' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:05.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.174 2 DEBUG nova.objects.instance [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lazy-loading 'flavor' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.185 2 DEBUG oslo_concurrency.lockutils [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.185 2 DEBUG oslo_concurrency.lockutils [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.185 2 DEBUG oslo_concurrency.lockutils [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.185 2 DEBUG oslo_concurrency.lockutils [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.186 2 DEBUG oslo_concurrency.lockutils [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.187 2 INFO nova.compute.manager [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Terminating instance#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.188 2 DEBUG nova.compute.manager [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.193 2 DEBUG nova.virt.libvirt.vif [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.193 2 DEBUG nova.network.os_vif_util [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converting VIF {"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.193 2 DEBUG nova.network.os_vif_util [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=92dcc2e6-3d83-4983-aa84-91b164c9dac4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap92dcc2e6-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.407 2 DEBUG nova.compute.manager [req-66ca8906-5652-4053-a72f-c1446277af43 req-833f0842-7d64-4749-89e6-dfa6b6959c05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-plugged-07033da6-8cf1-4845-b0f0-525229bc100d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.407 2 DEBUG oslo_concurrency.lockutils [req-66ca8906-5652-4053-a72f-c1446277af43 req-833f0842-7d64-4749-89e6-dfa6b6959c05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.408 2 DEBUG oslo_concurrency.lockutils [req-66ca8906-5652-4053-a72f-c1446277af43 req-833f0842-7d64-4749-89e6-dfa6b6959c05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.408 2 DEBUG oslo_concurrency.lockutils [req-66ca8906-5652-4053-a72f-c1446277af43 req-833f0842-7d64-4749-89e6-dfa6b6959c05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.409 2 DEBUG nova.compute.manager [req-66ca8906-5652-4053-a72f-c1446277af43 req-833f0842-7d64-4749-89e6-dfa6b6959c05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] No waiting events found dispatching network-vif-plugged-07033da6-8cf1-4845-b0f0-525229bc100d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.409 2 WARNING nova.compute.manager [req-66ca8906-5652-4053-a72f-c1446277af43 req-833f0842-7d64-4749-89e6-dfa6b6959c05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received unexpected event network-vif-plugged-07033da6-8cf1-4845-b0f0-525229bc100d for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:16:05 np0005465987 kernel: tap8fad7e34-75 (unregistering): left promiscuous mode
Oct  2 08:16:05 np0005465987 NetworkManager[44910]: <info>  [1759407365.7052] device (tap8fad7e34-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:16:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:05Z|00215|binding|INFO|Releasing lport 8fad7e34-752f-4cb6-96a5-b506478720d3 from this chassis (sb_readonly=0)
Oct  2 08:16:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:05Z|00216|binding|INFO|Setting lport 8fad7e34-752f-4cb6-96a5-b506478720d3 down in Southbound
Oct  2 08:16:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:05Z|00217|binding|INFO|Removing iface tap8fad7e34-75 ovn-installed in OVS
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.717 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:0c:3b 10.100.0.13'], port_security=['fa:16:3e:53:0c:3b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a7e79d88-7a39-4a1a-94de-77b469dd67e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f727668f-4f0a-4e45-a88b-61e109830874', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=8fad7e34-752f-4cb6-96a5-b506478720d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.719 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 8fad7e34-752f-4cb6-96a5-b506478720d3 in datapath ee114210-598c-482f-83c7-26c3363a45c4 unbound from our chassis#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.721 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee114210-598c-482f-83c7-26c3363a45c4#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:05 np0005465987 kernel: tape5e52b7f-1b (unregistering): left promiscuous mode
Oct  2 08:16:05 np0005465987 NetworkManager[44910]: <info>  [1759407365.7723] device (tape5e52b7f-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.773 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a4da1e35-f5fc-474d-a56e-610e22b9fe19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:05Z|00218|binding|INFO|Releasing lport e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 from this chassis (sb_readonly=0)
Oct  2 08:16:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:05Z|00219|binding|INFO|Setting lport e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 down in Southbound
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:05Z|00220|binding|INFO|Removing iface tape5e52b7f-1b ovn-installed in OVS
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.794 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:ee:eb 10.100.0.11'], port_security=['fa:16:3e:77:ee:eb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a7e79d88-7a39-4a1a-94de-77b469dd67e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20d10bb5-2bfc-4d22-a5af-27378413539d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:05 np0005465987 kernel: tap92dcc2e6-3d (unregistering): left promiscuous mode
Oct  2 08:16:05 np0005465987 NetworkManager[44910]: <info>  [1759407365.8139] device (tap92dcc2e6-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.814 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[375a1a5d-c6e5-4ca5-8a1e-4a56bf2b98e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.817 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[8681f25f-f294-4718-bc4f-177eb82066b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.846 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5decc050-b511-4ca9-9d74-94939c20233d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.864 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d175bb-b835-45ce-bbcb-555b17ad659f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535511, 'reachable_time': 21730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257049, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.882 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ecfe6109-23bc-4ab6-abee-1218e11764e0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535521, 'tstamp': 535521}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257050, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535523, 'tstamp': 535523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257050, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.884 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:05 np0005465987 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000040.scope: Deactivated successfully.
Oct  2 08:16:05 np0005465987 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000040.scope: Consumed 20.336s CPU time.
Oct  2 08:16:05 np0005465987 systemd-machined[188335]: Machine qemu-31-instance-00000040 terminated.
Oct  2 08:16:05 np0005465987 nova_compute[230713]: 2025-10-02 12:16:05.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.903 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee114210-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.903 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.903 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee114210-50, col_values=(('external_ids', {'iface-id': '544b20f9-e292-46bc-a770-180c318d534e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.904 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.905 138325 INFO neutron.agent.ovn.metadata.agent [-] Port e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 in datapath ee114210-598c-482f-83c7-26c3363a45c4 unbound from our chassis#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.906 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee114210-598c-482f-83c7-26c3363a45c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.907 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a781122d-ad01-4a55-8445-a1d141f3a978]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:05.907 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 namespace which is not needed anymore#033[00m
Oct  2 08:16:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:16:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:05.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.000 2 DEBUG nova.virt.libvirt.guest [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b5:2a:b6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap92dcc2e6-3d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:16:06 np0005465987 NetworkManager[44910]: <info>  [1759407366.0168] manager: (tap8fad7e34-75): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Oct  2 08:16:06 np0005465987 NetworkManager[44910]: <info>  [1759407366.0297] manager: (tape5e52b7f-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Oct  2 08:16:06 np0005465987 NetworkManager[44910]: <info>  [1759407366.0508] manager: (tap92dcc2e6-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.067 2 DEBUG nova.virt.libvirt.guest [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b5:2a:b6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap92dcc2e6-3d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.069 2 DEBUG nova.virt.libvirt.driver [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Attempting to detach device tap92dcc2e6-3d from instance a7e79d88-7a39-4a1a-94de-77b469dd67e2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.069 2 DEBUG nova.virt.libvirt.guest [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:b5:2a:b6"/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <target dev="tap92dcc2e6-3d"/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:16:06 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:16:06 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[256243]: [NOTICE]   (256247) : haproxy version is 2.8.14-c23fe91
Oct  2 08:16:06 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[256243]: [NOTICE]   (256247) : path to executable is /usr/sbin/haproxy
Oct  2 08:16:06 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[256243]: [WARNING]  (256247) : Exiting Master process...
Oct  2 08:16:06 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[256243]: [WARNING]  (256247) : Exiting Master process...
Oct  2 08:16:06 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[256243]: [ALERT]    (256247) : Current worker (256249) exited with code 143 (Terminated)
Oct  2 08:16:06 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[256243]: [WARNING]  (256247) : All workers exited. Exiting... (0)
Oct  2 08:16:06 np0005465987 systemd[1]: libpod-702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2.scope: Deactivated successfully.
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.085 2 DEBUG nova.virt.libvirt.guest [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b5:2a:b6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap92dcc2e6-3d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.086 2 INFO nova.virt.libvirt.driver [-] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Instance destroyed successfully.#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.087 2 DEBUG nova.objects.instance [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'resources' on Instance uuid a7e79d88-7a39-4a1a-94de-77b469dd67e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.090 2 DEBUG nova.virt.libvirt.guest [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b5:2a:b6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap92dcc2e6-3d"/></interface>not found in domain: <domain type='kvm'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <name>instance-00000040</name>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <uuid>a7e79d88-7a39-4a1a-94de-77b469dd67e2</uuid>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1826636369</nova:name>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:14:41</nova:creationTime>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <nova:port uuid="8fad7e34-752f-4cb6-96a5-b506478720d3">
Oct  2 08:16:06 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <entry name='serial'>a7e79d88-7a39-4a1a-94de-77b469dd67e2</entry>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <entry name='uuid'>a7e79d88-7a39-4a1a-94de-77b469dd67e2</entry>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='partial'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <model fallback='allow'>Nehalem</model>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/a7e79d88-7a39-4a1a-94de-77b469dd67e2_disk.config'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:16:06 np0005465987 podman[257071]: 2025-10-02 12:16:06.091277821 +0000 UTC m=+0.085663064 container died 702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:53:0c:3b'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target dev='tap8fad7e34-75'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:77:ee:eb'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target dev='tape5e52b7f-1b'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/console.log' append='off'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <console type='pty'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2/console.log' append='off'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:16:06 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:16:06 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.090 2 INFO nova.virt.libvirt.driver [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Successfully detached device tap92dcc2e6-3d from instance a7e79d88-7a39-4a1a-94de-77b469dd67e2 from the persistent domain config.#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.090 2 DEBUG nova.virt.libvirt.driver [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] (1/8): Attempting to detach device tap92dcc2e6-3d with device alias None from instance a7e79d88-7a39-4a1a-94de-77b469dd67e2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.090 2 DEBUG nova.virt.libvirt.guest [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:b5:2a:b6"/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]:  <target dev="tap92dcc2e6-3d"/>
Oct  2 08:16:06 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:16:06 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.092 2 DEBUG nova.virt.libvirt.driver [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Libvirt returned error while detaching device tap92dcc2e6-3d from instance a7e79d88-7a39-4a1a-94de-77b469dd67e2. Libvirt error code: 55, error message: Requested operation is not valid: domain is not running. _detach_sync /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2667#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.093 2 WARNING nova.virt.libvirt.driver [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Unexpected libvirt error while detaching device tap92dcc2e6-3d from instance a7e79d88-7a39-4a1a-94de-77b469dd67e2: Requested operation is not valid: domain is not running: libvirt.libvirtError: Requested operation is not valid: domain is not running#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.093 2 DEBUG nova.virt.libvirt.vif [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.094 2 DEBUG nova.network.os_vif_util [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converting VIF {"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.094 2 DEBUG nova.network.os_vif_util [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=92dcc2e6-3d83-4983-aa84-91b164c9dac4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap92dcc2e6-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.094 2 DEBUG os_vif [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=92dcc2e6-3d83-4983-aa84-91b164c9dac4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap92dcc2e6-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.096 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92dcc2e6-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.106 2 DEBUG nova.virt.libvirt.vif [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.106 2 DEBUG nova.network.os_vif_util [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.107 2 DEBUG nova.network.os_vif_util [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:0c:3b,bridge_name='br-int',has_traffic_filtering=True,id=8fad7e34-752f-4cb6-96a5-b506478720d3,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fad7e34-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.107 2 DEBUG os_vif [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:0c:3b,bridge_name='br-int',has_traffic_filtering=True,id=8fad7e34-752f-4cb6-96a5-b506478720d3,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fad7e34-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.109 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8fad7e34-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.110 2 INFO os_vif [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=92dcc2e6-3d83-4983-aa84-91b164c9dac4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap92dcc2e6-3d')#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server [req-390d3241-cb99-4004-9b49-6fead12370cb req-6c7dd899-0fed-4cbe-889a-9ae4908f6427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Exception during message handling: libvirt.libvirtError: Requested operation is not valid: domain is not running
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11064, in external_instance_event
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     self._process_instance_vif_deleted_event(context,
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10871, in _process_instance_vif_deleted_event
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     self.driver.detach_interface(context, instance, vif)
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2943, in detach_interface
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     self._detach_with_retry(
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2473, in _detach_with_retry
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     self._detach_from_live_with_retry(
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2529, in _detach_from_live_with_retry
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     self._detach_from_live_and_wait_for_event(
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2591, in _detach_from_live_and_wait_for_event
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     self._detach_sync(
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2663, in _detach_sync
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     guest.detach_device(dev, persistent=persistent, live=live)
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py", line 466, in detach_device
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     self._domain.detachDeviceFlags(device_xml, flags=flags)
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 193, in doit
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     result = proxy_call(self._autowrap, f, *args, **kwargs)
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 151, in proxy_call
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     rv = execute(f, *args, **kwargs)
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 132, in execute
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     six.reraise(c, e, tb)
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     raise value
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 86, in tworker
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     rv = meth(*args, **kwargs)
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.9/site-packages/libvirt.py", line 1570, in detachDeviceFlags
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server     raise libvirtError('virDomainDetachDeviceFlags() failed')
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server libvirt.libvirtError: Requested operation is not valid: domain is not running
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.112 2 ERROR oslo_messaging.rpc.server #033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.119 2 INFO os_vif [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:0c:3b,bridge_name='br-int',has_traffic_filtering=True,id=8fad7e34-752f-4cb6-96a5-b506478720d3,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8fad7e34-75')#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.120 2 DEBUG nova.virt.libvirt.vif [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.120 2 DEBUG nova.network.os_vif_util [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.121 2 DEBUG nova.network.os_vif_util [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:ee:eb,bridge_name='br-int',has_traffic_filtering=True,id=e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5e52b7f-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.121 2 DEBUG os_vif [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:ee:eb,bridge_name='br-int',has_traffic_filtering=True,id=e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5e52b7f-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.122 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5e52b7f-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.127 2 INFO os_vif [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:ee:eb,bridge_name='br-int',has_traffic_filtering=True,id=e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5e52b7f-1b')#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.128 2 DEBUG nova.virt.libvirt.vif [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1826636369',display_name='tempest-AttachInterfacesTestJSON-server-1826636369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1826636369',id=64,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLOhBAzWfSMWuNKYeGwVqCN1I1gEpMWIiE5R6uFkXmvqUWqWc7iq0bgmuhI5sUWaw4LFWQF/SpAO9cDhEN3AtN16rxnmNYMzyqFg6ctzEq/ENueR5zpZlucMqa01RU7e7Q==',key_name='tempest-keypair-1238874616',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-3yomo3ly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=a7e79d88-7a39-4a1a-94de-77b469dd67e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.128 2 DEBUG nova.network.os_vif_util [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:06 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2-userdata-shm.mount: Deactivated successfully.
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.129 2 DEBUG nova.network.os_vif_util [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=92dcc2e6-3d83-4983-aa84-91b164c9dac4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap92dcc2e6-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.129 2 DEBUG os_vif [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=92dcc2e6-3d83-4983-aa84-91b164c9dac4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap92dcc2e6-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.130 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92dcc2e6-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.133 2 INFO os_vif [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:2a:b6,bridge_name='br-int',has_traffic_filtering=True,id=92dcc2e6-3d83-4983-aa84-91b164c9dac4,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap92dcc2e6-3d')#033[00m
Oct  2 08:16:06 np0005465987 systemd[1]: var-lib-containers-storage-overlay-6162db7e1e082b1da0c0290e9eca1c8ada325a04f9a61437812b156055394005-merged.mount: Deactivated successfully.
Oct  2 08:16:06 np0005465987 podman[257071]: 2025-10-02 12:16:06.144990317 +0000 UTC m=+0.139375550 container cleanup 702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:16:06 np0005465987 systemd[1]: libpod-conmon-702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2.scope: Deactivated successfully.
Oct  2 08:16:06 np0005465987 podman[257152]: 2025-10-02 12:16:06.230057234 +0000 UTC m=+0.060651708 container remove 702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.238 2 DEBUG nova.compute.manager [req-e67ee312-bc1b-4f4f-9e12-750029e7d4ca req-51a23d8c-7def-429f-9a40-5a162f595eae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-unplugged-8fad7e34-752f-4cb6-96a5-b506478720d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.238 2 DEBUG oslo_concurrency.lockutils [req-e67ee312-bc1b-4f4f-9e12-750029e7d4ca req-51a23d8c-7def-429f-9a40-5a162f595eae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.238 2 DEBUG oslo_concurrency.lockutils [req-e67ee312-bc1b-4f4f-9e12-750029e7d4ca req-51a23d8c-7def-429f-9a40-5a162f595eae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.238 2 DEBUG oslo_concurrency.lockutils [req-e67ee312-bc1b-4f4f-9e12-750029e7d4ca req-51a23d8c-7def-429f-9a40-5a162f595eae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.238 2 DEBUG nova.compute.manager [req-e67ee312-bc1b-4f4f-9e12-750029e7d4ca req-51a23d8c-7def-429f-9a40-5a162f595eae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] No waiting events found dispatching network-vif-unplugged-8fad7e34-752f-4cb6-96a5-b506478720d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.239 2 DEBUG nova.compute.manager [req-e67ee312-bc1b-4f4f-9e12-750029e7d4ca req-51a23d8c-7def-429f-9a40-5a162f595eae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-unplugged-8fad7e34-752f-4cb6-96a5-b506478720d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:16:06 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:06.239 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ae999f4f-c6e9-414e-a0fe-e0aad629de81]: (4, ('Thu Oct  2 12:16:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 (702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2)\n702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2\nThu Oct  2 12:16:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 (702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2)\n702dfa5ad49f1dde2a478ae11c548c7aed64b84566fab0e9d93ac60fc44bbaf2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:06 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:06.242 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d3704de3-992a-4d69-aaac-89a85769c146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:06 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:06.243 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.244 2 INFO nova.network.neutron [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Port 07033da6-8cf1-4845-b0f0-525229bc100d from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  2 08:16:06 np0005465987 kernel: tapee114210-50: left promiscuous mode
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:06 np0005465987 nova_compute[230713]: 2025-10-02 12:16:06.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:06 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:06.262 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5029bd7a-b5e5-4e5e-8c39-ba3b5948ef3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:06 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:06.288 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9947a259-f79c-4f99-82ea-d8b1202bcdd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:06 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:06.289 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0974cc02-8951-41a1-9da2-3e2e49242414]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:06 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:06.306 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5d8b1b85-249e-4e5e-9089-a3acf309669c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535504, 'reachable_time': 20956, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257168, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:06 np0005465987 systemd[1]: run-netns-ovnmeta\x2dee114210\x2d598c\x2d482f\x2d83c7\x2d26c3363a45c4.mount: Deactivated successfully.
Oct  2 08:16:06 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:06.308 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:16:06 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:06.308 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[3d202e7d-522c-43c5-b0b8-0a43f90d0d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:07.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:07 np0005465987 nova_compute[230713]: 2025-10-02 12:16:07.171 2 DEBUG nova.compute.manager [req-9a0ebef1-0ad2-4dde-a944-34875a6c9a38 req-4e411d32-5c9c-4510-8f34-759cc042616f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-unplugged-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:07 np0005465987 nova_compute[230713]: 2025-10-02 12:16:07.171 2 DEBUG oslo_concurrency.lockutils [req-9a0ebef1-0ad2-4dde-a944-34875a6c9a38 req-4e411d32-5c9c-4510-8f34-759cc042616f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:07 np0005465987 nova_compute[230713]: 2025-10-02 12:16:07.172 2 DEBUG oslo_concurrency.lockutils [req-9a0ebef1-0ad2-4dde-a944-34875a6c9a38 req-4e411d32-5c9c-4510-8f34-759cc042616f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:07 np0005465987 nova_compute[230713]: 2025-10-02 12:16:07.172 2 DEBUG oslo_concurrency.lockutils [req-9a0ebef1-0ad2-4dde-a944-34875a6c9a38 req-4e411d32-5c9c-4510-8f34-759cc042616f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:07 np0005465987 nova_compute[230713]: 2025-10-02 12:16:07.172 2 DEBUG nova.compute.manager [req-9a0ebef1-0ad2-4dde-a944-34875a6c9a38 req-4e411d32-5c9c-4510-8f34-759cc042616f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] No waiting events found dispatching network-vif-unplugged-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:16:07 np0005465987 nova_compute[230713]: 2025-10-02 12:16:07.173 2 DEBUG nova.compute.manager [req-9a0ebef1-0ad2-4dde-a944-34875a6c9a38 req-4e411d32-5c9c-4510-8f34-759cc042616f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-unplugged-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:16:07 np0005465987 nova_compute[230713]: 2025-10-02 12:16:07.173 2 DEBUG nova.compute.manager [req-9a0ebef1-0ad2-4dde-a944-34875a6c9a38 req-4e411d32-5c9c-4510-8f34-759cc042616f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-plugged-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:07 np0005465987 nova_compute[230713]: 2025-10-02 12:16:07.174 2 DEBUG oslo_concurrency.lockutils [req-9a0ebef1-0ad2-4dde-a944-34875a6c9a38 req-4e411d32-5c9c-4510-8f34-759cc042616f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:07 np0005465987 nova_compute[230713]: 2025-10-02 12:16:07.174 2 DEBUG oslo_concurrency.lockutils [req-9a0ebef1-0ad2-4dde-a944-34875a6c9a38 req-4e411d32-5c9c-4510-8f34-759cc042616f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:07 np0005465987 nova_compute[230713]: 2025-10-02 12:16:07.174 2 DEBUG oslo_concurrency.lockutils [req-9a0ebef1-0ad2-4dde-a944-34875a6c9a38 req-4e411d32-5c9c-4510-8f34-759cc042616f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:07 np0005465987 nova_compute[230713]: 2025-10-02 12:16:07.175 2 DEBUG nova.compute.manager [req-9a0ebef1-0ad2-4dde-a944-34875a6c9a38 req-4e411d32-5c9c-4510-8f34-759cc042616f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] No waiting events found dispatching network-vif-plugged-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:16:07 np0005465987 nova_compute[230713]: 2025-10-02 12:16:07.175 2 WARNING nova.compute.manager [req-9a0ebef1-0ad2-4dde-a944-34875a6c9a38 req-4e411d32-5c9c-4510-8f34-759cc042616f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received unexpected event network-vif-plugged-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:16:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:07.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:08 np0005465987 nova_compute[230713]: 2025-10-02 12:16:08.334 2 DEBUG nova.compute.manager [req-0ecb8dc4-cea1-4a08-b661-fe24596e2096 req-c6a06390-e629-4ca3-b91e-ad3ce513d25e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-plugged-8fad7e34-752f-4cb6-96a5-b506478720d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:08 np0005465987 nova_compute[230713]: 2025-10-02 12:16:08.335 2 DEBUG oslo_concurrency.lockutils [req-0ecb8dc4-cea1-4a08-b661-fe24596e2096 req-c6a06390-e629-4ca3-b91e-ad3ce513d25e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:08 np0005465987 nova_compute[230713]: 2025-10-02 12:16:08.335 2 DEBUG oslo_concurrency.lockutils [req-0ecb8dc4-cea1-4a08-b661-fe24596e2096 req-c6a06390-e629-4ca3-b91e-ad3ce513d25e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:08 np0005465987 nova_compute[230713]: 2025-10-02 12:16:08.335 2 DEBUG oslo_concurrency.lockutils [req-0ecb8dc4-cea1-4a08-b661-fe24596e2096 req-c6a06390-e629-4ca3-b91e-ad3ce513d25e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:08 np0005465987 nova_compute[230713]: 2025-10-02 12:16:08.336 2 DEBUG nova.compute.manager [req-0ecb8dc4-cea1-4a08-b661-fe24596e2096 req-c6a06390-e629-4ca3-b91e-ad3ce513d25e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] No waiting events found dispatching network-vif-plugged-8fad7e34-752f-4cb6-96a5-b506478720d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:16:08 np0005465987 nova_compute[230713]: 2025-10-02 12:16:08.336 2 WARNING nova.compute.manager [req-0ecb8dc4-cea1-4a08-b661-fe24596e2096 req-c6a06390-e629-4ca3-b91e-ad3ce513d25e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received unexpected event network-vif-plugged-8fad7e34-752f-4cb6-96a5-b506478720d3 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:16:08 np0005465987 nova_compute[230713]: 2025-10-02 12:16:08.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:08 np0005465987 nova_compute[230713]: 2025-10-02 12:16:08.808 2 INFO nova.virt.libvirt.driver [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Deleting instance files /var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2_del#033[00m
Oct  2 08:16:08 np0005465987 nova_compute[230713]: 2025-10-02 12:16:08.810 2 INFO nova.virt.libvirt.driver [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Deletion of /var/lib/nova/instances/a7e79d88-7a39-4a1a-94de-77b469dd67e2_del complete#033[00m
Oct  2 08:16:08 np0005465987 nova_compute[230713]: 2025-10-02 12:16:08.902 2 INFO nova.compute.manager [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Took 3.71 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:16:08 np0005465987 nova_compute[230713]: 2025-10-02 12:16:08.902 2 DEBUG oslo.service.loopingcall [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:16:08 np0005465987 nova_compute[230713]: 2025-10-02 12:16:08.903 2 DEBUG nova.compute.manager [-] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:16:08 np0005465987 nova_compute[230713]: 2025-10-02 12:16:08.903 2 DEBUG nova.network.neutron [-] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:16:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:09.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:09 np0005465987 nova_compute[230713]: 2025-10-02 12:16:09.180 2 DEBUG nova.network.neutron [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "8fad7e34-752f-4cb6-96a5-b506478720d3", "address": "fa:16:3e:53:0c:3b", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8fad7e34-75", "ovs_interfaceid": "8fad7e34-752f-4cb6-96a5-b506478720d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:09 np0005465987 nova_compute[230713]: 2025-10-02 12:16:09.203 2 DEBUG oslo_concurrency.lockutils [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Releasing lock "refresh_cache-a7e79d88-7a39-4a1a-94de-77b469dd67e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:16:09 np0005465987 nova_compute[230713]: 2025-10-02 12:16:09.239 2 DEBUG oslo_concurrency.lockutils [None req-94aa2fae-0a54-4e35-b156-7f78894a99fe 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-a7e79d88-7a39-4a1a-94de-77b469dd67e2-07033da6-8cf1-4845-b0f0-525229bc100d" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 6.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:09.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:09 np0005465987 nova_compute[230713]: 2025-10-02 12:16:09.991 2 DEBUG nova.compute.manager [req-eba65e00-0dc0-4a95-b8ec-d2e8837f60bd req-ea9f5375-db9e-47af-bedf-b3334daf5e5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-deleted-8fad7e34-752f-4cb6-96a5-b506478720d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:09 np0005465987 nova_compute[230713]: 2025-10-02 12:16:09.991 2 INFO nova.compute.manager [req-eba65e00-0dc0-4a95-b8ec-d2e8837f60bd req-ea9f5375-db9e-47af-bedf-b3334daf5e5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Neutron deleted interface 8fad7e34-752f-4cb6-96a5-b506478720d3; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:16:09 np0005465987 nova_compute[230713]: 2025-10-02 12:16:09.991 2 DEBUG nova.network.neutron [req-eba65e00-0dc0-4a95-b8ec-d2e8837f60bd req-ea9f5375-db9e-47af-bedf-b3334daf5e5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [{"id": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "address": "fa:16:3e:77:ee:eb", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5e52b7f-1b", "ovs_interfaceid": "e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "address": "fa:16:3e:b5:2a:b6", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92dcc2e6-3d", "ovs_interfaceid": "92dcc2e6-3d83-4983-aa84-91b164c9dac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:10 np0005465987 nova_compute[230713]: 2025-10-02 12:16:10.028 2 DEBUG nova.compute.manager [req-eba65e00-0dc0-4a95-b8ec-d2e8837f60bd req-ea9f5375-db9e-47af-bedf-b3334daf5e5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Detach interface failed, port_id=8fad7e34-752f-4cb6-96a5-b506478720d3, reason: Instance a7e79d88-7a39-4a1a-94de-77b469dd67e2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:16:10 np0005465987 nova_compute[230713]: 2025-10-02 12:16:10.799 2 DEBUG nova.network.neutron [-] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:10 np0005465987 nova_compute[230713]: 2025-10-02 12:16:10.816 2 INFO nova.compute.manager [-] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Took 1.91 seconds to deallocate network for instance.#033[00m
Oct  2 08:16:10 np0005465987 nova_compute[230713]: 2025-10-02 12:16:10.858 2 DEBUG oslo_concurrency.lockutils [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:10 np0005465987 nova_compute[230713]: 2025-10-02 12:16:10.858 2 DEBUG oslo_concurrency.lockutils [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:10 np0005465987 nova_compute[230713]: 2025-10-02 12:16:10.923 2 DEBUG oslo_concurrency.processutils [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:11 np0005465987 nova_compute[230713]: 2025-10-02 12:16:11.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:11.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:11 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3663886084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:11 np0005465987 nova_compute[230713]: 2025-10-02 12:16:11.338 2 DEBUG oslo_concurrency.processutils [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:11 np0005465987 nova_compute[230713]: 2025-10-02 12:16:11.343 2 DEBUG nova.compute.provider_tree [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:16:11 np0005465987 nova_compute[230713]: 2025-10-02 12:16:11.358 2 DEBUG nova.scheduler.client.report [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:16:11 np0005465987 nova_compute[230713]: 2025-10-02 12:16:11.386 2 DEBUG oslo_concurrency.lockutils [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:11 np0005465987 nova_compute[230713]: 2025-10-02 12:16:11.415 2 INFO nova.scheduler.client.report [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Deleted allocations for instance a7e79d88-7a39-4a1a-94de-77b469dd67e2#033[00m
Oct  2 08:16:11 np0005465987 nova_compute[230713]: 2025-10-02 12:16:11.637 2 DEBUG oslo_concurrency.lockutils [None req-c5a7f30e-e39a-46f1-84ca-bff7bd276389 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "a7e79d88-7a39-4a1a-94de-77b469dd67e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:11.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:12 np0005465987 nova_compute[230713]: 2025-10-02 12:16:12.213 2 DEBUG nova.compute.manager [req-a8e778d9-6403-4a48-a17a-007810dbb0ba req-a692cf85-66ba-4ac2-9a7a-491b145ec24e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Received event network-vif-deleted-e5e52b7f-1ba2-4c09-9053-c49d88d8d1c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:13.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:13 np0005465987 nova_compute[230713]: 2025-10-02 12:16:13.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:13.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:16:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:15.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:16:15 np0005465987 podman[257192]: 2025-10-02 12:16:15.858867057 +0000 UTC m=+0.060286117 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Oct  2 08:16:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:15.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:16 np0005465987 nova_compute[230713]: 2025-10-02 12:16:16.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:16 np0005465987 nova_compute[230713]: 2025-10-02 12:16:16.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:16:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:17.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:16:17 np0005465987 nova_compute[230713]: 2025-10-02 12:16:17.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:17 np0005465987 nova_compute[230713]: 2025-10-02 12:16:17.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:17.865 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:16:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:17.866 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:16:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:16:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:17.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:16:18 np0005465987 nova_compute[230713]: 2025-10-02 12:16:18.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:19.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:19.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:21 np0005465987 nova_compute[230713]: 2025-10-02 12:16:21.069 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407366.0664246, a7e79d88-7a39-4a1a-94de-77b469dd67e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:16:21 np0005465987 nova_compute[230713]: 2025-10-02 12:16:21.070 2 INFO nova.compute.manager [-] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:16:21 np0005465987 nova_compute[230713]: 2025-10-02 12:16:21.090 2 DEBUG nova.compute.manager [None req-1d377d5c-f919-4332-b5b3-0a135ea64577 - - - - - -] [instance: a7e79d88-7a39-4a1a-94de-77b469dd67e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:21 np0005465987 nova_compute[230713]: 2025-10-02 12:16:21.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:21.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:21.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e221 e221: 3 total, 3 up, 3 in
Oct  2 08:16:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:23.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:23 np0005465987 nova_compute[230713]: 2025-10-02 12:16:23.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:23.869 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:24.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:24 np0005465987 podman[257214]: 2025-10-02 12:16:24.835865753 +0000 UTC m=+0.054185641 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:16:24 np0005465987 podman[257215]: 2025-10-02 12:16:24.925456694 +0000 UTC m=+0.143150465 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd)
Oct  2 08:16:24 np0005465987 podman[257213]: 2025-10-02 12:16:24.944542797 +0000 UTC m=+0.168321665 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:16:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:25.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:16:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:26.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:16:26 np0005465987 nova_compute[230713]: 2025-10-02 12:16:26.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e222 e222: 3 total, 3 up, 3 in
Oct  2 08:16:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:27.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:27.943 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:27.943 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:27.943 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:28.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:28 np0005465987 nova_compute[230713]: 2025-10-02 12:16:28.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:16:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:29.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:16:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e223 e223: 3 total, 3 up, 3 in
Oct  2 08:16:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:16:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:30.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:16:31 np0005465987 nova_compute[230713]: 2025-10-02 12:16:31.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:16:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:31.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:16:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:16:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:32.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:16:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:33.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:33 np0005465987 nova_compute[230713]: 2025-10-02 12:16:33.698 2 DEBUG oslo_concurrency.lockutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "04b6ea5a-b329-4bd1-bf27-48a8644550c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:33 np0005465987 nova_compute[230713]: 2025-10-02 12:16:33.698 2 DEBUG oslo_concurrency.lockutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "04b6ea5a-b329-4bd1-bf27-48a8644550c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:33 np0005465987 nova_compute[230713]: 2025-10-02 12:16:33.724 2 DEBUG nova.compute.manager [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:16:33 np0005465987 nova_compute[230713]: 2025-10-02 12:16:33.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:33 np0005465987 nova_compute[230713]: 2025-10-02 12:16:33.809 2 DEBUG oslo_concurrency.lockutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:33 np0005465987 nova_compute[230713]: 2025-10-02 12:16:33.810 2 DEBUG oslo_concurrency.lockutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:33 np0005465987 nova_compute[230713]: 2025-10-02 12:16:33.818 2 DEBUG nova.virt.hardware [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:16:33 np0005465987 nova_compute[230713]: 2025-10-02 12:16:33.819 2 INFO nova.compute.claims [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:16:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:34.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.031 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:34 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/37436492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.438 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.445 2 DEBUG nova.compute.provider_tree [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.461 2 DEBUG nova.scheduler.client.report [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.485 2 DEBUG oslo_concurrency.lockutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.486 2 DEBUG nova.compute.manager [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.533 2 DEBUG nova.compute.manager [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.533 2 DEBUG nova.network.neutron [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.554 2 INFO nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.580 2 DEBUG nova.compute.manager [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.667 2 DEBUG nova.compute.manager [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.669 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.670 2 INFO nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Creating image(s)#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.704 2 DEBUG nova.storage.rbd_utils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.727 2 DEBUG nova.storage.rbd_utils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.749 2 DEBUG nova.storage.rbd_utils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.752 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.818 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.819 2 DEBUG oslo_concurrency.lockutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.819 2 DEBUG oslo_concurrency.lockutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.819 2 DEBUG oslo_concurrency.lockutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.845 2 DEBUG nova.storage.rbd_utils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:34 np0005465987 nova_compute[230713]: 2025-10-02 12:16:34.849 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.112 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.173 2 DEBUG nova.network.neutron [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.173 2 DEBUG nova.compute.manager [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.178 2 DEBUG nova.storage.rbd_utils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] resizing rbd image 04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:16:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:16:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:35.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.285 2 DEBUG nova.objects.instance [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lazy-loading 'migration_context' on Instance uuid 04b6ea5a-b329-4bd1-bf27-48a8644550c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.302 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.303 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Ensure instance console log exists: /var/lib/nova/instances/04b6ea5a-b329-4bd1-bf27-48a8644550c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.303 2 DEBUG oslo_concurrency.lockutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.304 2 DEBUG oslo_concurrency.lockutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.304 2 DEBUG oslo_concurrency.lockutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.305 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.309 2 WARNING nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.314 2 DEBUG nova.virt.libvirt.host [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.314 2 DEBUG nova.virt.libvirt.host [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.319 2 DEBUG nova.virt.libvirt.host [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.319 2 DEBUG nova.virt.libvirt.host [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.320 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.320 2 DEBUG nova.virt.hardware [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.321 2 DEBUG nova.virt.hardware [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.321 2 DEBUG nova.virt.hardware [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.321 2 DEBUG nova.virt.hardware [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.322 2 DEBUG nova.virt.hardware [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.322 2 DEBUG nova.virt.hardware [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.322 2 DEBUG nova.virt.hardware [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.323 2 DEBUG nova.virt.hardware [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.323 2 DEBUG nova.virt.hardware [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.323 2 DEBUG nova.virt.hardware [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.323 2 DEBUG nova.virt.hardware [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.326 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:16:35 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/148503303' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.791 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.816 2 DEBUG nova.storage.rbd_utils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:35 np0005465987 nova_compute[230713]: 2025-10-02 12:16:35.820 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:36 np0005465987 nova_compute[230713]: 2025-10-02 12:16:36.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:16:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1814412693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:16:36 np0005465987 nova_compute[230713]: 2025-10-02 12:16:36.320 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:36 np0005465987 nova_compute[230713]: 2025-10-02 12:16:36.323 2 DEBUG nova.objects.instance [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lazy-loading 'pci_devices' on Instance uuid 04b6ea5a-b329-4bd1-bf27-48a8644550c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:16:36 np0005465987 nova_compute[230713]: 2025-10-02 12:16:36.350 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  <uuid>04b6ea5a-b329-4bd1-bf27-48a8644550c5</uuid>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  <name>instance-00000048</name>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <nova:name>tempest-ListImageFiltersTestJSON-server-2017167006</nova:name>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:16:35</nova:creationTime>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <nova:user uuid="03f516d263c8402682568b55b658f885">tempest-ListImageFiltersTestJSON-713501412-project-member</nova:user>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <nova:project uuid="1f9bd65bc7864ca18e1c478ef7e03926">tempest-ListImageFiltersTestJSON-713501412</nova:project>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <entry name="serial">04b6ea5a-b329-4bd1-bf27-48a8644550c5</entry>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <entry name="uuid">04b6ea5a-b329-4bd1-bf27-48a8644550c5</entry>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk.config">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/04b6ea5a-b329-4bd1-bf27-48a8644550c5/console.log" append="off"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:16:36 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:16:36 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:16:36 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:16:36 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 08:16:36 np0005465987 nova_compute[230713]: 2025-10-02 12:16:36.550 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:16:36 np0005465987 nova_compute[230713]: 2025-10-02 12:16:36.551 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:16:36 np0005465987 nova_compute[230713]: 2025-10-02 12:16:36.552 2 INFO nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Using config drive
Oct  2 08:16:36 np0005465987 nova_compute[230713]: 2025-10-02 12:16:36.696 2 DEBUG nova.storage.rbd_utils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:36 np0005465987 nova_compute[230713]: 2025-10-02 12:16:36.872 2 INFO nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Creating config drive at /var/lib/nova/instances/04b6ea5a-b329-4bd1-bf27-48a8644550c5/disk.config
Oct  2 08:16:36 np0005465987 nova_compute[230713]: 2025-10-02 12:16:36.882 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/04b6ea5a-b329-4bd1-bf27-48a8644550c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoc9sz3xr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:37 np0005465987 nova_compute[230713]: 2025-10-02 12:16:37.045 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/04b6ea5a-b329-4bd1-bf27-48a8644550c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoc9sz3xr" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:37 np0005465987 nova_compute[230713]: 2025-10-02 12:16:37.093 2 DEBUG nova.storage.rbd_utils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] rbd image 04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:37 np0005465987 nova_compute[230713]: 2025-10-02 12:16:37.101 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/04b6ea5a-b329-4bd1-bf27-48a8644550c5/disk.config 04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:37.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:37 np0005465987 nova_compute[230713]: 2025-10-02 12:16:37.305 2 DEBUG oslo_concurrency.processutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/04b6ea5a-b329-4bd1-bf27-48a8644550c5/disk.config 04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:37 np0005465987 nova_compute[230713]: 2025-10-02 12:16:37.306 2 INFO nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Deleting local config drive /var/lib/nova/instances/04b6ea5a-b329-4bd1-bf27-48a8644550c5/disk.config because it was imported into RBD.
Oct  2 08:16:37 np0005465987 systemd-machined[188335]: New machine qemu-32-instance-00000048.
Oct  2 08:16:37 np0005465987 systemd[1]: Started Virtual Machine qemu-32-instance-00000048.
Oct  2 08:16:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:38.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.395 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.397 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.429 2 DEBUG nova.compute.manager [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.503 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407398.5031514, 04b6ea5a-b329-4bd1-bf27-48a8644550c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.504 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] VM Resumed (Lifecycle Event)
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.506 2 DEBUG nova.compute.manager [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.506 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.510 2 INFO nova.virt.libvirt.driver [-] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Instance spawned successfully.
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.510 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.550 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.556 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.556 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.556 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.557 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.557 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.557 2 DEBUG nova.virt.libvirt.driver [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.561 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.593 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.594 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.596 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.596 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407398.5055325, 04b6ea5a-b329-4bd1-bf27-48a8644550c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.596 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] VM Started (Lifecycle Event)
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.606 2 DEBUG nova.virt.hardware [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.606 2 INFO nova.compute.claims [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.637 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.641 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.661 2 INFO nova.compute.manager [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Took 3.99 seconds to spawn the instance on the hypervisor.
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.662 2 DEBUG nova.compute.manager [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.674 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:16:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.726 2 INFO nova.compute.manager [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Took 4.95 seconds to build instance.
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.742 2 DEBUG oslo_concurrency.lockutils [None req-28d2354b-8602-4d8e-9dbb-9f1a10c804fd 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "04b6ea5a-b329-4bd1-bf27-48a8644550c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.745 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:38 np0005465987 nova_compute[230713]: 2025-10-02 12:16:38.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:39 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/751861772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:39.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.233 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.238 2 DEBUG nova.compute.provider_tree [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.258 2 DEBUG nova.scheduler.client.report [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.300 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.302 2 DEBUG nova.compute.manager [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.373 2 DEBUG nova.compute.manager [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.374 2 DEBUG nova.network.neutron [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.405 2 INFO nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.422 2 DEBUG nova.compute.manager [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.555 2 DEBUG nova.compute.manager [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.556 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.557 2 INFO nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Creating image(s)
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.584 2 DEBUG nova.storage.rbd_utils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image 6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.609 2 DEBUG nova.storage.rbd_utils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image 6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.636 2 DEBUG nova.storage.rbd_utils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image 6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.641 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.678 2 DEBUG nova.policy [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e627e098f2b465d97099fee8f489b71', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.705 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.706 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.707 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.707 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.734 2 DEBUG nova.storage.rbd_utils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image 6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:39 np0005465987 nova_compute[230713]: 2025-10-02 12:16:39.739 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:40.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:40 np0005465987 nova_compute[230713]: 2025-10-02 12:16:40.344 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:40 np0005465987 nova_compute[230713]: 2025-10-02 12:16:40.417 2 DEBUG nova.storage.rbd_utils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] resizing rbd image 6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:16:40 np0005465987 nova_compute[230713]: 2025-10-02 12:16:40.526 2 DEBUG nova.objects.instance [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'migration_context' on Instance uuid 6fbb173d-d50b-4c82-bfb8-d8b3384c81df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:16:40 np0005465987 nova_compute[230713]: 2025-10-02 12:16:40.546 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:16:40 np0005465987 nova_compute[230713]: 2025-10-02 12:16:40.546 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Ensure instance console log exists: /var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:16:40 np0005465987 nova_compute[230713]: 2025-10-02 12:16:40.547 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:40 np0005465987 nova_compute[230713]: 2025-10-02 12:16:40.547 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:40 np0005465987 nova_compute[230713]: 2025-10-02 12:16:40.547 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:40 np0005465987 nova_compute[230713]: 2025-10-02 12:16:40.672 2 DEBUG nova.network.neutron [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Successfully created port: 62e8e068-3aad-45d3-984e-6d434301bf50 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:16:40 np0005465987 nova_compute[230713]: 2025-10-02 12:16:40.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:16:40 np0005465987 nova_compute[230713]: 2025-10-02 12:16:40.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:16:40 np0005465987 nova_compute[230713]: 2025-10-02 12:16:40.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:16:40 np0005465987 nova_compute[230713]: 2025-10-02 12:16:40.992 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:16:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:41.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.251 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-04b6ea5a-b329-4bd1-bf27-48a8644550c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.251 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-04b6ea5a-b329-4bd1-bf27-48a8644550c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.251 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.251 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 04b6ea5a-b329-4bd1-bf27-48a8644550c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.463 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.589 2 DEBUG nova.network.neutron [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Successfully updated port: 62e8e068-3aad-45d3-984e-6d434301bf50 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.601 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.602 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquired lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.602 2 DEBUG nova.network.neutron [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.710 2 DEBUG nova.compute.manager [req-e55864ab-77ad-4143-8b38-5dac182725d1 req-7ebb0c7a-320c-42fb-a00f-e38d593dd257 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-changed-62e8e068-3aad-45d3-984e-6d434301bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.711 2 DEBUG nova.compute.manager [req-e55864ab-77ad-4143-8b38-5dac182725d1 req-7ebb0c7a-320c-42fb-a00f-e38d593dd257 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Refreshing instance network info cache due to event network-changed-62e8e068-3aad-45d3-984e-6d434301bf50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.711 2 DEBUG oslo_concurrency.lockutils [req-e55864ab-77ad-4143-8b38-5dac182725d1 req-7ebb0c7a-320c-42fb-a00f-e38d593dd257 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.764 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.777 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-04b6ea5a-b329-4bd1-bf27-48a8644550c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.778 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:16:41 np0005465987 nova_compute[230713]: 2025-10-02 12:16:41.790 2 DEBUG nova.network.neutron [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:16:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:42.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e224 e224: 3 total, 3 up, 3 in
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.662 2 DEBUG nova.network.neutron [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updating instance_info_cache with network_info: [{"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.690 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Releasing lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.691 2 DEBUG nova.compute.manager [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Instance network_info: |[{"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.692 2 DEBUG oslo_concurrency.lockutils [req-e55864ab-77ad-4143-8b38-5dac182725d1 req-7ebb0c7a-320c-42fb-a00f-e38d593dd257 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.692 2 DEBUG nova.network.neutron [req-e55864ab-77ad-4143-8b38-5dac182725d1 req-7ebb0c7a-320c-42fb-a00f-e38d593dd257 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Refreshing network info cache for port 62e8e068-3aad-45d3-984e-6d434301bf50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.697 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Start _get_guest_xml network_info=[{"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.704 2 WARNING nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.711 2 DEBUG nova.virt.libvirt.host [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.712 2 DEBUG nova.virt.libvirt.host [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.721 2 DEBUG nova.virt.libvirt.host [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.722 2 DEBUG nova.virt.libvirt.host [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.723 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.723 2 DEBUG nova.virt.hardware [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.723 2 DEBUG nova.virt.hardware [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.724 2 DEBUG nova.virt.hardware [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.724 2 DEBUG nova.virt.hardware [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.724 2 DEBUG nova.virt.hardware [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.724 2 DEBUG nova.virt.hardware [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.724 2 DEBUG nova.virt.hardware [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.725 2 DEBUG nova.virt.hardware [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.725 2 DEBUG nova.virt.hardware [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.725 2 DEBUG nova.virt.hardware [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.725 2 DEBUG nova.virt.hardware [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:16:42 np0005465987 nova_compute[230713]: 2025-10-02 12:16:42.727 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:43.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:16:43 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1354439005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:16:43 np0005465987 nova_compute[230713]: 2025-10-02 12:16:43.490 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.763s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:43 np0005465987 nova_compute[230713]: 2025-10-02 12:16:43.523 2 DEBUG nova.storage.rbd_utils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image 6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:43 np0005465987 nova_compute[230713]: 2025-10-02 12:16:43.527 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:43 np0005465987 nova_compute[230713]: 2025-10-02 12:16:43.771 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:43 np0005465987 nova_compute[230713]: 2025-10-02 12:16:43.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:43 np0005465987 nova_compute[230713]: 2025-10-02 12:16:43.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:16:43 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3515920537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:16:43 np0005465987 nova_compute[230713]: 2025-10-02 12:16:43.991 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:43 np0005465987 nova_compute[230713]: 2025-10-02 12:16:43.992 2 DEBUG nova.virt.libvirt.vif [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:16:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632945682',display_name='tempest-tempest.common.compute-instance-632945682',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632945682',id=74,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATuG0Igs/aClVmsQdGPZDgNmvV7Bgyq0Y4nFm5M4crDAlAvWuUKcpXWJIzqspIHqd22HRQCaWbqLnJi4PrcYo9nS6OUtnl8L8k0zpd1d2KcZKsipjVMcOXUmRyImfEBhg==',key_name='tempest-keypair-716714059',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-j8xh03md',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=6fbb173d-d50b-4c82-bfb8-d8b3384c81df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:16:43 np0005465987 nova_compute[230713]: 2025-10-02 12:16:43.993 2 DEBUG nova.network.os_vif_util [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:43 np0005465987 nova_compute[230713]: 2025-10-02 12:16:43.994 2 DEBUG nova.network.os_vif_util [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:55:af,bridge_name='br-int',has_traffic_filtering=True,id=62e8e068-3aad-45d3-984e-6d434301bf50,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62e8e068-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:43 np0005465987 nova_compute[230713]: 2025-10-02 12:16:43.994 2 DEBUG nova.objects.instance [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6fbb173d-d50b-4c82-bfb8-d8b3384c81df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.008 2 DEBUG nova.network.neutron [req-e55864ab-77ad-4143-8b38-5dac182725d1 req-7ebb0c7a-320c-42fb-a00f-e38d593dd257 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updated VIF entry in instance network info cache for port 62e8e068-3aad-45d3-984e-6d434301bf50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.009 2 DEBUG nova.network.neutron [req-e55864ab-77ad-4143-8b38-5dac182725d1 req-7ebb0c7a-320c-42fb-a00f-e38d593dd257 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updating instance_info_cache with network_info: [{"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.017 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  <uuid>6fbb173d-d50b-4c82-bfb8-d8b3384c81df</uuid>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  <name>instance-0000004a</name>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <nova:name>tempest-tempest.common.compute-instance-632945682</nova:name>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:16:42</nova:creationTime>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <nova:port uuid="62e8e068-3aad-45d3-984e-6d434301bf50">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <entry name="serial">6fbb173d-d50b-4c82-bfb8-d8b3384c81df</entry>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <entry name="uuid">6fbb173d-d50b-4c82-bfb8-d8b3384c81df</entry>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk.config">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:63:55:af"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <target dev="tap62e8e068-3a"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df/console.log" append="off"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:16:44 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:16:44 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:16:44 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:16:44 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.020 2 DEBUG nova.compute.manager [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Preparing to wait for external event network-vif-plugged-62e8e068-3aad-45d3-984e-6d434301bf50 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.020 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.021 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.021 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.022 2 DEBUG nova.virt.libvirt.vif [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:16:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632945682',display_name='tempest-tempest.common.compute-instance-632945682',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632945682',id=74,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATuG0Igs/aClVmsQdGPZDgNmvV7Bgyq0Y4nFm5M4crDAlAvWuUKcpXWJIzqspIHqd22HRQCaWbqLnJi4PrcYo9nS6OUtnl8L8k0zpd1d2KcZKsipjVMcOXUmRyImfEBhg==',key_name='tempest-keypair-716714059',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-j8xh03md',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:16:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=6fbb173d-d50b-4c82-bfb8-d8b3384c81df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.023 2 DEBUG nova.network.os_vif_util [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.024 2 DEBUG nova.network.os_vif_util [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:55:af,bridge_name='br-int',has_traffic_filtering=True,id=62e8e068-3aad-45d3-984e-6d434301bf50,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62e8e068-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.025 2 DEBUG os_vif [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:55:af,bridge_name='br-int',has_traffic_filtering=True,id=62e8e068-3aad-45d3-984e-6d434301bf50,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62e8e068-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.026 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:44.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62e8e068-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62e8e068-3a, col_values=(('external_ids', {'iface-id': '62e8e068-3aad-45d3-984e-6d434301bf50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:55:af', 'vm-uuid': '6fbb173d-d50b-4c82-bfb8-d8b3384c81df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:44 np0005465987 NetworkManager[44910]: <info>  [1759407404.0363] manager: (tap62e8e068-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.043 2 DEBUG oslo_concurrency.lockutils [req-e55864ab-77ad-4143-8b38-5dac182725d1 req-7ebb0c7a-320c-42fb-a00f-e38d593dd257 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.044 2 INFO os_vif [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:55:af,bridge_name='br-int',has_traffic_filtering=True,id=62e8e068-3aad-45d3-984e-6d434301bf50,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62e8e068-3a')#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.211 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.212 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.212 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:63:55:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.213 2 INFO nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Using config drive#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.336 2 DEBUG nova.storage.rbd_utils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image 6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.712 2 INFO nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Creating config drive at /var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df/disk.config#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.811 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnp54ymse execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:44 np0005465987 nova_compute[230713]: 2025-10-02 12:16:44.962 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnp54ymse" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.005 2 DEBUG nova.storage.rbd_utils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] rbd image 6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.009 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df/disk.config 6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:45.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.274 2 DEBUG oslo_concurrency.processutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df/disk.config 6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.276 2 INFO nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Deleting local config drive /var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df/disk.config because it was imported into RBD.#033[00m
Oct  2 08:16:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e225 e225: 3 total, 3 up, 3 in
Oct  2 08:16:45 np0005465987 kernel: tap62e8e068-3a: entered promiscuous mode
Oct  2 08:16:45 np0005465987 NetworkManager[44910]: <info>  [1759407405.3293] manager: (tap62e8e068-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Oct  2 08:16:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:45Z|00221|binding|INFO|Claiming lport 62e8e068-3aad-45d3-984e-6d434301bf50 for this chassis.
Oct  2 08:16:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:45Z|00222|binding|INFO|62e8e068-3aad-45d3-984e-6d434301bf50: Claiming fa:16:3e:63:55:af 10.100.0.10
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:45 np0005465987 NetworkManager[44910]: <info>  [1759407405.3479] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Oct  2 08:16:45 np0005465987 NetworkManager[44910]: <info>  [1759407405.3493] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.355 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:55:af 10.100.0.10'], port_security=['fa:16:3e:63:55:af 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6fbb173d-d50b-4c82-bfb8-d8b3384c81df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd3b62a4-b346-4888-8f6a-ca787221af6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=62e8e068-3aad-45d3-984e-6d434301bf50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.358 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 62e8e068-3aad-45d3-984e-6d434301bf50 in datapath ee114210-598c-482f-83c7-26c3363a45c4 bound to our chassis
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.359 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee114210-598c-482f-83c7-26c3363a45c4
Oct  2 08:16:45 np0005465987 systemd-udevd[257965]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:16:45 np0005465987 systemd-machined[188335]: New machine qemu-33-instance-0000004a.
Oct  2 08:16:45 np0005465987 NetworkManager[44910]: <info>  [1759407405.3737] device (tap62e8e068-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:16:45 np0005465987 NetworkManager[44910]: <info>  [1759407405.3760] device (tap62e8e068-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.375 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2e64be1c-327a-4c4d-9854-5ae5f8e0242e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.377 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee114210-51 in ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.378 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee114210-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.378 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[27dcb764-bd6b-4097-9bce-2fc38eb87273]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.379 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c6875a-9abc-47f0-8bb8-923e85e6347c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 systemd[1]: Started Virtual Machine qemu-33-instance-0000004a.
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.391 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[3c166b1a-4b97-471a-b8ec-01a423edf17b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.414 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8a605b-eed5-4722-be15-e13d2bcde48b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.442 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[c804dd61-8c9f-4f61-aac6-5455a3a4a33b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 NetworkManager[44910]: <info>  [1759407405.4508] manager: (tapee114210-50): new Veth device (/org/freedesktop/NetworkManager/Devices/129)
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.452 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2b16090a-a3e9-47e1-9188-1d452b92e07d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.484 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5040feb6-6a8e-4259-a213-ea349e8ecdd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.491 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc70e93-f077-4c64-838c-6b2a0388f48e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 NetworkManager[44910]: <info>  [1759407405.5116] device (tapee114210-50): carrier: link connected
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.519 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd31ffd-e9ae-4d9e-adc9-baa0d11eb789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.539 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1db3ff-e754-45a5-930e-b9f6a421d2b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547408, 'reachable_time': 39230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257999, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.556 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6d30fd67-7a3f-4d51-ba1e-9c0740798b84]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:8699'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547408, 'tstamp': 547408}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258000, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.579 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[32a76474-9ee7-4dcc-9951-ca42187ee832]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547408, 'reachable_time': 39230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258001, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.613 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e9bcc323-5ddc-45a3-b29d-6f09895fe233]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:45Z|00223|binding|INFO|Setting lport 62e8e068-3aad-45d3-984e-6d434301bf50 ovn-installed in OVS
Oct  2 08:16:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:45Z|00224|binding|INFO|Setting lport 62e8e068-3aad-45d3-984e-6d434301bf50 up in Southbound
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.676 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[89bcc301-c3f4-43b5-958a-5a7d80610874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.678 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.678 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.678 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee114210-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:45 np0005465987 kernel: tapee114210-50: entered promiscuous mode
Oct  2 08:16:45 np0005465987 NetworkManager[44910]: <info>  [1759407405.6806] manager: (tapee114210-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.682 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee114210-50, col_values=(('external_ids', {'iface-id': '544b20f9-e292-46bc-a770-180c318d534e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:16:45Z|00225|binding|INFO|Releasing lport 544b20f9-e292-46bc-a770-180c318d534e from this chassis (sb_readonly=0)
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.699 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee114210-598c-482f-83c7-26c3363a45c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee114210-598c-482f-83c7-26c3363a45c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.700 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3747a9-da8f-4a8a-9319-ac46fefd200f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.701 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-ee114210-598c-482f-83c7-26c3363a45c4
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/ee114210-598c-482f-83c7-26c3363a45c4.pid.haproxy
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID ee114210-598c-482f-83c7-26c3363a45c4
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  2 08:16:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:16:45.703 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'env', 'PROCESS_TAG=haproxy-ee114210-598c-482f-83c7-26c3363a45c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee114210-598c-482f-83c7-26c3363a45c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  2 08:16:45 np0005465987 nova_compute[230713]: 2025-10-02 12:16:45.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:16:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:46.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:46 np0005465987 podman[258033]: 2025-10-02 12:16:46.066603425 +0000 UTC m=+0.056811522 container create 11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:16:46 np0005465987 systemd[1]: Started libpod-conmon-11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3.scope.
Oct  2 08:16:46 np0005465987 podman[258033]: 2025-10-02 12:16:46.0380362 +0000 UTC m=+0.028244327 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:16:46 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:16:46 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9431b0a525e33ab78eaa4b9eccc66a410abd187d32109ab853068d2b0c757b25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:16:46 np0005465987 podman[258046]: 2025-10-02 12:16:46.169953524 +0000 UTC m=+0.065145530 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, 
org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:16:46 np0005465987 podman[258033]: 2025-10-02 12:16:46.180250267 +0000 UTC m=+0.170458414 container init 11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:16:46 np0005465987 podman[258033]: 2025-10-02 12:16:46.18729131 +0000 UTC m=+0.177499427 container start 11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:16:46 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[258061]: [NOTICE]   (258071) : New worker (258073) forked
Oct  2 08:16:46 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[258061]: [NOTICE]   (258071) : Loading success.
Oct  2 08:16:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e226 e226: 3 total, 3 up, 3 in
Oct  2 08:16:46 np0005465987 nova_compute[230713]: 2025-10-02 12:16:46.928 2 DEBUG nova.compute.manager [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-vif-plugged-62e8e068-3aad-45d3-984e-6d434301bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:46 np0005465987 nova_compute[230713]: 2025-10-02 12:16:46.928 2 DEBUG oslo_concurrency.lockutils [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:46 np0005465987 nova_compute[230713]: 2025-10-02 12:16:46.929 2 DEBUG oslo_concurrency.lockutils [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:46 np0005465987 nova_compute[230713]: 2025-10-02 12:16:46.929 2 DEBUG oslo_concurrency.lockutils [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:46 np0005465987 nova_compute[230713]: 2025-10-02 12:16:46.929 2 DEBUG nova.compute.manager [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Processing event network-vif-plugged-62e8e068-3aad-45d3-984e-6d434301bf50 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:16:46 np0005465987 nova_compute[230713]: 2025-10-02 12:16:46.930 2 DEBUG nova.compute.manager [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-vif-plugged-62e8e068-3aad-45d3-984e-6d434301bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:46 np0005465987 nova_compute[230713]: 2025-10-02 12:16:46.930 2 DEBUG oslo_concurrency.lockutils [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:46 np0005465987 nova_compute[230713]: 2025-10-02 12:16:46.930 2 DEBUG oslo_concurrency.lockutils [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:46 np0005465987 nova_compute[230713]: 2025-10-02 12:16:46.931 2 DEBUG oslo_concurrency.lockutils [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:46 np0005465987 nova_compute[230713]: 2025-10-02 12:16:46.931 2 DEBUG nova.compute.manager [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] No waiting events found dispatching network-vif-plugged-62e8e068-3aad-45d3-984e-6d434301bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:16:46 np0005465987 nova_compute[230713]: 2025-10-02 12:16:46.931 2 WARNING nova.compute.manager [req-ef4715aa-c04c-4bb5-ace5-735fb94ed577 req-d494d450-0940-4103-9a34-f3723a7e6cfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received unexpected event network-vif-plugged-62e8e068-3aad-45d3-984e-6d434301bf50 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.020 2 DEBUG nova.compute.manager [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.021 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407407.0200026, 6fbb173d-d50b-4c82-bfb8-d8b3384c81df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.022 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] VM Started (Lifecycle Event)#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.024 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.027 2 INFO nova.virt.libvirt.driver [-] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Instance spawned successfully.#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.027 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.053 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.059 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.061 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.061 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.062 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.062 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.063 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.063 2 DEBUG nova.virt.libvirt.driver [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.109 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.110 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407407.0201886, 6fbb173d-d50b-4c82-bfb8-d8b3384c81df => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.110 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.141 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.144 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407407.023959, 6fbb173d-d50b-4c82-bfb8-d8b3384c81df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.144 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.149 2 INFO nova.compute.manager [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Took 7.59 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.150 2 DEBUG nova.compute.manager [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.159 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.160 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.190 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:16:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:47.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.254 2 INFO nova.compute.manager [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Took 8.71 seconds to build instance.#033[00m
Oct  2 08:16:47 np0005465987 nova_compute[230713]: 2025-10-02 12:16:47.274 2 DEBUG oslo_concurrency.lockutils [None req-ef3df813-b20b-480e-898b-5679ac263458 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:48.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:48 np0005465987 nova_compute[230713]: 2025-10-02 12:16:48.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:48 np0005465987 nova_compute[230713]: 2025-10-02 12:16:48.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:48 np0005465987 nova_compute[230713]: 2025-10-02 12:16:48.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:48 np0005465987 nova_compute[230713]: 2025-10-02 12:16:48.988 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:48 np0005465987 nova_compute[230713]: 2025-10-02 12:16:48.989 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:48 np0005465987 nova_compute[230713]: 2025-10-02 12:16:48.990 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:48 np0005465987 nova_compute[230713]: 2025-10-02 12:16:48.990 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:16:48 np0005465987 nova_compute[230713]: 2025-10-02 12:16:48.991 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:49.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1989877382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.443 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.508 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.509 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.513 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.513 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.693 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.693 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4362MB free_disk=20.891986846923828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.694 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.694 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.769 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 04b6ea5a-b329-4bd1-bf27-48a8644550c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.770 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 6fbb173d-d50b-4c82-bfb8-d8b3384c81df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.770 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.770 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:16:49 np0005465987 nova_compute[230713]: 2025-10-02 12:16:49.814 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:50.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:50 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/164442424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:50 np0005465987 nova_compute[230713]: 2025-10-02 12:16:50.290 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:50 np0005465987 nova_compute[230713]: 2025-10-02 12:16:50.297 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:16:50 np0005465987 nova_compute[230713]: 2025-10-02 12:16:50.315 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:16:50 np0005465987 nova_compute[230713]: 2025-10-02 12:16:50.336 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:16:50 np0005465987 nova_compute[230713]: 2025-10-02 12:16:50.337 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:50 np0005465987 nova_compute[230713]: 2025-10-02 12:16:50.566 2 DEBUG nova.compute.manager [None req-5ba4e8f1-709f-48f7-a267-09ac9d44fbab 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:50 np0005465987 nova_compute[230713]: 2025-10-02 12:16:50.626 2 INFO nova.compute.manager [None req-5ba4e8f1-709f-48f7-a267-09ac9d44fbab 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] instance snapshotting#033[00m
Oct  2 08:16:50 np0005465987 nova_compute[230713]: 2025-10-02 12:16:50.870 2 INFO nova.virt.libvirt.driver [None req-5ba4e8f1-709f-48f7-a267-09ac9d44fbab 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Beginning live snapshot process#033[00m
Oct  2 08:16:51 np0005465987 nova_compute[230713]: 2025-10-02 12:16:51.048 2 DEBUG nova.virt.libvirt.imagebackend [None req-5ba4e8f1-709f-48f7-a267-09ac9d44fbab 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:16:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:51.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:51 np0005465987 nova_compute[230713]: 2025-10-02 12:16:51.336 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:51 np0005465987 nova_compute[230713]: 2025-10-02 12:16:51.337 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:51 np0005465987 nova_compute[230713]: 2025-10-02 12:16:51.337 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:51 np0005465987 nova_compute[230713]: 2025-10-02 12:16:51.337 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:16:51 np0005465987 nova_compute[230713]: 2025-10-02 12:16:51.380 2 DEBUG nova.storage.rbd_utils [None req-5ba4e8f1-709f-48f7-a267-09ac9d44fbab 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] creating snapshot(d1bdb548dac14231828c169bc6c96672) on rbd image(04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:16:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e227 e227: 3 total, 3 up, 3 in
Oct  2 08:16:51 np0005465987 nova_compute[230713]: 2025-10-02 12:16:51.684 2 DEBUG nova.storage.rbd_utils [None req-5ba4e8f1-709f-48f7-a267-09ac9d44fbab 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] cloning vms/04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk@d1bdb548dac14231828c169bc6c96672 to images/358aa82b-067b-458c-9095-e40a3a914f59 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:16:51 np0005465987 nova_compute[230713]: 2025-10-02 12:16:51.842 2 DEBUG nova.storage.rbd_utils [None req-5ba4e8f1-709f-48f7-a267-09ac9d44fbab 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] flattening images/358aa82b-067b-458c-9095-e40a3a914f59 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:16:51 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct  2 08:16:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:16:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:52.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:16:52 np0005465987 nova_compute[230713]: 2025-10-02 12:16:52.274 2 DEBUG nova.storage.rbd_utils [None req-5ba4e8f1-709f-48f7-a267-09ac9d44fbab 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] removing snapshot(d1bdb548dac14231828c169bc6c96672) on rbd image(04b6ea5a-b329-4bd1-bf27-48a8644550c5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:16:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e228 e228: 3 total, 3 up, 3 in
Oct  2 08:16:52 np0005465987 nova_compute[230713]: 2025-10-02 12:16:52.671 2 DEBUG nova.storage.rbd_utils [None req-5ba4e8f1-709f-48f7-a267-09ac9d44fbab 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] creating snapshot(snap) on rbd image(358aa82b-067b-458c-9095-e40a3a914f59) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:16:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:53.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e229 e229: 3 total, 3 up, 3 in
Oct  2 08:16:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e230 e230: 3 total, 3 up, 3 in
Oct  2 08:16:53 np0005465987 nova_compute[230713]: 2025-10-02 12:16:53.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:53 np0005465987 nova_compute[230713]: 2025-10-02 12:16:53.890 2 DEBUG nova.compute.manager [req-de9e3a2c-1392-4278-a41d-70ffe0eec28f req-0ea3bfb8-ead5-4910-8c18-672c89da15f2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-changed-62e8e068-3aad-45d3-984e-6d434301bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:53 np0005465987 nova_compute[230713]: 2025-10-02 12:16:53.890 2 DEBUG nova.compute.manager [req-de9e3a2c-1392-4278-a41d-70ffe0eec28f req-0ea3bfb8-ead5-4910-8c18-672c89da15f2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Refreshing instance network info cache due to event network-changed-62e8e068-3aad-45d3-984e-6d434301bf50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:16:53 np0005465987 nova_compute[230713]: 2025-10-02 12:16:53.890 2 DEBUG oslo_concurrency.lockutils [req-de9e3a2c-1392-4278-a41d-70ffe0eec28f req-0ea3bfb8-ead5-4910-8c18-672c89da15f2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:16:53 np0005465987 nova_compute[230713]: 2025-10-02 12:16:53.891 2 DEBUG oslo_concurrency.lockutils [req-de9e3a2c-1392-4278-a41d-70ffe0eec28f req-0ea3bfb8-ead5-4910-8c18-672c89da15f2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:16:53 np0005465987 nova_compute[230713]: 2025-10-02 12:16:53.891 2 DEBUG nova.network.neutron [req-de9e3a2c-1392-4278-a41d-70ffe0eec28f req-0ea3bfb8-ead5-4910-8c18-672c89da15f2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Refreshing network info cache for port 62e8e068-3aad-45d3-984e-6d434301bf50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:16:54 np0005465987 nova_compute[230713]: 2025-10-02 12:16:54.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:54.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:55 np0005465987 nova_compute[230713]: 2025-10-02 12:16:55.139 2 DEBUG nova.compute.manager [req-617cfbdd-6714-4157-b741-c6377cbdc765 req-64bbab7f-a1a9-4845-867b-3bee8cca19e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-changed-62e8e068-3aad-45d3-984e-6d434301bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:55 np0005465987 nova_compute[230713]: 2025-10-02 12:16:55.140 2 DEBUG nova.compute.manager [req-617cfbdd-6714-4157-b741-c6377cbdc765 req-64bbab7f-a1a9-4845-867b-3bee8cca19e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Refreshing instance network info cache due to event network-changed-62e8e068-3aad-45d3-984e-6d434301bf50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:16:55 np0005465987 nova_compute[230713]: 2025-10-02 12:16:55.141 2 DEBUG oslo_concurrency.lockutils [req-617cfbdd-6714-4157-b741-c6377cbdc765 req-64bbab7f-a1a9-4845-867b-3bee8cca19e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:16:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:55.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:55 np0005465987 nova_compute[230713]: 2025-10-02 12:16:55.259 2 INFO nova.virt.libvirt.driver [None req-5ba4e8f1-709f-48f7-a267-09ac9d44fbab 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Snapshot image upload complete#033[00m
Oct  2 08:16:55 np0005465987 nova_compute[230713]: 2025-10-02 12:16:55.260 2 INFO nova.compute.manager [None req-5ba4e8f1-709f-48f7-a267-09ac9d44fbab 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Took 4.63 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:16:55 np0005465987 podman[258311]: 2025-10-02 12:16:55.842746644 +0000 UTC m=+0.052019870 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  2 08:16:55 np0005465987 podman[258312]: 2025-10-02 12:16:55.845624273 +0000 UTC m=+0.055219468 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:16:55 np0005465987 podman[258310]: 2025-10-02 12:16:55.868207374 +0000 UTC m=+0.083150596 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct  2 08:16:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:56.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:57.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:57 np0005465987 nova_compute[230713]: 2025-10-02 12:16:57.561 2 DEBUG nova.network.neutron [req-de9e3a2c-1392-4278-a41d-70ffe0eec28f req-0ea3bfb8-ead5-4910-8c18-672c89da15f2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updated VIF entry in instance network info cache for port 62e8e068-3aad-45d3-984e-6d434301bf50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:16:57 np0005465987 nova_compute[230713]: 2025-10-02 12:16:57.563 2 DEBUG nova.network.neutron [req-de9e3a2c-1392-4278-a41d-70ffe0eec28f req-0ea3bfb8-ead5-4910-8c18-672c89da15f2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updating instance_info_cache with network_info: [{"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:57 np0005465987 nova_compute[230713]: 2025-10-02 12:16:57.588 2 DEBUG oslo_concurrency.lockutils [req-de9e3a2c-1392-4278-a41d-70ffe0eec28f req-0ea3bfb8-ead5-4910-8c18-672c89da15f2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:16:57 np0005465987 nova_compute[230713]: 2025-10-02 12:16:57.590 2 DEBUG oslo_concurrency.lockutils [req-617cfbdd-6714-4157-b741-c6377cbdc765 req-64bbab7f-a1a9-4845-867b-3bee8cca19e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:16:57 np0005465987 nova_compute[230713]: 2025-10-02 12:16:57.590 2 DEBUG nova.network.neutron [req-617cfbdd-6714-4157-b741-c6377cbdc765 req-64bbab7f-a1a9-4845-867b-3bee8cca19e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Refreshing network info cache for port 62e8e068-3aad-45d3-984e-6d434301bf50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:16:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:58.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e231 e231: 3 total, 3 up, 3 in
Oct  2 08:16:58 np0005465987 nova_compute[230713]: 2025-10-02 12:16:58.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:59 np0005465987 nova_compute[230713]: 2025-10-02 12:16:59.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:16:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:59.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:16:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:16:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:16:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e232 e232: 3 total, 3 up, 3 in
Oct  2 08:17:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:00.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:00Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:55:af 10.100.0.10
Oct  2 08:17:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:00Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:55:af 10.100.0.10
Oct  2 08:17:00 np0005465987 nova_compute[230713]: 2025-10-02 12:17:00.489 2 DEBUG nova.network.neutron [req-617cfbdd-6714-4157-b741-c6377cbdc765 req-64bbab7f-a1a9-4845-867b-3bee8cca19e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updated VIF entry in instance network info cache for port 62e8e068-3aad-45d3-984e-6d434301bf50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:17:00 np0005465987 nova_compute[230713]: 2025-10-02 12:17:00.490 2 DEBUG nova.network.neutron [req-617cfbdd-6714-4157-b741-c6377cbdc765 req-64bbab7f-a1a9-4845-867b-3bee8cca19e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updating instance_info_cache with network_info: [{"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:00 np0005465987 nova_compute[230713]: 2025-10-02 12:17:00.514 2 DEBUG oslo_concurrency.lockutils [req-617cfbdd-6714-4157-b741-c6377cbdc765 req-64bbab7f-a1a9-4845-867b-3bee8cca19e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e233 e233: 3 total, 3 up, 3 in
Oct  2 08:17:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:17:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:01.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:17:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:02.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:03.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e234 e234: 3 total, 3 up, 3 in
Oct  2 08:17:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e235 e235: 3 total, 3 up, 3 in
Oct  2 08:17:03 np0005465987 nova_compute[230713]: 2025-10-02 12:17:03.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:04 np0005465987 nova_compute[230713]: 2025-10-02 12:17:04.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:17:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:04.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:17:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:05.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:06.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:17:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:17:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e236 e236: 3 total, 3 up, 3 in
Oct  2 08:17:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:07.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e237 e237: 3 total, 3 up, 3 in
Oct  2 08:17:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:17:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:08.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:17:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e238 e238: 3 total, 3 up, 3 in
Oct  2 08:17:08 np0005465987 nova_compute[230713]: 2025-10-02 12:17:08.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:09 np0005465987 nova_compute[230713]: 2025-10-02 12:17:09.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:09.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:10.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:11.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:11 np0005465987 nova_compute[230713]: 2025-10-02 12:17:11.628 2 DEBUG nova.compute.manager [req-126a1cdb-6f0a-4f8d-9037-28fa52fe4622 req-8727fe52-ddf3-4059-8f2c-4cecd815d2de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-changed-62e8e068-3aad-45d3-984e-6d434301bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:11 np0005465987 nova_compute[230713]: 2025-10-02 12:17:11.628 2 DEBUG nova.compute.manager [req-126a1cdb-6f0a-4f8d-9037-28fa52fe4622 req-8727fe52-ddf3-4059-8f2c-4cecd815d2de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Refreshing instance network info cache due to event network-changed-62e8e068-3aad-45d3-984e-6d434301bf50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:17:11 np0005465987 nova_compute[230713]: 2025-10-02 12:17:11.628 2 DEBUG oslo_concurrency.lockutils [req-126a1cdb-6f0a-4f8d-9037-28fa52fe4622 req-8727fe52-ddf3-4059-8f2c-4cecd815d2de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:11 np0005465987 nova_compute[230713]: 2025-10-02 12:17:11.629 2 DEBUG oslo_concurrency.lockutils [req-126a1cdb-6f0a-4f8d-9037-28fa52fe4622 req-8727fe52-ddf3-4059-8f2c-4cecd815d2de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:11 np0005465987 nova_compute[230713]: 2025-10-02 12:17:11.629 2 DEBUG nova.network.neutron [req-126a1cdb-6f0a-4f8d-9037-28fa52fe4622 req-8727fe52-ddf3-4059-8f2c-4cecd815d2de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Refreshing network info cache for port 62e8e068-3aad-45d3-984e-6d434301bf50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:17:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e239 e239: 3 total, 3 up, 3 in
Oct  2 08:17:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:12.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:12 np0005465987 nova_compute[230713]: 2025-10-02 12:17:12.446 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:12 np0005465987 nova_compute[230713]: 2025-10-02 12:17:12.446 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:12 np0005465987 nova_compute[230713]: 2025-10-02 12:17:12.466 2 DEBUG nova.compute.manager [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:17:12 np0005465987 nova_compute[230713]: 2025-10-02 12:17:12.540 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:12 np0005465987 nova_compute[230713]: 2025-10-02 12:17:12.540 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:12 np0005465987 nova_compute[230713]: 2025-10-02 12:17:12.547 2 DEBUG nova.virt.hardware [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:17:12 np0005465987 nova_compute[230713]: 2025-10-02 12:17:12.548 2 INFO nova.compute.claims [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:17:12 np0005465987 nova_compute[230713]: 2025-10-02 12:17:12.696 2 DEBUG oslo_concurrency.processutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e240 e240: 3 total, 3 up, 3 in
Oct  2 08:17:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2573772718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.147 2 DEBUG oslo_concurrency.processutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.154 2 DEBUG nova.compute.provider_tree [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.171 2 DEBUG nova.network.neutron [req-126a1cdb-6f0a-4f8d-9037-28fa52fe4622 req-8727fe52-ddf3-4059-8f2c-4cecd815d2de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updated VIF entry in instance network info cache for port 62e8e068-3aad-45d3-984e-6d434301bf50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.172 2 DEBUG nova.network.neutron [req-126a1cdb-6f0a-4f8d-9037-28fa52fe4622 req-8727fe52-ddf3-4059-8f2c-4cecd815d2de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updating instance_info_cache with network_info: [{"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.178 2 DEBUG nova.scheduler.client.report [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.211 2 DEBUG oslo_concurrency.lockutils [req-126a1cdb-6f0a-4f8d-9037-28fa52fe4622 req-8727fe52-ddf3-4059-8f2c-4cecd815d2de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.213 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.213 2 DEBUG nova.compute.manager [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.254 2 DEBUG nova.compute.manager [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.255 2 DEBUG nova.network.neutron [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.271 2 INFO nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:17:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:13.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.292 2 DEBUG nova.compute.manager [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.317 2 DEBUG oslo_concurrency.lockutils [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "interface-6fbb173d-d50b-4c82-bfb8-d8b3384c81df-2a8b7c9c-6681-4c28-8c65-774622c5e92c" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.317 2 DEBUG oslo_concurrency.lockutils [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-6fbb173d-d50b-4c82-bfb8-d8b3384c81df-2a8b7c9c-6681-4c28-8c65-774622c5e92c" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.318 2 DEBUG nova.objects.instance [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'flavor' on Instance uuid 6fbb173d-d50b-4c82-bfb8-d8b3384c81df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.337 2 INFO nova.virt.block_device [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Booting with blank volume at /dev/vda#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.530 2 DEBUG nova.policy [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '69d8e29c6d3747e98a5985a584f4c814', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8efba404696b40fbbaa6431b934b87f1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:17:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e241 e241: 3 total, 3 up, 3 in
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.931 2 DEBUG nova.objects.instance [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'pci_requests' on Instance uuid 6fbb173d-d50b-4c82-bfb8-d8b3384c81df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:13 np0005465987 nova_compute[230713]: 2025-10-02 12:17:13.946 2 DEBUG nova.network.neutron [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:17:14 np0005465987 nova_compute[230713]: 2025-10-02 12:17:14.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:14.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:14 np0005465987 nova_compute[230713]: 2025-10-02 12:17:14.462 2 DEBUG nova.network.neutron [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Successfully created port: 60c78706-8cfe-409b-91d7-dc811b9de69a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:17:14 np0005465987 nova_compute[230713]: 2025-10-02 12:17:14.481 2 DEBUG nova.policy [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e627e098f2b465d97099fee8f489b71', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:17:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e242 e242: 3 total, 3 up, 3 in
Oct  2 08:17:15 np0005465987 nova_compute[230713]: 2025-10-02 12:17:15.265 2 DEBUG nova.network.neutron [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Successfully updated port: 60c78706-8cfe-409b-91d7-dc811b9de69a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:17:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:15.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:15 np0005465987 nova_compute[230713]: 2025-10-02 12:17:15.298 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:15 np0005465987 nova_compute[230713]: 2025-10-02 12:17:15.299 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquired lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:15 np0005465987 nova_compute[230713]: 2025-10-02 12:17:15.299 2 DEBUG nova.network.neutron [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:17:15 np0005465987 nova_compute[230713]: 2025-10-02 12:17:15.456 2 DEBUG nova.network.neutron [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:17:15 np0005465987 nova_compute[230713]: 2025-10-02 12:17:15.632 2 DEBUG nova.network.neutron [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Successfully updated port: 2a8b7c9c-6681-4c28-8c65-774622c5e92c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:17:15 np0005465987 nova_compute[230713]: 2025-10-02 12:17:15.651 2 DEBUG oslo_concurrency.lockutils [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:15 np0005465987 nova_compute[230713]: 2025-10-02 12:17:15.651 2 DEBUG oslo_concurrency.lockutils [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquired lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:15 np0005465987 nova_compute[230713]: 2025-10-02 12:17:15.651 2 DEBUG nova.network.neutron [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:17:15 np0005465987 nova_compute[230713]: 2025-10-02 12:17:15.824 2 WARNING nova.network.neutron [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] ee114210-598c-482f-83c7-26c3363a45c4 already exists in list: networks containing: ['ee114210-598c-482f-83c7-26c3363a45c4']. ignoring it#033[00m
Oct  2 08:17:15 np0005465987 nova_compute[230713]: 2025-10-02 12:17:15.875 2 DEBUG nova.compute.manager [req-b38d2599-e584-4a0d-8913-4d740157f523 req-968ece93-48d2-4d87-b1e0-a8f41c95df84 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-changed-2a8b7c9c-6681-4c28-8c65-774622c5e92c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:15 np0005465987 nova_compute[230713]: 2025-10-02 12:17:15.875 2 DEBUG nova.compute.manager [req-b38d2599-e584-4a0d-8913-4d740157f523 req-968ece93-48d2-4d87-b1e0-a8f41c95df84 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Refreshing instance network info cache due to event network-changed-2a8b7c9c-6681-4c28-8c65-774622c5e92c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:17:15 np0005465987 nova_compute[230713]: 2025-10-02 12:17:15.876 2 DEBUG oslo_concurrency.lockutils [req-b38d2599-e584-4a0d-8913-4d740157f523 req-968ece93-48d2-4d87-b1e0-a8f41c95df84 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:16.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.255 2 DEBUG nova.compute.manager [req-67977dc5-24ce-420f-9672-aef57507b010 req-999aacdb-f35f-46bf-94af-d2bd7efd720b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-changed-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.256 2 DEBUG nova.compute.manager [req-67977dc5-24ce-420f-9672-aef57507b010 req-999aacdb-f35f-46bf-94af-d2bd7efd720b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Refreshing instance network info cache due to event network-changed-60c78706-8cfe-409b-91d7-dc811b9de69a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.256 2 DEBUG oslo_concurrency.lockutils [req-67977dc5-24ce-420f-9672-aef57507b010 req-999aacdb-f35f-46bf-94af-d2bd7efd720b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.416 2 DEBUG nova.network.neutron [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Updating instance_info_cache with network_info: [{"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.440 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Releasing lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.440 2 DEBUG nova.compute.manager [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance network_info: |[{"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.441 2 DEBUG oslo_concurrency.lockutils [req-67977dc5-24ce-420f-9672-aef57507b010 req-999aacdb-f35f-46bf-94af-d2bd7efd720b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.442 2 DEBUG nova.network.neutron [req-67977dc5-24ce-420f-9672-aef57507b010 req-999aacdb-f35f-46bf-94af-d2bd7efd720b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Refreshing network info cache for port 60c78706-8cfe-409b-91d7-dc811b9de69a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.445 2 DEBUG oslo_concurrency.lockutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "04b6ea5a-b329-4bd1-bf27-48a8644550c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.446 2 DEBUG oslo_concurrency.lockutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "04b6ea5a-b329-4bd1-bf27-48a8644550c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.446 2 DEBUG oslo_concurrency.lockutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "04b6ea5a-b329-4bd1-bf27-48a8644550c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.447 2 DEBUG oslo_concurrency.lockutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "04b6ea5a-b329-4bd1-bf27-48a8644550c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.447 2 DEBUG oslo_concurrency.lockutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "04b6ea5a-b329-4bd1-bf27-48a8644550c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.449 2 INFO nova.compute.manager [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Terminating instance#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.451 2 DEBUG oslo_concurrency.lockutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "refresh_cache-04b6ea5a-b329-4bd1-bf27-48a8644550c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.451 2 DEBUG oslo_concurrency.lockutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquired lock "refresh_cache-04b6ea5a-b329-4bd1-bf27-48a8644550c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.452 2 DEBUG nova.network.neutron [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.609 2 DEBUG nova.network.neutron [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.832 2 DEBUG nova.network.neutron [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.846 2 DEBUG oslo_concurrency.lockutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Releasing lock "refresh_cache-04b6ea5a-b329-4bd1-bf27-48a8644550c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:16 np0005465987 nova_compute[230713]: 2025-10-02 12:17:16.847 2 DEBUG nova.compute.manager [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:17:16 np0005465987 podman[258577]: 2025-10-02 12:17:16.857738941 +0000 UTC m=+0.069463120 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:17:16 np0005465987 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000048.scope: Deactivated successfully.
Oct  2 08:17:16 np0005465987 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000048.scope: Consumed 13.842s CPU time.
Oct  2 08:17:16 np0005465987 systemd-machined[188335]: Machine qemu-32-instance-00000048 terminated.
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.076 2 INFO nova.virt.libvirt.driver [-] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Instance destroyed successfully.#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.077 2 DEBUG nova.objects.instance [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lazy-loading 'resources' on Instance uuid 04b6ea5a-b329-4bd1-bf27-48a8644550c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:17:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:17.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.795 2 DEBUG os_brick.utils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.796 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.809 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.809 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[2fbdcd3e-cebf-4e7d-8c7b-086b3c5bf630]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.811 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.822 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.823 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[fa67902f-f9f8-4bd7-a65f-afb46fb609f9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.824 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.838 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.838 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[7d157721-0afd-4a20-892a-22ba6b6445e2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.839 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd6876f-166b-4509-b9d9-5bbadbab6e05]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.839 2 DEBUG oslo_concurrency.processutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.870 2 DEBUG oslo_concurrency.processutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.872 2 DEBUG os_brick.initiator.connectors.lightos [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.873 2 DEBUG os_brick.initiator.connectors.lightos [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.873 2 DEBUG os_brick.initiator.connectors.lightos [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.873 2 DEBUG os_brick.utils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:17:17 np0005465987 nova_compute[230713]: 2025-10-02 12:17:17.873 2 DEBUG nova.virt.block_device [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Updating existing volume attachment record: 81d64467-a589-41df-8763-5a45265a1bef _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.004 2 INFO nova.virt.libvirt.driver [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Deleting instance files /var/lib/nova/instances/04b6ea5a-b329-4bd1-bf27-48a8644550c5_del#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.004 2 INFO nova.virt.libvirt.driver [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Deletion of /var/lib/nova/instances/04b6ea5a-b329-4bd1-bf27-48a8644550c5_del complete#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.050 2 INFO nova.compute.manager [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Took 1.20 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.050 2 DEBUG oslo.service.loopingcall [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.051 2 DEBUG nova.compute.manager [-] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.051 2 DEBUG nova.network.neutron [-] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:17:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:18.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.663 2 DEBUG nova.network.neutron [-] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.679 2 DEBUG nova.network.neutron [-] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.698 2 DEBUG nova.network.neutron [req-67977dc5-24ce-420f-9672-aef57507b010 req-999aacdb-f35f-46bf-94af-d2bd7efd720b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Updated VIF entry in instance network info cache for port 60c78706-8cfe-409b-91d7-dc811b9de69a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.698 2 DEBUG nova.network.neutron [req-67977dc5-24ce-420f-9672-aef57507b010 req-999aacdb-f35f-46bf-94af-d2bd7efd720b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Updating instance_info_cache with network_info: [{"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.702 2 INFO nova.compute.manager [-] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Took 0.65 seconds to deallocate network for instance.#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.714 2 DEBUG oslo_concurrency.lockutils [req-67977dc5-24ce-420f-9672-aef57507b010 req-999aacdb-f35f-46bf-94af-d2bd7efd720b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e243 e243: 3 total, 3 up, 3 in
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.770 2 DEBUG oslo_concurrency.lockutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.771 2 DEBUG oslo_concurrency.lockutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.865 2 DEBUG nova.network.neutron [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updating instance_info_cache with network_info: [{"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "address": "fa:16:3e:2e:88:f2", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a8b7c9c-66", "ovs_interfaceid": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.870 2 DEBUG oslo_concurrency.processutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.901 2 DEBUG oslo_concurrency.lockutils [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Releasing lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.903 2 DEBUG oslo_concurrency.lockutils [req-b38d2599-e584-4a0d-8913-4d740157f523 req-968ece93-48d2-4d87-b1e0-a8f41c95df84 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.903 2 DEBUG nova.network.neutron [req-b38d2599-e584-4a0d-8913-4d740157f523 req-968ece93-48d2-4d87-b1e0-a8f41c95df84 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Refreshing network info cache for port 2a8b7c9c-6681-4c28-8c65-774622c5e92c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.907 2 DEBUG nova.virt.libvirt.vif [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632945682',display_name='tempest-tempest.common.compute-instance-632945682',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632945682',id=74,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATuG0Igs/aClVmsQdGPZDgNmvV7Bgyq0Y4nFm5M4crDAlAvWuUKcpXWJIzqspIHqd22HRQCaWbqLnJi4PrcYo9nS6OUtnl8L8k0zpd1d2KcZKsipjVMcOXUmRyImfEBhg==',key_name='tempest-keypair-716714059',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:16:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-j8xh03md',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=6fbb173d-d50b-4c82-bfb8-d8b3384c81df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "address": "fa:16:3e:2e:88:f2", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a8b7c9c-66", "ovs_interfaceid": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.907 2 DEBUG nova.network.os_vif_util [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "address": "fa:16:3e:2e:88:f2", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a8b7c9c-66", "ovs_interfaceid": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.908 2 DEBUG nova.network.os_vif_util [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:88:f2,bridge_name='br-int',has_traffic_filtering=True,id=2a8b7c9c-6681-4c28-8c65-774622c5e92c,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2a8b7c9c-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.908 2 DEBUG os_vif [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:88:f2,bridge_name='br-int',has_traffic_filtering=True,id=2a8b7c9c-6681-4c28-8c65-774622c5e92c,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2a8b7c9c-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.910 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a8b7c9c-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.915 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a8b7c9c-66, col_values=(('external_ids', {'iface-id': '2a8b7c9c-6681-4c28-8c65-774622c5e92c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:88:f2', 'vm-uuid': '6fbb173d-d50b-4c82-bfb8-d8b3384c81df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:17:18 np0005465987 NetworkManager[44910]: <info>  [1759407438.9190] manager: (tap2a8b7c9c-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.935 2 INFO os_vif [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:88:f2,bridge_name='br-int',has_traffic_filtering=True,id=2a8b7c9c-6681-4c28-8c65-774622c5e92c,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2a8b7c9c-66')#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.936 2 DEBUG nova.virt.libvirt.vif [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632945682',display_name='tempest-tempest.common.compute-instance-632945682',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632945682',id=74,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATuG0Igs/aClVmsQdGPZDgNmvV7Bgyq0Y4nFm5M4crDAlAvWuUKcpXWJIzqspIHqd22HRQCaWbqLnJi4PrcYo9nS6OUtnl8L8k0zpd1d2KcZKsipjVMcOXUmRyImfEBhg==',key_name='tempest-keypair-716714059',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:16:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-j8xh03md',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=6fbb173d-d50b-4c82-bfb8-d8b3384c81df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "address": "fa:16:3e:2e:88:f2", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a8b7c9c-66", "ovs_interfaceid": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.936 2 DEBUG nova.network.os_vif_util [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "address": "fa:16:3e:2e:88:f2", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a8b7c9c-66", "ovs_interfaceid": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.937 2 DEBUG nova.network.os_vif_util [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:88:f2,bridge_name='br-int',has_traffic_filtering=True,id=2a8b7c9c-6681-4c28-8c65-774622c5e92c,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2a8b7c9c-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.941 2 DEBUG nova.virt.libvirt.guest [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] attach device xml: <interface type="ethernet">
Oct  2 08:17:18 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:2e:88:f2"/>
Oct  2 08:17:18 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:17:18 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:17:18 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:17:18 np0005465987 nova_compute[230713]:  <target dev="tap2a8b7c9c-66"/>
Oct  2 08:17:18 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:17:18 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:17:18 np0005465987 kernel: tap2a8b7c9c-66: entered promiscuous mode
Oct  2 08:17:18 np0005465987 NetworkManager[44910]: <info>  [1759407438.9589] manager: (tap2a8b7c9c-66): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Oct  2 08:17:18 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:18Z|00226|binding|INFO|Claiming lport 2a8b7c9c-6681-4c28-8c65-774622c5e92c for this chassis.
Oct  2 08:17:18 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:18Z|00227|binding|INFO|2a8b7c9c-6681-4c28-8c65-774622c5e92c: Claiming fa:16:3e:2e:88:f2 10.100.0.8
Oct  2 08:17:18 np0005465987 systemd-udevd[258597]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:18.970 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:88:f2 10.100.0.8'], port_security=['fa:16:3e:2e:88:f2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1498800616', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6fbb173d-d50b-4c82-bfb8-d8b3384c81df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1498800616', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '20d10bb5-2bfc-4d22-a5af-27378413539d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=2a8b7c9c-6681-4c28-8c65-774622c5e92c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:17:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:18.972 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 2a8b7c9c-6681-4c28-8c65-774622c5e92c in datapath ee114210-598c-482f-83c7-26c3363a45c4 bound to our chassis#033[00m
Oct  2 08:17:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:18.974 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee114210-598c-482f-83c7-26c3363a45c4#033[00m
Oct  2 08:17:18 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:18Z|00228|binding|INFO|Setting lport 2a8b7c9c-6681-4c28-8c65-774622c5e92c ovn-installed in OVS
Oct  2 08:17:18 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:18Z|00229|binding|INFO|Setting lport 2a8b7c9c-6681-4c28-8c65-774622c5e92c up in Southbound
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:18 np0005465987 NetworkManager[44910]: <info>  [1759407438.9866] device (tap2a8b7c9c-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:17:18 np0005465987 nova_compute[230713]: 2025-10-02 12:17:18.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:18 np0005465987 NetworkManager[44910]: <info>  [1759407438.9891] device (tap2a8b7c9c-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:17:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:18.996 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[28d25af4-7c33-406c-877d-edc8ad4960f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.004 2 DEBUG nova.compute.manager [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.006 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.010 2 INFO nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Creating image(s)#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.011 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.011 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Ensure instance console log exists: /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.012 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.012 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.012 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.015 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Start _get_guest_xml network_info=[{"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '81d64467-a589-41df-8763-5a45265a1bef', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-27c2fdcd-9a25-49d0-befe-30f5b9c08453', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '27c2fdcd-9a25-49d0-befe-30f5b9c08453', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '992b01a0-5a43-42ab-bfcb-96c2db503633', 'attached_at': '', 'detached_at': '', 'volume_id': '27c2fdcd-9a25-49d0-befe-30f5b9c08453', 'serial': '27c2fdcd-9a25-49d0-befe-30f5b9c08453'}, 'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:17:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:19.024 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef761a0-3aa6-4ecf-806e-8128778e3118]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:19.027 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[35fba3c0-f7bf-4f72-b996-79fe23c8bcec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.043 2 WARNING nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.048 2 DEBUG nova.virt.libvirt.host [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.049 2 DEBUG nova.virt.libvirt.host [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.051 2 DEBUG nova.virt.libvirt.host [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.052 2 DEBUG nova.virt.libvirt.host [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.053 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.053 2 DEBUG nova.virt.hardware [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.054 2 DEBUG nova.virt.hardware [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.054 2 DEBUG nova.virt.hardware [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.054 2 DEBUG nova.virt.hardware [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.054 2 DEBUG nova.virt.hardware [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.054 2 DEBUG nova.virt.hardware [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.054 2 DEBUG nova.virt.hardware [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.055 2 DEBUG nova.virt.hardware [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.055 2 DEBUG nova.virt.hardware [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.055 2 DEBUG nova.virt.hardware [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.055 2 DEBUG nova.virt.hardware [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:17:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:19.056 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[796c5ba4-8b01-46c0-a7d1-c979fb57b42c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:19.077 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[866e00fb-27d0-41ed-8234-eff50ec4e6d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547408, 'reachable_time': 39230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258666, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:19.092 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[39a25f90-ee72-4ae0-8a16-297f6d128c6b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547421, 'tstamp': 547421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258674, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547424, 'tstamp': 547424}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258674, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:19.094 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:19.096 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee114210-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:19.097 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.097 2 DEBUG nova.storage.rbd_utils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image 992b01a0-5a43-42ab-bfcb-96c2db503633_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:17:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:19.097 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee114210-50, col_values=(('external_ids', {'iface-id': '544b20f9-e292-46bc-a770-180c318d534e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:19.097 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.100 2 DEBUG oslo_concurrency.processutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.162 2 DEBUG nova.virt.libvirt.driver [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.162 2 DEBUG nova.virt.libvirt.driver [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.162 2 DEBUG nova.virt.libvirt.driver [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:63:55:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.163 2 DEBUG nova.virt.libvirt.driver [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] No VIF found with MAC fa:16:3e:2e:88:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.185 2 DEBUG nova.virt.libvirt.guest [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <nova:name>tempest-tempest.common.compute-instance-632945682</nova:name>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:17:19</nova:creationTime>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <nova:port uuid="62e8e068-3aad-45d3-984e-6d434301bf50">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <nova:port uuid="2a8b7c9c-6681-4c28-8c65-774622c5e92c">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:17:19 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:17:19 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.207 2 DEBUG oslo_concurrency.lockutils [None req-d3f91c2e-bc29-44eb-a2c4-f38662340b42 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-6fbb173d-d50b-4c82-bfb8-d8b3384c81df-2a8b7c9c-6681-4c28-8c65-774622c5e92c" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:19.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2249381316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.326 2 DEBUG oslo_concurrency.processutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.330 2 DEBUG nova.compute.provider_tree [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.347 2 DEBUG nova.scheduler.client.report [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.368 2 DEBUG oslo_concurrency.lockutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.400 2 INFO nova.scheduler.client.report [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Deleted allocations for instance 04b6ea5a-b329-4bd1-bf27-48a8644550c5#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.490 2 DEBUG oslo_concurrency.lockutils [None req-d2cd0452-8013-4582-b961-167d54575f7c 03f516d263c8402682568b55b658f885 1f9bd65bc7864ca18e1c478ef7e03926 - - default default] Lock "04b6ea5a-b329-4bd1-bf27-48a8644550c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:17:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2212109734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.565 2 DEBUG oslo_concurrency.processutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.590 2 DEBUG nova.virt.libvirt.vif [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-803739487',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-803739487',id=76,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8efba404696b40fbbaa6431b934b87f1',ramdisk_id='',reservation_id='r-8tky9r5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-153154373',owner_user_name='tempest-ServerBootFromVolumeStableRe
scueTest-153154373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:17:13Z,user_data=None,user_id='69d8e29c6d3747e98a5985a584f4c814',uuid=992b01a0-5a43-42ab-bfcb-96c2db503633,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.591 2 DEBUG nova.network.os_vif_util [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converting VIF {"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.592 2 DEBUG nova.network.os_vif_util [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:85:3a,bridge_name='br-int',has_traffic_filtering=True,id=60c78706-8cfe-409b-91d7-dc811b9de69a,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c78706-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.593 2 DEBUG nova.objects.instance [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.605 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <uuid>992b01a0-5a43-42ab-bfcb-96c2db503633</uuid>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <name>instance-0000004c</name>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-803739487</nova:name>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:17:19</nova:creationTime>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <nova:user uuid="69d8e29c6d3747e98a5985a584f4c814">tempest-ServerBootFromVolumeStableRescueTest-153154373-project-member</nova:user>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <nova:project uuid="8efba404696b40fbbaa6431b934b87f1">tempest-ServerBootFromVolumeStableRescueTest-153154373</nova:project>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <nova:port uuid="60c78706-8cfe-409b-91d7-dc811b9de69a">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <entry name="serial">992b01a0-5a43-42ab-bfcb-96c2db503633</entry>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <entry name="uuid">992b01a0-5a43-42ab-bfcb-96c2db503633</entry>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/992b01a0-5a43-42ab-bfcb-96c2db503633_disk.config">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="volumes/volume-27c2fdcd-9a25-49d0-befe-30f5b9c08453">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <serial>27c2fdcd-9a25-49d0-befe-30f5b9c08453</serial>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:f3:85:3a"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <target dev="tap60c78706-8c"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/console.log" append="off"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:17:19 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:17:19 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:17:19 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:17:19 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.606 2 DEBUG nova.compute.manager [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Preparing to wait for external event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.606 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.607 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.607 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.608 2 DEBUG nova.virt.libvirt.vif [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-803739487',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-803739487',id=76,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8efba404696b40fbbaa6431b934b87f1',ramdisk_id='',reservation_id='r-8tky9r5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-153154373',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-153154373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:17:13Z,user_data=None,user_id='69d8e29c6d3747e98a5985a584f4c814',uuid=992b01a0-5a43-42ab-bfcb-96c2db503633,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.608 2 DEBUG nova.network.os_vif_util [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converting VIF {"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.609 2 DEBUG nova.network.os_vif_util [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:85:3a,bridge_name='br-int',has_traffic_filtering=True,id=60c78706-8cfe-409b-91d7-dc811b9de69a,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c78706-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.609 2 DEBUG os_vif [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:85:3a,bridge_name='br-int',has_traffic_filtering=True,id=60c78706-8cfe-409b-91d7-dc811b9de69a,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c78706-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.610 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.610 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60c78706-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60c78706-8c, col_values=(('external_ids', {'iface-id': '60c78706-8cfe-409b-91d7-dc811b9de69a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:85:3a', 'vm-uuid': '992b01a0-5a43-42ab-bfcb-96c2db503633'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:19 np0005465987 NetworkManager[44910]: <info>  [1759407439.6183] manager: (tap60c78706-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.626 2 INFO os_vif [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:85:3a,bridge_name='br-int',has_traffic_filtering=True,id=60c78706-8cfe-409b-91d7-dc811b9de69a,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c78706-8c')#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.697 2 DEBUG nova.compute.manager [req-c0466cdf-6926-4743-9a76-9bef64541765 req-5900b34d-a35c-4bbe-97a7-1e113a26237e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-vif-plugged-2a8b7c9c-6681-4c28-8c65-774622c5e92c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.697 2 DEBUG oslo_concurrency.lockutils [req-c0466cdf-6926-4743-9a76-9bef64541765 req-5900b34d-a35c-4bbe-97a7-1e113a26237e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.697 2 DEBUG oslo_concurrency.lockutils [req-c0466cdf-6926-4743-9a76-9bef64541765 req-5900b34d-a35c-4bbe-97a7-1e113a26237e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.697 2 DEBUG oslo_concurrency.lockutils [req-c0466cdf-6926-4743-9a76-9bef64541765 req-5900b34d-a35c-4bbe-97a7-1e113a26237e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.698 2 DEBUG nova.compute.manager [req-c0466cdf-6926-4743-9a76-9bef64541765 req-5900b34d-a35c-4bbe-97a7-1e113a26237e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] No waiting events found dispatching network-vif-plugged-2a8b7c9c-6681-4c28-8c65-774622c5e92c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.698 2 WARNING nova.compute.manager [req-c0466cdf-6926-4743-9a76-9bef64541765 req-5900b34d-a35c-4bbe-97a7-1e113a26237e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received unexpected event network-vif-plugged-2a8b7c9c-6681-4c28-8c65-774622c5e92c for instance with vm_state active and task_state None.#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.700 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.701 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.701 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No VIF found with MAC fa:16:3e:f3:85:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.701 2 INFO nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Using config drive#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.726 2 DEBUG nova.storage.rbd_utils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image 992b01a0-5a43-42ab-bfcb-96c2db503633_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:17:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:19.806 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:17:19 np0005465987 nova_compute[230713]: 2025-10-02 12:17:19.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:19.807 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.036 2 DEBUG oslo_concurrency.lockutils [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "interface-6fbb173d-d50b-4c82-bfb8-d8b3384c81df-2a8b7c9c-6681-4c28-8c65-774622c5e92c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.037 2 DEBUG oslo_concurrency.lockutils [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-6fbb173d-d50b-4c82-bfb8-d8b3384c81df-2a8b7c9c-6681-4c28-8c65-774622c5e92c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.059 2 DEBUG nova.objects.instance [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'flavor' on Instance uuid 6fbb173d-d50b-4c82-bfb8-d8b3384c81df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:20.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.083 2 DEBUG nova.virt.libvirt.vif [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632945682',display_name='tempest-tempest.common.compute-instance-632945682',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632945682',id=74,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATuG0Igs/aClVmsQdGPZDgNmvV7Bgyq0Y4nFm5M4crDAlAvWuUKcpXWJIzqspIHqd22HRQCaWbqLnJi4PrcYo9nS6OUtnl8L8k0zpd1d2KcZKsipjVMcOXUmRyImfEBhg==',key_name='tempest-keypair-716714059',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:16:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-j8xh03md',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=6fbb173d-d50b-4c82-bfb8-d8b3384c81df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "address": "fa:16:3e:2e:88:f2", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a8b7c9c-66", "ovs_interfaceid": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.084 2 DEBUG nova.network.os_vif_util [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "address": "fa:16:3e:2e:88:f2", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a8b7c9c-66", "ovs_interfaceid": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.085 2 DEBUG nova.network.os_vif_util [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:88:f2,bridge_name='br-int',has_traffic_filtering=True,id=2a8b7c9c-6681-4c28-8c65-774622c5e92c,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2a8b7c9c-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.089 2 DEBUG nova.virt.libvirt.guest [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2e:88:f2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2a8b7c9c-66"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.093 2 DEBUG nova.virt.libvirt.guest [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2e:88:f2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2a8b7c9c-66"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.096 2 DEBUG nova.virt.libvirt.driver [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Attempting to detach device tap2a8b7c9c-66 from instance 6fbb173d-d50b-4c82-bfb8-d8b3384c81df from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.096 2 DEBUG nova.virt.libvirt.guest [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:2e:88:f2"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <target dev="tap2a8b7c9c-66"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:17:20 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.117 2 DEBUG nova.virt.libvirt.guest [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2e:88:f2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2a8b7c9c-66"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.121 2 DEBUG nova.virt.libvirt.guest [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2e:88:f2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2a8b7c9c-66"/></interface>not found in domain: <domain type='kvm' id='33'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <name>instance-0000004a</name>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <uuid>6fbb173d-d50b-4c82-bfb8-d8b3384c81df</uuid>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:name>tempest-tempest.common.compute-instance-632945682</nova:name>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:17:19</nova:creationTime>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:port uuid="62e8e068-3aad-45d3-984e-6d434301bf50">
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:port uuid="2a8b7c9c-6681-4c28-8c65-774622c5e92c">
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:17:20 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <entry name='serial'>6fbb173d-d50b-4c82-bfb8-d8b3384c81df</entry>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <entry name='uuid'>6fbb173d-d50b-4c82-bfb8-d8b3384c81df</entry>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk' index='2'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk.config' index='1'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:63:55:af'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target dev='tap62e8e068-3a'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:2e:88:f2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target dev='tap2a8b7c9c-66'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='net1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df/console.log' append='off'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df/console.log' append='off'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c977,c1003</label>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c977,c1003</imagelabel>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:17:20 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:17:20 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.121 2 INFO nova.virt.libvirt.driver [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully detached device tap2a8b7c9c-66 from instance 6fbb173d-d50b-4c82-bfb8-d8b3384c81df from the persistent domain config.#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.122 2 DEBUG nova.virt.libvirt.driver [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] (1/8): Attempting to detach device tap2a8b7c9c-66 with device alias net1 from instance 6fbb173d-d50b-4c82-bfb8-d8b3384c81df from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.122 2 DEBUG nova.virt.libvirt.guest [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:2e:88:f2"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <target dev="tap2a8b7c9c-66"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:17:20 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:17:20 np0005465987 kernel: tap2a8b7c9c-66 (unregistering): left promiscuous mode
Oct  2 08:17:20 np0005465987 NetworkManager[44910]: <info>  [1759407440.2341] device (tap2a8b7c9c-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.246 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Received event <DeviceRemovedEvent: 1759407440.2464018, 6fbb173d-d50b-4c82-bfb8-d8b3384c81df => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.250 2 DEBUG nova.virt.libvirt.driver [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Start waiting for the detach event from libvirt for device tap2a8b7c9c-66 with device alias net1 for instance 6fbb173d-d50b-4c82-bfb8-d8b3384c81df _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.252 2 DEBUG nova.virt.libvirt.guest [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2e:88:f2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2a8b7c9c-66"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:17:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:20Z|00230|binding|INFO|Releasing lport 2a8b7c9c-6681-4c28-8c65-774622c5e92c from this chassis (sb_readonly=0)
Oct  2 08:17:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:20Z|00231|binding|INFO|Setting lport 2a8b7c9c-6681-4c28-8c65-774622c5e92c down in Southbound
Oct  2 08:17:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:20Z|00232|binding|INFO|Removing iface tap2a8b7c9c-66 ovn-installed in OVS
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.264 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:88:f2 10.100.0.8'], port_security=['fa:16:3e:2e:88:f2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1498800616', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6fbb173d-d50b-4c82-bfb8-d8b3384c81df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1498800616', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '8', 'neutron:security_group_ids': '20d10bb5-2bfc-4d22-a5af-27378413539d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=2a8b7c9c-6681-4c28-8c65-774622c5e92c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.266 2 DEBUG nova.virt.libvirt.guest [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2e:88:f2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2a8b7c9c-66"/></interface>not found in domain: <domain type='kvm' id='33'>
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.266 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 2a8b7c9c-6681-4c28-8c65-774622c5e92c in datapath ee114210-598c-482f-83c7-26c3363a45c4 unbound from our chassis#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <name>instance-0000004a</name>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <uuid>6fbb173d-d50b-4c82-bfb8-d8b3384c81df</uuid>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:name>tempest-tempest.common.compute-instance-632945682</nova:name>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:17:19</nova:creationTime>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:port uuid="62e8e068-3aad-45d3-984e-6d434301bf50">
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:port uuid="2a8b7c9c-6681-4c28-8c65-774622c5e92c">
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:17:20 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <entry name='serial'>6fbb173d-d50b-4c82-bfb8-d8b3384c81df</entry>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <entry name='uuid'>6fbb173d-d50b-4c82-bfb8-d8b3384c81df</entry>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk' index='2'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/6fbb173d-d50b-4c82-bfb8-d8b3384c81df_disk.config' index='1'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.269 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee114210-598c-482f-83c7-26c3363a45c4#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:63:55:af'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target dev='tap62e8e068-3a'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df/console.log' append='off'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df/console.log' append='off'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c977,c1003</label>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c977,c1003</imagelabel>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:17:20 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:17:20 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.266 2 INFO nova.virt.libvirt.driver [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully detached device tap2a8b7c9c-66 from instance 6fbb173d-d50b-4c82-bfb8-d8b3384c81df from the live domain config.#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.268 2 DEBUG nova.virt.libvirt.vif [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632945682',display_name='tempest-tempest.common.compute-instance-632945682',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632945682',id=74,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATuG0Igs/aClVmsQdGPZDgNmvV7Bgyq0Y4nFm5M4crDAlAvWuUKcpXWJIzqspIHqd22HRQCaWbqLnJi4PrcYo9nS6OUtnl8L8k0zpd1d2KcZKsipjVMcOXUmRyImfEBhg==',key_name='tempest-keypair-716714059',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:16:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-j8xh03md',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=6fbb173d-d50b-4c82-bfb8-d8b3384c81df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "address": "fa:16:3e:2e:88:f2", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a8b7c9c-66", "ovs_interfaceid": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.268 2 DEBUG nova.network.os_vif_util [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "address": "fa:16:3e:2e:88:f2", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a8b7c9c-66", "ovs_interfaceid": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.269 2 DEBUG nova.network.os_vif_util [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:88:f2,bridge_name='br-int',has_traffic_filtering=True,id=2a8b7c9c-6681-4c28-8c65-774622c5e92c,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2a8b7c9c-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.270 2 DEBUG os_vif [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:88:f2,bridge_name='br-int',has_traffic_filtering=True,id=2a8b7c9c-6681-4c28-8c65-774622c5e92c,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2a8b7c9c-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a8b7c9c-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.296 2 INFO os_vif [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:88:f2,bridge_name='br-int',has_traffic_filtering=True,id=2a8b7c9c-6681-4c28-8c65-774622c5e92c,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2a8b7c9c-66')#033[00m
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.295 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e112ecb7-92b2-4f18-98c8-26d317f38b8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.297 2 DEBUG nova.virt.libvirt.guest [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:name>tempest-tempest.common.compute-instance-632945682</nova:name>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:17:20</nova:creationTime>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:user uuid="8e627e098f2b465d97099fee8f489b71">tempest-AttachInterfacesTestJSON-751325164-project-member</nova:user>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:project uuid="bddf59854a7d4eb1a85ece547923cea0">tempest-AttachInterfacesTestJSON-751325164</nova:project>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    <nova:port uuid="62e8e068-3aad-45d3-984e-6d434301bf50">
Oct  2 08:17:20 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:17:20 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:17:20 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:17:20 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.327 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[c23d1d71-63cb-4764-8f7c-dbf361ce464c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.331 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[0aaa1ce9-48b0-474c-8db1-d9a89295d0dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.361 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc37c54-12d6-40da-bdfe-62b6f09be4c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.380 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad6e4b5-b35a-45c1-9b2a-24f3bb2902b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee114210-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:86:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547408, 'reachable_time': 39230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258736, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.401 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[64abbf99-d937-4d2b-a0db-dfa0ae880571]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547421, 'tstamp': 547421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258737, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapee114210-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547424, 'tstamp': 547424}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258737, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.402 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:20 np0005465987 nova_compute[230713]: 2025-10-02 12:17:20.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.410 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee114210-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.410 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.410 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee114210-50, col_values=(('external_ids', {'iface-id': '544b20f9-e292-46bc-a770-180c318d534e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:20.411 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.045 2 INFO nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Creating config drive at /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/disk.config#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.054 2 DEBUG oslo_concurrency.processutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp51me5d7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.190 2 DEBUG oslo_concurrency.processutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp51me5d7" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.218 2 DEBUG nova.storage.rbd_utils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image 992b01a0-5a43-42ab-bfcb-96c2db503633_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.222 2 DEBUG oslo_concurrency.processutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/disk.config 992b01a0-5a43-42ab-bfcb-96c2db503633_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:17:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:21.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.403 2 DEBUG oslo_concurrency.processutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/disk.config 992b01a0-5a43-42ab-bfcb-96c2db503633_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.403 2 INFO nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Deleting local config drive /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/disk.config because it was imported into RBD.#033[00m
Oct  2 08:17:21 np0005465987 kernel: tap60c78706-8c: entered promiscuous mode
Oct  2 08:17:21 np0005465987 systemd-udevd[258728]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:17:21 np0005465987 NetworkManager[44910]: <info>  [1759407441.4497] manager: (tap60c78706-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:21Z|00233|binding|INFO|Claiming lport 60c78706-8cfe-409b-91d7-dc811b9de69a for this chassis.
Oct  2 08:17:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:21Z|00234|binding|INFO|60c78706-8cfe-409b-91d7-dc811b9de69a: Claiming fa:16:3e:f3:85:3a 10.100.0.13
Oct  2 08:17:21 np0005465987 NetworkManager[44910]: <info>  [1759407441.4629] device (tap60c78706-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:17:21 np0005465987 NetworkManager[44910]: <info>  [1759407441.4639] device (tap60c78706-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.460 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:85:3a 10.100.0.13'], port_security=['fa:16:3e:f3:85:3a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '992b01a0-5a43-42ab-bfcb-96c2db503633', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=60c78706-8cfe-409b-91d7-dc811b9de69a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.462 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 60c78706-8cfe-409b-91d7-dc811b9de69a in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 bound to our chassis#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.465 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0#033[00m
Oct  2 08:17:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:21Z|00235|binding|INFO|Setting lport 60c78706-8cfe-409b-91d7-dc811b9de69a ovn-installed in OVS
Oct  2 08:17:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:21Z|00236|binding|INFO|Setting lport 60c78706-8cfe-409b-91d7-dc811b9de69a up in Southbound
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.475 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e5759891-9ff9-4248-a138-e5f012fc0950]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.477 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf1725bd8-71 in ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.478 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf1725bd8-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.478 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bae40b7c-bf8e-4d6b-b8bc-a2dbb1c62c8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.480 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba898f9-05cb-4e84-b4db-653119930db6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 systemd-machined[188335]: New machine qemu-34-instance-0000004c.
Oct  2 08:17:21 np0005465987 systemd[1]: Started Virtual Machine qemu-34-instance-0000004c.
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.497 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[16969e4b-b621-4b19-8829-b798ee5efa6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.529 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[de7bfd9f-fe9c-4f9c-8df2-0c4aebe928d7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.561 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9aec536d-11f6-4476-b585-b3aa973d03ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 NetworkManager[44910]: <info>  [1759407441.5693] manager: (tapf1725bd8-70): new Veth device (/org/freedesktop/NetworkManager/Devices/135)
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.569 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[180084aa-9bc1-47da-bc4e-3dc8bc55c337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.604 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd72196-320c-4d78-b8d6-15b7e16a8f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.607 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[8844ac8d-0c48-4ea5-82d7-b7e1ae8bf11d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 NetworkManager[44910]: <info>  [1759407441.6318] device (tapf1725bd8-70): carrier: link connected
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.642 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a1c2bf-7b08-40de-8744-c698c2fb265f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.660 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cb952ea8-0569-4704-92ac-112dd325fff7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1725bd8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:76:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551020, 'reachable_time': 41114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258825, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.675 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[019665e9-d356-40a8-8486-f368819d9aad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:76f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 551020, 'tstamp': 551020}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258826, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.691 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1925468a-6c86-4a73-bd09-bbc50de55032]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1725bd8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:76:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551020, 'reachable_time': 41114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258827, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.725 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a916e74f-95c4-4d29-9e3e-05ccb85a3efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.800 2 DEBUG nova.compute.manager [req-0d90757f-433a-4ad3-934c-5720d590d30d req-4f518795-3d46-4206-9033-6d61fb98adc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-vif-plugged-2a8b7c9c-6681-4c28-8c65-774622c5e92c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.801 2 DEBUG oslo_concurrency.lockutils [req-0d90757f-433a-4ad3-934c-5720d590d30d req-4f518795-3d46-4206-9033-6d61fb98adc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.801 2 DEBUG oslo_concurrency.lockutils [req-0d90757f-433a-4ad3-934c-5720d590d30d req-4f518795-3d46-4206-9033-6d61fb98adc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.801 2 DEBUG oslo_concurrency.lockutils [req-0d90757f-433a-4ad3-934c-5720d590d30d req-4f518795-3d46-4206-9033-6d61fb98adc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.801 2 DEBUG nova.compute.manager [req-0d90757f-433a-4ad3-934c-5720d590d30d req-4f518795-3d46-4206-9033-6d61fb98adc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] No waiting events found dispatching network-vif-plugged-2a8b7c9c-6681-4c28-8c65-774622c5e92c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.801 2 WARNING nova.compute.manager [req-0d90757f-433a-4ad3-934c-5720d590d30d req-4f518795-3d46-4206-9033-6d61fb98adc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received unexpected event network-vif-plugged-2a8b7c9c-6681-4c28-8c65-774622c5e92c for instance with vm_state active and task_state None.#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.802 2 DEBUG nova.compute.manager [req-0d90757f-433a-4ad3-934c-5720d590d30d req-4f518795-3d46-4206-9033-6d61fb98adc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-vif-unplugged-2a8b7c9c-6681-4c28-8c65-774622c5e92c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.802 2 DEBUG oslo_concurrency.lockutils [req-0d90757f-433a-4ad3-934c-5720d590d30d req-4f518795-3d46-4206-9033-6d61fb98adc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.802 2 DEBUG oslo_concurrency.lockutils [req-0d90757f-433a-4ad3-934c-5720d590d30d req-4f518795-3d46-4206-9033-6d61fb98adc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.802 2 DEBUG oslo_concurrency.lockutils [req-0d90757f-433a-4ad3-934c-5720d590d30d req-4f518795-3d46-4206-9033-6d61fb98adc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.802 2 DEBUG nova.compute.manager [req-0d90757f-433a-4ad3-934c-5720d590d30d req-4f518795-3d46-4206-9033-6d61fb98adc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] No waiting events found dispatching network-vif-unplugged-2a8b7c9c-6681-4c28-8c65-774622c5e92c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.803 2 WARNING nova.compute.manager [req-0d90757f-433a-4ad3-934c-5720d590d30d req-4f518795-3d46-4206-9033-6d61fb98adc4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received unexpected event network-vif-unplugged-2a8b7c9c-6681-4c28-8c65-774622c5e92c for instance with vm_state active and task_state None.#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.802 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0ec4a6-2f78-41b3-904f-ad78084cd485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.803 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1725bd8-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.804 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.804 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1725bd8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:21 np0005465987 kernel: tapf1725bd8-70: entered promiscuous mode
Oct  2 08:17:21 np0005465987 NetworkManager[44910]: <info>  [1759407441.8076] manager: (tapf1725bd8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.810 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf1725bd8-70, col_values=(('external_ids', {'iface-id': '421cd6e3-75aa-44e1-b552-d119c4fcd629'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:21Z|00237|binding|INFO|Releasing lport 421cd6e3-75aa-44e1-b552-d119c4fcd629 from this chassis (sb_readonly=0)
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.813 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.820 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[92396b38-03e4-4b6e-a62c-92c727b86611]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.821 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:17:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:21.821 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'env', 'PROCESS_TAG=haproxy-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:17:21 np0005465987 nova_compute[230713]: 2025-10-02 12:17:21.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:17:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:22.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.265 2 DEBUG nova.compute.manager [req-c1a4520d-8431-4006-9245-1f1ebe4b1dc2 req-3c3a3fbb-aaee-4b27-8e35-453dbc419bf0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.265 2 DEBUG oslo_concurrency.lockutils [req-c1a4520d-8431-4006-9245-1f1ebe4b1dc2 req-3c3a3fbb-aaee-4b27-8e35-453dbc419bf0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.266 2 DEBUG oslo_concurrency.lockutils [req-c1a4520d-8431-4006-9245-1f1ebe4b1dc2 req-3c3a3fbb-aaee-4b27-8e35-453dbc419bf0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.266 2 DEBUG oslo_concurrency.lockutils [req-c1a4520d-8431-4006-9245-1f1ebe4b1dc2 req-3c3a3fbb-aaee-4b27-8e35-453dbc419bf0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.266 2 DEBUG nova.compute.manager [req-c1a4520d-8431-4006-9245-1f1ebe4b1dc2 req-3c3a3fbb-aaee-4b27-8e35-453dbc419bf0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Processing event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:17:22 np0005465987 podman[258894]: 2025-10-02 12:17:22.175364352 +0000 UTC m=+0.023048524 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:17:22 np0005465987 podman[258894]: 2025-10-02 12:17:22.278321381 +0000 UTC m=+0.126005533 container create a3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.317 2 DEBUG nova.network.neutron [req-b38d2599-e584-4a0d-8913-4d740157f523 req-968ece93-48d2-4d87-b1e0-a8f41c95df84 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updated VIF entry in instance network info cache for port 2a8b7c9c-6681-4c28-8c65-774622c5e92c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.318 2 DEBUG nova.network.neutron [req-b38d2599-e584-4a0d-8913-4d740157f523 req-968ece93-48d2-4d87-b1e0-a8f41c95df84 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updating instance_info_cache with network_info: [{"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "address": "fa:16:3e:2e:88:f2", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a8b7c9c-66", "ovs_interfaceid": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:22 np0005465987 systemd[1]: Started libpod-conmon-a3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721.scope.
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.344 2 DEBUG oslo_concurrency.lockutils [req-b38d2599-e584-4a0d-8913-4d740157f523 req-968ece93-48d2-4d87-b1e0-a8f41c95df84 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:22 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:17:22 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488ac1058c2eaeadba9adfcbe14318f5a23af66e5a1b0aab3432ed8e72a196eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:17:22 np0005465987 podman[258894]: 2025-10-02 12:17:22.383964213 +0000 UTC m=+0.231648365 container init a3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:17:22 np0005465987 podman[258894]: 2025-10-02 12:17:22.393527566 +0000 UTC m=+0.241211748 container start a3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:17:22 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[258916]: [NOTICE]   (258920) : New worker (258922) forked
Oct  2 08:17:22 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[258916]: [NOTICE]   (258920) : Loading success.
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.702 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407442.7017548, 992b01a0-5a43-42ab-bfcb-96c2db503633 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.703 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] VM Started (Lifecycle Event)#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.707 2 DEBUG nova.compute.manager [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.713 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.718 2 INFO nova.virt.libvirt.driver [-] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance spawned successfully.#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.719 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.726 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.737 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.750 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.750 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.751 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.751 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.751 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.752 2 DEBUG nova.virt.libvirt.driver [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.758 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.758 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407442.7022045, 992b01a0-5a43-42ab-bfcb-96c2db503633 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.758 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.795 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.796 2 DEBUG oslo_concurrency.lockutils [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.796 2 DEBUG oslo_concurrency.lockutils [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquired lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.797 2 DEBUG nova.network.neutron [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.800 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407442.711921, 992b01a0-5a43-42ab-bfcb-96c2db503633 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.801 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.829 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.833 2 INFO nova.compute.manager [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Took 3.83 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.833 2 DEBUG nova.compute.manager [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.836 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.869 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.896 2 INFO nova.compute.manager [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Took 10.38 seconds to build instance.#033[00m
Oct  2 08:17:22 np0005465987 nova_compute[230713]: 2025-10-02 12:17:22.922 2 DEBUG oslo_concurrency.lockutils [None req-62783a05-db77-4efa-803a-091eb76fbf3c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:23.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e244 e244: 3 total, 3 up, 3 in
Oct  2 08:17:23 np0005465987 nova_compute[230713]: 2025-10-02 12:17:23.885 2 DEBUG nova.compute.manager [req-78ea75bf-5f80-4ef0-bee8-df451675ce64 req-181c4bfb-3a91-4c43-95c6-27cf1804ee5f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-vif-plugged-2a8b7c9c-6681-4c28-8c65-774622c5e92c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:23 np0005465987 nova_compute[230713]: 2025-10-02 12:17:23.886 2 DEBUG oslo_concurrency.lockutils [req-78ea75bf-5f80-4ef0-bee8-df451675ce64 req-181c4bfb-3a91-4c43-95c6-27cf1804ee5f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:23 np0005465987 nova_compute[230713]: 2025-10-02 12:17:23.886 2 DEBUG oslo_concurrency.lockutils [req-78ea75bf-5f80-4ef0-bee8-df451675ce64 req-181c4bfb-3a91-4c43-95c6-27cf1804ee5f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:23 np0005465987 nova_compute[230713]: 2025-10-02 12:17:23.886 2 DEBUG oslo_concurrency.lockutils [req-78ea75bf-5f80-4ef0-bee8-df451675ce64 req-181c4bfb-3a91-4c43-95c6-27cf1804ee5f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:23 np0005465987 nova_compute[230713]: 2025-10-02 12:17:23.886 2 DEBUG nova.compute.manager [req-78ea75bf-5f80-4ef0-bee8-df451675ce64 req-181c4bfb-3a91-4c43-95c6-27cf1804ee5f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] No waiting events found dispatching network-vif-plugged-2a8b7c9c-6681-4c28-8c65-774622c5e92c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:23 np0005465987 nova_compute[230713]: 2025-10-02 12:17:23.886 2 WARNING nova.compute.manager [req-78ea75bf-5f80-4ef0-bee8-df451675ce64 req-181c4bfb-3a91-4c43-95c6-27cf1804ee5f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received unexpected event network-vif-plugged-2a8b7c9c-6681-4c28-8c65-774622c5e92c for instance with vm_state active and task_state None.#033[00m
Oct  2 08:17:23 np0005465987 nova_compute[230713]: 2025-10-02 12:17:23.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.034 2 DEBUG oslo_concurrency.lockutils [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.035 2 DEBUG oslo_concurrency.lockutils [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.036 2 DEBUG oslo_concurrency.lockutils [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.036 2 DEBUG oslo_concurrency.lockutils [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.036 2 DEBUG oslo_concurrency.lockutils [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.038 2 INFO nova.compute.manager [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Terminating instance#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.040 2 DEBUG nova.compute.manager [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:17:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:17:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:24.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:17:24 np0005465987 kernel: tap62e8e068-3a (unregistering): left promiscuous mode
Oct  2 08:17:24 np0005465987 NetworkManager[44910]: <info>  [1759407444.1167] device (tap62e8e068-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:24Z|00238|binding|INFO|Releasing lport 62e8e068-3aad-45d3-984e-6d434301bf50 from this chassis (sb_readonly=0)
Oct  2 08:17:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:24Z|00239|binding|INFO|Setting lport 62e8e068-3aad-45d3-984e-6d434301bf50 down in Southbound
Oct  2 08:17:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:24Z|00240|binding|INFO|Removing iface tap62e8e068-3a ovn-installed in OVS
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.138 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:55:af 10.100.0.10'], port_security=['fa:16:3e:63:55:af 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6fbb173d-d50b-4c82-bfb8-d8b3384c81df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee114210-598c-482f-83c7-26c3363a45c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bddf59854a7d4eb1a85ece547923cea0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd3b62a4-b346-4888-8f6a-ca787221af6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f6a61f-52bf-4639-80ed-e59f98940f90, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=62e8e068-3aad-45d3-984e-6d434301bf50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.141 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 62e8e068-3aad-45d3-984e-6d434301bf50 in datapath ee114210-598c-482f-83c7-26c3363a45c4 unbound from our chassis#033[00m
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.146 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee114210-598c-482f-83c7-26c3363a45c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.148 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a3de3185-df60-48b0-aa86-a4238b56603e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.149 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 namespace which is not needed anymore#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:24 np0005465987 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Oct  2 08:17:24 np0005465987 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004a.scope: Consumed 15.433s CPU time.
Oct  2 08:17:24 np0005465987 systemd-machined[188335]: Machine qemu-33-instance-0000004a terminated.
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.279 2 INFO nova.virt.libvirt.driver [-] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Instance destroyed successfully.#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.279 2 DEBUG nova.objects.instance [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lazy-loading 'resources' on Instance uuid 6fbb173d-d50b-4c82-bfb8-d8b3384c81df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:24 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[258061]: [NOTICE]   (258071) : haproxy version is 2.8.14-c23fe91
Oct  2 08:17:24 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[258061]: [NOTICE]   (258071) : path to executable is /usr/sbin/haproxy
Oct  2 08:17:24 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[258061]: [WARNING]  (258071) : Exiting Master process...
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.295 2 DEBUG nova.virt.libvirt.vif [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632945682',display_name='tempest-tempest.common.compute-instance-632945682',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632945682',id=74,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATuG0Igs/aClVmsQdGPZDgNmvV7Bgyq0Y4nFm5M4crDAlAvWuUKcpXWJIzqspIHqd22HRQCaWbqLnJi4PrcYo9nS6OUtnl8L8k0zpd1d2KcZKsipjVMcOXUmRyImfEBhg==',key_name='tempest-keypair-716714059',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:16:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-j8xh03md',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=6fbb173d-d50b-4c82-bfb8-d8b3384c81df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.295 2 DEBUG nova.network.os_vif_util [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.296 2 DEBUG nova.network.os_vif_util [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:55:af,bridge_name='br-int',has_traffic_filtering=True,id=62e8e068-3aad-45d3-984e-6d434301bf50,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62e8e068-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.296 2 DEBUG os_vif [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:55:af,bridge_name='br-int',has_traffic_filtering=True,id=62e8e068-3aad-45d3-984e-6d434301bf50,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62e8e068-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:17:24 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[258061]: [ALERT]    (258071) : Current worker (258073) exited with code 143 (Terminated)
Oct  2 08:17:24 np0005465987 neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4[258061]: [WARNING]  (258071) : All workers exited. Exiting... (0)
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.299 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62e8e068-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:24 np0005465987 systemd[1]: libpod-11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3.scope: Deactivated successfully.
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:17:24 np0005465987 podman[258954]: 2025-10-02 12:17:24.309649257 +0000 UTC m=+0.070140718 container died 11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.312 2 INFO os_vif [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:55:af,bridge_name='br-int',has_traffic_filtering=True,id=62e8e068-3aad-45d3-984e-6d434301bf50,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62e8e068-3a')#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.314 2 DEBUG nova.virt.libvirt.vif [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-632945682',display_name='tempest-tempest.common.compute-instance-632945682',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-632945682',id=74,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATuG0Igs/aClVmsQdGPZDgNmvV7Bgyq0Y4nFm5M4crDAlAvWuUKcpXWJIzqspIHqd22HRQCaWbqLnJi4PrcYo9nS6OUtnl8L8k0zpd1d2KcZKsipjVMcOXUmRyImfEBhg==',key_name='tempest-keypair-716714059',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:16:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bddf59854a7d4eb1a85ece547923cea0',ramdisk_id='',reservation_id='r-j8xh03md',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-751325164',owner_user_name='tempest-AttachInterfacesTestJSON-751325164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e627e098f2b465d97099fee8f489b71',uuid=6fbb173d-d50b-4c82-bfb8-d8b3384c81df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "address": "fa:16:3e:2e:88:f2", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a8b7c9c-66", "ovs_interfaceid": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.314 2 DEBUG nova.network.os_vif_util [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converting VIF {"id": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "address": "fa:16:3e:2e:88:f2", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a8b7c9c-66", "ovs_interfaceid": "2a8b7c9c-6681-4c28-8c65-774622c5e92c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.315 2 DEBUG nova.network.os_vif_util [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:88:f2,bridge_name='br-int',has_traffic_filtering=True,id=2a8b7c9c-6681-4c28-8c65-774622c5e92c,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2a8b7c9c-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.315 2 DEBUG os_vif [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:88:f2,bridge_name='br-int',has_traffic_filtering=True,id=2a8b7c9c-6681-4c28-8c65-774622c5e92c,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2a8b7c9c-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.317 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a8b7c9c-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.317 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.320 2 INFO os_vif [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:88:f2,bridge_name='br-int',has_traffic_filtering=True,id=2a8b7c9c-6681-4c28-8c65-774622c5e92c,network=Network(ee114210-598c-482f-83c7-26c3363a45c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2a8b7c9c-66')#033[00m
Oct  2 08:17:24 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3-userdata-shm.mount: Deactivated successfully.
Oct  2 08:17:24 np0005465987 systemd[1]: var-lib-containers-storage-overlay-9431b0a525e33ab78eaa4b9eccc66a410abd187d32109ab853068d2b0c757b25-merged.mount: Deactivated successfully.
Oct  2 08:17:24 np0005465987 podman[258954]: 2025-10-02 12:17:24.405201201 +0000 UTC m=+0.165692672 container cleanup 11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:17:24 np0005465987 systemd[1]: libpod-conmon-11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3.scope: Deactivated successfully.
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.438 2 DEBUG nova.compute.manager [req-b0c44b48-28bc-499f-ab0a-12ce92a566fc req-7755c74e-59e1-444a-9be5-05c8d55288c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.438 2 DEBUG oslo_concurrency.lockutils [req-b0c44b48-28bc-499f-ab0a-12ce92a566fc req-7755c74e-59e1-444a-9be5-05c8d55288c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.439 2 DEBUG oslo_concurrency.lockutils [req-b0c44b48-28bc-499f-ab0a-12ce92a566fc req-7755c74e-59e1-444a-9be5-05c8d55288c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.439 2 DEBUG oslo_concurrency.lockutils [req-b0c44b48-28bc-499f-ab0a-12ce92a566fc req-7755c74e-59e1-444a-9be5-05c8d55288c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.439 2 DEBUG nova.compute.manager [req-b0c44b48-28bc-499f-ab0a-12ce92a566fc req-7755c74e-59e1-444a-9be5-05c8d55288c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.439 2 WARNING nova.compute.manager [req-b0c44b48-28bc-499f-ab0a-12ce92a566fc req-7755c74e-59e1-444a-9be5-05c8d55288c4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state active and task_state None.#033[00m
Oct  2 08:17:24 np0005465987 podman[259011]: 2025-10-02 12:17:24.542458022 +0000 UTC m=+0.112103240 container remove 11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.547 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5c6c14-9368-4473-9052-4b5f693248d1]: (4, ('Thu Oct  2 12:17:24 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 (11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3)\n11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3\nThu Oct  2 12:17:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 (11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3)\n11f990260b438910022912952498c9bf999c329753e36cba5c8e914351a690b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.549 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad7f26e-2d4f-4eeb-a6ce-e216d560c324]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.550 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee114210-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:24 np0005465987 kernel: tapee114210-50: left promiscuous mode
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.622 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ab634ba6-0c90-44b5-9836-9106ec1809a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.653 2 DEBUG nova.compute.manager [req-fe82c4d4-003f-4a03-b523-a46bb7f3e4dd req-b969f52e-afeb-45ce-893f-9fdf1c986089 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-vif-unplugged-62e8e068-3aad-45d3-984e-6d434301bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.654 2 DEBUG oslo_concurrency.lockutils [req-fe82c4d4-003f-4a03-b523-a46bb7f3e4dd req-b969f52e-afeb-45ce-893f-9fdf1c986089 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.655 2 DEBUG oslo_concurrency.lockutils [req-fe82c4d4-003f-4a03-b523-a46bb7f3e4dd req-b969f52e-afeb-45ce-893f-9fdf1c986089 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.655 2 DEBUG oslo_concurrency.lockutils [req-fe82c4d4-003f-4a03-b523-a46bb7f3e4dd req-b969f52e-afeb-45ce-893f-9fdf1c986089 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.656 2 DEBUG nova.compute.manager [req-fe82c4d4-003f-4a03-b523-a46bb7f3e4dd req-b969f52e-afeb-45ce-893f-9fdf1c986089 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] No waiting events found dispatching network-vif-unplugged-62e8e068-3aad-45d3-984e-6d434301bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.655 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9321ba-3b5d-4651-a704-02eb6fbb3183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.656 2 DEBUG nova.compute.manager [req-fe82c4d4-003f-4a03-b523-a46bb7f3e4dd req-b969f52e-afeb-45ce-893f-9fdf1c986089 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-vif-unplugged-62e8e068-3aad-45d3-984e-6d434301bf50 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.657 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab582cb-f1f7-4b97-b477-7136ce745784]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.672 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[58d06190-b3f2-46e9-a591-a0ca7ed6c747]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547401, 'reachable_time': 24983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259028, 'error': None, 'target': 'ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.674 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee114210-598c-482f-83c7-26c3363a45c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:17:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:24.675 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1c4975-3abd-4cc0-af56-cec409738be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:24 np0005465987 systemd[1]: run-netns-ovnmeta\x2dee114210\x2d598c\x2d482f\x2d83c7\x2d26c3363a45c4.mount: Deactivated successfully.
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.873 2 INFO nova.virt.libvirt.driver [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Deleting instance files /var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df_del#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.874 2 INFO nova.virt.libvirt.driver [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Deletion of /var/lib/nova/instances/6fbb173d-d50b-4c82-bfb8-d8b3384c81df_del complete#033[00m
Oct  2 08:17:24 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.999 2 INFO nova.compute.manager [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:17:25 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.999 2 DEBUG oslo.service.loopingcall [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:17:25 np0005465987 nova_compute[230713]: 2025-10-02 12:17:24.999 2 DEBUG nova.compute.manager [-] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:17:25 np0005465987 nova_compute[230713]: 2025-10-02 12:17:25.000 2 DEBUG nova.network.neutron [-] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:17:25 np0005465987 nova_compute[230713]: 2025-10-02 12:17:25.165 2 INFO nova.network.neutron [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Port 2a8b7c9c-6681-4c28-8c65-774622c5e92c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  2 08:17:25 np0005465987 nova_compute[230713]: 2025-10-02 12:17:25.166 2 DEBUG nova.network.neutron [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updating instance_info_cache with network_info: [{"id": "62e8e068-3aad-45d3-984e-6d434301bf50", "address": "fa:16:3e:63:55:af", "network": {"id": "ee114210-598c-482f-83c7-26c3363a45c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-985096952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bddf59854a7d4eb1a85ece547923cea0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62e8e068-3a", "ovs_interfaceid": "62e8e068-3aad-45d3-984e-6d434301bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:25 np0005465987 nova_compute[230713]: 2025-10-02 12:17:25.195 2 DEBUG oslo_concurrency.lockutils [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Releasing lock "refresh_cache-6fbb173d-d50b-4c82-bfb8-d8b3384c81df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:25 np0005465987 nova_compute[230713]: 2025-10-02 12:17:25.218 2 DEBUG oslo_concurrency.lockutils [None req-96a81a2d-74fd-4ffe-b459-e9a438cfbd3e 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "interface-6fbb173d-d50b-4c82-bfb8-d8b3384c81df-2a8b7c9c-6681-4c28-8c65-774622c5e92c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:25.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:25 np0005465987 nova_compute[230713]: 2025-10-02 12:17:25.321 2 INFO nova.compute.manager [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Rescuing#033[00m
Oct  2 08:17:25 np0005465987 nova_compute[230713]: 2025-10-02 12:17:25.321 2 DEBUG oslo_concurrency.lockutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:25 np0005465987 nova_compute[230713]: 2025-10-02 12:17:25.322 2 DEBUG oslo_concurrency.lockutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquired lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:25 np0005465987 nova_compute[230713]: 2025-10-02 12:17:25.322 2 DEBUG nova.network.neutron [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:17:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e245 e245: 3 total, 3 up, 3 in
Oct  2 08:17:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:26.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:26 np0005465987 nova_compute[230713]: 2025-10-02 12:17:26.741 2 DEBUG nova.compute.manager [req-063c2ee5-ffc0-4b2a-83a1-7a92ee023194 req-9020c6ac-b310-40f9-b529-2883ad016f7b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-vif-plugged-62e8e068-3aad-45d3-984e-6d434301bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:26 np0005465987 nova_compute[230713]: 2025-10-02 12:17:26.742 2 DEBUG oslo_concurrency.lockutils [req-063c2ee5-ffc0-4b2a-83a1-7a92ee023194 req-9020c6ac-b310-40f9-b529-2883ad016f7b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:26 np0005465987 nova_compute[230713]: 2025-10-02 12:17:26.742 2 DEBUG oslo_concurrency.lockutils [req-063c2ee5-ffc0-4b2a-83a1-7a92ee023194 req-9020c6ac-b310-40f9-b529-2883ad016f7b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:26 np0005465987 nova_compute[230713]: 2025-10-02 12:17:26.742 2 DEBUG oslo_concurrency.lockutils [req-063c2ee5-ffc0-4b2a-83a1-7a92ee023194 req-9020c6ac-b310-40f9-b529-2883ad016f7b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:26 np0005465987 nova_compute[230713]: 2025-10-02 12:17:26.742 2 DEBUG nova.compute.manager [req-063c2ee5-ffc0-4b2a-83a1-7a92ee023194 req-9020c6ac-b310-40f9-b529-2883ad016f7b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] No waiting events found dispatching network-vif-plugged-62e8e068-3aad-45d3-984e-6d434301bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:26 np0005465987 nova_compute[230713]: 2025-10-02 12:17:26.742 2 WARNING nova.compute.manager [req-063c2ee5-ffc0-4b2a-83a1-7a92ee023194 req-9020c6ac-b310-40f9-b529-2883ad016f7b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received unexpected event network-vif-plugged-62e8e068-3aad-45d3-984e-6d434301bf50 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:17:26 np0005465987 podman[259031]: 2025-10-02 12:17:26.843661093 +0000 UTC m=+0.060655986 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2)
Oct  2 08:17:26 np0005465987 podman[259030]: 2025-10-02 12:17:26.853111543 +0000 UTC m=+0.062219761 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:17:26 np0005465987 podman[259029]: 2025-10-02 12:17:26.874348636 +0000 UTC m=+0.096340797 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:17:27 np0005465987 nova_compute[230713]: 2025-10-02 12:17:27.192 2 DEBUG nova.network.neutron [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Updating instance_info_cache with network_info: [{"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:27 np0005465987 nova_compute[230713]: 2025-10-02 12:17:27.230 2 DEBUG oslo_concurrency.lockutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Releasing lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:27.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:27 np0005465987 nova_compute[230713]: 2025-10-02 12:17:27.549 2 DEBUG nova.network.neutron [-] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:27 np0005465987 nova_compute[230713]: 2025-10-02 12:17:27.574 2 INFO nova.compute.manager [-] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Took 2.57 seconds to deallocate network for instance.#033[00m
Oct  2 08:17:27 np0005465987 nova_compute[230713]: 2025-10-02 12:17:27.586 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:17:27 np0005465987 nova_compute[230713]: 2025-10-02 12:17:27.620 2 DEBUG nova.compute.manager [req-e5f2ec15-56e6-43b8-a4c3-6d4c08a6926e req-cfcdd2c9-ab6e-42de-a546-6497c9b6ee90 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Received event network-vif-deleted-62e8e068-3aad-45d3-984e-6d434301bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:27 np0005465987 nova_compute[230713]: 2025-10-02 12:17:27.625 2 DEBUG oslo_concurrency.lockutils [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:27 np0005465987 nova_compute[230713]: 2025-10-02 12:17:27.625 2 DEBUG oslo_concurrency.lockutils [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:27 np0005465987 nova_compute[230713]: 2025-10-02 12:17:27.698 2 DEBUG oslo_concurrency.processutils [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:27.810 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e246 e246: 3 total, 3 up, 3 in
Oct  2 08:17:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:27.943 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:27.944 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:17:27.944 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:28.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3409883207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:28 np0005465987 nova_compute[230713]: 2025-10-02 12:17:28.124 2 DEBUG oslo_concurrency.processutils [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:28 np0005465987 nova_compute[230713]: 2025-10-02 12:17:28.130 2 DEBUG nova.compute.provider_tree [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:17:28 np0005465987 nova_compute[230713]: 2025-10-02 12:17:28.151 2 DEBUG nova.scheduler.client.report [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:17:28 np0005465987 nova_compute[230713]: 2025-10-02 12:17:28.177 2 DEBUG oslo_concurrency.lockutils [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:28 np0005465987 nova_compute[230713]: 2025-10-02 12:17:28.198 2 INFO nova.scheduler.client.report [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Deleted allocations for instance 6fbb173d-d50b-4c82-bfb8-d8b3384c81df#033[00m
Oct  2 08:17:28 np0005465987 nova_compute[230713]: 2025-10-02 12:17:28.256 2 DEBUG oslo_concurrency.lockutils [None req-f6f2ae50-7f27-4496-a3d7-086836656b91 8e627e098f2b465d97099fee8f489b71 bddf59854a7d4eb1a85ece547923cea0 - - default default] Lock "6fbb173d-d50b-4c82-bfb8-d8b3384c81df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:28 np0005465987 nova_compute[230713]: 2025-10-02 12:17:28.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e247 e247: 3 total, 3 up, 3 in
Oct  2 08:17:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:29.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:29 np0005465987 nova_compute[230713]: 2025-10-02 12:17:29.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:30.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:31.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:32 np0005465987 nova_compute[230713]: 2025-10-02 12:17:32.074 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407437.0732584, 04b6ea5a-b329-4bd1-bf27-48a8644550c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:32 np0005465987 nova_compute[230713]: 2025-10-02 12:17:32.074 2 INFO nova.compute.manager [-] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:17:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:32.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:32 np0005465987 nova_compute[230713]: 2025-10-02 12:17:32.098 2 DEBUG nova.compute.manager [None req-8d34121f-a0d8-46ae-8ddf-7f95f1340695 - - - - - -] [instance: 04b6ea5a-b329-4bd1-bf27-48a8644550c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:33.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e248 e248: 3 total, 3 up, 3 in
Oct  2 08:17:33 np0005465987 nova_compute[230713]: 2025-10-02 12:17:33.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:17:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:34.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:17:34 np0005465987 nova_compute[230713]: 2025-10-02 12:17:34.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:35.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:17:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:36.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:17:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:17:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:37.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:17:37 np0005465987 nova_compute[230713]: 2025-10-02 12:17:37.631 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:17:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:38.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 e249: 3 total, 3 up, 3 in
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.806535) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458806578, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2393, "num_deletes": 268, "total_data_size": 5137116, "memory_usage": 5218704, "flush_reason": "Manual Compaction"}
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458827651, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 3369958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36348, "largest_seqno": 38736, "table_properties": {"data_size": 3359936, "index_size": 6388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21114, "raw_average_key_size": 20, "raw_value_size": 3339827, "raw_average_value_size": 3319, "num_data_blocks": 275, "num_entries": 1006, "num_filter_entries": 1006, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407300, "oldest_key_time": 1759407300, "file_creation_time": 1759407458, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 21152 microseconds, and 6881 cpu microseconds.
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.827689) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 3369958 bytes OK
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.827820) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.829131) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.829145) EVENT_LOG_v1 {"time_micros": 1759407458829140, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.829162) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 5126433, prev total WAL file size 5126433, number of live WAL files 2.
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.830379) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(3290KB)], [69(7993KB)]
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458830408, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 11554961, "oldest_snapshot_seqno": -1}
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6347 keys, 11404446 bytes, temperature: kUnknown
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458901592, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 11404446, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11359618, "index_size": 27886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 162321, "raw_average_key_size": 25, "raw_value_size": 11243366, "raw_average_value_size": 1771, "num_data_blocks": 1122, "num_entries": 6347, "num_filter_entries": 6347, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759407458, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.901879) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11404446 bytes
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.903241) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.0 rd, 159.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.8 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(6.8) write-amplify(3.4) OK, records in: 6892, records dropped: 545 output_compression: NoCompression
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.903263) EVENT_LOG_v1 {"time_micros": 1759407458903252, "job": 42, "event": "compaction_finished", "compaction_time_micros": 71320, "compaction_time_cpu_micros": 24648, "output_level": 6, "num_output_files": 1, "total_output_size": 11404446, "num_input_records": 6892, "num_output_records": 6347, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458904257, "job": 42, "event": "table_file_deletion", "file_number": 71}
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458906639, "job": 42, "event": "table_file_deletion", "file_number": 69}
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.830282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.906777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.906783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.906785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.906786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:38.906790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005465987 nova_compute[230713]: 2025-10-02 12:17:38.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:39 np0005465987 nova_compute[230713]: 2025-10-02 12:17:39.276 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407444.2759995, 6fbb173d-d50b-4c82-bfb8-d8b3384c81df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:39 np0005465987 nova_compute[230713]: 2025-10-02 12:17:39.277 2 INFO nova.compute.manager [-] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:17:39 np0005465987 nova_compute[230713]: 2025-10-02 12:17:39.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:39.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:39 np0005465987 nova_compute[230713]: 2025-10-02 12:17:39.405 2 DEBUG nova.compute.manager [None req-c7ec50c7-d0c1-4618-84fe-4e127e206b7a - - - - - -] [instance: 6fbb173d-d50b-4c82-bfb8-d8b3384c81df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:40.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:17:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:41.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:17:41 np0005465987 nova_compute[230713]: 2025-10-02 12:17:41.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:41 np0005465987 nova_compute[230713]: 2025-10-02 12:17:41.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:17:41 np0005465987 nova_compute[230713]: 2025-10-02 12:17:41.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:17:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:42.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:42 np0005465987 nova_compute[230713]: 2025-10-02 12:17:42.753 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:42 np0005465987 nova_compute[230713]: 2025-10-02 12:17:42.753 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:42 np0005465987 nova_compute[230713]: 2025-10-02 12:17:42.754 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:17:42 np0005465987 nova_compute[230713]: 2025-10-02 12:17:42.754 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:43.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:43 np0005465987 nova_compute[230713]: 2025-10-02 12:17:43.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:44.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:44Z|00241|binding|INFO|Releasing lport 421cd6e3-75aa-44e1-b552-d119c4fcd629 from this chassis (sb_readonly=0)
Oct  2 08:17:44 np0005465987 nova_compute[230713]: 2025-10-02 12:17:44.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:44 np0005465987 nova_compute[230713]: 2025-10-02 12:17:44.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:17:44Z|00242|binding|INFO|Releasing lport 421cd6e3-75aa-44e1-b552-d119c4fcd629 from this chassis (sb_readonly=0)
Oct  2 08:17:44 np0005465987 nova_compute[230713]: 2025-10-02 12:17:44.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:45.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:46 np0005465987 nova_compute[230713]: 2025-10-02 12:17:46.023 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Updating instance_info_cache with network_info: [{"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:46 np0005465987 nova_compute[230713]: 2025-10-02 12:17:46.055 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:46 np0005465987 nova_compute[230713]: 2025-10-02 12:17:46.055 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:17:46 np0005465987 nova_compute[230713]: 2025-10-02 12:17:46.056 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:17:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:46.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:17:46 np0005465987 nova_compute[230713]: 2025-10-02 12:17:46.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:46 np0005465987 nova_compute[230713]: 2025-10-02 12:17:46.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:17:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:47.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:17:47 np0005465987 podman[259117]: 2025-10-02 12:17:47.854591345 +0000 UTC m=+0.072382568 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 08:17:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:48.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:48 np0005465987 nova_compute[230713]: 2025-10-02 12:17:48.678 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:17:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:48 np0005465987 nova_compute[230713]: 2025-10-02 12:17:48.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:49 np0005465987 nova_compute[230713]: 2025-10-02 12:17:49.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:49.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:49 np0005465987 nova_compute[230713]: 2025-10-02 12:17:49.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:17:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:50.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:17:50 np0005465987 nova_compute[230713]: 2025-10-02 12:17:50.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:50 np0005465987 nova_compute[230713]: 2025-10-02 12:17:50.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:50 np0005465987 nova_compute[230713]: 2025-10-02 12:17:50.993 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:50 np0005465987 nova_compute[230713]: 2025-10-02 12:17:50.994 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:50 np0005465987 nova_compute[230713]: 2025-10-02 12:17:50.994 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:50 np0005465987 nova_compute[230713]: 2025-10-02 12:17:50.994 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:17:50 np0005465987 nova_compute[230713]: 2025-10-02 12:17:50.994 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:51.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2033746805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:51 np0005465987 nova_compute[230713]: 2025-10-02 12:17:51.488 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:51 np0005465987 nova_compute[230713]: 2025-10-02 12:17:51.576 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:17:51 np0005465987 nova_compute[230713]: 2025-10-02 12:17:51.576 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:17:51 np0005465987 nova_compute[230713]: 2025-10-02 12:17:51.743 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:17:51 np0005465987 nova_compute[230713]: 2025-10-02 12:17:51.744 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4558MB free_disk=20.942455291748047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:17:51 np0005465987 nova_compute[230713]: 2025-10-02 12:17:51.745 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:51 np0005465987 nova_compute[230713]: 2025-10-02 12:17:51.745 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:51 np0005465987 nova_compute[230713]: 2025-10-02 12:17:51.916 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 992b01a0-5a43-42ab-bfcb-96c2db503633 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:17:51 np0005465987 nova_compute[230713]: 2025-10-02 12:17:51.916 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:17:51 np0005465987 nova_compute[230713]: 2025-10-02 12:17:51.917 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:17:51 np0005465987 nova_compute[230713]: 2025-10-02 12:17:51.981 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:52.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:52 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3221452715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:52 np0005465987 nova_compute[230713]: 2025-10-02 12:17:52.430 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:52 np0005465987 nova_compute[230713]: 2025-10-02 12:17:52.437 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:17:52 np0005465987 nova_compute[230713]: 2025-10-02 12:17:52.851 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:17:52 np0005465987 nova_compute[230713]: 2025-10-02 12:17:52.882 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:17:52 np0005465987 nova_compute[230713]: 2025-10-02 12:17:52.883 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:53.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:53 np0005465987 nova_compute[230713]: 2025-10-02 12:17:53.883 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:53 np0005465987 nova_compute[230713]: 2025-10-02 12:17:53.884 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:53 np0005465987 nova_compute[230713]: 2025-10-02 12:17:53.884 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:17:53 np0005465987 nova_compute[230713]: 2025-10-02 12:17:53.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:54.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:54 np0005465987 nova_compute[230713]: 2025-10-02 12:17:54.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:55.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.764271) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407475764323, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 427, "num_deletes": 251, "total_data_size": 468013, "memory_usage": 476120, "flush_reason": "Manual Compaction"}
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407475768576, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 308298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38741, "largest_seqno": 39163, "table_properties": {"data_size": 305884, "index_size": 514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5950, "raw_average_key_size": 18, "raw_value_size": 301121, "raw_average_value_size": 949, "num_data_blocks": 23, "num_entries": 317, "num_filter_entries": 317, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407458, "oldest_key_time": 1759407458, "file_creation_time": 1759407475, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 4331 microseconds, and 1874 cpu microseconds.
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.768611) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 308298 bytes OK
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.768626) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.772261) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.772276) EVENT_LOG_v1 {"time_micros": 1759407475772271, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.772289) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 465313, prev total WAL file size 465313, number of live WAL files 2.
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.772774) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(301KB)], [72(10MB)]
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407475772815, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 11712744, "oldest_snapshot_seqno": -1}
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6154 keys, 9867467 bytes, temperature: kUnknown
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407475824431, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 9867467, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9825277, "index_size": 25696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 159017, "raw_average_key_size": 25, "raw_value_size": 9713678, "raw_average_value_size": 1578, "num_data_blocks": 1023, "num_entries": 6154, "num_filter_entries": 6154, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759407475, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.824605) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 9867467 bytes
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.825643) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 226.7 rd, 191.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.9 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(70.0) write-amplify(32.0) OK, records in: 6664, records dropped: 510 output_compression: NoCompression
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.825657) EVENT_LOG_v1 {"time_micros": 1759407475825650, "job": 44, "event": "compaction_finished", "compaction_time_micros": 51667, "compaction_time_cpu_micros": 25690, "output_level": 6, "num_output_files": 1, "total_output_size": 9867467, "num_input_records": 6664, "num_output_records": 6154, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407475825807, "job": 44, "event": "table_file_deletion", "file_number": 74}
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407475827803, "job": 44, "event": "table_file_deletion", "file_number": 72}
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.772686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.827876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.827882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.827884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.827885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:55 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:17:55.827887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:56.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:57.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:57 np0005465987 podman[259183]: 2025-10-02 12:17:57.853462157 +0000 UTC m=+0.060866884 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:17:57 np0005465987 podman[259184]: 2025-10-02 12:17:57.867488582 +0000 UTC m=+0.073036547 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:17:57 np0005465987 podman[259182]: 2025-10-02 12:17:57.867592445 +0000 UTC m=+0.082517329 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  2 08:17:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:58.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:58 np0005465987 nova_compute[230713]: 2025-10-02 12:17:58.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:59 np0005465987 nova_compute[230713]: 2025-10-02 12:17:59.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:17:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:59.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:59 np0005465987 nova_compute[230713]: 2025-10-02 12:17:59.724 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:18:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:18:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:00.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:18:00 np0005465987 nova_compute[230713]: 2025-10-02 12:18:00.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:01.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:02.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:03.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:03 np0005465987 nova_compute[230713]: 2025-10-02 12:18:03.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:04.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:04 np0005465987 nova_compute[230713]: 2025-10-02 12:18:04.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:05.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:18:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:06.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:18:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:18:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:18:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:18:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:07.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:08.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:08 np0005465987 nova_compute[230713]: 2025-10-02 12:18:08.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:09 np0005465987 nova_compute[230713]: 2025-10-02 12:18:09.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:09.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:10.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:10 np0005465987 nova_compute[230713]: 2025-10-02 12:18:10.774 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:18:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:11.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:12.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:13.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:13 np0005465987 nova_compute[230713]: 2025-10-02 12:18:13.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:14.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:14 np0005465987 nova_compute[230713]: 2025-10-02 12:18:14.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:15.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:18:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:18:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:16.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:17.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:18.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:18 np0005465987 podman[259430]: 2025-10-02 12:18:18.847415417 +0000 UTC m=+0.056923035 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:18:19 np0005465987 nova_compute[230713]: 2025-10-02 12:18:19.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:19 np0005465987 nova_compute[230713]: 2025-10-02 12:18:19.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:18:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:19.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:18:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:20.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:21.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:21 np0005465987 nova_compute[230713]: 2025-10-02 12:18:21.821 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:18:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:22.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:23.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:24 np0005465987 nova_compute[230713]: 2025-10-02 12:18:24.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:24 np0005465987 nova_compute[230713]: 2025-10-02 12:18:24.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:25.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:18:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:26.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:18:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:18:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3737825568' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:18:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:18:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3737825568' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:18:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:27.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:27 np0005465987 nova_compute[230713]: 2025-10-02 12:18:27.844 2 INFO nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance failed to shutdown in 60 seconds.#033[00m
Oct  2 08:18:27 np0005465987 kernel: tap60c78706-8c (unregistering): left promiscuous mode
Oct  2 08:18:27 np0005465987 NetworkManager[44910]: <info>  [1759407507.8854] device (tap60c78706-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:18:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:27Z|00243|binding|INFO|Releasing lport 60c78706-8cfe-409b-91d7-dc811b9de69a from this chassis (sb_readonly=0)
Oct  2 08:18:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:27Z|00244|binding|INFO|Setting lport 60c78706-8cfe-409b-91d7-dc811b9de69a down in Southbound
Oct  2 08:18:27 np0005465987 nova_compute[230713]: 2025-10-02 12:18:27.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:27Z|00245|binding|INFO|Removing iface tap60c78706-8c ovn-installed in OVS
Oct  2 08:18:27 np0005465987 nova_compute[230713]: 2025-10-02 12:18:27.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:27.909 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:85:3a 10.100.0.13'], port_security=['fa:16:3e:f3:85:3a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '992b01a0-5a43-42ab-bfcb-96c2db503633', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=60c78706-8cfe-409b-91d7-dc811b9de69a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:27.910 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 60c78706-8cfe-409b-91d7-dc811b9de69a in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 unbound from our chassis#033[00m
Oct  2 08:18:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:27.913 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:27.914 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c83f71da-3ed7-4a7f-81e6-6b1a7af899b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:27.914 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 namespace which is not needed anymore#033[00m
Oct  2 08:18:27 np0005465987 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Oct  2 08:18:27 np0005465987 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004c.scope: Consumed 1.882s CPU time.
Oct  2 08:18:27 np0005465987 nova_compute[230713]: 2025-10-02 12:18:27.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:27.944 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:27.945 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:27.945 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:27 np0005465987 systemd-machined[188335]: Machine qemu-34-instance-0000004c terminated.
Oct  2 08:18:27 np0005465987 podman[259454]: 2025-10-02 12:18:27.987673675 +0000 UTC m=+0.072648947 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:18:28 np0005465987 podman[259450]: 2025-10-02 12:18:28.009105653 +0000 UTC m=+0.100559004 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:18:28 np0005465987 podman[259453]: 2025-10-02 12:18:28.010785459 +0000 UTC m=+0.099881495 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct  2 08:18:28 np0005465987 kernel: tap60c78706-8c: entered promiscuous mode
Oct  2 08:18:28 np0005465987 systemd-udevd[259483]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:18:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:28Z|00246|binding|INFO|Claiming lport 60c78706-8cfe-409b-91d7-dc811b9de69a for this chassis.
Oct  2 08:18:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:28Z|00247|binding|INFO|60c78706-8cfe-409b-91d7-dc811b9de69a: Claiming fa:16:3e:f3:85:3a 10.100.0.13
Oct  2 08:18:28 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[258916]: [NOTICE]   (258920) : haproxy version is 2.8.14-c23fe91
Oct  2 08:18:28 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[258916]: [NOTICE]   (258920) : path to executable is /usr/sbin/haproxy
Oct  2 08:18:28 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[258916]: [ALERT]    (258920) : Current worker (258922) exited with code 143 (Terminated)
Oct  2 08:18:28 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[258916]: [WARNING]  (258920) : All workers exited. Exiting... (0)
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:28 np0005465987 systemd[1]: libpod-a3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721.scope: Deactivated successfully.
Oct  2 08:18:28 np0005465987 NetworkManager[44910]: <info>  [1759407508.0701] manager: (tap60c78706-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Oct  2 08:18:28 np0005465987 kernel: tap60c78706-8c (unregistering): left promiscuous mode
Oct  2 08:18:28 np0005465987 podman[259534]: 2025-10-02 12:18:28.079588899 +0000 UTC m=+0.054158919 container died a3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.078 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:85:3a 10.100.0.13'], port_security=['fa:16:3e:f3:85:3a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '992b01a0-5a43-42ab-bfcb-96c2db503633', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=60c78706-8cfe-409b-91d7-dc811b9de69a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:28Z|00248|binding|INFO|Setting lport 60c78706-8cfe-409b-91d7-dc811b9de69a ovn-installed in OVS
Oct  2 08:18:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:28Z|00249|binding|INFO|Setting lport 60c78706-8cfe-409b-91d7-dc811b9de69a up in Southbound
Oct  2 08:18:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:28Z|00250|binding|INFO|Releasing lport 60c78706-8cfe-409b-91d7-dc811b9de69a from this chassis (sb_readonly=1)
Oct  2 08:18:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:28Z|00251|if_status|INFO|Not setting lport 60c78706-8cfe-409b-91d7-dc811b9de69a down as sb is readonly
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:28Z|00252|binding|INFO|Removing iface tap60c78706-8c ovn-installed in OVS
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.100 2 INFO nova.virt.libvirt.driver [-] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance destroyed successfully.#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.101 2 DEBUG nova.objects.instance [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:28Z|00253|binding|INFO|Releasing lport 60c78706-8cfe-409b-91d7-dc811b9de69a from this chassis (sb_readonly=0)
Oct  2 08:18:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:28Z|00254|binding|INFO|Setting lport 60c78706-8cfe-409b-91d7-dc811b9de69a down in Southbound
Oct  2 08:18:28 np0005465987 systemd[1]: var-lib-containers-storage-overlay-488ac1058c2eaeadba9adfcbe14318f5a23af66e5a1b0aab3432ed8e72a196eb-merged.mount: Deactivated successfully.
Oct  2 08:18:28 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721-userdata-shm.mount: Deactivated successfully.
Oct  2 08:18:28 np0005465987 podman[259534]: 2025-10-02 12:18:28.118796516 +0000 UTC m=+0.093366526 container cleanup a3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.123 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:85:3a 10.100.0.13'], port_security=['fa:16:3e:f3:85:3a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '992b01a0-5a43-42ab-bfcb-96c2db503633', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=60c78706-8cfe-409b-91d7-dc811b9de69a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.131 2 INFO nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Attempting a stable device rescue#033[00m
Oct  2 08:18:28 np0005465987 systemd[1]: libpod-conmon-a3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721.scope: Deactivated successfully.
Oct  2 08:18:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:28.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:28 np0005465987 podman[259569]: 2025-10-02 12:18:28.184988125 +0000 UTC m=+0.040011040 container remove a3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.191 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2000ffd8-4cb5-4873-92e3-b595f61173d0]: (4, ('Thu Oct  2 12:18:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 (a3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721)\na3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721\nThu Oct  2 12:18:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 (a3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721)\na3240b1718fe010208385f37889ee0906d9496a53821332d13a366042a7e6721\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.193 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[20ee8101-69d1-4d0b-b1d1-a4a1610bc03f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.194 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1725bd8-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:28 np0005465987 kernel: tapf1725bd8-70: left promiscuous mode
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.261 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f98275ca-2320-42f5-a7cd-0b739daf552e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.286 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f28d72ff-4353-437a-9a95-fe6dd0508ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.287 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[59e055da-db48-44ca-b4f3-5ca625702903]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.300 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[989cd97c-4099-4b48-b012-b3bb382c5ff4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551013, 'reachable_time': 42402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259589, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.302 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.302 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[e1919d33-b635-4071-ad86-b547dc21143f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:28 np0005465987 systemd[1]: run-netns-ovnmeta\x2df1725bd8\x2d7d9d\x2d45cc\x2db992\x2d0cd3db0e30f0.mount: Deactivated successfully.
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.304 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 60c78706-8cfe-409b-91d7-dc811b9de69a in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 unbound from our chassis#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.305 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.306 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a99a30-41ca-40f3-a8d8-bb78508f68e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.306 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 60c78706-8cfe-409b-91d7-dc811b9de69a in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 unbound from our chassis#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.307 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.307 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8c253c91-0bb8-4479-a183-3ba9dc1dbfe8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.367 2 DEBUG nova.compute.manager [req-f7ae6435-68da-4d24-95b3-645d6b3479d3 req-1623cc95-d192-4fe5-91bb-79b6882adec0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-unplugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.369 2 DEBUG oslo_concurrency.lockutils [req-f7ae6435-68da-4d24-95b3-645d6b3479d3 req-1623cc95-d192-4fe5-91bb-79b6882adec0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.369 2 DEBUG oslo_concurrency.lockutils [req-f7ae6435-68da-4d24-95b3-645d6b3479d3 req-1623cc95-d192-4fe5-91bb-79b6882adec0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.369 2 DEBUG oslo_concurrency.lockutils [req-f7ae6435-68da-4d24-95b3-645d6b3479d3 req-1623cc95-d192-4fe5-91bb-79b6882adec0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.370 2 DEBUG nova.compute.manager [req-f7ae6435-68da-4d24-95b3-645d6b3479d3 req-1623cc95-d192-4fe5-91bb-79b6882adec0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-unplugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.370 2 WARNING nova.compute.manager [req-f7ae6435-68da-4d24-95b3-645d6b3479d3 req-1623cc95-d192-4fe5-91bb-79b6882adec0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-unplugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.527 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:28.528 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:18:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:18:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3194550672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:18:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:18:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3194550672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:18:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.783 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.787 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.787 2 INFO nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Creating image(s)#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.815 2 DEBUG nova.storage.rbd_utils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image 992b01a0-5a43-42ab-bfcb-96c2db503633_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.819 2 DEBUG nova.objects.instance [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.865 2 DEBUG nova.storage.rbd_utils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image 992b01a0-5a43-42ab-bfcb-96c2db503633_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.894 2 DEBUG nova.storage.rbd_utils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image 992b01a0-5a43-42ab-bfcb-96c2db503633_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.897 2 DEBUG oslo_concurrency.lockutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "249ec3c5f09247ab2d572fbbf93d5ccce029cf9d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:28 np0005465987 nova_compute[230713]: 2025-10-02 12:18:28.898 2 DEBUG oslo_concurrency.lockutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "249ec3c5f09247ab2d572fbbf93d5ccce029cf9d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:29.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.532 2 DEBUG nova.virt.libvirt.imagebackend [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/e7ad18e6-654f-48ae-a957-a50e1a2c7a2d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/e7ad18e6-654f-48ae-a957-a50e1a2c7a2d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.598 2 DEBUG nova.virt.libvirt.imagebackend [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Selected location: {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/e7ad18e6-654f-48ae-a957-a50e1a2c7a2d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.599 2 DEBUG nova.storage.rbd_utils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] cloning images/e7ad18e6-654f-48ae-a957-a50e1a2c7a2d@snap to None/992b01a0-5a43-42ab-bfcb-96c2db503633_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.802 2 DEBUG oslo_concurrency.lockutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "249ec3c5f09247ab2d572fbbf93d5ccce029cf9d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.850 2 DEBUG nova.objects.instance [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'migration_context' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.870 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.873 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Start _get_guest_xml network_info=[{"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "vif_mac": "fa:16:3e:f3:85:3a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'e7ad18e6-654f-48ae-a957-a50e1a2c7a2d', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '81d64467-a589-41df-8763-5a45265a1bef', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-27c2fdcd-9a25-49d0-befe-30f5b9c08453', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '27c2fdcd-9a25-49d0-befe-30f5b9c08453', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '992b01a0-5a43-42ab-bfcb-96c2db503633', 'attached_at': '', 'detached_at': '', 'volume_id': '27c2fdcd-9a25-49d0-befe-30f5b9c08453', 'serial': '27c2fdcd-9a25-49d0-befe-30f5b9c08453'}, 'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.873 2 DEBUG nova.objects.instance [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'resources' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.889 2 WARNING nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.895 2 DEBUG nova.virt.libvirt.host [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.896 2 DEBUG nova.virt.libvirt.host [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.899 2 DEBUG nova.virt.libvirt.host [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.899 2 DEBUG nova.virt.libvirt.host [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.901 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.901 2 DEBUG nova.virt.hardware [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.901 2 DEBUG nova.virt.hardware [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.902 2 DEBUG nova.virt.hardware [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.902 2 DEBUG nova.virt.hardware [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.902 2 DEBUG nova.virt.hardware [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.903 2 DEBUG nova.virt.hardware [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.903 2 DEBUG nova.virt.hardware [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.903 2 DEBUG nova.virt.hardware [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.903 2 DEBUG nova.virt.hardware [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.904 2 DEBUG nova.virt.hardware [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.904 2 DEBUG nova.virt.hardware [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.904 2 DEBUG nova.objects.instance [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:29 np0005465987 nova_compute[230713]: 2025-10-02 12:18:29.955 2 DEBUG oslo_concurrency.processutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:18:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:30.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:18:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:18:30 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3744318202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.358 2 DEBUG oslo_concurrency.processutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.386 2 DEBUG oslo_concurrency.processutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.459 2 DEBUG nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.460 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.461 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.461 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.461 2 DEBUG nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.462 2 WARNING nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.462 2 DEBUG nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.462 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.463 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.463 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.464 2 DEBUG nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.464 2 WARNING nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.464 2 DEBUG nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.465 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.465 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.465 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.466 2 DEBUG nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.466 2 WARNING nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.466 2 DEBUG nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-unplugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.466 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.467 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.467 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.467 2 DEBUG nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-unplugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.467 2 WARNING nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-unplugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.468 2 DEBUG nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.468 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.468 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.469 2 DEBUG oslo_concurrency.lockutils [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.469 2 DEBUG nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.469 2 WARNING nova.compute.manager [req-115acf2b-f979-470a-a30a-a13ece07b775 req-126956f6-9349-451c-8219-e272c1d3b33d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:18:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:18:30 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2878355219' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.804 2 DEBUG oslo_concurrency.processutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.805 2 DEBUG nova.virt.libvirt.vif [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-803739487',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-803739487',id=76,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8efba404696b40fbbaa6431b934b87f1',ramdisk_id='',reservation_id='r-8tky9r5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-153154373',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-153154373-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:17:22Z,user_data=None,user_id='69d8e29c6d3747e98a5985a584f4c814',uuid=992b01a0-5a43-42ab-bfcb-96c2db503633,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "vif_mac": "fa:16:3e:f3:85:3a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.806 2 DEBUG nova.network.os_vif_util [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converting VIF {"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "vif_mac": "fa:16:3e:f3:85:3a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.807 2 DEBUG nova.network.os_vif_util [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:85:3a,bridge_name='br-int',has_traffic_filtering=True,id=60c78706-8cfe-409b-91d7-dc811b9de69a,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c78706-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.809 2 DEBUG nova.objects.instance [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.833 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  <uuid>992b01a0-5a43-42ab-bfcb-96c2db503633</uuid>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  <name>instance-0000004c</name>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-803739487</nova:name>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:18:29</nova:creationTime>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <nova:user uuid="69d8e29c6d3747e98a5985a584f4c814">tempest-ServerBootFromVolumeStableRescueTest-153154373-project-member</nova:user>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <nova:project uuid="8efba404696b40fbbaa6431b934b87f1">tempest-ServerBootFromVolumeStableRescueTest-153154373</nova:project>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <nova:port uuid="60c78706-8cfe-409b-91d7-dc811b9de69a">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <entry name="serial">992b01a0-5a43-42ab-bfcb-96c2db503633</entry>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <entry name="uuid">992b01a0-5a43-42ab-bfcb-96c2db503633</entry>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/992b01a0-5a43-42ab-bfcb-96c2db503633_disk.config">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="volumes/volume-27c2fdcd-9a25-49d0-befe-30f5b9c08453">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <serial>27c2fdcd-9a25-49d0-befe-30f5b9c08453</serial>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/992b01a0-5a43-42ab-bfcb-96c2db503633_disk.rescue">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <boot order="1"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:f3:85:3a"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <target dev="tap60c78706-8c"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/console.log" append="off"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:18:30 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:18:30 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:18:30 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:18:30 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.847 2 INFO nova.virt.libvirt.driver [-] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance destroyed successfully.#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.902 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.903 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.903 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.903 2 DEBUG nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] No VIF found with MAC fa:16:3e:f3:85:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.904 2 INFO nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Using config drive#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.931 2 DEBUG nova.storage.rbd_utils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image 992b01a0-5a43-42ab-bfcb-96c2db503633_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.956 2 DEBUG nova.objects.instance [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:30 np0005465987 nova_compute[230713]: 2025-10-02 12:18:30.985 2 DEBUG nova.objects.instance [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'keypairs' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:31.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:18:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:32.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:18:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:33.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:33.530 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:33 np0005465987 nova_compute[230713]: 2025-10-02 12:18:33.815 2 INFO nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Creating config drive at /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/disk.config.rescue#033[00m
Oct  2 08:18:33 np0005465987 nova_compute[230713]: 2025-10-02 12:18:33.821 2 DEBUG oslo_concurrency.processutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn6tb02ho execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:33 np0005465987 nova_compute[230713]: 2025-10-02 12:18:33.964 2 DEBUG oslo_concurrency.processutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn6tb02ho" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.001 2 DEBUG nova.storage.rbd_utils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] rbd image 992b01a0-5a43-42ab-bfcb-96c2db503633_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.005 2 DEBUG oslo_concurrency.processutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/disk.config.rescue 992b01a0-5a43-42ab-bfcb-96c2db503633_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.177 2 DEBUG oslo_concurrency.processutils [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/disk.config.rescue 992b01a0-5a43-42ab-bfcb-96c2db503633_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.178 2 INFO nova.virt.libvirt.driver [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Deleting local config drive /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633/disk.config.rescue because it was imported into RBD.#033[00m
Oct  2 08:18:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:34.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:34 np0005465987 kernel: tap60c78706-8c: entered promiscuous mode
Oct  2 08:18:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:34Z|00255|binding|INFO|Claiming lport 60c78706-8cfe-409b-91d7-dc811b9de69a for this chassis.
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:34 np0005465987 NetworkManager[44910]: <info>  [1759407514.2443] manager: (tap60c78706-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Oct  2 08:18:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:34Z|00256|binding|INFO|60c78706-8cfe-409b-91d7-dc811b9de69a: Claiming fa:16:3e:f3:85:3a 10.100.0.13
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.253 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:85:3a 10.100.0.13'], port_security=['fa:16:3e:f3:85:3a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '992b01a0-5a43-42ab-bfcb-96c2db503633', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '7', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=60c78706-8cfe-409b-91d7-dc811b9de69a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.255 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 60c78706-8cfe-409b-91d7-dc811b9de69a in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 bound to our chassis#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.256 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0#033[00m
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:34Z|00257|binding|INFO|Setting lport 60c78706-8cfe-409b-91d7-dc811b9de69a ovn-installed in OVS
Oct  2 08:18:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:34Z|00258|binding|INFO|Setting lport 60c78706-8cfe-409b-91d7-dc811b9de69a up in Southbound
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:34 np0005465987 systemd-udevd[259864]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.268 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2fcd64-1b6d-470e-8abf-5ab0f260b9da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.269 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf1725bd8-71 in ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.272 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf1725bd8-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.272 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d4553f87-6a01-4ef8-8bd9-90430338283e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.272 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d40b3f5a-415f-428a-bece-a6a7d8378553]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 systemd-machined[188335]: New machine qemu-35-instance-0000004c.
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.283 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[47ddd20f-8b96-4d98-9e60-5f21d38d0070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 NetworkManager[44910]: <info>  [1759407514.2896] device (tap60c78706-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:18:34 np0005465987 NetworkManager[44910]: <info>  [1759407514.2906] device (tap60c78706-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:18:34 np0005465987 systemd[1]: Started Virtual Machine qemu-35-instance-0000004c.
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.307 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[83da4645-2942-4bf6-8c1c-1a9c384deebf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.340 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1764b34c-3ddb-4055-9665-62f182dd52fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.344 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2f4986-0f02-47ba-b982-d135c761fd43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 NetworkManager[44910]: <info>  [1759407514.3454] manager: (tapf1725bd8-70): new Veth device (/org/freedesktop/NetworkManager/Devices/139)
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.376 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[38b5aefc-6373-48cd-ba51-b4e3b7cdf2e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.379 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[031d3ce4-9705-46f4-9c5b-875cc509f80d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 NetworkManager[44910]: <info>  [1759407514.4022] device (tapf1725bd8-70): carrier: link connected
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.407 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[165cc456-cd9a-4404-8e4b-b7c51bad9ce1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.423 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e47681aa-7b0c-4d52-becc-b6a43102a403]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1725bd8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:76:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558297, 'reachable_time': 22910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259898, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.435 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf0da58-73f3-4fcc-852d-8c0071d748d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:76f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558297, 'tstamp': 558297}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259899, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.451 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6f3ffd-6644-4708-8a8d-6f4e578ee19e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1725bd8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:76:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558297, 'reachable_time': 22910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259900, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.484 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2860fb-ecd1-402a-ada4-feba152507e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.541 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fc01e37f-cdff-47e4-bf99-9da38a7b91f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.543 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1725bd8-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.543 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.543 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1725bd8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:34 np0005465987 NetworkManager[44910]: <info>  [1759407514.5458] manager: (tapf1725bd8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:34 np0005465987 kernel: tapf1725bd8-70: entered promiscuous mode
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.549 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf1725bd8-70, col_values=(('external_ids', {'iface-id': '421cd6e3-75aa-44e1-b552-d119c4fcd629'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:34Z|00259|binding|INFO|Releasing lport 421cd6e3-75aa-44e1-b552-d119c4fcd629 from this chassis (sb_readonly=0)
Oct  2 08:18:34 np0005465987 nova_compute[230713]: 2025-10-02 12:18:34.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.564 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.565 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf74d39-5526-4eb6-afbc-741f89c309bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.566 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:18:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:34.566 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'env', 'PROCESS_TAG=haproxy-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:18:34 np0005465987 podman[259992]: 2025-10-02 12:18:34.950936445 +0000 UTC m=+0.043290480 container create 8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:18:34 np0005465987 systemd[1]: Started libpod-conmon-8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823.scope.
Oct  2 08:18:35 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:18:35 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/461e3bbeb22095166463eae64bdaf821c7a1f2ee9b6a0c3e045ca66926792f77/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:18:35 np0005465987 podman[259992]: 2025-10-02 12:18:34.927345107 +0000 UTC m=+0.019699162 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:18:35 np0005465987 podman[259992]: 2025-10-02 12:18:35.032894487 +0000 UTC m=+0.125248542 container init 8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 08:18:35 np0005465987 podman[259992]: 2025-10-02 12:18:35.038352946 +0000 UTC m=+0.130706981 container start 8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:18:35 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260007]: [NOTICE]   (260011) : New worker (260013) forked
Oct  2 08:18:35 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260007]: [NOTICE]   (260011) : Loading success.
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.148 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for 992b01a0-5a43-42ab-bfcb-96c2db503633 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.149 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407515.1470056, 992b01a0-5a43-42ab-bfcb-96c2db503633 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.149 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.153 2 DEBUG nova.compute.manager [None req-34085946-4c50-485e-9369-e1cf3602f35a 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.189 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.193 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.218 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407515.1478825, 992b01a0-5a43-42ab-bfcb-96c2db503633 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.218 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] VM Started (Lifecycle Event)#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.235 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.239 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:18:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:35.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.714 2 DEBUG nova.compute.manager [req-83171ee2-c217-4115-9618-6a0fda5250ef req-c1eb88f1-08cd-4786-bdc5-98f35d7fcc67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.714 2 DEBUG oslo_concurrency.lockutils [req-83171ee2-c217-4115-9618-6a0fda5250ef req-c1eb88f1-08cd-4786-bdc5-98f35d7fcc67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.715 2 DEBUG oslo_concurrency.lockutils [req-83171ee2-c217-4115-9618-6a0fda5250ef req-c1eb88f1-08cd-4786-bdc5-98f35d7fcc67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.715 2 DEBUG oslo_concurrency.lockutils [req-83171ee2-c217-4115-9618-6a0fda5250ef req-c1eb88f1-08cd-4786-bdc5-98f35d7fcc67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.715 2 DEBUG nova.compute.manager [req-83171ee2-c217-4115-9618-6a0fda5250ef req-c1eb88f1-08cd-4786-bdc5-98f35d7fcc67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:35 np0005465987 nova_compute[230713]: 2025-10-02 12:18:35.715 2 WARNING nova.compute.manager [req-83171ee2-c217-4115-9618-6a0fda5250ef req-c1eb88f1-08cd-4786-bdc5-98f35d7fcc67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state rescued and task_state None.#033[00m
Oct  2 08:18:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:36.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:18:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:37.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:18:37 np0005465987 nova_compute[230713]: 2025-10-02 12:18:37.817 2 DEBUG nova.compute.manager [req-e37b98ff-cfcc-443a-9a40-dd2e7dd6366e req-1d4c62a3-033d-441d-9eb2-585b9cfb3e35 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:37 np0005465987 nova_compute[230713]: 2025-10-02 12:18:37.817 2 DEBUG oslo_concurrency.lockutils [req-e37b98ff-cfcc-443a-9a40-dd2e7dd6366e req-1d4c62a3-033d-441d-9eb2-585b9cfb3e35 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:37 np0005465987 nova_compute[230713]: 2025-10-02 12:18:37.818 2 DEBUG oslo_concurrency.lockutils [req-e37b98ff-cfcc-443a-9a40-dd2e7dd6366e req-1d4c62a3-033d-441d-9eb2-585b9cfb3e35 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:37 np0005465987 nova_compute[230713]: 2025-10-02 12:18:37.818 2 DEBUG oslo_concurrency.lockutils [req-e37b98ff-cfcc-443a-9a40-dd2e7dd6366e req-1d4c62a3-033d-441d-9eb2-585b9cfb3e35 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:37 np0005465987 nova_compute[230713]: 2025-10-02 12:18:37.818 2 DEBUG nova.compute.manager [req-e37b98ff-cfcc-443a-9a40-dd2e7dd6366e req-1d4c62a3-033d-441d-9eb2-585b9cfb3e35 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:37 np0005465987 nova_compute[230713]: 2025-10-02 12:18:37.819 2 WARNING nova.compute.manager [req-e37b98ff-cfcc-443a-9a40-dd2e7dd6366e req-1d4c62a3-033d-441d-9eb2-585b9cfb3e35 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state rescued and task_state None.#033[00m
Oct  2 08:18:38 np0005465987 nova_compute[230713]: 2025-10-02 12:18:38.137 2 INFO nova.compute.manager [None req-fced2248-df64-49b6-bb91-3584fb7a8601 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Unrescuing#033[00m
Oct  2 08:18:38 np0005465987 nova_compute[230713]: 2025-10-02 12:18:38.138 2 DEBUG oslo_concurrency.lockutils [None req-fced2248-df64-49b6-bb91-3584fb7a8601 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:18:38 np0005465987 nova_compute[230713]: 2025-10-02 12:18:38.138 2 DEBUG oslo_concurrency.lockutils [None req-fced2248-df64-49b6-bb91-3584fb7a8601 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquired lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:18:38 np0005465987 nova_compute[230713]: 2025-10-02 12:18:38.139 2 DEBUG nova.network.neutron [None req-fced2248-df64-49b6-bb91-3584fb7a8601 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:18:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:38.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:39 np0005465987 nova_compute[230713]: 2025-10-02 12:18:39.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:39.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:39 np0005465987 nova_compute[230713]: 2025-10-02 12:18:39.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:40.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.443 2 DEBUG nova.network.neutron [None req-fced2248-df64-49b6-bb91-3584fb7a8601 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Updating instance_info_cache with network_info: [{"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.465 2 DEBUG oslo_concurrency.lockutils [None req-fced2248-df64-49b6-bb91-3584fb7a8601 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Releasing lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.467 2 DEBUG nova.objects.instance [None req-fced2248-df64-49b6-bb91-3584fb7a8601 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'flavor' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:40 np0005465987 kernel: tap60c78706-8c (unregistering): left promiscuous mode
Oct  2 08:18:40 np0005465987 NetworkManager[44910]: <info>  [1759407520.5839] device (tap60c78706-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:18:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:40Z|00260|binding|INFO|Releasing lport 60c78706-8cfe-409b-91d7-dc811b9de69a from this chassis (sb_readonly=0)
Oct  2 08:18:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:40Z|00261|binding|INFO|Setting lport 60c78706-8cfe-409b-91d7-dc811b9de69a down in Southbound
Oct  2 08:18:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:40Z|00262|binding|INFO|Removing iface tap60c78706-8c ovn-installed in OVS
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:40.602 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:85:3a 10.100.0.13'], port_security=['fa:16:3e:f3:85:3a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '992b01a0-5a43-42ab-bfcb-96c2db503633', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=60c78706-8cfe-409b-91d7-dc811b9de69a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:40.605 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 60c78706-8cfe-409b-91d7-dc811b9de69a in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 unbound from our chassis#033[00m
Oct  2 08:18:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:40.607 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:40.609 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[12d358db-9f08-46b4-8cde-41f30ce00481]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:40.610 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 namespace which is not needed anymore#033[00m
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:40 np0005465987 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Oct  2 08:18:40 np0005465987 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004c.scope: Consumed 6.422s CPU time.
Oct  2 08:18:40 np0005465987 systemd-machined[188335]: Machine qemu-35-instance-0000004c terminated.
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.736 2 INFO nova.virt.libvirt.driver [-] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance destroyed successfully.#033[00m
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.737 2 DEBUG nova.objects.instance [None req-fced2248-df64-49b6-bb91-3584fb7a8601 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:40 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260007]: [NOTICE]   (260011) : haproxy version is 2.8.14-c23fe91
Oct  2 08:18:40 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260007]: [NOTICE]   (260011) : path to executable is /usr/sbin/haproxy
Oct  2 08:18:40 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260007]: [WARNING]  (260011) : Exiting Master process...
Oct  2 08:18:40 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260007]: [ALERT]    (260011) : Current worker (260013) exited with code 143 (Terminated)
Oct  2 08:18:40 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260007]: [WARNING]  (260011) : All workers exited. Exiting... (0)
Oct  2 08:18:40 np0005465987 systemd[1]: libpod-8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823.scope: Deactivated successfully.
Oct  2 08:18:40 np0005465987 podman[260048]: 2025-10-02 12:18:40.749313663 +0000 UTC m=+0.048990407 container died 8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:18:40 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823-userdata-shm.mount: Deactivated successfully.
Oct  2 08:18:40 np0005465987 systemd[1]: var-lib-containers-storage-overlay-461e3bbeb22095166463eae64bdaf821c7a1f2ee9b6a0c3e045ca66926792f77-merged.mount: Deactivated successfully.
Oct  2 08:18:40 np0005465987 kernel: tap60c78706-8c: entered promiscuous mode
Oct  2 08:18:40 np0005465987 systemd-udevd[260025]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:40Z|00263|binding|INFO|Claiming lport 60c78706-8cfe-409b-91d7-dc811b9de69a for this chassis.
Oct  2 08:18:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:40Z|00264|binding|INFO|60c78706-8cfe-409b-91d7-dc811b9de69a: Claiming fa:16:3e:f3:85:3a 10.100.0.13
Oct  2 08:18:40 np0005465987 NetworkManager[44910]: <info>  [1759407520.8913] manager: (tap60c78706-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Oct  2 08:18:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:40.894 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:85:3a 10.100.0.13'], port_security=['fa:16:3e:f3:85:3a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '992b01a0-5a43-42ab-bfcb-96c2db503633', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=60c78706-8cfe-409b-91d7-dc811b9de69a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:40 np0005465987 NetworkManager[44910]: <info>  [1759407520.9046] device (tap60c78706-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:18:40 np0005465987 NetworkManager[44910]: <info>  [1759407520.9053] device (tap60c78706-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:18:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:40Z|00265|binding|INFO|Setting lport 60c78706-8cfe-409b-91d7-dc811b9de69a ovn-installed in OVS
Oct  2 08:18:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:40Z|00266|binding|INFO|Setting lport 60c78706-8cfe-409b-91d7-dc811b9de69a up in Southbound
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:40 np0005465987 systemd-machined[188335]: New machine qemu-36-instance-0000004c.
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.940 2 DEBUG nova.compute.manager [req-1e7c6bad-cf86-4ac9-b8ab-da24bc7d42c3 req-5b32aa7c-138b-4062-af5b-b1fc90f0e1ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-unplugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.941 2 DEBUG oslo_concurrency.lockutils [req-1e7c6bad-cf86-4ac9-b8ab-da24bc7d42c3 req-5b32aa7c-138b-4062-af5b-b1fc90f0e1ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.941 2 DEBUG oslo_concurrency.lockutils [req-1e7c6bad-cf86-4ac9-b8ab-da24bc7d42c3 req-5b32aa7c-138b-4062-af5b-b1fc90f0e1ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.941 2 DEBUG oslo_concurrency.lockutils [req-1e7c6bad-cf86-4ac9-b8ab-da24bc7d42c3 req-5b32aa7c-138b-4062-af5b-b1fc90f0e1ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.942 2 DEBUG nova.compute.manager [req-1e7c6bad-cf86-4ac9-b8ab-da24bc7d42c3 req-5b32aa7c-138b-4062-af5b-b1fc90f0e1ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-unplugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:40 np0005465987 nova_compute[230713]: 2025-10-02 12:18:40.942 2 WARNING nova.compute.manager [req-1e7c6bad-cf86-4ac9-b8ab-da24bc7d42c3 req-5b32aa7c-138b-4062-af5b-b1fc90f0e1ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-unplugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:18:40 np0005465987 systemd[1]: Started Virtual Machine qemu-36-instance-0000004c.
Oct  2 08:18:40 np0005465987 podman[260048]: 2025-10-02 12:18:40.978125339 +0000 UTC m=+0.277802093 container cleanup 8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:18:40 np0005465987 systemd[1]: libpod-conmon-8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823.scope: Deactivated successfully.
Oct  2 08:18:41 np0005465987 podman[260103]: 2025-10-02 12:18:41.288216398 +0000 UTC m=+0.280645281 container remove 8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.297 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6ccfb9cf-819e-4ca9-ab09-157f3d349f40]: (4, ('Thu Oct  2 12:18:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 (8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823)\n8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823\nThu Oct  2 12:18:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 (8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823)\n8b63bcc44ba8a97646618a03f39d2f2ae2f5a124c0731d2f9eb8ae2fad892823\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.298 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[32525c8d-92c1-4995-b773-a44239cedf07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.300 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1725bd8-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:41 np0005465987 kernel: tapf1725bd8-70: left promiscuous mode
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.320 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b1aa8c6c-9dae-43f2-9f15-05112f4506ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.340 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[36ad1930-6517-4be8-83ce-2082e2602f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.341 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[66454bdd-4727-446d-b888-d1b578b32b1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.361 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[63025073-a5b9-4557-89de-4edbd9953603]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558290, 'reachable_time': 20333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260161, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.363 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.363 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[4403a031-29f1-4455-a2a4-4cac8283ebba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 systemd[1]: run-netns-ovnmeta\x2df1725bd8\x2d7d9d\x2d45cc\x2db992\x2d0cd3db0e30f0.mount: Deactivated successfully.
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.365 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 60c78706-8cfe-409b-91d7-dc811b9de69a in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 unbound from our chassis#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.367 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.382 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e2aefbfc-c09f-47b1-8bcf-a1828ba2a792]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.383 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf1725bd8-71 in ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.387 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf1725bd8-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.387 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2874e280-599d-420c-a118-8423848f55b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.388 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ba4f48-2896-4760-ba8f-62dad6089de1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.401 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[a58b3555-33e2-4213-b305-c1ec53bd0ab7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:41.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.421 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8f976eea-0091-4e13-a42b-1a353e179fcd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.457 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[936cf132-dd73-4bcb-9e2d-80fe1eface79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.463 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab1ed46-9d88-4fa3-b80f-6aef4640b482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 NetworkManager[44910]: <info>  [1759407521.4644] manager: (tapf1725bd8-70): new Veth device (/org/freedesktop/NetworkManager/Devices/142)
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.506 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4f6e65-22e7-4418-924c-c0007b7bf17a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.509 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2eb459-033d-4e84-9586-5902a62073aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 NetworkManager[44910]: <info>  [1759407521.5304] device (tapf1725bd8-70): carrier: link connected
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.537 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a92fc630-e7c4-46bb-8b3e-e27a2bc85969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.552 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2061b5b9-d525-4cbb-a835-05bb1d73ff8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1725bd8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:76:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559010, 'reachable_time': 27642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260185, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.567 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7489a132-98be-4422-977a-41bf470282b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:76f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 559010, 'tstamp': 559010}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260186, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.581 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[beb47677-8bdb-4a37-8430-5f45eea1789e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1725bd8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:76:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559010, 'reachable_time': 27642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260187, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.606 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[809d1a4b-e605-4759-a4a3-ef047900c6b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.655 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1d885543-50ca-4272-9f11-b81614be8543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.657 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1725bd8-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.657 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.657 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1725bd8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:41 np0005465987 kernel: tapf1725bd8-70: entered promiscuous mode
Oct  2 08:18:41 np0005465987 NetworkManager[44910]: <info>  [1759407521.6986] manager: (tapf1725bd8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.701 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf1725bd8-70, col_values=(('external_ids', {'iface-id': '421cd6e3-75aa-44e1-b552-d119c4fcd629'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:41 np0005465987 ovn_controller[129172]: 2025-10-02T12:18:41Z|00267|binding|INFO|Releasing lport 421cd6e3-75aa-44e1-b552-d119c4fcd629 from this chassis (sb_readonly=0)
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.716 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.716 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f2895b-85e6-401e-90e3-5b8ce778f14d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.717 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.pid.haproxy
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID f1725bd8-7d9d-45cc-b992-0cd3db0e30f0
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:18:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:18:41.718 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'env', 'PROCESS_TAG=haproxy-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f1725bd8-7d9d-45cc-b992-0cd3db0e30f0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.742 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for 992b01a0-5a43-42ab-bfcb-96c2db503633 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.743 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407521.7423182, 992b01a0-5a43-42ab-bfcb-96c2db503633 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.743 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.765 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.771 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.794 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.795 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407521.7445362, 992b01a0-5a43-42ab-bfcb-96c2db503633 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.795 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] VM Started (Lifecycle Event)#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.814 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.832 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.854 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:41 np0005465987 nova_compute[230713]: 2025-10-02 12:18:41.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:18:42 np0005465987 podman[260237]: 2025-10-02 12:18:42.09889262 +0000 UTC m=+0.052381170 container create 96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:18:42 np0005465987 systemd[1]: Started libpod-conmon-96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a.scope.
Oct  2 08:18:42 np0005465987 podman[260237]: 2025-10-02 12:18:42.070762747 +0000 UTC m=+0.024251277 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:18:42 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:18:42 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9672d4e886683278bdde2e8171efe529941acb5aa6f49cc14c18de1e3ffa96f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:18:42 np0005465987 podman[260237]: 2025-10-02 12:18:42.195192835 +0000 UTC m=+0.148681365 container init 96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:18:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:18:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:42.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:18:42 np0005465987 podman[260237]: 2025-10-02 12:18:42.200422429 +0000 UTC m=+0.153910939 container start 96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:18:42 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260253]: [NOTICE]   (260257) : New worker (260259) forked
Oct  2 08:18:42 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260253]: [NOTICE]   (260257) : Loading success.
Oct  2 08:18:42 np0005465987 nova_compute[230713]: 2025-10-02 12:18:42.244 2 DEBUG nova.compute.manager [None req-fced2248-df64-49b6-bb91-3584fb7a8601 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.063 2 DEBUG nova.compute.manager [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.065 2 DEBUG oslo_concurrency.lockutils [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.065 2 DEBUG oslo_concurrency.lockutils [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.065 2 DEBUG oslo_concurrency.lockutils [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.066 2 DEBUG nova.compute.manager [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.066 2 WARNING nova.compute.manager [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state active and task_state None.#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.066 2 DEBUG nova.compute.manager [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.067 2 DEBUG oslo_concurrency.lockutils [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.067 2 DEBUG oslo_concurrency.lockutils [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.067 2 DEBUG oslo_concurrency.lockutils [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.068 2 DEBUG nova.compute.manager [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.068 2 WARNING nova.compute.manager [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state active and task_state None.#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.068 2 DEBUG nova.compute.manager [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.069 2 DEBUG oslo_concurrency.lockutils [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.069 2 DEBUG oslo_concurrency.lockutils [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.070 2 DEBUG oslo_concurrency.lockutils [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.070 2 DEBUG nova.compute.manager [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.070 2 WARNING nova.compute.manager [req-9e1129ba-d035-41d4-9305-362307a8771c req-630d58cc-a60b-4229-949b-669d964dfddb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state active and task_state None.#033[00m
Oct  2 08:18:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:18:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:43.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:18:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.985 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.986 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:18:43 np0005465987 nova_compute[230713]: 2025-10-02 12:18:43.986 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:18:44 np0005465987 nova_compute[230713]: 2025-10-02 12:18:44.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:44 np0005465987 nova_compute[230713]: 2025-10-02 12:18:44.178 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:18:44 np0005465987 nova_compute[230713]: 2025-10-02 12:18:44.179 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:18:44 np0005465987 nova_compute[230713]: 2025-10-02 12:18:44.179 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:18:44 np0005465987 nova_compute[230713]: 2025-10-02 12:18:44.179 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:18:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:44.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:18:44 np0005465987 nova_compute[230713]: 2025-10-02 12:18:44.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:45.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:46.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:47 np0005465987 nova_compute[230713]: 2025-10-02 12:18:47.350 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Updating instance_info_cache with network_info: [{"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:47 np0005465987 nova_compute[230713]: 2025-10-02 12:18:47.378 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:18:47 np0005465987 nova_compute[230713]: 2025-10-02 12:18:47.379 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:18:47 np0005465987 nova_compute[230713]: 2025-10-02 12:18:47.380 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:18:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:47.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:18:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:48.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:48 np0005465987 nova_compute[230713]: 2025-10-02 12:18:48.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:48 np0005465987 nova_compute[230713]: 2025-10-02 12:18:48.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:48 np0005465987 nova_compute[230713]: 2025-10-02 12:18:48.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:48 np0005465987 nova_compute[230713]: 2025-10-02 12:18:48.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:18:48 np0005465987 nova_compute[230713]: 2025-10-02 12:18:48.980 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:18:48 np0005465987 nova_compute[230713]: 2025-10-02 12:18:48.980 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:49 np0005465987 nova_compute[230713]: 2025-10-02 12:18:49.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:49 np0005465987 nova_compute[230713]: 2025-10-02 12:18:49.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:49.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:49 np0005465987 podman[260268]: 2025-10-02 12:18:49.848865934 +0000 UTC m=+0.063489194 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:18:49 np0005465987 nova_compute[230713]: 2025-10-02 12:18:49.998 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:50.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:50 np0005465987 nova_compute[230713]: 2025-10-02 12:18:50.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:51.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:51 np0005465987 nova_compute[230713]: 2025-10-02 12:18:51.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:51 np0005465987 nova_compute[230713]: 2025-10-02 12:18:51.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:18:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:52.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:52 np0005465987 nova_compute[230713]: 2025-10-02 12:18:52.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:52 np0005465987 nova_compute[230713]: 2025-10-02 12:18:52.990 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:52 np0005465987 nova_compute[230713]: 2025-10-02 12:18:52.990 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:52 np0005465987 nova_compute[230713]: 2025-10-02 12:18:52.990 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:52 np0005465987 nova_compute[230713]: 2025-10-02 12:18:52.991 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:18:52 np0005465987 nova_compute[230713]: 2025-10-02 12:18:52.991 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:53.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:18:53 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/132418034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:18:53 np0005465987 nova_compute[230713]: 2025-10-02 12:18:53.459 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:53 np0005465987 nova_compute[230713]: 2025-10-02 12:18:53.563 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:18:53 np0005465987 nova_compute[230713]: 2025-10-02 12:18:53.564 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:18:53 np0005465987 nova_compute[230713]: 2025-10-02 12:18:53.734 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:18:53 np0005465987 nova_compute[230713]: 2025-10-02 12:18:53.735 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4554MB free_disk=20.897048950195312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:18:53 np0005465987 nova_compute[230713]: 2025-10-02 12:18:53.735 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:53 np0005465987 nova_compute[230713]: 2025-10-02 12:18:53.736 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:54 np0005465987 nova_compute[230713]: 2025-10-02 12:18:54.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:54 np0005465987 nova_compute[230713]: 2025-10-02 12:18:54.139 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 992b01a0-5a43-42ab-bfcb-96c2db503633 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:18:54 np0005465987 nova_compute[230713]: 2025-10-02 12:18:54.139 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:18:54 np0005465987 nova_compute[230713]: 2025-10-02 12:18:54.140 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:18:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:54.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:54 np0005465987 nova_compute[230713]: 2025-10-02 12:18:54.284 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:54 np0005465987 nova_compute[230713]: 2025-10-02 12:18:54.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:18:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1918085884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:18:54 np0005465987 nova_compute[230713]: 2025-10-02 12:18:54.735 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:54 np0005465987 nova_compute[230713]: 2025-10-02 12:18:54.743 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:18:54 np0005465987 nova_compute[230713]: 2025-10-02 12:18:54.762 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:18:54 np0005465987 nova_compute[230713]: 2025-10-02 12:18:54.794 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:18:54 np0005465987 nova_compute[230713]: 2025-10-02 12:18:54.795 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:18:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:55.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:18:55 np0005465987 nova_compute[230713]: 2025-10-02 12:18:55.795 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:56.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:18:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:57.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:18:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:58.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:58 np0005465987 podman[260338]: 2025-10-02 12:18:58.863690191 +0000 UTC m=+0.066759865 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct  2 08:18:58 np0005465987 podman[260339]: 2025-10-02 12:18:58.897171051 +0000 UTC m=+0.083871766 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 08:18:58 np0005465987 podman[260337]: 2025-10-02 12:18:58.941348735 +0000 UTC m=+0.139370661 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:18:59 np0005465987 nova_compute[230713]: 2025-10-02 12:18:59.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:59 np0005465987 nova_compute[230713]: 2025-10-02 12:18:59.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:18:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:59.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:00.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:01.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:02.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:02 np0005465987 nova_compute[230713]: 2025-10-02 12:19:02.993 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Acquiring lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:02 np0005465987 nova_compute[230713]: 2025-10-02 12:19:02.993 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.016 2 DEBUG nova.compute.manager [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.088 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.089 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.093 2 DEBUG nova.virt.hardware [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.093 2 INFO nova.compute.claims [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.228 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:03.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4013145950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.682 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.688 2 DEBUG nova.compute.provider_tree [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.702 2 DEBUG nova.scheduler.client.report [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.728 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.729 2 DEBUG nova.compute.manager [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:19:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.771 2 DEBUG nova.compute.manager [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.772 2 DEBUG nova.network.neutron [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.792 2 INFO nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.809 2 DEBUG nova.compute.manager [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.901 2 DEBUG nova.compute.manager [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.902 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.903 2 INFO nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Creating image(s)#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.933 2 DEBUG nova.storage.rbd_utils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] rbd image d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.967 2 DEBUG nova.storage.rbd_utils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] rbd image d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:03 np0005465987 nova_compute[230713]: 2025-10-02 12:19:03.996 2 DEBUG nova.storage.rbd_utils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] rbd image d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.000 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.060 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.061 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.062 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.062 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.089 2 DEBUG nova.storage.rbd_utils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] rbd image d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.093 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.125 2 DEBUG nova.policy [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2810a2f118b84b3cbc959c8ee3315600', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '15a242822e7849ee92179cec8302a315', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:19:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:04.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.389 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.459 2 DEBUG nova.storage.rbd_utils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] resizing rbd image d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.553 2 DEBUG nova.objects.instance [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lazy-loading 'migration_context' on Instance uuid d84ad3d2-1c3d-4bdd-94b9-5a809532dbef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:04 np0005465987 ceph-mgr[81180]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3158772141
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.567 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.568 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Ensure instance console log exists: /var/lib/nova/instances/d84ad3d2-1c3d-4bdd-94b9-5a809532dbef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.568 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.568 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.569 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:04 np0005465987 nova_compute[230713]: 2025-10-02 12:19:04.697 2 DEBUG nova.network.neutron [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Successfully created port: 4e25b301-95cf-4476-bb1d-0069be4cab7e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:19:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:05.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:05 np0005465987 nova_compute[230713]: 2025-10-02 12:19:05.539 2 DEBUG nova.network.neutron [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Successfully updated port: 4e25b301-95cf-4476-bb1d-0069be4cab7e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:19:05 np0005465987 nova_compute[230713]: 2025-10-02 12:19:05.568 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Acquiring lock "refresh_cache-d84ad3d2-1c3d-4bdd-94b9-5a809532dbef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:19:05 np0005465987 nova_compute[230713]: 2025-10-02 12:19:05.568 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Acquired lock "refresh_cache-d84ad3d2-1c3d-4bdd-94b9-5a809532dbef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:19:05 np0005465987 nova_compute[230713]: 2025-10-02 12:19:05.569 2 DEBUG nova.network.neutron [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:19:05 np0005465987 nova_compute[230713]: 2025-10-02 12:19:05.621 2 DEBUG nova.compute.manager [req-241461b6-6eb5-4ae8-b31c-c68b46acc2be req-7689c8b5-a323-4cbc-8b17-6673360be205 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Received event network-changed-4e25b301-95cf-4476-bb1d-0069be4cab7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:05 np0005465987 nova_compute[230713]: 2025-10-02 12:19:05.621 2 DEBUG nova.compute.manager [req-241461b6-6eb5-4ae8-b31c-c68b46acc2be req-7689c8b5-a323-4cbc-8b17-6673360be205 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Refreshing instance network info cache due to event network-changed-4e25b301-95cf-4476-bb1d-0069be4cab7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:19:05 np0005465987 nova_compute[230713]: 2025-10-02 12:19:05.622 2 DEBUG oslo_concurrency.lockutils [req-241461b6-6eb5-4ae8-b31c-c68b46acc2be req-7689c8b5-a323-4cbc-8b17-6673360be205 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d84ad3d2-1c3d-4bdd-94b9-5a809532dbef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:19:05 np0005465987 nova_compute[230713]: 2025-10-02 12:19:05.709 2 DEBUG nova.network.neutron [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:19:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e250 e250: 3 total, 3 up, 3 in
Oct  2 08:19:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:06.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.449 2 DEBUG nova.network.neutron [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Updating instance_info_cache with network_info: [{"id": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "address": "fa:16:3e:28:0f:57", "network": {"id": "e6cfd45d-1255-4d22-98a4-e1161ef8d323", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-856694560-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15a242822e7849ee92179cec8302a315", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e25b301-95", "ovs_interfaceid": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.472 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Releasing lock "refresh_cache-d84ad3d2-1c3d-4bdd-94b9-5a809532dbef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.473 2 DEBUG nova.compute.manager [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Instance network_info: |[{"id": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "address": "fa:16:3e:28:0f:57", "network": {"id": "e6cfd45d-1255-4d22-98a4-e1161ef8d323", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-856694560-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15a242822e7849ee92179cec8302a315", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e25b301-95", "ovs_interfaceid": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.473 2 DEBUG oslo_concurrency.lockutils [req-241461b6-6eb5-4ae8-b31c-c68b46acc2be req-7689c8b5-a323-4cbc-8b17-6673360be205 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d84ad3d2-1c3d-4bdd-94b9-5a809532dbef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.474 2 DEBUG nova.network.neutron [req-241461b6-6eb5-4ae8-b31c-c68b46acc2be req-7689c8b5-a323-4cbc-8b17-6673360be205 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Refreshing network info cache for port 4e25b301-95cf-4476-bb1d-0069be4cab7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.478 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Start _get_guest_xml network_info=[{"id": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "address": "fa:16:3e:28:0f:57", "network": {"id": "e6cfd45d-1255-4d22-98a4-e1161ef8d323", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-856694560-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15a242822e7849ee92179cec8302a315", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e25b301-95", "ovs_interfaceid": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.486 2 WARNING nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.493 2 DEBUG nova.virt.libvirt.host [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.494 2 DEBUG nova.virt.libvirt.host [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.499 2 DEBUG nova.virt.libvirt.host [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.499 2 DEBUG nova.virt.libvirt.host [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.501 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.502 2 DEBUG nova.virt.hardware [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.503 2 DEBUG nova.virt.hardware [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.503 2 DEBUG nova.virt.hardware [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.504 2 DEBUG nova.virt.hardware [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.504 2 DEBUG nova.virt.hardware [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.505 2 DEBUG nova.virt.hardware [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.505 2 DEBUG nova.virt.hardware [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.506 2 DEBUG nova.virt.hardware [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.506 2 DEBUG nova.virt.hardware [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.507 2 DEBUG nova.virt.hardware [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.507 2 DEBUG nova.virt.hardware [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.512 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:19:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1779518239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.964 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.990 2 DEBUG nova.storage.rbd_utils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] rbd image d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:06 np0005465987 nova_compute[230713]: 2025-10-02 12:19:06.994 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e251 e251: 3 total, 3 up, 3 in
Oct  2 08:19:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:19:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/946818258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.452 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.455 2 DEBUG nova.virt.libvirt.vif [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:19:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1454602797',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1454602797',id=80,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='15a242822e7849ee92179cec8302a315',ramdisk_id='',reservation_id='r-a9zx0y6f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-961595317',owner_user_name='tempest-InstanceActionsV221TestJSON-961595317-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:19:03Z,user_data=None,user_id='2810a2f118b84b3cbc959c8ee3315600',uuid=d84ad3d2-1c3d-4bdd-94b9-5a809532dbef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "address": "fa:16:3e:28:0f:57", "network": {"id": "e6cfd45d-1255-4d22-98a4-e1161ef8d323", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-856694560-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15a242822e7849ee92179cec8302a315", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e25b301-95", "ovs_interfaceid": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.455 2 DEBUG nova.network.os_vif_util [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Converting VIF {"id": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "address": "fa:16:3e:28:0f:57", "network": {"id": "e6cfd45d-1255-4d22-98a4-e1161ef8d323", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-856694560-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15a242822e7849ee92179cec8302a315", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e25b301-95", "ovs_interfaceid": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.456 2 DEBUG nova.network.os_vif_util [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:0f:57,bridge_name='br-int',has_traffic_filtering=True,id=4e25b301-95cf-4476-bb1d-0069be4cab7e,network=Network(e6cfd45d-1255-4d22-98a4-e1161ef8d323),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e25b301-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.457 2 DEBUG nova.objects.instance [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lazy-loading 'pci_devices' on Instance uuid d84ad3d2-1c3d-4bdd-94b9-5a809532dbef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:07.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.471 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  <uuid>d84ad3d2-1c3d-4bdd-94b9-5a809532dbef</uuid>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  <name>instance-00000050</name>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <nova:name>tempest-InstanceActionsV221TestJSON-server-1454602797</nova:name>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:19:06</nova:creationTime>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <nova:user uuid="2810a2f118b84b3cbc959c8ee3315600">tempest-InstanceActionsV221TestJSON-961595317-project-member</nova:user>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <nova:project uuid="15a242822e7849ee92179cec8302a315">tempest-InstanceActionsV221TestJSON-961595317</nova:project>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <nova:port uuid="4e25b301-95cf-4476-bb1d-0069be4cab7e">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <entry name="serial">d84ad3d2-1c3d-4bdd-94b9-5a809532dbef</entry>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <entry name="uuid">d84ad3d2-1c3d-4bdd-94b9-5a809532dbef</entry>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk.config">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:28:0f:57"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <target dev="tap4e25b301-95"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/d84ad3d2-1c3d-4bdd-94b9-5a809532dbef/console.log" append="off"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:19:07 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:19:07 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:19:07 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:19:07 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.473 2 DEBUG nova.compute.manager [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Preparing to wait for external event network-vif-plugged-4e25b301-95cf-4476-bb1d-0069be4cab7e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.473 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Acquiring lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.473 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.474 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.474 2 DEBUG nova.virt.libvirt.vif [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:19:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1454602797',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1454602797',id=80,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='15a242822e7849ee92179cec8302a315',ramdisk_id='',reservation_id='r-a9zx0y6f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-961595317',owner_user_name='tempest-InstanceActionsV221TestJSON-961595317-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:19:03Z,user_data=None,user_id='2810a2f118b84b3cbc959c8ee3315600',uuid=d84ad3d2-1c3d-4bdd-94b9-5a809532dbef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "address": "fa:16:3e:28:0f:57", "network": {"id": "e6cfd45d-1255-4d22-98a4-e1161ef8d323", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-856694560-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15a242822e7849ee92179cec8302a315", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e25b301-95", "ovs_interfaceid": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.475 2 DEBUG nova.network.os_vif_util [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Converting VIF {"id": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "address": "fa:16:3e:28:0f:57", "network": {"id": "e6cfd45d-1255-4d22-98a4-e1161ef8d323", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-856694560-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15a242822e7849ee92179cec8302a315", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e25b301-95", "ovs_interfaceid": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.475 2 DEBUG nova.network.os_vif_util [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:0f:57,bridge_name='br-int',has_traffic_filtering=True,id=4e25b301-95cf-4476-bb1d-0069be4cab7e,network=Network(e6cfd45d-1255-4d22-98a4-e1161ef8d323),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e25b301-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.476 2 DEBUG os_vif [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:0f:57,bridge_name='br-int',has_traffic_filtering=True,id=4e25b301-95cf-4476-bb1d-0069be4cab7e,network=Network(e6cfd45d-1255-4d22-98a4-e1161ef8d323),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e25b301-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.480 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e25b301-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.481 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4e25b301-95, col_values=(('external_ids', {'iface-id': '4e25b301-95cf-4476-bb1d-0069be4cab7e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:0f:57', 'vm-uuid': 'd84ad3d2-1c3d-4bdd-94b9-5a809532dbef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:07 np0005465987 NetworkManager[44910]: <info>  [1759407547.4830] manager: (tap4e25b301-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.490 2 INFO os_vif [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:0f:57,bridge_name='br-int',has_traffic_filtering=True,id=4e25b301-95cf-4476-bb1d-0069be4cab7e,network=Network(e6cfd45d-1255-4d22-98a4-e1161ef8d323),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e25b301-95')#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.534 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.535 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.535 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] No VIF found with MAC fa:16:3e:28:0f:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.535 2 INFO nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Using config drive#033[00m
Oct  2 08:19:07 np0005465987 nova_compute[230713]: 2025-10-02 12:19:07.558 2 DEBUG nova.storage.rbd_utils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] rbd image d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:08 np0005465987 nova_compute[230713]: 2025-10-02 12:19:08.042 2 INFO nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Creating config drive at /var/lib/nova/instances/d84ad3d2-1c3d-4bdd-94b9-5a809532dbef/disk.config#033[00m
Oct  2 08:19:08 np0005465987 nova_compute[230713]: 2025-10-02 12:19:08.047 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d84ad3d2-1c3d-4bdd-94b9-5a809532dbef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc1lm6cli execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e252 e252: 3 total, 3 up, 3 in
Oct  2 08:19:08 np0005465987 nova_compute[230713]: 2025-10-02 12:19:08.182 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d84ad3d2-1c3d-4bdd-94b9-5a809532dbef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc1lm6cli" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:08 np0005465987 nova_compute[230713]: 2025-10-02 12:19:08.211 2 DEBUG nova.storage.rbd_utils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] rbd image d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:08 np0005465987 nova_compute[230713]: 2025-10-02 12:19:08.215 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d84ad3d2-1c3d-4bdd-94b9-5a809532dbef/disk.config d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:08.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:08 np0005465987 nova_compute[230713]: 2025-10-02 12:19:08.405 2 DEBUG nova.network.neutron [req-241461b6-6eb5-4ae8-b31c-c68b46acc2be req-7689c8b5-a323-4cbc-8b17-6673360be205 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Updated VIF entry in instance network info cache for port 4e25b301-95cf-4476-bb1d-0069be4cab7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:19:08 np0005465987 nova_compute[230713]: 2025-10-02 12:19:08.406 2 DEBUG nova.network.neutron [req-241461b6-6eb5-4ae8-b31c-c68b46acc2be req-7689c8b5-a323-4cbc-8b17-6673360be205 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Updating instance_info_cache with network_info: [{"id": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "address": "fa:16:3e:28:0f:57", "network": {"id": "e6cfd45d-1255-4d22-98a4-e1161ef8d323", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-856694560-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15a242822e7849ee92179cec8302a315", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e25b301-95", "ovs_interfaceid": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:19:08 np0005465987 nova_compute[230713]: 2025-10-02 12:19:08.438 2 DEBUG oslo_concurrency.lockutils [req-241461b6-6eb5-4ae8-b31c-c68b46acc2be req-7689c8b5-a323-4cbc-8b17-6673360be205 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d84ad3d2-1c3d-4bdd-94b9-5a809532dbef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:19:08 np0005465987 nova_compute[230713]: 2025-10-02 12:19:08.604 2 DEBUG oslo_concurrency.processutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d84ad3d2-1c3d-4bdd-94b9-5a809532dbef/disk.config d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:08 np0005465987 nova_compute[230713]: 2025-10-02 12:19:08.605 2 INFO nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Deleting local config drive /var/lib/nova/instances/d84ad3d2-1c3d-4bdd-94b9-5a809532dbef/disk.config because it was imported into RBD.#033[00m
Oct  2 08:19:08 np0005465987 kernel: tap4e25b301-95: entered promiscuous mode
Oct  2 08:19:08 np0005465987 NetworkManager[44910]: <info>  [1759407548.6551] manager: (tap4e25b301-95): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Oct  2 08:19:08 np0005465987 systemd-udevd[260723]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:19:08 np0005465987 nova_compute[230713]: 2025-10-02 12:19:08.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:08Z|00268|binding|INFO|Claiming lport 4e25b301-95cf-4476-bb1d-0069be4cab7e for this chassis.
Oct  2 08:19:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:08Z|00269|binding|INFO|4e25b301-95cf-4476-bb1d-0069be4cab7e: Claiming fa:16:3e:28:0f:57 10.100.0.5
Oct  2 08:19:08 np0005465987 NetworkManager[44910]: <info>  [1759407548.7088] device (tap4e25b301-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.708 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:0f:57 10.100.0.5'], port_security=['fa:16:3e:28:0f:57 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd84ad3d2-1c3d-4bdd-94b9-5a809532dbef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6cfd45d-1255-4d22-98a4-e1161ef8d323', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15a242822e7849ee92179cec8302a315', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6fbca520-c2d0-4dd1-b1a0-e9f8395f4235', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=98f0ee22-e32d-4003-9b46-45e07ad376a2, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=4e25b301-95cf-4476-bb1d-0069be4cab7e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:19:08 np0005465987 NetworkManager[44910]: <info>  [1759407548.7099] device (tap4e25b301-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.710 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 4e25b301-95cf-4476-bb1d-0069be4cab7e in datapath e6cfd45d-1255-4d22-98a4-e1161ef8d323 bound to our chassis#033[00m
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.711 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e6cfd45d-1255-4d22-98a4-e1161ef8d323#033[00m
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.724 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7b580546-f31e-4875-8e64-394a42d65adc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.725 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape6cfd45d-11 in ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.728 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape6cfd45d-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.729 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[33b854d5-835d-4fbb-8338-4c43f84bf450]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:08 np0005465987 systemd-machined[188335]: New machine qemu-37-instance-00000050.
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.729 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[74ec6cfc-deb1-40d6-ba3e-b8d8d6e7c39a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.738 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[34579faf-2782-4dcc-89ea-9005810aefa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:08 np0005465987 systemd[1]: Started Virtual Machine qemu-37-instance-00000050.
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.762 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d44a78c2-3097-4f3a-bdb2-656c8d073c0a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:08Z|00270|binding|INFO|Setting lport 4e25b301-95cf-4476-bb1d-0069be4cab7e ovn-installed in OVS
Oct  2 08:19:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:08Z|00271|binding|INFO|Setting lport 4e25b301-95cf-4476-bb1d-0069be4cab7e up in Southbound
Oct  2 08:19:08 np0005465987 nova_compute[230713]: 2025-10-02 12:19:08.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.789 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcb0074-39bf-4d9c-aa8b-46ec3a60a1c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.794 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4aba3da6-bccc-4d50-932b-4687adb8663d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:08 np0005465987 NetworkManager[44910]: <info>  [1759407548.7959] manager: (tape6cfd45d-10): new Veth device (/org/freedesktop/NetworkManager/Devices/146)
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.828 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[d77a5e8b-e8df-4dd3-b911-2be8bd29c0ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.831 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5c01eb-2391-4aaf-b6a8-1b8dad976ceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:08 np0005465987 NetworkManager[44910]: <info>  [1759407548.8562] device (tape6cfd45d-10): carrier: link connected
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.862 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c1f688-40ab-4b20-b86e-823fbe9a2fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.877 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[30d2025b-4772-4ab3-9278-df35e492f5ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape6cfd45d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:96:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561743, 'reachable_time': 26533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260759, 'error': None, 'target': 'ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.892 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[387bd775-0cb2-4fd9-a75f-0f1025ead9be]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:96ae'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561743, 'tstamp': 561743}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260760, 'error': None, 'target': 'ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.910 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8dc2f2-92f7-48dc-b810-cba83e764ab5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape6cfd45d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:96:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561743, 'reachable_time': 26533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260761, 'error': None, 'target': 'ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:08.938 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[be26a058-3e89-41e2-a463-5fa811e6c005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:09.004 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bc027f53-b270-40d8-96aa-d80d9410f5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:09.006 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6cfd45d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:09.006 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:09.007 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6cfd45d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:09 np0005465987 nova_compute[230713]: 2025-10-02 12:19:09.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:09 np0005465987 kernel: tape6cfd45d-10: entered promiscuous mode
Oct  2 08:19:09 np0005465987 nova_compute[230713]: 2025-10-02 12:19:09.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:09 np0005465987 NetworkManager[44910]: <info>  [1759407549.0120] manager: (tape6cfd45d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:09.012 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape6cfd45d-10, col_values=(('external_ids', {'iface-id': '4f1aab35-bb2b-4e1a-8628-489a02a68862'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:09 np0005465987 nova_compute[230713]: 2025-10-02 12:19:09.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:09 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:09Z|00272|binding|INFO|Releasing lport 4f1aab35-bb2b-4e1a-8628-489a02a68862 from this chassis (sb_readonly=0)
Oct  2 08:19:09 np0005465987 nova_compute[230713]: 2025-10-02 12:19:09.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:09.028 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e6cfd45d-1255-4d22-98a4-e1161ef8d323.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e6cfd45d-1255-4d22-98a4-e1161ef8d323.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:09.029 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[44915bdb-6ed5-4723-b5c7-70b314602ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:09.029 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-e6cfd45d-1255-4d22-98a4-e1161ef8d323
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/e6cfd45d-1255-4d22-98a4-e1161ef8d323.pid.haproxy
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID e6cfd45d-1255-4d22-98a4-e1161ef8d323
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:19:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:09.030 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323', 'env', 'PROCESS_TAG=haproxy-e6cfd45d-1255-4d22-98a4-e1161ef8d323', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e6cfd45d-1255-4d22-98a4-e1161ef8d323.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:19:09 np0005465987 nova_compute[230713]: 2025-10-02 12:19:09.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:09 np0005465987 nova_compute[230713]: 2025-10-02 12:19:09.138 2 DEBUG nova.compute.manager [req-c243ea60-911c-4f03-af2c-0405a482d19d req-219b7f4b-81d1-4eaf-8f72-3972e20b87f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Received event network-vif-plugged-4e25b301-95cf-4476-bb1d-0069be4cab7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:09 np0005465987 nova_compute[230713]: 2025-10-02 12:19:09.139 2 DEBUG oslo_concurrency.lockutils [req-c243ea60-911c-4f03-af2c-0405a482d19d req-219b7f4b-81d1-4eaf-8f72-3972e20b87f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:09 np0005465987 nova_compute[230713]: 2025-10-02 12:19:09.139 2 DEBUG oslo_concurrency.lockutils [req-c243ea60-911c-4f03-af2c-0405a482d19d req-219b7f4b-81d1-4eaf-8f72-3972e20b87f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:09 np0005465987 nova_compute[230713]: 2025-10-02 12:19:09.139 2 DEBUG oslo_concurrency.lockutils [req-c243ea60-911c-4f03-af2c-0405a482d19d req-219b7f4b-81d1-4eaf-8f72-3972e20b87f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:09 np0005465987 nova_compute[230713]: 2025-10-02 12:19:09.140 2 DEBUG nova.compute.manager [req-c243ea60-911c-4f03-af2c-0405a482d19d req-219b7f4b-81d1-4eaf-8f72-3972e20b87f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Processing event network-vif-plugged-4e25b301-95cf-4476-bb1d-0069be4cab7e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:19:09 np0005465987 podman[260794]: 2025-10-02 12:19:09.355546532 +0000 UTC m=+0.020597248 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:19:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:09.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:09 np0005465987 podman[260794]: 2025-10-02 12:19:09.752347177 +0000 UTC m=+0.417397873 container create ffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 08:19:09 np0005465987 systemd[1]: Started libpod-conmon-ffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf.scope.
Oct  2 08:19:09 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:19:09 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b5ae55687c2c3a14f6b97c6c6572ef026fc633b04379bf9667d47ffdc2459c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:19:09 np0005465987 podman[260794]: 2025-10-02 12:19:09.955270895 +0000 UTC m=+0.620321691 container init ffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:19:09 np0005465987 podman[260794]: 2025-10-02 12:19:09.962834583 +0000 UTC m=+0.627885279 container start ffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:19:09 np0005465987 neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323[260834]: [NOTICE]   (260854) : New worker (260856) forked
Oct  2 08:19:09 np0005465987 neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323[260834]: [NOTICE]   (260854) : Loading success.
Oct  2 08:19:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:10.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.448 2 DEBUG nova.compute.manager [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.449 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407550.4489174, d84ad3d2-1c3d-4bdd-94b9-5a809532dbef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.450 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] VM Started (Lifecycle Event)#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.453 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.456 2 INFO nova.virt.libvirt.driver [-] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Instance spawned successfully.#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.457 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.469 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.475 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.481 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.482 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.483 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.484 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.485 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.486 2 DEBUG nova.virt.libvirt.driver [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.498 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.499 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407550.44897, d84ad3d2-1c3d-4bdd-94b9-5a809532dbef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.500 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.521 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.526 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407550.4531395, d84ad3d2-1c3d-4bdd-94b9-5a809532dbef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.526 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.544 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.549 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.557 2 INFO nova.compute.manager [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Took 6.66 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.558 2 DEBUG nova.compute.manager [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.580 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.641 2 INFO nova.compute.manager [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Took 7.58 seconds to build instance.#033[00m
Oct  2 08:19:10 np0005465987 nova_compute[230713]: 2025-10-02 12:19:10.660 2 DEBUG oslo_concurrency.lockutils [None req-dd1e2747-49f0-490c-9285-3109f943c630 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:11 np0005465987 nova_compute[230713]: 2025-10-02 12:19:11.292 2 DEBUG nova.compute.manager [req-a8dc2513-4e67-4864-a985-522fc6ac7ddb req-5afdd47a-0413-433a-bc7b-b8038af18762 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Received event network-vif-plugged-4e25b301-95cf-4476-bb1d-0069be4cab7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:11 np0005465987 nova_compute[230713]: 2025-10-02 12:19:11.293 2 DEBUG oslo_concurrency.lockutils [req-a8dc2513-4e67-4864-a985-522fc6ac7ddb req-5afdd47a-0413-433a-bc7b-b8038af18762 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:11 np0005465987 nova_compute[230713]: 2025-10-02 12:19:11.294 2 DEBUG oslo_concurrency.lockutils [req-a8dc2513-4e67-4864-a985-522fc6ac7ddb req-5afdd47a-0413-433a-bc7b-b8038af18762 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:11 np0005465987 nova_compute[230713]: 2025-10-02 12:19:11.294 2 DEBUG oslo_concurrency.lockutils [req-a8dc2513-4e67-4864-a985-522fc6ac7ddb req-5afdd47a-0413-433a-bc7b-b8038af18762 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:11 np0005465987 nova_compute[230713]: 2025-10-02 12:19:11.295 2 DEBUG nova.compute.manager [req-a8dc2513-4e67-4864-a985-522fc6ac7ddb req-5afdd47a-0413-433a-bc7b-b8038af18762 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] No waiting events found dispatching network-vif-plugged-4e25b301-95cf-4476-bb1d-0069be4cab7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:19:11 np0005465987 nova_compute[230713]: 2025-10-02 12:19:11.295 2 WARNING nova.compute.manager [req-a8dc2513-4e67-4864-a985-522fc6ac7ddb req-5afdd47a-0413-433a-bc7b-b8038af18762 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Received unexpected event network-vif-plugged-4e25b301-95cf-4476-bb1d-0069be4cab7e for instance with vm_state active and task_state None.#033[00m
Oct  2 08:19:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:11.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:12.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:12 np0005465987 nova_compute[230713]: 2025-10-02 12:19:12.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.344 2 DEBUG oslo_concurrency.lockutils [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Acquiring lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.344 2 DEBUG oslo_concurrency.lockutils [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.345 2 DEBUG oslo_concurrency.lockutils [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Acquiring lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.345 2 DEBUG oslo_concurrency.lockutils [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.345 2 DEBUG oslo_concurrency.lockutils [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.347 2 INFO nova.compute.manager [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Terminating instance#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.348 2 DEBUG nova.compute.manager [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:19:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:13.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:13 np0005465987 kernel: tap4e25b301-95 (unregistering): left promiscuous mode
Oct  2 08:19:13 np0005465987 NetworkManager[44910]: <info>  [1759407553.5417] device (tap4e25b301-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:19:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:13Z|00273|binding|INFO|Releasing lport 4e25b301-95cf-4476-bb1d-0069be4cab7e from this chassis (sb_readonly=0)
Oct  2 08:19:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:13Z|00274|binding|INFO|Setting lport 4e25b301-95cf-4476-bb1d-0069be4cab7e down in Southbound
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:13Z|00275|binding|INFO|Removing iface tap4e25b301-95 ovn-installed in OVS
Oct  2 08:19:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:13.599 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:0f:57 10.100.0.5'], port_security=['fa:16:3e:28:0f:57 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd84ad3d2-1c3d-4bdd-94b9-5a809532dbef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6cfd45d-1255-4d22-98a4-e1161ef8d323', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15a242822e7849ee92179cec8302a315', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6fbca520-c2d0-4dd1-b1a0-e9f8395f4235', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=98f0ee22-e32d-4003-9b46-45e07ad376a2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=4e25b301-95cf-4476-bb1d-0069be4cab7e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:19:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:13.601 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 4e25b301-95cf-4476-bb1d-0069be4cab7e in datapath e6cfd45d-1255-4d22-98a4-e1161ef8d323 unbound from our chassis#033[00m
Oct  2 08:19:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:13.602 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e6cfd45d-1255-4d22-98a4-e1161ef8d323, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:19:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:13.603 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6a68ce-e691-4165-9879-47c31e6ecfac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:13.603 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323 namespace which is not needed anymore#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:13 np0005465987 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000050.scope: Deactivated successfully.
Oct  2 08:19:13 np0005465987 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000050.scope: Consumed 4.212s CPU time.
Oct  2 08:19:13 np0005465987 systemd-machined[188335]: Machine qemu-37-instance-00000050 terminated.
Oct  2 08:19:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.778 2 INFO nova.virt.libvirt.driver [-] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Instance destroyed successfully.#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.779 2 DEBUG nova.objects.instance [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lazy-loading 'resources' on Instance uuid d84ad3d2-1c3d-4bdd-94b9-5a809532dbef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.792 2 DEBUG nova.virt.libvirt.vif [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1454602797',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1454602797',id=80,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:19:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='15a242822e7849ee92179cec8302a315',ramdisk_id='',reservation_id='r-a9zx0y6f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_p
roject_name='tempest-InstanceActionsV221TestJSON-961595317',owner_user_name='tempest-InstanceActionsV221TestJSON-961595317-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:19:10Z,user_data=None,user_id='2810a2f118b84b3cbc959c8ee3315600',uuid=d84ad3d2-1c3d-4bdd-94b9-5a809532dbef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "address": "fa:16:3e:28:0f:57", "network": {"id": "e6cfd45d-1255-4d22-98a4-e1161ef8d323", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-856694560-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15a242822e7849ee92179cec8302a315", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e25b301-95", "ovs_interfaceid": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.792 2 DEBUG nova.network.os_vif_util [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Converting VIF {"id": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "address": "fa:16:3e:28:0f:57", "network": {"id": "e6cfd45d-1255-4d22-98a4-e1161ef8d323", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-856694560-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15a242822e7849ee92179cec8302a315", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e25b301-95", "ovs_interfaceid": "4e25b301-95cf-4476-bb1d-0069be4cab7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.793 2 DEBUG nova.network.os_vif_util [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:0f:57,bridge_name='br-int',has_traffic_filtering=True,id=4e25b301-95cf-4476-bb1d-0069be4cab7e,network=Network(e6cfd45d-1255-4d22-98a4-e1161ef8d323),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e25b301-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.793 2 DEBUG os_vif [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:0f:57,bridge_name='br-int',has_traffic_filtering=True,id=4e25b301-95cf-4476-bb1d-0069be4cab7e,network=Network(e6cfd45d-1255-4d22-98a4-e1161ef8d323),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e25b301-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.794 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e25b301-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:19:13 np0005465987 nova_compute[230713]: 2025-10-02 12:19:13.799 2 INFO os_vif [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:0f:57,bridge_name='br-int',has_traffic_filtering=True,id=4e25b301-95cf-4476-bb1d-0069be4cab7e,network=Network(e6cfd45d-1255-4d22-98a4-e1161ef8d323),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e25b301-95')#033[00m
Oct  2 08:19:13 np0005465987 neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323[260834]: [NOTICE]   (260854) : haproxy version is 2.8.14-c23fe91
Oct  2 08:19:13 np0005465987 neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323[260834]: [NOTICE]   (260854) : path to executable is /usr/sbin/haproxy
Oct  2 08:19:13 np0005465987 neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323[260834]: [WARNING]  (260854) : Exiting Master process...
Oct  2 08:19:13 np0005465987 neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323[260834]: [WARNING]  (260854) : Exiting Master process...
Oct  2 08:19:13 np0005465987 neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323[260834]: [ALERT]    (260854) : Current worker (260856) exited with code 143 (Terminated)
Oct  2 08:19:13 np0005465987 neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323[260834]: [WARNING]  (260854) : All workers exited. Exiting... (0)
Oct  2 08:19:13 np0005465987 systemd[1]: libpod-ffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf.scope: Deactivated successfully.
Oct  2 08:19:13 np0005465987 conmon[260834]: conmon ffacc3d5020468880478 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf.scope/container/memory.events
Oct  2 08:19:13 np0005465987 podman[260889]: 2025-10-02 12:19:13.822885447 +0000 UTC m=+0.131471711 container died ffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:19:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 e253: 3 total, 3 up, 3 in
Oct  2 08:19:13 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf-userdata-shm.mount: Deactivated successfully.
Oct  2 08:19:13 np0005465987 systemd[1]: var-lib-containers-storage-overlay-5b5ae55687c2c3a14f6b97c6c6572ef026fc633b04379bf9667d47ffdc2459c8-merged.mount: Deactivated successfully.
Oct  2 08:19:13 np0005465987 podman[260889]: 2025-10-02 12:19:13.918088438 +0000 UTC m=+0.226674712 container cleanup ffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 08:19:13 np0005465987 systemd[1]: libpod-conmon-ffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf.scope: Deactivated successfully.
Oct  2 08:19:13 np0005465987 podman[260948]: 2025-10-02 12:19:13.999321055 +0000 UTC m=+0.056747453 container remove ffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:19:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:14.005 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[aae9ade9-1357-49f7-bd14-2da77162e670]: (4, ('Thu Oct  2 12:19:13 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323 (ffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf)\nffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf\nThu Oct  2 12:19:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323 (ffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf)\nffacc3d5020468880478eb786ac01d6e2c9d91010170869868508f3d2f2276cf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:14.007 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d9709477-83fd-46e4-95d8-31558a7b1344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:14.008 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6cfd45d-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:14 np0005465987 kernel: tape6cfd45d-10: left promiscuous mode
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:14.026 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[be1849ef-44f3-4b7c-bf94-d2782612e32a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:14.058 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b22337-f19d-4751-99ff-138ce1441fb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:14.059 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6dac05-f943-48ad-bd84-494492c5bcf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.066 2 DEBUG nova.compute.manager [req-0992a5ec-30b9-4c7b-9498-1d21947a22a1 req-cd942587-06aa-46a0-9bd1-624d894eabd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Received event network-vif-unplugged-4e25b301-95cf-4476-bb1d-0069be4cab7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.067 2 DEBUG oslo_concurrency.lockutils [req-0992a5ec-30b9-4c7b-9498-1d21947a22a1 req-cd942587-06aa-46a0-9bd1-624d894eabd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.067 2 DEBUG oslo_concurrency.lockutils [req-0992a5ec-30b9-4c7b-9498-1d21947a22a1 req-cd942587-06aa-46a0-9bd1-624d894eabd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.067 2 DEBUG oslo_concurrency.lockutils [req-0992a5ec-30b9-4c7b-9498-1d21947a22a1 req-cd942587-06aa-46a0-9bd1-624d894eabd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.067 2 DEBUG nova.compute.manager [req-0992a5ec-30b9-4c7b-9498-1d21947a22a1 req-cd942587-06aa-46a0-9bd1-624d894eabd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] No waiting events found dispatching network-vif-unplugged-4e25b301-95cf-4476-bb1d-0069be4cab7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.067 2 DEBUG nova.compute.manager [req-0992a5ec-30b9-4c7b-9498-1d21947a22a1 req-cd942587-06aa-46a0-9bd1-624d894eabd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Received event network-vif-unplugged-4e25b301-95cf-4476-bb1d-0069be4cab7e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:19:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:14.074 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[389e296e-468f-4a97-8982-36511dcdb774]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561735, 'reachable_time': 40276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260963, 'error': None, 'target': 'ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:14.076 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e6cfd45d-1255-4d22-98a4-e1161ef8d323 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:19:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:14.076 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c8cbae-5592-4d8c-a109-ca13f2149221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:14 np0005465987 systemd[1]: run-netns-ovnmeta\x2de6cfd45d\x2d1255\x2d4d22\x2d98a4\x2de1161ef8d323.mount: Deactivated successfully.
Oct  2 08:19:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:14.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.705 2 INFO nova.virt.libvirt.driver [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Deleting instance files /var/lib/nova/instances/d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_del#033[00m
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.706 2 INFO nova.virt.libvirt.driver [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Deletion of /var/lib/nova/instances/d84ad3d2-1c3d-4bdd-94b9-5a809532dbef_del complete#033[00m
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.761 2 INFO nova.compute.manager [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Took 1.41 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.762 2 DEBUG oslo.service.loopingcall [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.763 2 DEBUG nova.compute.manager [-] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:19:14 np0005465987 nova_compute[230713]: 2025-10-02 12:19:14.763 2 DEBUG nova.network.neutron [-] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:19:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:15.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:15 np0005465987 nova_compute[230713]: 2025-10-02 12:19:15.597 2 DEBUG nova.network.neutron [-] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:19:15 np0005465987 nova_compute[230713]: 2025-10-02 12:19:15.640 2 INFO nova.compute.manager [-] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Took 0.88 seconds to deallocate network for instance.#033[00m
Oct  2 08:19:15 np0005465987 nova_compute[230713]: 2025-10-02 12:19:15.703 2 DEBUG nova.compute.manager [req-e833939d-31e1-4aff-b42e-7b3160a2cb5c req-ec85199f-b6dc-42e6-bace-3b38c0d99eb7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Received event network-vif-deleted-4e25b301-95cf-4476-bb1d-0069be4cab7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:15 np0005465987 nova_compute[230713]: 2025-10-02 12:19:15.710 2 DEBUG oslo_concurrency.lockutils [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:15 np0005465987 nova_compute[230713]: 2025-10-02 12:19:15.710 2 DEBUG oslo_concurrency.lockutils [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:15 np0005465987 nova_compute[230713]: 2025-10-02 12:19:15.780 2 DEBUG oslo_concurrency.processutils [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:15 np0005465987 podman[261138]: 2025-10-02 12:19:15.867173575 +0000 UTC m=+0.081569956 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 08:19:15 np0005465987 podman[261138]: 2025-10-02 12:19:15.964449434 +0000 UTC m=+0.178845795 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 08:19:16 np0005465987 nova_compute[230713]: 2025-10-02 12:19:16.163 2 DEBUG nova.compute.manager [req-5824f2d5-a035-4ce9-ba8a-3d61099c5d56 req-3bbd537c-1996-44ad-8a11-437e39c52107 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Received event network-vif-plugged-4e25b301-95cf-4476-bb1d-0069be4cab7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:16 np0005465987 nova_compute[230713]: 2025-10-02 12:19:16.168 2 DEBUG oslo_concurrency.lockutils [req-5824f2d5-a035-4ce9-ba8a-3d61099c5d56 req-3bbd537c-1996-44ad-8a11-437e39c52107 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:16 np0005465987 nova_compute[230713]: 2025-10-02 12:19:16.169 2 DEBUG oslo_concurrency.lockutils [req-5824f2d5-a035-4ce9-ba8a-3d61099c5d56 req-3bbd537c-1996-44ad-8a11-437e39c52107 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:16 np0005465987 nova_compute[230713]: 2025-10-02 12:19:16.169 2 DEBUG oslo_concurrency.lockutils [req-5824f2d5-a035-4ce9-ba8a-3d61099c5d56 req-3bbd537c-1996-44ad-8a11-437e39c52107 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:16 np0005465987 nova_compute[230713]: 2025-10-02 12:19:16.170 2 DEBUG nova.compute.manager [req-5824f2d5-a035-4ce9-ba8a-3d61099c5d56 req-3bbd537c-1996-44ad-8a11-437e39c52107 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] No waiting events found dispatching network-vif-plugged-4e25b301-95cf-4476-bb1d-0069be4cab7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:19:16 np0005465987 nova_compute[230713]: 2025-10-02 12:19:16.170 2 WARNING nova.compute.manager [req-5824f2d5-a035-4ce9-ba8a-3d61099c5d56 req-3bbd537c-1996-44ad-8a11-437e39c52107 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Received unexpected event network-vif-plugged-4e25b301-95cf-4476-bb1d-0069be4cab7e for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:19:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3391873375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:16 np0005465987 nova_compute[230713]: 2025-10-02 12:19:16.224 2 DEBUG oslo_concurrency.processutils [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:16 np0005465987 nova_compute[230713]: 2025-10-02 12:19:16.231 2 DEBUG nova.compute.provider_tree [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:19:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:16.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:16 np0005465987 nova_compute[230713]: 2025-10-02 12:19:16.248 2 DEBUG nova.scheduler.client.report [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:19:16 np0005465987 nova_compute[230713]: 2025-10-02 12:19:16.270 2 DEBUG oslo_concurrency.lockutils [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:16 np0005465987 nova_compute[230713]: 2025-10-02 12:19:16.304 2 INFO nova.scheduler.client.report [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Deleted allocations for instance d84ad3d2-1c3d-4bdd-94b9-5a809532dbef#033[00m
Oct  2 08:19:16 np0005465987 nova_compute[230713]: 2025-10-02 12:19:16.392 2 DEBUG oslo_concurrency.lockutils [None req-cb511c1c-a86f-4435-9cb9-843b3de4a283 2810a2f118b84b3cbc959c8ee3315600 15a242822e7849ee92179cec8302a315 - - default default] Lock "d84ad3d2-1c3d-4bdd-94b9-5a809532dbef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:19:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:19:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:17.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:18.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:18 np0005465987 nova_compute[230713]: 2025-10-02 12:19:18.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:19 np0005465987 nova_compute[230713]: 2025-10-02 12:19:19.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:19:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:19:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:19:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:19:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:19:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:19.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:20.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:20Z|00276|binding|INFO|Releasing lport 421cd6e3-75aa-44e1-b552-d119c4fcd629 from this chassis (sb_readonly=0)
Oct  2 08:19:20 np0005465987 nova_compute[230713]: 2025-10-02 12:19:20.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:20 np0005465987 podman[261416]: 2025-10-02 12:19:20.838206439 +0000 UTC m=+0.056558589 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:19:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:21.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:22.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:23.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:23 np0005465987 nova_compute[230713]: 2025-10-02 12:19:23.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:24 np0005465987 nova_compute[230713]: 2025-10-02 12:19:24.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:24.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:25.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:26.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:27.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:27.946 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:27.947 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:27.948 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:28.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:28 np0005465987 nova_compute[230713]: 2025-10-02 12:19:28.778 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407553.7758229, d84ad3d2-1c3d-4bdd-94b9-5a809532dbef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:28 np0005465987 nova_compute[230713]: 2025-10-02 12:19:28.779 2 INFO nova.compute.manager [-] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:19:28 np0005465987 nova_compute[230713]: 2025-10-02 12:19:28.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:28 np0005465987 nova_compute[230713]: 2025-10-02 12:19:28.812 2 DEBUG nova.compute.manager [None req-9adfb378-9722-4a77-b2aa-04dec5f9507c - - - - - -] [instance: d84ad3d2-1c3d-4bdd-94b9-5a809532dbef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:29 np0005465987 nova_compute[230713]: 2025-10-02 12:19:29.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:29.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:29 np0005465987 podman[261439]: 2025-10-02 12:19:29.844980576 +0000 UTC m=+0.054224255 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:19:29 np0005465987 podman[261440]: 2025-10-02 12:19:29.857479709 +0000 UTC m=+0.061842103 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct  2 08:19:29 np0005465987 podman[261438]: 2025-10-02 12:19:29.976574539 +0000 UTC m=+0.188190214 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:19:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:30.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:31.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:32.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:33.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:33 np0005465987 nova_compute[230713]: 2025-10-02 12:19:33.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:34 np0005465987 nova_compute[230713]: 2025-10-02 12:19:34.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:34.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:35.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:36.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:37.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:37 np0005465987 nova_compute[230713]: 2025-10-02 12:19:37.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:37.579 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:19:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:37.580 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:19:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:38.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:38 np0005465987 nova_compute[230713]: 2025-10-02 12:19:38.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:19:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:19:39 np0005465987 nova_compute[230713]: 2025-10-02 12:19:39.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.003000082s ======
Oct  2 08:19:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:39.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Oct  2 08:19:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:40.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:41.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:19:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:42.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:19:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:43.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:43.582 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:43 np0005465987 nova_compute[230713]: 2025-10-02 12:19:43.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:44 np0005465987 nova_compute[230713]: 2025-10-02 12:19:44.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:44.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:44 np0005465987 nova_compute[230713]: 2025-10-02 12:19:44.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:44 np0005465987 nova_compute[230713]: 2025-10-02 12:19:44.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:19:44 np0005465987 nova_compute[230713]: 2025-10-02 12:19:44.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:19:45 np0005465987 nova_compute[230713]: 2025-10-02 12:19:45.191 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:19:45 np0005465987 nova_compute[230713]: 2025-10-02 12:19:45.192 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:19:45 np0005465987 nova_compute[230713]: 2025-10-02 12:19:45.192 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:19:45 np0005465987 nova_compute[230713]: 2025-10-02 12:19:45.192 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:45.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:46.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:47.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:48 np0005465987 nova_compute[230713]: 2025-10-02 12:19:48.078 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Updating instance_info_cache with network_info: [{"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:19:48 np0005465987 nova_compute[230713]: 2025-10-02 12:19:48.096 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-992b01a0-5a43-42ab-bfcb-96c2db503633" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:19:48 np0005465987 nova_compute[230713]: 2025-10-02 12:19:48.096 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:19:48 np0005465987 nova_compute[230713]: 2025-10-02 12:19:48.096 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:48.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:48 np0005465987 nova_compute[230713]: 2025-10-02 12:19:48.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:48 np0005465987 nova_compute[230713]: 2025-10-02 12:19:48.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:48 np0005465987 nova_compute[230713]: 2025-10-02 12:19:48.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:49 np0005465987 nova_compute[230713]: 2025-10-02 12:19:49.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:49.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:50.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:50 np0005465987 nova_compute[230713]: 2025-10-02 12:19:50.834 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:50 np0005465987 nova_compute[230713]: 2025-10-02 12:19:50.835 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:50 np0005465987 nova_compute[230713]: 2025-10-02 12:19:50.854 2 DEBUG nova.compute.manager [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:19:50 np0005465987 nova_compute[230713]: 2025-10-02 12:19:50.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.027 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.028 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.038 2 DEBUG nova.virt.hardware [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.038 2 INFO nova.compute.claims [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.134 2 DEBUG nova.scheduler.client.report [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.154 2 DEBUG nova.scheduler.client.report [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.155 2 DEBUG nova.compute.provider_tree [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.174 2 DEBUG nova.scheduler.client.report [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.210 2 DEBUG nova.scheduler.client.report [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.271 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:51.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3427342206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.734 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.741 2 DEBUG nova.compute.provider_tree [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.764 2 DEBUG nova.scheduler.client.report [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.791 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.792 2 DEBUG nova.compute.manager [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:19:51 np0005465987 podman[261574]: 2025-10-02 12:19:51.829403903 +0000 UTC m=+0.048576088 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.841 2 DEBUG nova.compute.manager [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.841 2 DEBUG nova.network.neutron [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.864 2 INFO nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:19:51 np0005465987 nova_compute[230713]: 2025-10-02 12:19:51.889 2 DEBUG nova.compute.manager [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.000 2 DEBUG nova.compute.manager [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.003 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.004 2 INFO nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Creating image(s)#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.047 2 DEBUG nova.storage.rbd_utils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image ef69bc17-6b51-491e-82e9-c4106abb8d74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.083 2 DEBUG nova.storage.rbd_utils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image ef69bc17-6b51-491e-82e9-c4106abb8d74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.118 2 DEBUG nova.storage.rbd_utils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image ef69bc17-6b51-491e-82e9-c4106abb8d74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.123 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.164 2 DEBUG nova.policy [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '25468893d71641a385711fd2982bb00b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '10fff81da7a54740a53a0771ce916329', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.199 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.200 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.201 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.201 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.236 2 DEBUG nova.storage.rbd_utils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image ef69bc17-6b51-491e-82e9-c4106abb8d74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.242 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 ef69bc17-6b51-491e-82e9-c4106abb8d74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:52.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.562 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 ef69bc17-6b51-491e-82e9-c4106abb8d74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.632 2 DEBUG nova.storage.rbd_utils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] resizing rbd image ef69bc17-6b51-491e-82e9-c4106abb8d74_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.755 2 DEBUG nova.objects.instance [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'migration_context' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.768 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.769 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Ensure instance console log exists: /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.769 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.770 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.770 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:52 np0005465987 nova_compute[230713]: 2025-10-02 12:19:52.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:19:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:53.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:53 np0005465987 nova_compute[230713]: 2025-10-02 12:19:53.669 2 DEBUG nova.network.neutron [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Successfully created port: ce91d40e-1bd9-4703-95b5-a23942c1592e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:19:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:53 np0005465987 nova_compute[230713]: 2025-10-02 12:19:53.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:53 np0005465987 nova_compute[230713]: 2025-10-02 12:19:53.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:54 np0005465987 nova_compute[230713]: 2025-10-02 12:19:54.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:54.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:54 np0005465987 nova_compute[230713]: 2025-10-02 12:19:54.677 2 DEBUG nova.network.neutron [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Successfully updated port: ce91d40e-1bd9-4703-95b5-a23942c1592e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:19:54 np0005465987 nova_compute[230713]: 2025-10-02 12:19:54.703 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:19:54 np0005465987 nova_compute[230713]: 2025-10-02 12:19:54.703 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:19:54 np0005465987 nova_compute[230713]: 2025-10-02 12:19:54.704 2 DEBUG nova.network.neutron [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:19:54 np0005465987 nova_compute[230713]: 2025-10-02 12:19:54.840 2 DEBUG nova.compute.manager [req-1b7368af-fd33-43ce-8f90-e822967c35d5 req-d1b62ca1-c17e-45ca-8edc-42875908de88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-changed-ce91d40e-1bd9-4703-95b5-a23942c1592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:54 np0005465987 nova_compute[230713]: 2025-10-02 12:19:54.840 2 DEBUG nova.compute.manager [req-1b7368af-fd33-43ce-8f90-e822967c35d5 req-d1b62ca1-c17e-45ca-8edc-42875908de88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Refreshing instance network info cache due to event network-changed-ce91d40e-1bd9-4703-95b5-a23942c1592e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:19:54 np0005465987 nova_compute[230713]: 2025-10-02 12:19:54.841 2 DEBUG oslo_concurrency.lockutils [req-1b7368af-fd33-43ce-8f90-e822967c35d5 req-d1b62ca1-c17e-45ca-8edc-42875908de88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:19:54 np0005465987 nova_compute[230713]: 2025-10-02 12:19:54.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.009 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.010 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.010 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.011 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.012 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.087 2 DEBUG nova.network.neutron [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:19:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1336774177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.488 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:55.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.582 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.582 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.721 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.722 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4499MB free_disk=20.885894775390625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.722 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.723 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.790 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 992b01a0-5a43-42ab-bfcb-96c2db503633 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.790 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance ef69bc17-6b51-491e-82e9-c4106abb8d74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.791 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.791 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:19:55 np0005465987 nova_compute[230713]: 2025-10-02 12:19:55.841 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2857518082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.270 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.276 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.297 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:19:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:56.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.328 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.328 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.465 2 DEBUG nova.network.neutron [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating instance_info_cache with network_info: [{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.487 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.487 2 DEBUG nova.compute.manager [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance network_info: |[{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.488 2 DEBUG oslo_concurrency.lockutils [req-1b7368af-fd33-43ce-8f90-e822967c35d5 req-d1b62ca1-c17e-45ca-8edc-42875908de88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.488 2 DEBUG nova.network.neutron [req-1b7368af-fd33-43ce-8f90-e822967c35d5 req-d1b62ca1-c17e-45ca-8edc-42875908de88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Refreshing network info cache for port ce91d40e-1bd9-4703-95b5-a23942c1592e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.492 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Start _get_guest_xml network_info=[{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.496 2 WARNING nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.501 2 DEBUG nova.virt.libvirt.host [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.501 2 DEBUG nova.virt.libvirt.host [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.505 2 DEBUG nova.virt.libvirt.host [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.505 2 DEBUG nova.virt.libvirt.host [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.506 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.507 2 DEBUG nova.virt.hardware [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.507 2 DEBUG nova.virt.hardware [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.507 2 DEBUG nova.virt.hardware [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.507 2 DEBUG nova.virt.hardware [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.508 2 DEBUG nova.virt.hardware [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.508 2 DEBUG nova.virt.hardware [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.508 2 DEBUG nova.virt.hardware [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.508 2 DEBUG nova.virt.hardware [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.509 2 DEBUG nova.virt.hardware [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.509 2 DEBUG nova.virt.hardware [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.509 2 DEBUG nova.virt.hardware [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.512 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:19:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1621902664' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.931 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.965 2 DEBUG nova.storage.rbd_utils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image ef69bc17-6b51-491e-82e9-c4106abb8d74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:56 np0005465987 nova_compute[230713]: 2025-10-02 12:19:56.970 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:19:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3806474939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.396 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.399 2 DEBUG nova.virt.libvirt.vif [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-457871717',display_name='tempest-ServerActionsTestOtherB-server-457871717',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-457871717',id=82,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-30ee9i0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:19:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=ef69bc17-6b51-491e-82e9-c4106abb8d74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.400 2 DEBUG nova.network.os_vif_util [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.401 2 DEBUG nova.network.os_vif_util [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.403 2 DEBUG nova.objects.instance [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.444 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  <uuid>ef69bc17-6b51-491e-82e9-c4106abb8d74</uuid>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  <name>instance-00000052</name>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerActionsTestOtherB-server-457871717</nova:name>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:19:56</nova:creationTime>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <nova:user uuid="25468893d71641a385711fd2982bb00b">tempest-ServerActionsTestOtherB-1686489955-project-member</nova:user>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <nova:project uuid="10fff81da7a54740a53a0771ce916329">tempest-ServerActionsTestOtherB-1686489955</nova:project>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <nova:port uuid="ce91d40e-1bd9-4703-95b5-a23942c1592e">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <entry name="serial">ef69bc17-6b51-491e-82e9-c4106abb8d74</entry>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <entry name="uuid">ef69bc17-6b51-491e-82e9-c4106abb8d74</entry>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/ef69bc17-6b51-491e-82e9-c4106abb8d74_disk">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/ef69bc17-6b51-491e-82e9-c4106abb8d74_disk.config">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:18:4b:71"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <target dev="tapce91d40e-1b"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/console.log" append="off"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:19:57 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:19:57 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:19:57 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:19:57 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.446 2 DEBUG nova.compute.manager [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Preparing to wait for external event network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.446 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.447 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.447 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.447 2 DEBUG nova.virt.libvirt.vif [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-457871717',display_name='tempest-ServerActionsTestOtherB-server-457871717',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-457871717',id=82,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-30ee9i0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:19:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=ef69bc17-6b51-491e-82e9-c4106abb8d74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.448 2 DEBUG nova.network.os_vif_util [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.448 2 DEBUG nova.network.os_vif_util [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.449 2 DEBUG os_vif [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.450 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.451 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.454 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce91d40e-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.455 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce91d40e-1b, col_values=(('external_ids', {'iface-id': 'ce91d40e-1bd9-4703-95b5-a23942c1592e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:4b:71', 'vm-uuid': 'ef69bc17-6b51-491e-82e9-c4106abb8d74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:57 np0005465987 NetworkManager[44910]: <info>  [1759407597.4575] manager: (tapce91d40e-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.467 2 INFO os_vif [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b')#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.518 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.518 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.519 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No VIF found with MAC fa:16:3e:18:4b:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.519 2 INFO nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Using config drive#033[00m
Oct  2 08:19:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:19:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:57.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:19:57 np0005465987 nova_compute[230713]: 2025-10-02 12:19:57.545 2 DEBUG nova.storage.rbd_utils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image ef69bc17-6b51-491e-82e9-c4106abb8d74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:58 np0005465987 nova_compute[230713]: 2025-10-02 12:19:58.083 2 INFO nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Creating config drive at /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/disk.config#033[00m
Oct  2 08:19:58 np0005465987 nova_compute[230713]: 2025-10-02 12:19:58.096 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmkee0f1w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:58 np0005465987 nova_compute[230713]: 2025-10-02 12:19:58.240 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmkee0f1w" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:58.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:58 np0005465987 nova_compute[230713]: 2025-10-02 12:19:58.304 2 DEBUG nova.storage.rbd_utils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image ef69bc17-6b51-491e-82e9-c4106abb8d74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:58 np0005465987 nova_compute[230713]: 2025-10-02 12:19:58.310 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/disk.config ef69bc17-6b51-491e-82e9-c4106abb8d74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:58 np0005465987 nova_compute[230713]: 2025-10-02 12:19:58.578 2 DEBUG nova.network.neutron [req-1b7368af-fd33-43ce-8f90-e822967c35d5 req-d1b62ca1-c17e-45ca-8edc-42875908de88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updated VIF entry in instance network info cache for port ce91d40e-1bd9-4703-95b5-a23942c1592e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:19:58 np0005465987 nova_compute[230713]: 2025-10-02 12:19:58.579 2 DEBUG nova.network.neutron [req-1b7368af-fd33-43ce-8f90-e822967c35d5 req-d1b62ca1-c17e-45ca-8edc-42875908de88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating instance_info_cache with network_info: [{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:19:58 np0005465987 nova_compute[230713]: 2025-10-02 12:19:58.582 2 DEBUG oslo_concurrency.processutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/disk.config ef69bc17-6b51-491e-82e9-c4106abb8d74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:58 np0005465987 nova_compute[230713]: 2025-10-02 12:19:58.582 2 INFO nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Deleting local config drive /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/disk.config because it was imported into RBD.#033[00m
Oct  2 08:19:58 np0005465987 nova_compute[230713]: 2025-10-02 12:19:58.598 2 DEBUG oslo_concurrency.lockutils [req-1b7368af-fd33-43ce-8f90-e822967c35d5 req-d1b62ca1-c17e-45ca-8edc-42875908de88 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:19:58 np0005465987 kernel: tapce91d40e-1b: entered promiscuous mode
Oct  2 08:19:58 np0005465987 NetworkManager[44910]: <info>  [1759407598.6381] manager: (tapce91d40e-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Oct  2 08:19:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:58Z|00277|binding|INFO|Claiming lport ce91d40e-1bd9-4703-95b5-a23942c1592e for this chassis.
Oct  2 08:19:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:58Z|00278|binding|INFO|ce91d40e-1bd9-4703-95b5-a23942c1592e: Claiming fa:16:3e:18:4b:71 10.100.0.4
Oct  2 08:19:58 np0005465987 nova_compute[230713]: 2025-10-02 12:19:58.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.654 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:4b:71 10.100.0.4'], port_security=['fa:16:3e:18:4b:71 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ef69bc17-6b51-491e-82e9-c4106abb8d74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '2', 'neutron:security_group_ids': '32af0a94-4565-470d-9918-1bc97e347f8f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ce91d40e-1bd9-4703-95b5-a23942c1592e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.655 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ce91d40e-1bd9-4703-95b5-a23942c1592e in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 bound to our chassis#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.656 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4035a600-4a5e-41ee-a619-d81e2c993b79#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.672 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[152fc90f-fbcb-4963-9e03-63923b63a9cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.673 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4035a600-41 in ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:19:58 np0005465987 systemd-machined[188335]: New machine qemu-38-instance-00000052.
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.675 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4035a600-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.675 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3dcd4b-af68-4810-bfee-d474989bc8c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.676 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[17ca9d4b-0a4d-4970-b885-5b8930f9a0f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.691 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2e4dfd-d5b8-4858-801e-c0c9fe00d86b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:58 np0005465987 systemd[1]: Started Virtual Machine qemu-38-instance-00000052.
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.719 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[103c7209-53da-45a8-a6b3-dd06e389768f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:58 np0005465987 systemd-udevd[261943]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:19:58 np0005465987 nova_compute[230713]: 2025-10-02 12:19:58.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:58Z|00279|binding|INFO|Setting lport ce91d40e-1bd9-4703-95b5-a23942c1592e ovn-installed in OVS
Oct  2 08:19:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:58Z|00280|binding|INFO|Setting lport ce91d40e-1bd9-4703-95b5-a23942c1592e up in Southbound
Oct  2 08:19:58 np0005465987 nova_compute[230713]: 2025-10-02 12:19:58.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:58 np0005465987 NetworkManager[44910]: <info>  [1759407598.7438] device (tapce91d40e-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:19:58 np0005465987 NetworkManager[44910]: <info>  [1759407598.7455] device (tapce91d40e-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:19:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.756 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6a6ad8-b6e9-4804-8835-d59e41ffb27f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.762 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8569b2be-0681-4a37-afbf-65daff874245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:58 np0005465987 NetworkManager[44910]: <info>  [1759407598.7682] manager: (tap4035a600-40): new Veth device (/org/freedesktop/NetworkManager/Devices/150)
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.798 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[14808fcd-c97c-447b-9087-6d2d7fd6df9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.801 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c7bf8d-ba51-4bd4-9d99-f38815d616b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:58 np0005465987 NetworkManager[44910]: <info>  [1759407598.8263] device (tap4035a600-40): carrier: link connected
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.832 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b04048-76ba-4c9d-953e-ac75eeade714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.855 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2d9bb3-8229-4979-9627-9ab1774b4f3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566740, 'reachable_time': 27248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261973, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.872 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4f8abb-6a59-4dfe-b178-4f4678eb67db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:fb3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566740, 'tstamp': 566740}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261974, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.892 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1af245-6ced-4330-81b2-8d92eb5774fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566740, 'reachable_time': 27248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261975, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:58.931 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6f9aaa45-6b79-498c-965f-8041aea5e0a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:59.017 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6e80e3e0-2705-48a6-8c60-fa6848f1b7ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:59.019 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:59.019 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:59.019 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4035a600-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:59 np0005465987 NetworkManager[44910]: <info>  [1759407599.0225] manager: (tap4035a600-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Oct  2 08:19:59 np0005465987 kernel: tap4035a600-40: entered promiscuous mode
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:59.026 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4035a600-40, col_values=(('external_ids', {'iface-id': '1befa812-080f-4694-ba8b-9130fe81621d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:59 np0005465987 ovn_controller[129172]: 2025-10-02T12:19:59Z|00281|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:59.042 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:59.043 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e37e18cb-740b-471f-8229-ccf94fc91c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:59.044 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-4035a600-4a5e-41ee-a619-d81e2c993b79
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/4035a600-4a5e-41ee-a619-d81e2c993b79.pid.haproxy
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 4035a600-4a5e-41ee-a619-d81e2c993b79
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:19:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:19:59.044 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'env', 'PROCESS_TAG=haproxy-4035a600-4a5e-41ee-a619-d81e2c993b79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4035a600-4a5e-41ee-a619-d81e2c993b79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:59 np0005465987 podman[262049]: 2025-10-02 12:19:59.414820533 +0000 UTC m=+0.069349821 container create 54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:19:59 np0005465987 systemd[1]: Started libpod-conmon-54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e.scope.
Oct  2 08:19:59 np0005465987 podman[262049]: 2025-10-02 12:19:59.379758728 +0000 UTC m=+0.034288056 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:19:59 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:19:59 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5041ebbf2a70033c3cfc3923be61d1a6fffca1f94c5d340e5fd50df40ce80e08/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:19:59 np0005465987 podman[262049]: 2025-10-02 12:19:59.503457124 +0000 UTC m=+0.157986422 container init 54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:19:59 np0005465987 podman[262049]: 2025-10-02 12:19:59.510408585 +0000 UTC m=+0.164937883 container start 54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:19:59 np0005465987 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[262064]: [NOTICE]   (262068) : New worker (262070) forked
Oct  2 08:19:59 np0005465987 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[262064]: [NOTICE]   (262068) : Loading success.
Oct  2 08:19:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:19:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:59.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.590 2 DEBUG nova.compute.manager [req-e34e889a-b134-4bf6-851b-62810dbfe0cc req-7f54be65-e277-4365-a270-3a8ef6657c65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.591 2 DEBUG oslo_concurrency.lockutils [req-e34e889a-b134-4bf6-851b-62810dbfe0cc req-7f54be65-e277-4365-a270-3a8ef6657c65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.591 2 DEBUG oslo_concurrency.lockutils [req-e34e889a-b134-4bf6-851b-62810dbfe0cc req-7f54be65-e277-4365-a270-3a8ef6657c65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.592 2 DEBUG oslo_concurrency.lockutils [req-e34e889a-b134-4bf6-851b-62810dbfe0cc req-7f54be65-e277-4365-a270-3a8ef6657c65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.592 2 DEBUG nova.compute.manager [req-e34e889a-b134-4bf6-851b-62810dbfe0cc req-7f54be65-e277-4365-a270-3a8ef6657c65 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Processing event network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.675 2 DEBUG nova.compute.manager [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.676 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407599.6762624, ef69bc17-6b51-491e-82e9-c4106abb8d74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.676 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] VM Started (Lifecycle Event)#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.679 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.682 2 INFO nova.virt.libvirt.driver [-] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance spawned successfully.#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.682 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.695 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.699 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.703 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.703 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.704 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.704 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.705 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.705 2 DEBUG nova.virt.libvirt.driver [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.728 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.728 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407599.6764152, ef69bc17-6b51-491e-82e9-c4106abb8d74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.728 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.759 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.761 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407599.6786191, ef69bc17-6b51-491e-82e9-c4106abb8d74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.762 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.772 2 INFO nova.compute.manager [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Took 7.77 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.772 2 DEBUG nova.compute.manager [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.784 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.786 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.820 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.846 2 INFO nova.compute.manager [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Took 8.94 seconds to build instance.#033[00m
Oct  2 08:19:59 np0005465987 nova_compute[230713]: 2025-10-02 12:19:59.868 2 DEBUG oslo_concurrency.lockutils [None req-3a3142c3-a308-483b-a112-cafa639aa221 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:00.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:00 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 08:20:00 np0005465987 podman[262080]: 2025-10-02 12:20:00.851175373 +0000 UTC m=+0.063000437 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:20:00 np0005465987 podman[262079]: 2025-10-02 12:20:00.884278224 +0000 UTC m=+0.098747700 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller)
Oct  2 08:20:00 np0005465987 podman[262081]: 2025-10-02 12:20:00.886239828 +0000 UTC m=+0.093819214 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:20:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:01.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:01 np0005465987 nova_compute[230713]: 2025-10-02 12:20:01.700 2 DEBUG nova.compute.manager [req-dd72288e-fe59-446d-bdd9-810f099cfbf0 req-a35fcd25-b99e-4b69-98be-121bbff99fc3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:20:01 np0005465987 nova_compute[230713]: 2025-10-02 12:20:01.700 2 DEBUG oslo_concurrency.lockutils [req-dd72288e-fe59-446d-bdd9-810f099cfbf0 req-a35fcd25-b99e-4b69-98be-121bbff99fc3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:01 np0005465987 nova_compute[230713]: 2025-10-02 12:20:01.702 2 DEBUG oslo_concurrency.lockutils [req-dd72288e-fe59-446d-bdd9-810f099cfbf0 req-a35fcd25-b99e-4b69-98be-121bbff99fc3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:01 np0005465987 nova_compute[230713]: 2025-10-02 12:20:01.702 2 DEBUG oslo_concurrency.lockutils [req-dd72288e-fe59-446d-bdd9-810f099cfbf0 req-a35fcd25-b99e-4b69-98be-121bbff99fc3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:01 np0005465987 nova_compute[230713]: 2025-10-02 12:20:01.702 2 DEBUG nova.compute.manager [req-dd72288e-fe59-446d-bdd9-810f099cfbf0 req-a35fcd25-b99e-4b69-98be-121bbff99fc3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] No waiting events found dispatching network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:20:01 np0005465987 nova_compute[230713]: 2025-10-02 12:20:01.703 2 WARNING nova.compute.manager [req-dd72288e-fe59-446d-bdd9-810f099cfbf0 req-a35fcd25-b99e-4b69-98be-121bbff99fc3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received unexpected event network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e for instance with vm_state active and task_state None.#033[00m
Oct  2 08:20:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:02.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:02 np0005465987 nova_compute[230713]: 2025-10-02 12:20:02.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:02 np0005465987 NetworkManager[44910]: <info>  [1759407602.4176] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Oct  2 08:20:02 np0005465987 NetworkManager[44910]: <info>  [1759407602.4185] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Oct  2 08:20:02 np0005465987 nova_compute[230713]: 2025-10-02 12:20:02.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:02 np0005465987 nova_compute[230713]: 2025-10-02 12:20:02.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:02 np0005465987 ovn_controller[129172]: 2025-10-02T12:20:02Z|00282|binding|INFO|Releasing lport 421cd6e3-75aa-44e1-b552-d119c4fcd629 from this chassis (sb_readonly=0)
Oct  2 08:20:02 np0005465987 ovn_controller[129172]: 2025-10-02T12:20:02Z|00283|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:20:02 np0005465987 nova_compute[230713]: 2025-10-02 12:20:02.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:03.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:04 np0005465987 nova_compute[230713]: 2025-10-02 12:20:04.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:04 np0005465987 nova_compute[230713]: 2025-10-02 12:20:04.300 2 DEBUG nova.compute.manager [req-be0ff32f-344b-4ec3-a1f1-ef306fe6c721 req-db1490d8-638c-4ff6-82a3-7182e50672dd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-changed-ce91d40e-1bd9-4703-95b5-a23942c1592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:20:04 np0005465987 nova_compute[230713]: 2025-10-02 12:20:04.301 2 DEBUG nova.compute.manager [req-be0ff32f-344b-4ec3-a1f1-ef306fe6c721 req-db1490d8-638c-4ff6-82a3-7182e50672dd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Refreshing instance network info cache due to event network-changed-ce91d40e-1bd9-4703-95b5-a23942c1592e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:20:04 np0005465987 nova_compute[230713]: 2025-10-02 12:20:04.301 2 DEBUG oslo_concurrency.lockutils [req-be0ff32f-344b-4ec3-a1f1-ef306fe6c721 req-db1490d8-638c-4ff6-82a3-7182e50672dd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:20:04 np0005465987 nova_compute[230713]: 2025-10-02 12:20:04.301 2 DEBUG oslo_concurrency.lockutils [req-be0ff32f-344b-4ec3-a1f1-ef306fe6c721 req-db1490d8-638c-4ff6-82a3-7182e50672dd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:20:04 np0005465987 nova_compute[230713]: 2025-10-02 12:20:04.302 2 DEBUG nova.network.neutron [req-be0ff32f-344b-4ec3-a1f1-ef306fe6c721 req-db1490d8-638c-4ff6-82a3-7182e50672dd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Refreshing network info cache for port ce91d40e-1bd9-4703-95b5-a23942c1592e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:20:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:04.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:04 np0005465987 nova_compute[230713]: 2025-10-02 12:20:04.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:05.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:06 np0005465987 nova_compute[230713]: 2025-10-02 12:20:06.180 2 DEBUG nova.network.neutron [req-be0ff32f-344b-4ec3-a1f1-ef306fe6c721 req-db1490d8-638c-4ff6-82a3-7182e50672dd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updated VIF entry in instance network info cache for port ce91d40e-1bd9-4703-95b5-a23942c1592e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:20:06 np0005465987 nova_compute[230713]: 2025-10-02 12:20:06.181 2 DEBUG nova.network.neutron [req-be0ff32f-344b-4ec3-a1f1-ef306fe6c721 req-db1490d8-638c-4ff6-82a3-7182e50672dd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating instance_info_cache with network_info: [{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:20:06 np0005465987 nova_compute[230713]: 2025-10-02 12:20:06.222 2 DEBUG oslo_concurrency.lockutils [req-be0ff32f-344b-4ec3-a1f1-ef306fe6c721 req-db1490d8-638c-4ff6-82a3-7182e50672dd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:20:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:06.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:06 np0005465987 nova_compute[230713]: 2025-10-02 12:20:06.322 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:07 np0005465987 nova_compute[230713]: 2025-10-02 12:20:07.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:07.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:08.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:09 np0005465987 nova_compute[230713]: 2025-10-02 12:20:09.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:09.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:10.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:20:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1847149126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:20:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:11.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:12.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:12 np0005465987 nova_compute[230713]: 2025-10-02 12:20:12.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:13.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:20:13Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:18:4b:71 10.100.0.4
Oct  2 08:20:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:20:13Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:18:4b:71 10.100.0.4
Oct  2 08:20:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:14 np0005465987 nova_compute[230713]: 2025-10-02 12:20:14.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:14.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:20:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:15.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:20:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:16.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:17 np0005465987 nova_compute[230713]: 2025-10-02 12:20:17.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:17.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:18.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:19 np0005465987 nova_compute[230713]: 2025-10-02 12:20:19.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:19.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:20.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:21.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:22.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:22 np0005465987 nova_compute[230713]: 2025-10-02 12:20:22.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:22 np0005465987 podman[262142]: 2025-10-02 12:20:22.838566802 +0000 UTC m=+0.051955792 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 08:20:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:23.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e254 e254: 3 total, 3 up, 3 in
Oct  2 08:20:24 np0005465987 nova_compute[230713]: 2025-10-02 12:20:24.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:20:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:24.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:20:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:25.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:26.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:27 np0005465987 nova_compute[230713]: 2025-10-02 12:20:27.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:27.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:27.947 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:27.947 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:27.948 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:28.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.297 2 DEBUG oslo_concurrency.lockutils [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.297 2 DEBUG oslo_concurrency.lockutils [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.298 2 DEBUG oslo_concurrency.lockutils [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.298 2 DEBUG oslo_concurrency.lockutils [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.298 2 DEBUG oslo_concurrency.lockutils [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.299 2 INFO nova.compute.manager [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Terminating instance#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.300 2 DEBUG nova.compute.manager [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:20:29 np0005465987 kernel: tap60c78706-8c (unregistering): left promiscuous mode
Oct  2 08:20:29 np0005465987 NetworkManager[44910]: <info>  [1759407629.3552] device (tap60c78706-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:20:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:20:29Z|00284|binding|INFO|Releasing lport 60c78706-8cfe-409b-91d7-dc811b9de69a from this chassis (sb_readonly=0)
Oct  2 08:20:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:20:29Z|00285|binding|INFO|Setting lport 60c78706-8cfe-409b-91d7-dc811b9de69a down in Southbound
Oct  2 08:20:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:20:29Z|00286|binding|INFO|Removing iface tap60c78706-8c ovn-installed in OVS
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.371 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:85:3a 10.100.0.13'], port_security=['fa:16:3e:f3:85:3a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '992b01a0-5a43-42ab-bfcb-96c2db503633', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8efba404696b40fbbaa6431b934b87f1', 'neutron:revision_number': '10', 'neutron:security_group_ids': '3d4d8d91-6fd2-4ab6-a30c-6640fa44e7f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d3722e-d182-43fd-9a86-fa7ed68becec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=60c78706-8cfe-409b-91d7-dc811b9de69a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.373 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 60c78706-8cfe-409b-91d7-dc811b9de69a in datapath f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 unbound from our chassis#033[00m
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.375 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.376 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[40288a8f-d7fc-41a5-8cd4-d57b68537f33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.376 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 namespace which is not needed anymore#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:29 np0005465987 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Oct  2 08:20:29 np0005465987 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000004c.scope: Consumed 1.768s CPU time.
Oct  2 08:20:29 np0005465987 systemd-machined[188335]: Machine qemu-36-instance-0000004c terminated.
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.536 2 INFO nova.virt.libvirt.driver [-] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Instance destroyed successfully.#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.536 2 DEBUG nova.objects.instance [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lazy-loading 'resources' on Instance uuid 992b01a0-5a43-42ab-bfcb-96c2db503633 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:29 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260253]: [NOTICE]   (260257) : haproxy version is 2.8.14-c23fe91
Oct  2 08:20:29 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260253]: [NOTICE]   (260257) : path to executable is /usr/sbin/haproxy
Oct  2 08:20:29 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260253]: [WARNING]  (260257) : Exiting Master process...
Oct  2 08:20:29 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260253]: [WARNING]  (260257) : Exiting Master process...
Oct  2 08:20:29 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260253]: [ALERT]    (260257) : Current worker (260259) exited with code 143 (Terminated)
Oct  2 08:20:29 np0005465987 neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0[260253]: [WARNING]  (260257) : All workers exited. Exiting... (0)
Oct  2 08:20:29 np0005465987 systemd[1]: libpod-96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a.scope: Deactivated successfully.
Oct  2 08:20:29 np0005465987 podman[262184]: 2025-10-02 12:20:29.555292352 +0000 UTC m=+0.051522099 container died 96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:20:29 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a-userdata-shm.mount: Deactivated successfully.
Oct  2 08:20:29 np0005465987 systemd[1]: var-lib-containers-storage-overlay-f9672d4e886683278bdde2e8171efe529941acb5aa6f49cc14c18de1e3ffa96f-merged.mount: Deactivated successfully.
Oct  2 08:20:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:29.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.598 2 DEBUG nova.virt.libvirt.vif [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:17:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-803739487',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-803739487',id=76,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:18:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8efba404696b40fbbaa6431b934b87f1',ramdisk_id='',reservation_id='r-8tky9r5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-153154373',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-153154373-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:18:42Z,user_data=None,user_id='69d8e29c6d3747e98a5985a584f4c814',uuid=992b01a0-5a43-42ab-bfcb-96c2db503633,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.599 2 DEBUG nova.network.os_vif_util [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converting VIF {"id": "60c78706-8cfe-409b-91d7-dc811b9de69a", "address": "fa:16:3e:f3:85:3a", "network": {"id": "f1725bd8-7d9d-45cc-b992-0cd3db0e30f0", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-64366215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8efba404696b40fbbaa6431b934b87f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60c78706-8c", "ovs_interfaceid": "60c78706-8cfe-409b-91d7-dc811b9de69a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.600 2 DEBUG nova.network.os_vif_util [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:85:3a,bridge_name='br-int',has_traffic_filtering=True,id=60c78706-8cfe-409b-91d7-dc811b9de69a,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c78706-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.600 2 DEBUG os_vif [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:85:3a,bridge_name='br-int',has_traffic_filtering=True,id=60c78706-8cfe-409b-91d7-dc811b9de69a,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c78706-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.602 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60c78706-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.608 2 INFO os_vif [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:85:3a,bridge_name='br-int',has_traffic_filtering=True,id=60c78706-8cfe-409b-91d7-dc811b9de69a,network=Network(f1725bd8-7d9d-45cc-b992-0cd3db0e30f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60c78706-8c')#033[00m
Oct  2 08:20:29 np0005465987 podman[262184]: 2025-10-02 12:20:29.609494695 +0000 UTC m=+0.105724402 container cleanup 96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:20:29 np0005465987 systemd[1]: libpod-conmon-96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a.scope: Deactivated successfully.
Oct  2 08:20:29 np0005465987 podman[262240]: 2025-10-02 12:20:29.680941582 +0000 UTC m=+0.044416874 container remove 96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.687 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9fc77b-4ae5-4c89-b10c-2a75ff81f7d7]: (4, ('Thu Oct  2 12:20:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 (96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a)\n96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a\nThu Oct  2 12:20:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 (96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a)\n96828fd4d6c2e3cb9de089ec1545533e0205d8b782b30977ff075b1ab8d8005a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.689 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6ba5cc-94bb-4cf4-ab5e-15405af92e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.690 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1725bd8-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:20:29 np0005465987 kernel: tapf1725bd8-70: left promiscuous mode
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.707 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3365bf8e-d14b-45ea-96c0-1042b7bec650]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.745 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[da79783c-259c-4612-88f9-d2d6a2369bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.746 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e0dae9d8-72f9-4392-9e4c-772848d50f3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.766 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ddbee279-7989-4850-808c-8fd3da05486a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559002, 'reachable_time': 27056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262259, 'error': None, 'target': 'ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:29 np0005465987 systemd[1]: run-netns-ovnmeta\x2df1725bd8\x2d7d9d\x2d45cc\x2db992\x2d0cd3db0e30f0.mount: Deactivated successfully.
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.771 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f1725bd8-7d9d-45cc-b992-0cd3db0e30f0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:20:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:29.771 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[2de670da-b34e-4546-b2a7-594af48e1cf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.931 2 INFO nova.virt.libvirt.driver [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Deleting instance files /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633_del#033[00m
Oct  2 08:20:29 np0005465987 nova_compute[230713]: 2025-10-02 12:20:29.932 2 INFO nova.virt.libvirt.driver [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Deletion of /var/lib/nova/instances/992b01a0-5a43-42ab-bfcb-96c2db503633_del complete#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.042 2 INFO nova.compute.manager [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.043 2 DEBUG oslo.service.loopingcall [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.043 2 DEBUG nova.compute.manager [-] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.043 2 DEBUG nova.network.neutron [-] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:20:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:30.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.703 2 DEBUG nova.compute.manager [req-d9b55e83-b64b-4eaa-8207-060d3cb97c55 req-20272c71-ee8e-4cdc-bc60-c6c741ed3ff7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-unplugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.704 2 DEBUG oslo_concurrency.lockutils [req-d9b55e83-b64b-4eaa-8207-060d3cb97c55 req-20272c71-ee8e-4cdc-bc60-c6c741ed3ff7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.704 2 DEBUG oslo_concurrency.lockutils [req-d9b55e83-b64b-4eaa-8207-060d3cb97c55 req-20272c71-ee8e-4cdc-bc60-c6c741ed3ff7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.704 2 DEBUG oslo_concurrency.lockutils [req-d9b55e83-b64b-4eaa-8207-060d3cb97c55 req-20272c71-ee8e-4cdc-bc60-c6c741ed3ff7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.704 2 DEBUG nova.compute.manager [req-d9b55e83-b64b-4eaa-8207-060d3cb97c55 req-20272c71-ee8e-4cdc-bc60-c6c741ed3ff7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-unplugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.705 2 DEBUG nova.compute.manager [req-d9b55e83-b64b-4eaa-8207-060d3cb97c55 req-20272c71-ee8e-4cdc-bc60-c6c741ed3ff7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-unplugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.705 2 DEBUG nova.compute.manager [req-d9b55e83-b64b-4eaa-8207-060d3cb97c55 req-20272c71-ee8e-4cdc-bc60-c6c741ed3ff7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.705 2 DEBUG oslo_concurrency.lockutils [req-d9b55e83-b64b-4eaa-8207-060d3cb97c55 req-20272c71-ee8e-4cdc-bc60-c6c741ed3ff7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.706 2 DEBUG oslo_concurrency.lockutils [req-d9b55e83-b64b-4eaa-8207-060d3cb97c55 req-20272c71-ee8e-4cdc-bc60-c6c741ed3ff7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.706 2 DEBUG oslo_concurrency.lockutils [req-d9b55e83-b64b-4eaa-8207-060d3cb97c55 req-20272c71-ee8e-4cdc-bc60-c6c741ed3ff7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.706 2 DEBUG nova.compute.manager [req-d9b55e83-b64b-4eaa-8207-060d3cb97c55 req-20272c71-ee8e-4cdc-bc60-c6c741ed3ff7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] No waiting events found dispatching network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.707 2 WARNING nova.compute.manager [req-d9b55e83-b64b-4eaa-8207-060d3cb97c55 req-20272c71-ee8e-4cdc-bc60-c6c741ed3ff7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received unexpected event network-vif-plugged-60c78706-8cfe-409b-91d7-dc811b9de69a for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.869 2 DEBUG nova.network.neutron [-] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.897 2 INFO nova.compute.manager [-] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Took 0.85 seconds to deallocate network for instance.#033[00m
Oct  2 08:20:30 np0005465987 nova_compute[230713]: 2025-10-02 12:20:30.980 2 DEBUG nova.compute.manager [req-c0bee921-74cb-4813-8078-97ab20979385 req-f402e245-0248-4b7a-b3f9-329085b631f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Received event network-vif-deleted-60c78706-8cfe-409b-91d7-dc811b9de69a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:20:31 np0005465987 nova_compute[230713]: 2025-10-02 12:20:31.171 2 INFO nova.compute.manager [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Took 0.27 seconds to detach 1 volumes for instance.#033[00m
Oct  2 08:20:31 np0005465987 nova_compute[230713]: 2025-10-02 12:20:31.291 2 DEBUG oslo_concurrency.lockutils [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:31 np0005465987 nova_compute[230713]: 2025-10-02 12:20:31.291 2 DEBUG oslo_concurrency.lockutils [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:31 np0005465987 nova_compute[230713]: 2025-10-02 12:20:31.380 2 DEBUG oslo_concurrency.processutils [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:31.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:20:31 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3978992816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:20:31 np0005465987 podman[262283]: 2025-10-02 12:20:31.850841429 +0000 UTC m=+0.060942779 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:20:31 np0005465987 nova_compute[230713]: 2025-10-02 12:20:31.858 2 DEBUG oslo_concurrency.processutils [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:31 np0005465987 nova_compute[230713]: 2025-10-02 12:20:31.864 2 DEBUG nova.compute.provider_tree [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:20:31 np0005465987 podman[262281]: 2025-10-02 12:20:31.868645879 +0000 UTC m=+0.087691165 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:20:31 np0005465987 podman[262282]: 2025-10-02 12:20:31.876449084 +0000 UTC m=+0.093737762 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  2 08:20:31 np0005465987 nova_compute[230713]: 2025-10-02 12:20:31.880 2 DEBUG nova.scheduler.client.report [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:20:31 np0005465987 nova_compute[230713]: 2025-10-02 12:20:31.899 2 DEBUG oslo_concurrency.lockutils [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:31 np0005465987 nova_compute[230713]: 2025-10-02 12:20:31.936 2 INFO nova.scheduler.client.report [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Deleted allocations for instance 992b01a0-5a43-42ab-bfcb-96c2db503633#033[00m
Oct  2 08:20:32 np0005465987 nova_compute[230713]: 2025-10-02 12:20:32.008 2 DEBUG oslo_concurrency.lockutils [None req-944db080-a0e1-4606-803c-f7be000f573c 69d8e29c6d3747e98a5985a584f4c814 8efba404696b40fbbaa6431b934b87f1 - - default default] Lock "992b01a0-5a43-42ab-bfcb-96c2db503633" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:32.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:33.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e255 e255: 3 total, 3 up, 3 in
Oct  2 08:20:34 np0005465987 nova_compute[230713]: 2025-10-02 12:20:34.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:34.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:34 np0005465987 nova_compute[230713]: 2025-10-02 12:20:34.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:35.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:36.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:36 np0005465987 nova_compute[230713]: 2025-10-02 12:20:36.824 2 DEBUG nova.compute.manager [None req-22aed91f-d434-4944-8d48-94cffc8c5efc 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:36 np0005465987 nova_compute[230713]: 2025-10-02 12:20:36.870 2 INFO nova.compute.manager [None req-22aed91f-d434-4944-8d48-94cffc8c5efc 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] instance snapshotting#033[00m
Oct  2 08:20:36 np0005465987 nova_compute[230713]: 2025-10-02 12:20:36.871 2 DEBUG nova.objects.instance [None req-22aed91f-d434-4944-8d48-94cffc8c5efc 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'flavor' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:37 np0005465987 nova_compute[230713]: 2025-10-02 12:20:37.202 2 INFO nova.virt.libvirt.driver [None req-22aed91f-d434-4944-8d48-94cffc8c5efc 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Beginning live snapshot process#033[00m
Oct  2 08:20:37 np0005465987 nova_compute[230713]: 2025-10-02 12:20:37.350 2 DEBUG nova.virt.libvirt.imagebackend [None req-22aed91f-d434-4944-8d48-94cffc8c5efc 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:20:37 np0005465987 nova_compute[230713]: 2025-10-02 12:20:37.529 2 DEBUG nova.storage.rbd_utils [None req-22aed91f-d434-4944-8d48-94cffc8c5efc 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] creating snapshot(757fd30805714a5cb39ab6b64c8b46b5) on rbd image(ef69bc17-6b51-491e-82e9-c4106abb8d74_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:20:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:37.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e256 e256: 3 total, 3 up, 3 in
Oct  2 08:20:38 np0005465987 nova_compute[230713]: 2025-10-02 12:20:38.103 2 DEBUG nova.storage.rbd_utils [None req-22aed91f-d434-4944-8d48-94cffc8c5efc 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] cloning vms/ef69bc17-6b51-491e-82e9-c4106abb8d74_disk@757fd30805714a5cb39ab6b64c8b46b5 to images/6adae2f0-2d5d-4fab-88e2-2a0e5685e8f9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:20:38 np0005465987 nova_compute[230713]: 2025-10-02 12:20:38.309 2 DEBUG nova.storage.rbd_utils [None req-22aed91f-d434-4944-8d48-94cffc8c5efc 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] flattening images/6adae2f0-2d5d-4fab-88e2-2a0e5685e8f9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:20:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:38.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:38 np0005465987 nova_compute[230713]: 2025-10-02 12:20:38.855 2 DEBUG nova.storage.rbd_utils [None req-22aed91f-d434-4944-8d48-94cffc8c5efc 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] removing snapshot(757fd30805714a5cb39ab6b64c8b46b5) on rbd image(ef69bc17-6b51-491e-82e9-c4106abb8d74_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:20:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e257 e257: 3 total, 3 up, 3 in
Oct  2 08:20:39 np0005465987 nova_compute[230713]: 2025-10-02 12:20:39.077 2 DEBUG nova.storage.rbd_utils [None req-22aed91f-d434-4944-8d48-94cffc8c5efc 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] creating snapshot(snap) on rbd image(6adae2f0-2d5d-4fab-88e2-2a0e5685e8f9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:20:39 np0005465987 nova_compute[230713]: 2025-10-02 12:20:39.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:39 np0005465987 nova_compute[230713]: 2025-10-02 12:20:39.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:39.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:20:39 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1583138155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:20:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e258 e258: 3 total, 3 up, 3 in
Oct  2 08:20:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:40.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:41 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:20:41 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:20:41 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:20:41 np0005465987 nova_compute[230713]: 2025-10-02 12:20:41.348 2 INFO nova.virt.libvirt.driver [None req-22aed91f-d434-4944-8d48-94cffc8c5efc 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Snapshot image upload complete#033[00m
Oct  2 08:20:41 np0005465987 nova_compute[230713]: 2025-10-02 12:20:41.349 2 INFO nova.compute.manager [None req-22aed91f-d434-4944-8d48-94cffc8c5efc 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Took 4.45 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:20:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:41.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:41 np0005465987 nova_compute[230713]: 2025-10-02 12:20:41.658 2 DEBUG nova.compute.manager [None req-22aed91f-d434-4944-8d48-94cffc8c5efc 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Oct  2 08:20:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:42.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:43.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:43 np0005465987 nova_compute[230713]: 2025-10-02 12:20:43.682 2 DEBUG nova.compute.manager [None req-bcb38a90-8785-49e2-b3b7-ed2e521da36e 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:43 np0005465987 nova_compute[230713]: 2025-10-02 12:20:43.731 2 INFO nova.compute.manager [None req-bcb38a90-8785-49e2-b3b7-ed2e521da36e 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] instance snapshotting#033[00m
Oct  2 08:20:43 np0005465987 nova_compute[230713]: 2025-10-02 12:20:43.733 2 DEBUG nova.objects.instance [None req-bcb38a90-8785-49e2-b3b7-ed2e521da36e 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'flavor' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e259 e259: 3 total, 3 up, 3 in
Oct  2 08:20:44 np0005465987 nova_compute[230713]: 2025-10-02 12:20:44.028 2 INFO nova.virt.libvirt.driver [None req-bcb38a90-8785-49e2-b3b7-ed2e521da36e 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Beginning live snapshot process#033[00m
Oct  2 08:20:44 np0005465987 nova_compute[230713]: 2025-10-02 12:20:44.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:44 np0005465987 nova_compute[230713]: 2025-10-02 12:20:44.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:44.132 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:20:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:44.133 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:20:44 np0005465987 nova_compute[230713]: 2025-10-02 12:20:44.221 2 DEBUG nova.virt.libvirt.imagebackend [None req-bcb38a90-8785-49e2-b3b7-ed2e521da36e 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:20:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:44.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:44 np0005465987 nova_compute[230713]: 2025-10-02 12:20:44.468 2 DEBUG nova.storage.rbd_utils [None req-bcb38a90-8785-49e2-b3b7-ed2e521da36e 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] creating snapshot(2a5c560a5fce4e75a023144e55e1a69b) on rbd image(ef69bc17-6b51-491e-82e9-c4106abb8d74_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:20:44 np0005465987 nova_compute[230713]: 2025-10-02 12:20:44.534 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407629.5338788, 992b01a0-5a43-42ab-bfcb-96c2db503633 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:20:44 np0005465987 nova_compute[230713]: 2025-10-02 12:20:44.535 2 INFO nova.compute.manager [-] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:20:44 np0005465987 nova_compute[230713]: 2025-10-02 12:20:44.570 2 DEBUG nova.compute.manager [None req-bd14f590-0ef3-400e-9eb2-1d4e048f9b83 - - - - - -] [instance: 992b01a0-5a43-42ab-bfcb-96c2db503633] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:44 np0005465987 nova_compute[230713]: 2025-10-02 12:20:44.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:20:44Z|00287|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:20:44 np0005465987 nova_compute[230713]: 2025-10-02 12:20:44.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e260 e260: 3 total, 3 up, 3 in
Oct  2 08:20:45 np0005465987 nova_compute[230713]: 2025-10-02 12:20:45.258 2 DEBUG nova.storage.rbd_utils [None req-bcb38a90-8785-49e2-b3b7-ed2e521da36e 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] cloning vms/ef69bc17-6b51-491e-82e9-c4106abb8d74_disk@2a5c560a5fce4e75a023144e55e1a69b to images/2410127a-756e-4057-a962-4fe35566fc7f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:20:45 np0005465987 nova_compute[230713]: 2025-10-02 12:20:45.427 2 DEBUG nova.storage.rbd_utils [None req-bcb38a90-8785-49e2-b3b7-ed2e521da36e 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] flattening images/2410127a-756e-4057-a962-4fe35566fc7f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:20:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:45.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:45 np0005465987 nova_compute[230713]: 2025-10-02 12:20:45.849 2 DEBUG nova.storage.rbd_utils [None req-bcb38a90-8785-49e2-b3b7-ed2e521da36e 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] removing snapshot(2a5c560a5fce4e75a023144e55e1a69b) on rbd image(ef69bc17-6b51-491e-82e9-c4106abb8d74_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:20:45 np0005465987 nova_compute[230713]: 2025-10-02 12:20:45.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e261 e261: 3 total, 3 up, 3 in
Oct  2 08:20:46 np0005465987 nova_compute[230713]: 2025-10-02 12:20:46.246 2 DEBUG nova.storage.rbd_utils [None req-bcb38a90-8785-49e2-b3b7-ed2e521da36e 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] creating snapshot(snap) on rbd image(2410127a-756e-4057-a962-4fe35566fc7f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:20:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:46.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:46 np0005465987 nova_compute[230713]: 2025-10-02 12:20:46.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:46 np0005465987 nova_compute[230713]: 2025-10-02 12:20:46.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:46 np0005465987 nova_compute[230713]: 2025-10-02 12:20:46.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:20:46 np0005465987 nova_compute[230713]: 2025-10-02 12:20:46.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:20:46 np0005465987 nova_compute[230713]: 2025-10-02 12:20:46.987 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:20:46 np0005465987 nova_compute[230713]: 2025-10-02 12:20:46.987 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:20:46 np0005465987 nova_compute[230713]: 2025-10-02 12:20:46.987 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:20:46 np0005465987 nova_compute[230713]: 2025-10-02 12:20:46.988 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:20:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:20:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e262 e262: 3 total, 3 up, 3 in
Oct  2 08:20:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:47.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:48.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:48 np0005465987 nova_compute[230713]: 2025-10-02 12:20:48.485 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating instance_info_cache with network_info: [{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:20:48 np0005465987 nova_compute[230713]: 2025-10-02 12:20:48.515 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:20:48 np0005465987 nova_compute[230713]: 2025-10-02 12:20:48.516 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:20:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e263 e263: 3 total, 3 up, 3 in
Oct  2 08:20:49 np0005465987 nova_compute[230713]: 2025-10-02 12:20:49.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:49 np0005465987 nova_compute[230713]: 2025-10-02 12:20:49.229 2 INFO nova.virt.libvirt.driver [None req-bcb38a90-8785-49e2-b3b7-ed2e521da36e 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Snapshot image upload complete#033[00m
Oct  2 08:20:49 np0005465987 nova_compute[230713]: 2025-10-02 12:20:49.230 2 INFO nova.compute.manager [None req-bcb38a90-8785-49e2-b3b7-ed2e521da36e 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Took 5.46 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:20:49 np0005465987 nova_compute[230713]: 2025-10-02 12:20:49.552 2 DEBUG nova.compute.manager [None req-bcb38a90-8785-49e2-b3b7-ed2e521da36e 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Oct  2 08:20:49 np0005465987 nova_compute[230713]: 2025-10-02 12:20:49.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:49.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:50.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:50 np0005465987 nova_compute[230713]: 2025-10-02 12:20:50.728 2 DEBUG nova.compute.manager [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:50 np0005465987 nova_compute[230713]: 2025-10-02 12:20:50.775 2 INFO nova.compute.manager [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] instance snapshotting#033[00m
Oct  2 08:20:50 np0005465987 nova_compute[230713]: 2025-10-02 12:20:50.776 2 DEBUG nova.objects.instance [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'flavor' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:50 np0005465987 nova_compute[230713]: 2025-10-02 12:20:50.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:51 np0005465987 nova_compute[230713]: 2025-10-02 12:20:51.130 2 INFO nova.virt.libvirt.driver [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Beginning live snapshot process#033[00m
Oct  2 08:20:51 np0005465987 nova_compute[230713]: 2025-10-02 12:20:51.339 2 DEBUG nova.virt.libvirt.imagebackend [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:20:51 np0005465987 nova_compute[230713]: 2025-10-02 12:20:51.578 2 DEBUG nova.storage.rbd_utils [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] creating snapshot(31ca8da183b84dfb922a2768ae34617a) on rbd image(ef69bc17-6b51-491e-82e9-c4106abb8d74_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:20:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:51.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e264 e264: 3 total, 3 up, 3 in
Oct  2 08:20:52 np0005465987 nova_compute[230713]: 2025-10-02 12:20:52.364 2 DEBUG nova.storage.rbd_utils [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] cloning vms/ef69bc17-6b51-491e-82e9-c4106abb8d74_disk@31ca8da183b84dfb922a2768ae34617a to images/4b2241f1-ae4c-491b-8811-8c9dd1f13160 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:20:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:52.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:52 np0005465987 nova_compute[230713]: 2025-10-02 12:20:52.675 2 DEBUG nova.storage.rbd_utils [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] flattening images/4b2241f1-ae4c-491b-8811-8c9dd1f13160 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:20:52 np0005465987 nova_compute[230713]: 2025-10-02 12:20:52.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:20:53.135 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:20:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:53.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:53 np0005465987 nova_compute[230713]: 2025-10-02 12:20:53.726 2 DEBUG nova.storage.rbd_utils [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] removing snapshot(31ca8da183b84dfb922a2768ae34617a) on rbd image(ef69bc17-6b51-491e-82e9-c4106abb8d74_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:20:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:53 np0005465987 podman[262936]: 2025-10-02 12:20:53.87280492 +0000 UTC m=+0.086443692 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:20:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e265 e265: 3 total, 3 up, 3 in
Oct  2 08:20:54 np0005465987 nova_compute[230713]: 2025-10-02 12:20:54.069 2 DEBUG nova.storage.rbd_utils [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] creating snapshot(snap) on rbd image(4b2241f1-ae4c-491b-8811-8c9dd1f13160) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:20:54 np0005465987 nova_compute[230713]: 2025-10-02 12:20:54.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:54.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:54 np0005465987 nova_compute[230713]: 2025-10-02 12:20:54.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:54 np0005465987 nova_compute[230713]: 2025-10-02 12:20:54.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:54 np0005465987 nova_compute[230713]: 2025-10-02 12:20:54.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:54 np0005465987 nova_compute[230713]: 2025-10-02 12:20:54.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:20:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e266 e266: 3 total, 3 up, 3 in
Oct  2 08:20:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:20:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:55.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:20:55 np0005465987 nova_compute[230713]: 2025-10-02 12:20:55.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:56.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:56 np0005465987 nova_compute[230713]: 2025-10-02 12:20:56.482 2 INFO nova.virt.libvirt.driver [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Snapshot image upload complete#033[00m
Oct  2 08:20:56 np0005465987 nova_compute[230713]: 2025-10-02 12:20:56.483 2 INFO nova.compute.manager [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Took 5.59 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:20:56 np0005465987 nova_compute[230713]: 2025-10-02 12:20:56.956 2 DEBUG nova.compute.manager [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Oct  2 08:20:56 np0005465987 nova_compute[230713]: 2025-10-02 12:20:56.956 2 DEBUG nova.compute.manager [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458#033[00m
Oct  2 08:20:56 np0005465987 nova_compute[230713]: 2025-10-02 12:20:56.957 2 DEBUG nova.compute.manager [None req-23bba29e-035f-4c0c-9e00-ccf56b13639d 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Deleting image 6adae2f0-2d5d-4fab-88e2-2a0e5685e8f9 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463#033[00m
Oct  2 08:20:56 np0005465987 nova_compute[230713]: 2025-10-02 12:20:56.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.135 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.135 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.135 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.136 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.136 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:20:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/644672697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.574 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:57.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.698 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.698 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.839 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.840 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4327MB free_disk=20.91765594482422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.840 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.840 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.955 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance ef69bc17-6b51-491e-82e9-c4106abb8d74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.956 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.956 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:20:57 np0005465987 nova_compute[230713]: 2025-10-02 12:20:57.986 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:58.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:20:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1653184868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:20:58 np0005465987 nova_compute[230713]: 2025-10-02 12:20:58.421 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:58 np0005465987 nova_compute[230713]: 2025-10-02 12:20:58.426 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:20:58 np0005465987 nova_compute[230713]: 2025-10-02 12:20:58.459 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:20:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e267 e267: 3 total, 3 up, 3 in
Oct  2 08:20:58 np0005465987 nova_compute[230713]: 2025-10-02 12:20:58.549 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:20:58 np0005465987 nova_compute[230713]: 2025-10-02 12:20:58.549 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:59 np0005465987 nova_compute[230713]: 2025-10-02 12:20:59.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:59 np0005465987 nova_compute[230713]: 2025-10-02 12:20:59.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:20:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:59.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:00.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:01.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:02.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e268 e268: 3 total, 3 up, 3 in
Oct  2 08:21:02 np0005465987 podman[263022]: 2025-10-02 12:21:02.864624372 +0000 UTC m=+0.072600759 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 08:21:02 np0005465987 podman[263021]: 2025-10-02 12:21:02.883855322 +0000 UTC m=+0.088735644 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid)
Oct  2 08:21:02 np0005465987 podman[263020]: 2025-10-02 12:21:02.920228984 +0000 UTC m=+0.135683427 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:21:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:21:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:03.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:21:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e269 e269: 3 total, 3 up, 3 in
Oct  2 08:21:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e270 e270: 3 total, 3 up, 3 in
Oct  2 08:21:04 np0005465987 nova_compute[230713]: 2025-10-02 12:21:04.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:04.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:21:04 np0005465987 nova_compute[230713]: 2025-10-02 12:21:04.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:05.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:06.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.098777) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407667098864, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2516, "num_deletes": 261, "total_data_size": 5691475, "memory_usage": 5777472, "flush_reason": "Manual Compaction"}
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407667150448, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 3715692, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39169, "largest_seqno": 41679, "table_properties": {"data_size": 3705427, "index_size": 6503, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22161, "raw_average_key_size": 21, "raw_value_size": 3684699, "raw_average_value_size": 3509, "num_data_blocks": 281, "num_entries": 1050, "num_filter_entries": 1050, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407476, "oldest_key_time": 1759407476, "file_creation_time": 1759407667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 51746 microseconds, and 13332 cpu microseconds.
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.150521) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 3715692 bytes OK
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.150556) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.155318) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.155347) EVENT_LOG_v1 {"time_micros": 1759407667155337, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.155374) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 5680346, prev total WAL file size 5680346, number of live WAL files 2.
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.157739) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(3628KB)], [75(9636KB)]
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407667157801, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 13583159, "oldest_snapshot_seqno": -1}
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6670 keys, 11644698 bytes, temperature: kUnknown
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407667239685, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 11644698, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11597470, "index_size": 29452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 170889, "raw_average_key_size": 25, "raw_value_size": 11475306, "raw_average_value_size": 1720, "num_data_blocks": 1177, "num_entries": 6670, "num_filter_entries": 6670, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759407667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.239921) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11644698 bytes
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.241652) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.7 rd, 142.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7204, records dropped: 534 output_compression: NoCompression
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.241670) EVENT_LOG_v1 {"time_micros": 1759407667241661, "job": 46, "event": "compaction_finished", "compaction_time_micros": 81974, "compaction_time_cpu_micros": 25646, "output_level": 6, "num_output_files": 1, "total_output_size": 11644698, "num_input_records": 7204, "num_output_records": 6670, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407667242332, "job": 46, "event": "table_file_deletion", "file_number": 77}
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407667244029, "job": 46, "event": "table_file_deletion", "file_number": 75}
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.157575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.244071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.244075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.244076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.244078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:21:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:21:07.244079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:21:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:07.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:08.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e271 e271: 3 total, 3 up, 3 in
Oct  2 08:21:09 np0005465987 nova_compute[230713]: 2025-10-02 12:21:09.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:09 np0005465987 nova_compute[230713]: 2025-10-02 12:21:09.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:09.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:10.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:21:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:11.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:21:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:12.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:12 np0005465987 nova_compute[230713]: 2025-10-02 12:21:12.747 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "63cd9316-0da7-4274-b2dc-3501e55e6617" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:12 np0005465987 nova_compute[230713]: 2025-10-02 12:21:12.748 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:12 np0005465987 nova_compute[230713]: 2025-10-02 12:21:12.774 2 DEBUG nova.compute.manager [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:21:12 np0005465987 nova_compute[230713]: 2025-10-02 12:21:12.862 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:12 np0005465987 nova_compute[230713]: 2025-10-02 12:21:12.863 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:12 np0005465987 nova_compute[230713]: 2025-10-02 12:21:12.868 2 DEBUG nova.virt.hardware [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:21:12 np0005465987 nova_compute[230713]: 2025-10-02 12:21:12.868 2 INFO nova.compute.claims [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.011 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:21:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2728080054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.486 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.493 2 DEBUG nova.compute.provider_tree [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.513 2 DEBUG nova.scheduler.client.report [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.552 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.553 2 DEBUG nova.compute.manager [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.617 2 DEBUG nova.compute.manager [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.617 2 DEBUG nova.network.neutron [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.636 2 INFO nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.654 2 DEBUG nova.compute.manager [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:21:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:13.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.739 2 DEBUG nova.compute.manager [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.741 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.742 2 INFO nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Creating image(s)#033[00m
Oct  2 08:21:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.770 2 DEBUG nova.storage.rbd_utils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 63cd9316-0da7-4274-b2dc-3501e55e6617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.797 2 DEBUG nova.storage.rbd_utils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 63cd9316-0da7-4274-b2dc-3501e55e6617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.836 2 DEBUG nova.storage.rbd_utils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 63cd9316-0da7-4274-b2dc-3501e55e6617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.840 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.926 2 DEBUG nova.policy [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '25468893d71641a385711fd2982bb00b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '10fff81da7a54740a53a0771ce916329', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.931 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.932 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.932 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.932 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.958 2 DEBUG nova.storage.rbd_utils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 63cd9316-0da7-4274-b2dc-3501e55e6617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:21:13 np0005465987 nova_compute[230713]: 2025-10-02 12:21:13.961 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 63cd9316-0da7-4274-b2dc-3501e55e6617_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:14 np0005465987 nova_compute[230713]: 2025-10-02 12:21:14.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e272 e272: 3 total, 3 up, 3 in
Oct  2 08:21:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:14.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:14 np0005465987 nova_compute[230713]: 2025-10-02 12:21:14.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:14 np0005465987 nova_compute[230713]: 2025-10-02 12:21:14.700 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 63cd9316-0da7-4274-b2dc-3501e55e6617_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.739s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:14 np0005465987 nova_compute[230713]: 2025-10-02 12:21:14.824 2 DEBUG nova.storage.rbd_utils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] resizing rbd image 63cd9316-0da7-4274-b2dc-3501e55e6617_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:21:14 np0005465987 nova_compute[230713]: 2025-10-02 12:21:14.862 2 DEBUG nova.network.neutron [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Successfully created port: 442bf395-5165-4681-8962-72823f5bdbd0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:21:14 np0005465987 nova_compute[230713]: 2025-10-02 12:21:14.973 2 DEBUG nova.objects.instance [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'migration_context' on Instance uuid 63cd9316-0da7-4274-b2dc-3501e55e6617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:14 np0005465987 nova_compute[230713]: 2025-10-02 12:21:14.989 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:21:14 np0005465987 nova_compute[230713]: 2025-10-02 12:21:14.990 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Ensure instance console log exists: /var/lib/nova/instances/63cd9316-0da7-4274-b2dc-3501e55e6617/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:21:14 np0005465987 nova_compute[230713]: 2025-10-02 12:21:14.990 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:14 np0005465987 nova_compute[230713]: 2025-10-02 12:21:14.991 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:14 np0005465987 nova_compute[230713]: 2025-10-02 12:21:14.991 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:15.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:15 np0005465987 nova_compute[230713]: 2025-10-02 12:21:15.848 2 DEBUG nova.network.neutron [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Successfully updated port: 442bf395-5165-4681-8962-72823f5bdbd0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:21:15 np0005465987 nova_compute[230713]: 2025-10-02 12:21:15.864 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:21:15 np0005465987 nova_compute[230713]: 2025-10-02 12:21:15.864 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:21:15 np0005465987 nova_compute[230713]: 2025-10-02 12:21:15.864 2 DEBUG nova.network.neutron [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:21:15 np0005465987 nova_compute[230713]: 2025-10-02 12:21:15.946 2 DEBUG nova.compute.manager [req-41ed2d5f-9fff-49b8-96ec-6aca4e145695 req-cdc5ffa6-6a7c-451d-a90b-b02ca5eb0208 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Received event network-changed-442bf395-5165-4681-8962-72823f5bdbd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:21:15 np0005465987 nova_compute[230713]: 2025-10-02 12:21:15.947 2 DEBUG nova.compute.manager [req-41ed2d5f-9fff-49b8-96ec-6aca4e145695 req-cdc5ffa6-6a7c-451d-a90b-b02ca5eb0208 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Refreshing instance network info cache due to event network-changed-442bf395-5165-4681-8962-72823f5bdbd0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:21:15 np0005465987 nova_compute[230713]: 2025-10-02 12:21:15.947 2 DEBUG oslo_concurrency.lockutils [req-41ed2d5f-9fff-49b8-96ec-6aca4e145695 req-cdc5ffa6-6a7c-451d-a90b-b02ca5eb0208 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:21:16 np0005465987 nova_compute[230713]: 2025-10-02 12:21:16.002 2 DEBUG nova.network.neutron [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:21:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:16.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.460 2 DEBUG nova.network.neutron [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Updating instance_info_cache with network_info: [{"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.482 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.483 2 DEBUG nova.compute.manager [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Instance network_info: |[{"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.483 2 DEBUG oslo_concurrency.lockutils [req-41ed2d5f-9fff-49b8-96ec-6aca4e145695 req-cdc5ffa6-6a7c-451d-a90b-b02ca5eb0208 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.484 2 DEBUG nova.network.neutron [req-41ed2d5f-9fff-49b8-96ec-6aca4e145695 req-cdc5ffa6-6a7c-451d-a90b-b02ca5eb0208 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Refreshing network info cache for port 442bf395-5165-4681-8962-72823f5bdbd0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.487 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Start _get_guest_xml network_info=[{"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.493 2 WARNING nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.503 2 DEBUG nova.virt.libvirt.host [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.504 2 DEBUG nova.virt.libvirt.host [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.507 2 DEBUG nova.virt.libvirt.host [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.508 2 DEBUG nova.virt.libvirt.host [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.510 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.510 2 DEBUG nova.virt.hardware [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.510 2 DEBUG nova.virt.hardware [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.511 2 DEBUG nova.virt.hardware [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.511 2 DEBUG nova.virt.hardware [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.511 2 DEBUG nova.virt.hardware [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.512 2 DEBUG nova.virt.hardware [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.512 2 DEBUG nova.virt.hardware [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.512 2 DEBUG nova.virt.hardware [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.513 2 DEBUG nova.virt.hardware [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.513 2 DEBUG nova.virt.hardware [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.513 2 DEBUG nova.virt.hardware [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.516 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:17.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:21:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:21:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3603085431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.974 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.995 2 DEBUG nova.storage.rbd_utils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 63cd9316-0da7-4274-b2dc-3501e55e6617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:21:17 np0005465987 nova_compute[230713]: 2025-10-02 12:21:17.999 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:21:18 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2073604288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:21:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:18.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.439 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.441 2 DEBUG nova.virt.libvirt.vif [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:21:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1359566530',display_name='tempest-ServerActionsTestOtherB-server-1359566530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1359566530',id=88,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-576mlh6f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:21:13Z,user_data=None,user_id='25468893d71641a385711fd2982bb00b',uuid=63cd9316-0da7-4274-b2dc-3501e55e6617,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.442 2 DEBUG nova.network.os_vif_util [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.442 2 DEBUG nova.network.os_vif_util [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:e2:a1,bridge_name='br-int',has_traffic_filtering=True,id=442bf395-5165-4681-8962-72823f5bdbd0,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442bf395-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.444 2 DEBUG nova.objects.instance [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'pci_devices' on Instance uuid 63cd9316-0da7-4274-b2dc-3501e55e6617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.460 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  <uuid>63cd9316-0da7-4274-b2dc-3501e55e6617</uuid>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  <name>instance-00000058</name>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerActionsTestOtherB-server-1359566530</nova:name>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:21:17</nova:creationTime>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <nova:user uuid="25468893d71641a385711fd2982bb00b">tempest-ServerActionsTestOtherB-1686489955-project-member</nova:user>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <nova:project uuid="10fff81da7a54740a53a0771ce916329">tempest-ServerActionsTestOtherB-1686489955</nova:project>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <nova:port uuid="442bf395-5165-4681-8962-72823f5bdbd0">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <entry name="serial">63cd9316-0da7-4274-b2dc-3501e55e6617</entry>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <entry name="uuid">63cd9316-0da7-4274-b2dc-3501e55e6617</entry>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/63cd9316-0da7-4274-b2dc-3501e55e6617_disk">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/63cd9316-0da7-4274-b2dc-3501e55e6617_disk.config">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:72:e2:a1"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <target dev="tap442bf395-51"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/63cd9316-0da7-4274-b2dc-3501e55e6617/console.log" append="off"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:21:18 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:21:18 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:21:18 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:21:18 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.462 2 DEBUG nova.compute.manager [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Preparing to wait for external event network-vif-plugged-442bf395-5165-4681-8962-72823f5bdbd0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.462 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.462 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.463 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.463 2 DEBUG nova.virt.libvirt.vif [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:21:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1359566530',display_name='tempest-ServerActionsTestOtherB-server-1359566530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1359566530',id=88,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-576mlh6f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:21:13Z,user_data=None,user_id='25468893d71641a385711fd2982bb00b',uuid=63cd9316-0da7-4274-b2dc-3501e55e6617,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.464 2 DEBUG nova.network.os_vif_util [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.464 2 DEBUG nova.network.os_vif_util [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:e2:a1,bridge_name='br-int',has_traffic_filtering=True,id=442bf395-5165-4681-8962-72823f5bdbd0,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442bf395-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.464 2 DEBUG os_vif [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:e2:a1,bridge_name='br-int',has_traffic_filtering=True,id=442bf395-5165-4681-8962-72823f5bdbd0,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442bf395-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.465 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.466 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.469 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap442bf395-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.469 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap442bf395-51, col_values=(('external_ids', {'iface-id': '442bf395-5165-4681-8962-72823f5bdbd0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:e2:a1', 'vm-uuid': '63cd9316-0da7-4274-b2dc-3501e55e6617'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:18 np0005465987 NetworkManager[44910]: <info>  [1759407678.5027] manager: (tap442bf395-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.510 2 INFO os_vif [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:e2:a1,bridge_name='br-int',has_traffic_filtering=True,id=442bf395-5165-4681-8962-72823f5bdbd0,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442bf395-51')#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.571 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.571 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.572 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] No VIF found with MAC fa:16:3e:72:e2:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.572 2 INFO nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Using config drive#033[00m
Oct  2 08:21:18 np0005465987 nova_compute[230713]: 2025-10-02 12:21:18.592 2 DEBUG nova.storage.rbd_utils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 63cd9316-0da7-4274-b2dc-3501e55e6617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:21:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:19 np0005465987 nova_compute[230713]: 2025-10-02 12:21:19.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:19 np0005465987 nova_compute[230713]: 2025-10-02 12:21:19.527 2 DEBUG nova.network.neutron [req-41ed2d5f-9fff-49b8-96ec-6aca4e145695 req-cdc5ffa6-6a7c-451d-a90b-b02ca5eb0208 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Updated VIF entry in instance network info cache for port 442bf395-5165-4681-8962-72823f5bdbd0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:21:19 np0005465987 nova_compute[230713]: 2025-10-02 12:21:19.528 2 DEBUG nova.network.neutron [req-41ed2d5f-9fff-49b8-96ec-6aca4e145695 req-cdc5ffa6-6a7c-451d-a90b-b02ca5eb0208 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Updating instance_info_cache with network_info: [{"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:21:19 np0005465987 nova_compute[230713]: 2025-10-02 12:21:19.551 2 DEBUG oslo_concurrency.lockutils [req-41ed2d5f-9fff-49b8-96ec-6aca4e145695 req-cdc5ffa6-6a7c-451d-a90b-b02ca5eb0208 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:21:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:19.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:20 np0005465987 nova_compute[230713]: 2025-10-02 12:21:20.310 2 INFO nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Creating config drive at /var/lib/nova/instances/63cd9316-0da7-4274-b2dc-3501e55e6617/disk.config#033[00m
Oct  2 08:21:20 np0005465987 nova_compute[230713]: 2025-10-02 12:21:20.315 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/63cd9316-0da7-4274-b2dc-3501e55e6617/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_376cz3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:21:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:20.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:21:20 np0005465987 nova_compute[230713]: 2025-10-02 12:21:20.466 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/63cd9316-0da7-4274-b2dc-3501e55e6617/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_376cz3" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:20 np0005465987 nova_compute[230713]: 2025-10-02 12:21:20.498 2 DEBUG nova.storage.rbd_utils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] rbd image 63cd9316-0da7-4274-b2dc-3501e55e6617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:21:20 np0005465987 nova_compute[230713]: 2025-10-02 12:21:20.502 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/63cd9316-0da7-4274-b2dc-3501e55e6617/disk.config 63cd9316-0da7-4274-b2dc-3501e55e6617_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:20 np0005465987 nova_compute[230713]: 2025-10-02 12:21:20.886 2 DEBUG oslo_concurrency.processutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/63cd9316-0da7-4274-b2dc-3501e55e6617/disk.config 63cd9316-0da7-4274-b2dc-3501e55e6617_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:20 np0005465987 nova_compute[230713]: 2025-10-02 12:21:20.888 2 INFO nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Deleting local config drive /var/lib/nova/instances/63cd9316-0da7-4274-b2dc-3501e55e6617/disk.config because it was imported into RBD.#033[00m
Oct  2 08:21:20 np0005465987 kernel: tap442bf395-51: entered promiscuous mode
Oct  2 08:21:20 np0005465987 NetworkManager[44910]: <info>  [1759407680.9472] manager: (tap442bf395-51): new Tun device (/org/freedesktop/NetworkManager/Devices/155)
Oct  2 08:21:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:21:20Z|00288|binding|INFO|Claiming lport 442bf395-5165-4681-8962-72823f5bdbd0 for this chassis.
Oct  2 08:21:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:21:20Z|00289|binding|INFO|442bf395-5165-4681-8962-72823f5bdbd0: Claiming fa:16:3e:72:e2:a1 10.100.0.6
Oct  2 08:21:20 np0005465987 nova_compute[230713]: 2025-10-02 12:21:20.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:20 np0005465987 nova_compute[230713]: 2025-10-02 12:21:20.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:21:20Z|00290|binding|INFO|Setting lport 442bf395-5165-4681-8962-72823f5bdbd0 ovn-installed in OVS
Oct  2 08:21:20 np0005465987 nova_compute[230713]: 2025-10-02 12:21:20.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:20 np0005465987 systemd-udevd[263406]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:21:20 np0005465987 systemd-machined[188335]: New machine qemu-39-instance-00000058.
Oct  2 08:21:20 np0005465987 NetworkManager[44910]: <info>  [1759407680.9891] device (tap442bf395-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:21:20 np0005465987 NetworkManager[44910]: <info>  [1759407680.9900] device (tap442bf395-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:21:20 np0005465987 systemd[1]: Started Virtual Machine qemu-39-instance-00000058.
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:20.999 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:e2:a1 10.100.0.6'], port_security=['fa:16:3e:72:e2:a1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '63cd9316-0da7-4274-b2dc-3501e55e6617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3fd9677a-ce06-4783-a778-d114830d9fab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=442bf395-5165-4681-8962-72823f5bdbd0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:21:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:21:21Z|00291|binding|INFO|Setting lport 442bf395-5165-4681-8962-72823f5bdbd0 up in Southbound
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:21.001 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 442bf395-5165-4681-8962-72823f5bdbd0 in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 bound to our chassis#033[00m
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:21.004 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4035a600-4a5e-41ee-a619-d81e2c993b79#033[00m
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:21.018 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb9d35c-f9a8-4a1c-bc4d-fb3e70595c95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:21.059 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[f5de24fd-8731-4b24-b54f-1661025a099e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:21.063 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0b807c-6816-4cd4-90f2-b0f8931810e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:21.102 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9c71e20a-39df-4442-9c1e-558bb5b0451d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:21.120 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9c526c8e-dbce-4ef7-ab2f-ca9f12fa6812]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566740, 'reachable_time': 29404, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263421, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:21.135 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fc81c80e-c910-41e5-bdfc-d99a562d2d2a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4035a600-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566754, 'tstamp': 566754}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263422, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4035a600-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566758, 'tstamp': 566758}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263422, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:21.138 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:21 np0005465987 nova_compute[230713]: 2025-10-02 12:21:21.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:21.144 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4035a600-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:21.144 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:21.145 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4035a600-40, col_values=(('external_ids', {'iface-id': '1befa812-080f-4694-ba8b-9130fe81621d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:21.146 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:21:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:21.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:21:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:22.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:21:22 np0005465987 nova_compute[230713]: 2025-10-02 12:21:22.869 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407682.8692575, 63cd9316-0da7-4274-b2dc-3501e55e6617 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:21:22 np0005465987 nova_compute[230713]: 2025-10-02 12:21:22.870 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] VM Started (Lifecycle Event)#033[00m
Oct  2 08:21:23 np0005465987 nova_compute[230713]: 2025-10-02 12:21:23.094 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:23 np0005465987 nova_compute[230713]: 2025-10-02 12:21:23.098 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407682.8693767, 63cd9316-0da7-4274-b2dc-3501e55e6617 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:21:23 np0005465987 nova_compute[230713]: 2025-10-02 12:21:23.099 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:21:23 np0005465987 nova_compute[230713]: 2025-10-02 12:21:23.306 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:23 np0005465987 nova_compute[230713]: 2025-10-02 12:21:23.310 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:21:23 np0005465987 nova_compute[230713]: 2025-10-02 12:21:23.459 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:21:23 np0005465987 nova_compute[230713]: 2025-10-02 12:21:23.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:23.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:24 np0005465987 nova_compute[230713]: 2025-10-02 12:21:24.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:24.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:24 np0005465987 podman[263465]: 2025-10-02 12:21:24.86652856 +0000 UTC m=+0.085463743 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.486 2 DEBUG nova.compute.manager [req-848041e0-07d0-4412-adc4-fd5a25ce00c2 req-c0df714a-f388-4da6-9cc4-0d3cc927090e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Received event network-vif-plugged-442bf395-5165-4681-8962-72823f5bdbd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.486 2 DEBUG oslo_concurrency.lockutils [req-848041e0-07d0-4412-adc4-fd5a25ce00c2 req-c0df714a-f388-4da6-9cc4-0d3cc927090e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.487 2 DEBUG oslo_concurrency.lockutils [req-848041e0-07d0-4412-adc4-fd5a25ce00c2 req-c0df714a-f388-4da6-9cc4-0d3cc927090e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.487 2 DEBUG oslo_concurrency.lockutils [req-848041e0-07d0-4412-adc4-fd5a25ce00c2 req-c0df714a-f388-4da6-9cc4-0d3cc927090e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.488 2 DEBUG nova.compute.manager [req-848041e0-07d0-4412-adc4-fd5a25ce00c2 req-c0df714a-f388-4da6-9cc4-0d3cc927090e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Processing event network-vif-plugged-442bf395-5165-4681-8962-72823f5bdbd0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.488 2 DEBUG nova.compute.manager [req-848041e0-07d0-4412-adc4-fd5a25ce00c2 req-c0df714a-f388-4da6-9cc4-0d3cc927090e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Received event network-vif-plugged-442bf395-5165-4681-8962-72823f5bdbd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.488 2 DEBUG oslo_concurrency.lockutils [req-848041e0-07d0-4412-adc4-fd5a25ce00c2 req-c0df714a-f388-4da6-9cc4-0d3cc927090e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.489 2 DEBUG oslo_concurrency.lockutils [req-848041e0-07d0-4412-adc4-fd5a25ce00c2 req-c0df714a-f388-4da6-9cc4-0d3cc927090e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.489 2 DEBUG oslo_concurrency.lockutils [req-848041e0-07d0-4412-adc4-fd5a25ce00c2 req-c0df714a-f388-4da6-9cc4-0d3cc927090e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.489 2 DEBUG nova.compute.manager [req-848041e0-07d0-4412-adc4-fd5a25ce00c2 req-c0df714a-f388-4da6-9cc4-0d3cc927090e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] No waiting events found dispatching network-vif-plugged-442bf395-5165-4681-8962-72823f5bdbd0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.490 2 WARNING nova.compute.manager [req-848041e0-07d0-4412-adc4-fd5a25ce00c2 req-c0df714a-f388-4da6-9cc4-0d3cc927090e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Received unexpected event network-vif-plugged-442bf395-5165-4681-8962-72823f5bdbd0 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.491 2 DEBUG nova.compute.manager [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.496 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407685.495798, 63cd9316-0da7-4274-b2dc-3501e55e6617 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.496 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.498 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.502 2 INFO nova.virt.libvirt.driver [-] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Instance spawned successfully.#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.503 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.518 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.521 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.533 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.534 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.534 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.535 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.535 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.535 2 DEBUG nova.virt.libvirt.driver [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.541 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.594 2 INFO nova.compute.manager [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Took 11.85 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.594 2 DEBUG nova.compute.manager [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.655 2 INFO nova.compute.manager [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Took 12.83 seconds to build instance.#033[00m
Oct  2 08:21:25 np0005465987 nova_compute[230713]: 2025-10-02 12:21:25.672 2 DEBUG oslo_concurrency.lockutils [None req-4178c943-9589-470d-a67c-c448baab4109 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:25.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:26.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:26 np0005465987 nova_compute[230713]: 2025-10-02 12:21:26.652 2 INFO nova.compute.manager [None req-e02a740e-7f06-4a25-992a-829d324ddca8 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Get console output#033[00m
Oct  2 08:21:26 np0005465987 nova_compute[230713]: 2025-10-02 12:21:26.657 2 INFO oslo.privsep.daemon [None req-e02a740e-7f06-4a25-992a-829d324ddca8 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp3oru8g6a/privsep.sock']#033[00m
Oct  2 08:21:27 np0005465987 nova_compute[230713]: 2025-10-02 12:21:27.560 2 INFO oslo.privsep.daemon [None req-e02a740e-7f06-4a25-992a-829d324ddca8 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 08:21:27 np0005465987 nova_compute[230713]: 2025-10-02 12:21:27.442 14392 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 08:21:27 np0005465987 nova_compute[230713]: 2025-10-02 12:21:27.446 14392 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 08:21:27 np0005465987 nova_compute[230713]: 2025-10-02 12:21:27.448 14392 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  2 08:21:27 np0005465987 nova_compute[230713]: 2025-10-02 12:21:27.448 14392 INFO oslo.privsep.daemon [-] privsep daemon running as pid 14392#033[00m
Oct  2 08:21:27 np0005465987 nova_compute[230713]: 2025-10-02 12:21:27.708 14392 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:21:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:27.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:27.948 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:27.949 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:27.949 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:28.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:28 np0005465987 nova_compute[230713]: 2025-10-02 12:21:28.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:29 np0005465987 nova_compute[230713]: 2025-10-02 12:21:29.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:21:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:29.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:21:30 np0005465987 nova_compute[230713]: 2025-10-02 12:21:30.386 2 DEBUG nova.compute.manager [None req-d20a2649-7e40-433f-b37b-c3ca29a2dd61 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196#033[00m
Oct  2 08:21:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:30.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:31.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:21:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:32.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:33 np0005465987 nova_compute[230713]: 2025-10-02 12:21:33.168 2 DEBUG oslo_concurrency.lockutils [None req-8eb440c5-b663-4981-9651-fb605ad7e265 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:33 np0005465987 nova_compute[230713]: 2025-10-02 12:21:33.169 2 DEBUG oslo_concurrency.lockutils [None req-8eb440c5-b663-4981-9651-fb605ad7e265 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:33 np0005465987 nova_compute[230713]: 2025-10-02 12:21:33.169 2 DEBUG nova.compute.manager [None req-8eb440c5-b663-4981-9651-fb605ad7e265 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:33 np0005465987 nova_compute[230713]: 2025-10-02 12:21:33.173 2 DEBUG nova.compute.manager [None req-8eb440c5-b663-4981-9651-fb605ad7e265 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 08:21:33 np0005465987 nova_compute[230713]: 2025-10-02 12:21:33.174 2 DEBUG nova.objects.instance [None req-8eb440c5-b663-4981-9651-fb605ad7e265 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'flavor' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:33 np0005465987 nova_compute[230713]: 2025-10-02 12:21:33.210 2 DEBUG nova.virt.libvirt.driver [None req-8eb440c5-b663-4981-9651-fb605ad7e265 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:21:33 np0005465987 nova_compute[230713]: 2025-10-02 12:21:33.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:33.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:33 np0005465987 podman[263494]: 2025-10-02 12:21:33.853063989 +0000 UTC m=+0.058866202 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:21:33 np0005465987 podman[263492]: 2025-10-02 12:21:33.864597907 +0000 UTC m=+0.079699255 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct  2 08:21:33 np0005465987 podman[263493]: 2025-10-02 12:21:33.88176855 +0000 UTC m=+0.091861130 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:21:34 np0005465987 nova_compute[230713]: 2025-10-02 12:21:34.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:34.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:34 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct  2 08:21:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:35.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:35 np0005465987 kernel: tapce91d40e-1b (unregistering): left promiscuous mode
Oct  2 08:21:35 np0005465987 NetworkManager[44910]: <info>  [1759407695.8671] device (tapce91d40e-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:21:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:21:35Z|00292|binding|INFO|Releasing lport ce91d40e-1bd9-4703-95b5-a23942c1592e from this chassis (sb_readonly=0)
Oct  2 08:21:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:21:35Z|00293|binding|INFO|Setting lport ce91d40e-1bd9-4703-95b5-a23942c1592e down in Southbound
Oct  2 08:21:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:21:35Z|00294|binding|INFO|Removing iface tapce91d40e-1b ovn-installed in OVS
Oct  2 08:21:35 np0005465987 nova_compute[230713]: 2025-10-02 12:21:35.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:35 np0005465987 nova_compute[230713]: 2025-10-02 12:21:35.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:35 np0005465987 nova_compute[230713]: 2025-10-02 12:21:35.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:35.898 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:4b:71 10.100.0.4'], port_security=['fa:16:3e:18:4b:71 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ef69bc17-6b51-491e-82e9-c4106abb8d74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '4', 'neutron:security_group_ids': '32af0a94-4565-470d-9918-1bc97e347f8f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ce91d40e-1bd9-4703-95b5-a23942c1592e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:21:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:35.899 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ce91d40e-1bd9-4703-95b5-a23942c1592e in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 unbound from our chassis#033[00m
Oct  2 08:21:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:35.901 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4035a600-4a5e-41ee-a619-d81e2c993b79#033[00m
Oct  2 08:21:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:35.916 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a562cf9b-fb5d-49d6-9e6c-71517b90073b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:35 np0005465987 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000052.scope: Deactivated successfully.
Oct  2 08:21:35 np0005465987 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000052.scope: Consumed 17.196s CPU time.
Oct  2 08:21:35 np0005465987 systemd-machined[188335]: Machine qemu-38-instance-00000052 terminated.
Oct  2 08:21:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:35.951 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[2943f9e5-32f9-48df-8475-ab2fca851cff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:35.954 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9f73285c-9fdb-4e36-a4b0-baee0a644d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:35.981 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[859ef487-ad11-4798-aee7-409644493e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:35.998 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[34827bcf-b757-4033-8c1d-dc168012701c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4035a600-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:fb:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566740, 'reachable_time': 29404, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263567, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:36.016 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[67310b23-3519-488a-8e45-458ce6e489d6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4035a600-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566754, 'tstamp': 566754}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263568, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4035a600-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566758, 'tstamp': 566758}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263568, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:21:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:36.017 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:36 np0005465987 nova_compute[230713]: 2025-10-02 12:21:36.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:36 np0005465987 nova_compute[230713]: 2025-10-02 12:21:36.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:36.023 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4035a600-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:36.023 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:21:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:36.023 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4035a600-40, col_values=(('external_ids', {'iface-id': '1befa812-080f-4694-ba8b-9130fe81621d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:36.024 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:21:36 np0005465987 nova_compute[230713]: 2025-10-02 12:21:36.233 2 INFO nova.virt.libvirt.driver [None req-8eb440c5-b663-4981-9651-fb605ad7e265 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:21:36 np0005465987 nova_compute[230713]: 2025-10-02 12:21:36.238 2 INFO nova.virt.libvirt.driver [-] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance destroyed successfully.#033[00m
Oct  2 08:21:36 np0005465987 nova_compute[230713]: 2025-10-02 12:21:36.238 2 DEBUG nova.objects.instance [None req-8eb440c5-b663-4981-9651-fb605ad7e265 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'numa_topology' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:36 np0005465987 nova_compute[230713]: 2025-10-02 12:21:36.258 2 DEBUG nova.compute.manager [None req-8eb440c5-b663-4981-9651-fb605ad7e265 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:36 np0005465987 nova_compute[230713]: 2025-10-02 12:21:36.353 2 DEBUG oslo_concurrency.lockutils [None req-8eb440c5-b663-4981-9651-fb605ad7e265 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:36.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:36 np0005465987 nova_compute[230713]: 2025-10-02 12:21:36.548 2 DEBUG nova.compute.manager [req-634dd247-4b9b-4dac-a3e0-5c305b01ee06 req-93418db9-7979-428f-b274-baadc1f64c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-vif-unplugged-ce91d40e-1bd9-4703-95b5-a23942c1592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:21:36 np0005465987 nova_compute[230713]: 2025-10-02 12:21:36.548 2 DEBUG oslo_concurrency.lockutils [req-634dd247-4b9b-4dac-a3e0-5c305b01ee06 req-93418db9-7979-428f-b274-baadc1f64c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:36 np0005465987 nova_compute[230713]: 2025-10-02 12:21:36.549 2 DEBUG oslo_concurrency.lockutils [req-634dd247-4b9b-4dac-a3e0-5c305b01ee06 req-93418db9-7979-428f-b274-baadc1f64c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:36 np0005465987 nova_compute[230713]: 2025-10-02 12:21:36.549 2 DEBUG oslo_concurrency.lockutils [req-634dd247-4b9b-4dac-a3e0-5c305b01ee06 req-93418db9-7979-428f-b274-baadc1f64c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:36 np0005465987 nova_compute[230713]: 2025-10-02 12:21:36.549 2 DEBUG nova.compute.manager [req-634dd247-4b9b-4dac-a3e0-5c305b01ee06 req-93418db9-7979-428f-b274-baadc1f64c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] No waiting events found dispatching network-vif-unplugged-ce91d40e-1bd9-4703-95b5-a23942c1592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:21:36 np0005465987 nova_compute[230713]: 2025-10-02 12:21:36.550 2 WARNING nova.compute.manager [req-634dd247-4b9b-4dac-a3e0-5c305b01ee06 req-93418db9-7979-428f-b274-baadc1f64c54 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received unexpected event network-vif-unplugged-ce91d40e-1bd9-4703-95b5-a23942c1592e for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:21:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:37.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:38 np0005465987 nova_compute[230713]: 2025-10-02 12:21:38.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:38.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:38 np0005465987 nova_compute[230713]: 2025-10-02 12:21:38.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:38 np0005465987 nova_compute[230713]: 2025-10-02 12:21:38.693 2 DEBUG nova.compute.manager [req-b0a7dd47-4288-4033-b6bf-7487b476dd1c req-aa4d22f7-135c-4948-a060-bebb2536b338 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:21:38 np0005465987 nova_compute[230713]: 2025-10-02 12:21:38.693 2 DEBUG oslo_concurrency.lockutils [req-b0a7dd47-4288-4033-b6bf-7487b476dd1c req-aa4d22f7-135c-4948-a060-bebb2536b338 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:38 np0005465987 nova_compute[230713]: 2025-10-02 12:21:38.694 2 DEBUG oslo_concurrency.lockutils [req-b0a7dd47-4288-4033-b6bf-7487b476dd1c req-aa4d22f7-135c-4948-a060-bebb2536b338 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:38 np0005465987 nova_compute[230713]: 2025-10-02 12:21:38.694 2 DEBUG oslo_concurrency.lockutils [req-b0a7dd47-4288-4033-b6bf-7487b476dd1c req-aa4d22f7-135c-4948-a060-bebb2536b338 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:38 np0005465987 nova_compute[230713]: 2025-10-02 12:21:38.694 2 DEBUG nova.compute.manager [req-b0a7dd47-4288-4033-b6bf-7487b476dd1c req-aa4d22f7-135c-4948-a060-bebb2536b338 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] No waiting events found dispatching network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:21:38 np0005465987 nova_compute[230713]: 2025-10-02 12:21:38.695 2 WARNING nova.compute.manager [req-b0a7dd47-4288-4033-b6bf-7487b476dd1c req-aa4d22f7-135c-4948-a060-bebb2536b338 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received unexpected event network-vif-plugged-ce91d40e-1bd9-4703-95b5-a23942c1592e for instance with vm_state stopped and task_state resize_prep.#033[00m
Oct  2 08:21:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:39 np0005465987 nova_compute[230713]: 2025-10-02 12:21:39.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:39.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:21:39 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2127937119' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:21:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:21:39 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2127937119' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:21:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:21:40Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:e2:a1 10.100.0.6
Oct  2 08:21:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:21:40Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:e2:a1 10.100.0.6
Oct  2 08:21:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:40.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:40 np0005465987 nova_compute[230713]: 2025-10-02 12:21:40.824 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:21:40 np0005465987 nova_compute[230713]: 2025-10-02 12:21:40.825 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:21:40 np0005465987 nova_compute[230713]: 2025-10-02 12:21:40.825 2 DEBUG nova.network.neutron [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:21:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:41.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:42.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e273 e273: 3 total, 3 up, 3 in
Oct  2 08:21:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:21:43 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2453946287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:21:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:21:43 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2453946287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:21:43 np0005465987 nova_compute[230713]: 2025-10-02 12:21:43.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:43.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.053 2 DEBUG nova.network.neutron [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating instance_info_cache with network_info: [{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.080 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.201 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.201 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Creating file /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/91f91b9532954d47af2f95bd11c349a8.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.202 2 DEBUG oslo_concurrency.processutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/91f91b9532954d47af2f95bd11c349a8.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:44.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.613 2 DEBUG oslo_concurrency.processutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/91f91b9532954d47af2f95bd11c349a8.tmp" returned: 1 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.614 2 DEBUG oslo_concurrency.processutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74/91f91b9532954d47af2f95bd11c349a8.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.614 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Creating directory /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.614 2 DEBUG oslo_concurrency.processutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.818 2 DEBUG oslo_concurrency.processutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/ef69bc17-6b51-491e-82e9-c4106abb8d74" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.821 2 INFO nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance already shutdown.#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.826 2 INFO nova.virt.libvirt.driver [-] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Instance destroyed successfully.#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.827 2 DEBUG nova.virt.libvirt.vif [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-457871717',display_name='tempest-ServerActionsTestOtherB-server-457871717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-457871717',id=82,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:19:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-30ee9i0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:21:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=ef69bc17-6b51-491e-82e9-c4106abb8d74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:18:4b:71"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.827 2 DEBUG nova.network.os_vif_util [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2080271662-network", "vif_mac": "fa:16:3e:18:4b:71"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.828 2 DEBUG nova.network.os_vif_util [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.828 2 DEBUG os_vif [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce91d40e-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.836 2 INFO os_vif [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b')#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.841 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:21:44 np0005465987 nova_compute[230713]: 2025-10-02 12:21:44.842 2 DEBUG nova.virt.libvirt.driver [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:21:45 np0005465987 nova_compute[230713]: 2025-10-02 12:21:45.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:45 np0005465987 nova_compute[230713]: 2025-10-02 12:21:45.081 2 DEBUG neutronclient.v2_0.client [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ce91d40e-1bd9-4703-95b5-a23942c1592e for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:21:45 np0005465987 nova_compute[230713]: 2025-10-02 12:21:45.174 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:45 np0005465987 nova_compute[230713]: 2025-10-02 12:21:45.175 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:45 np0005465987 nova_compute[230713]: 2025-10-02 12:21:45.175 2 DEBUG oslo_concurrency.lockutils [None req-9a6be319-1e7d-4d20-8ae3-89b3059c9e94 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e274 e274: 3 total, 3 up, 3 in
Oct  2 08:21:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:45.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:46.077 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:21:46 np0005465987 nova_compute[230713]: 2025-10-02 12:21:46.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:46.079 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:21:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:46.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:46 np0005465987 nova_compute[230713]: 2025-10-02 12:21:46.503 2 DEBUG nova.compute.manager [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Received event network-changed-ce91d40e-1bd9-4703-95b5-a23942c1592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:21:46 np0005465987 nova_compute[230713]: 2025-10-02 12:21:46.503 2 DEBUG nova.compute.manager [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Refreshing instance network info cache due to event network-changed-ce91d40e-1bd9-4703-95b5-a23942c1592e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:21:46 np0005465987 nova_compute[230713]: 2025-10-02 12:21:46.504 2 DEBUG oslo_concurrency.lockutils [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:21:46 np0005465987 nova_compute[230713]: 2025-10-02 12:21:46.504 2 DEBUG oslo_concurrency.lockutils [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:21:46 np0005465987 nova_compute[230713]: 2025-10-02 12:21:46.504 2 DEBUG nova.network.neutron [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Refreshing network info cache for port ce91d40e-1bd9-4703-95b5-a23942c1592e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:21:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e275 e275: 3 total, 3 up, 3 in
Oct  2 08:21:47 np0005465987 nova_compute[230713]: 2025-10-02 12:21:47.551 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:47 np0005465987 nova_compute[230713]: 2025-10-02 12:21:47.551 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:47.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:47 np0005465987 nova_compute[230713]: 2025-10-02 12:21:47.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:47 np0005465987 nova_compute[230713]: 2025-10-02 12:21:47.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:21:47 np0005465987 nova_compute[230713]: 2025-10-02 12:21:47.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:21:48 np0005465987 nova_compute[230713]: 2025-10-02 12:21:48.189 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:21:48 np0005465987 nova_compute[230713]: 2025-10-02 12:21:48.189 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:21:48 np0005465987 nova_compute[230713]: 2025-10-02 12:21:48.189 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:21:48 np0005465987 nova_compute[230713]: 2025-10-02 12:21:48.189 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 63cd9316-0da7-4274-b2dc-3501e55e6617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:48.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:48 np0005465987 nova_compute[230713]: 2025-10-02 12:21:48.619 2 DEBUG nova.network.neutron [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updated VIF entry in instance network info cache for port ce91d40e-1bd9-4703-95b5-a23942c1592e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:21:48 np0005465987 nova_compute[230713]: 2025-10-02 12:21:48.621 2 DEBUG nova.network.neutron [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating instance_info_cache with network_info: [{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:21:48 np0005465987 nova_compute[230713]: 2025-10-02 12:21:48.637 2 DEBUG oslo_concurrency.lockutils [req-3ab6d3ca-ddad-458d-994c-4e17d01b5ddc req-6ca5cc58-69fb-4b7c-a6a7-318cb9fb6529 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:21:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:49 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:21:49 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:21:49 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:21:49 np0005465987 nova_compute[230713]: 2025-10-02 12:21:49.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:49.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:49 np0005465987 nova_compute[230713]: 2025-10-02 12:21:49.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e276 e276: 3 total, 3 up, 3 in
Oct  2 08:21:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:50.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:21:51 np0005465987 nova_compute[230713]: 2025-10-02 12:21:51.113 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407696.112007, ef69bc17-6b51-491e-82e9-c4106abb8d74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:21:51 np0005465987 nova_compute[230713]: 2025-10-02 12:21:51.115 2 INFO nova.compute.manager [-] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:21:51 np0005465987 nova_compute[230713]: 2025-10-02 12:21:51.136 2 DEBUG nova.compute.manager [None req-d94603d2-a624-486a-a07a-15f1d96d847d - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:51 np0005465987 nova_compute[230713]: 2025-10-02 12:21:51.139 2 DEBUG nova.compute.manager [None req-d94603d2-a624-486a-a07a-15f1d96d847d - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: resize_finish, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:21:51 np0005465987 nova_compute[230713]: 2025-10-02 12:21:51.158 2 INFO nova.compute.manager [None req-d94603d2-a624-486a-a07a-15f1d96d847d - - - - - -] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct  2 08:21:51 np0005465987 nova_compute[230713]: 2025-10-02 12:21:51.263 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Updating instance_info_cache with network_info: [{"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:21:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:21:51Z|00295|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:21:51 np0005465987 nova_compute[230713]: 2025-10-02 12:21:51.301 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:21:51 np0005465987 nova_compute[230713]: 2025-10-02 12:21:51.302 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:21:51 np0005465987 nova_compute[230713]: 2025-10-02 12:21:51.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:51.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:51 np0005465987 nova_compute[230713]: 2025-10-02 12:21:51.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:21:52.081 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:52.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:21:52 np0005465987 nova_compute[230713]: 2025-10-02 12:21:52.739 2 DEBUG oslo_concurrency.lockutils [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "ef69bc17-6b51-491e-82e9-c4106abb8d74" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:52 np0005465987 nova_compute[230713]: 2025-10-02 12:21:52.740 2 DEBUG oslo_concurrency.lockutils [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:52 np0005465987 nova_compute[230713]: 2025-10-02 12:21:52.740 2 DEBUG nova.compute.manager [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Going to confirm migration 11 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Oct  2 08:21:53 np0005465987 nova_compute[230713]: 2025-10-02 12:21:53.319 2 DEBUG neutronclient.v2_0.client [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ce91d40e-1bd9-4703-95b5-a23942c1592e for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:21:53 np0005465987 nova_compute[230713]: 2025-10-02 12:21:53.320 2 DEBUG oslo_concurrency.lockutils [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:21:53 np0005465987 nova_compute[230713]: 2025-10-02 12:21:53.321 2 DEBUG oslo_concurrency.lockutils [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquired lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:21:53 np0005465987 nova_compute[230713]: 2025-10-02 12:21:53.321 2 DEBUG nova.network.neutron [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:21:53 np0005465987 nova_compute[230713]: 2025-10-02 12:21:53.322 2 DEBUG nova.objects.instance [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'info_cache' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:53.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:21:54 np0005465987 nova_compute[230713]: 2025-10-02 12:21:54.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e277 e277: 3 total, 3 up, 3 in
Oct  2 08:21:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:54.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:21:54 np0005465987 nova_compute[230713]: 2025-10-02 12:21:54.709 2 DEBUG nova.network.neutron [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Updating instance_info_cache with network_info: [{"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:21:54 np0005465987 nova_compute[230713]: 2025-10-02 12:21:54.725 2 DEBUG oslo_concurrency.lockutils [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Releasing lock "refresh_cache-ef69bc17-6b51-491e-82e9-c4106abb8d74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:21:54 np0005465987 nova_compute[230713]: 2025-10-02 12:21:54.725 2 DEBUG nova.objects.instance [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'migration_context' on Instance uuid ef69bc17-6b51-491e-82e9-c4106abb8d74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:55 np0005465987 nova_compute[230713]: 2025-10-02 12:21:55.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:55 np0005465987 nova_compute[230713]: 2025-10-02 12:21:55.245 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:55 np0005465987 nova_compute[230713]: 2025-10-02 12:21:55.246 2 DEBUG nova.storage.rbd_utils [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] removing snapshot(nova-resize) on rbd image(ef69bc17-6b51-491e-82e9-c4106abb8d74_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:21:55 np0005465987 ovn_controller[129172]: 2025-10-02T12:21:55Z|00296|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:21:55 np0005465987 nova_compute[230713]: 2025-10-02 12:21:55.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:55.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:55 np0005465987 podman[263754]: 2025-10-02 12:21:55.842048402 +0000 UTC m=+0.060878867 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:21:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:56.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:21:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e278 e278: 3 total, 3 up, 3 in
Oct  2 08:21:56 np0005465987 nova_compute[230713]: 2025-10-02 12:21:56.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:56 np0005465987 nova_compute[230713]: 2025-10-02 12:21:56.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:56 np0005465987 nova_compute[230713]: 2025-10-02 12:21:56.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:56 np0005465987 nova_compute[230713]: 2025-10-02 12:21:56.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:21:56 np0005465987 nova_compute[230713]: 2025-10-02 12:21:56.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.002 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.003 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.003 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.212 2 DEBUG nova.virt.libvirt.vif [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-457871717',display_name='tempest-ServerActionsTestOtherB-server-457871717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-457871717',id=82,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGD2jbBFmRg2ZrnheVnZyLwDISk/dFTNtp10+sWyF/q+rC4Q86cvBQSRgacxSPIqXVpmiVTqI66cLDPhvjcnRFXyQqHRS/RWGvUZk+wm1wfft8CveiGko+Vh4vSox2iOrA==',key_name='tempest-keypair-1336245373',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:21:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-30ee9i0h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:21:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='25468893d71641a385711fd2982bb00b',uuid=ef69bc17-6b51-491e-82e9-c4106abb8d74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.213 2 DEBUG nova.network.os_vif_util [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "address": "fa:16:3e:18:4b:71", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce91d40e-1b", "ovs_interfaceid": "ce91d40e-1bd9-4703-95b5-a23942c1592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.214 2 DEBUG nova.network.os_vif_util [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.215 2 DEBUG os_vif [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce91d40e-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.220 2 INFO os_vif [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:4b:71,bridge_name='br-int',has_traffic_filtering=True,id=ce91d40e-1bd9-4703-95b5-a23942c1592e,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce91d40e-1b')#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.220 2 DEBUG oslo_concurrency.lockutils [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.221 2 DEBUG oslo_concurrency.lockutils [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.370 2 DEBUG oslo_concurrency.processutils [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:21:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/472983939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.439 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.517 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.518 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.665 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.667 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4318MB free_disk=20.863990783691406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.667 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:57.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:21:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2586325224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.822 2 DEBUG oslo_concurrency.processutils [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.826 2 DEBUG nova.compute.provider_tree [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.847 2 DEBUG nova.scheduler.client.report [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.943 2 DEBUG oslo_concurrency.lockutils [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.944 2 DEBUG nova.compute.manager [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: ef69bc17-6b51-491e-82e9-c4106abb8d74] Resized/migrated instance is powered off. Setting vm_state to 'stopped'. _confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4805#033[00m
Oct  2 08:21:57 np0005465987 nova_compute[230713]: 2025-10-02 12:21:57.946 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:58 np0005465987 nova_compute[230713]: 2025-10-02 12:21:58.059 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 63cd9316-0da7-4274-b2dc-3501e55e6617 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:21:58 np0005465987 nova_compute[230713]: 2025-10-02 12:21:58.074 2 INFO nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance e9f22ec2-887a-497c-978d-a9e444a3e4bd has allocations against this compute host but is not found in the database.#033[00m
Oct  2 08:21:58 np0005465987 nova_compute[230713]: 2025-10-02 12:21:58.074 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:21:58 np0005465987 nova_compute[230713]: 2025-10-02 12:21:58.074 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:21:58 np0005465987 nova_compute[230713]: 2025-10-02 12:21:58.091 2 INFO nova.scheduler.client.report [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Deleted allocation for migration e9f22ec2-887a-497c-978d-a9e444a3e4bd#033[00m
Oct  2 08:21:58 np0005465987 nova_compute[230713]: 2025-10-02 12:21:58.120 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:58 np0005465987 nova_compute[230713]: 2025-10-02 12:21:58.158 2 DEBUG oslo_concurrency.lockutils [None req-e80aad50-114b-4dba-bbfb-d5fce0ecb7b6 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "ef69bc17-6b51-491e-82e9-c4106abb8d74" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:58.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:21:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3916135186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:21:58 np0005465987 nova_compute[230713]: 2025-10-02 12:21:58.554 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:58 np0005465987 nova_compute[230713]: 2025-10-02 12:21:58.561 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:21:58 np0005465987 nova_compute[230713]: 2025-10-02 12:21:58.581 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:21:58 np0005465987 nova_compute[230713]: 2025-10-02 12:21:58.606 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:21:58 np0005465987 nova_compute[230713]: 2025-10-02 12:21:58.606 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:59 np0005465987 nova_compute[230713]: 2025-10-02 12:21:59.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e279 e279: 3 total, 3 up, 3 in
Oct  2 08:21:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:21:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:21:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:59.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:00 np0005465987 nova_compute[230713]: 2025-10-02 12:22:00.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:22:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:22:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:00.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:01.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:02.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:03.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:04 np0005465987 nova_compute[230713]: 2025-10-02 12:22:04.106 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Acquiring lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:04 np0005465987 nova_compute[230713]: 2025-10-02 12:22:04.107 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:04 np0005465987 nova_compute[230713]: 2025-10-02 12:22:04.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:04 np0005465987 nova_compute[230713]: 2025-10-02 12:22:04.187 2 DEBUG nova.compute.manager [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:22:04 np0005465987 nova_compute[230713]: 2025-10-02 12:22:04.348 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:04 np0005465987 nova_compute[230713]: 2025-10-02 12:22:04.348 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:04 np0005465987 nova_compute[230713]: 2025-10-02 12:22:04.359 2 DEBUG nova.virt.hardware [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:22:04 np0005465987 nova_compute[230713]: 2025-10-02 12:22:04.359 2 INFO nova.compute.claims [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:22:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:04.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e280 e280: 3 total, 3 up, 3 in
Oct  2 08:22:04 np0005465987 nova_compute[230713]: 2025-10-02 12:22:04.601 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:04 np0005465987 podman[263913]: 2025-10-02 12:22:04.848794547 +0000 UTC m=+0.057002821 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:22:04 np0005465987 podman[263914]: 2025-10-02 12:22:04.851726608 +0000 UTC m=+0.055814098 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:22:04 np0005465987 podman[263912]: 2025-10-02 12:22:04.917341574 +0000 UTC m=+0.127281055 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:22:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1100569601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.017 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.024 2 DEBUG nova.compute.provider_tree [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.051 2 DEBUG nova.scheduler.client.report [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.080 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.080 2 DEBUG nova.compute.manager [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.128 2 DEBUG nova.compute.manager [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.129 2 DEBUG nova.network.neutron [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.149 2 INFO nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.165 2 DEBUG nova.compute.manager [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.259 2 DEBUG nova.compute.manager [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.260 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.261 2 INFO nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Creating image(s)#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.285 2 DEBUG nova.storage.rbd_utils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] rbd image a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.319 2 DEBUG nova.storage.rbd_utils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] rbd image a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.344 2 DEBUG nova.storage.rbd_utils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] rbd image a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.349 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.377 2 DEBUG nova.policy [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4146a31af09c4e6a8aee251f2fec4f98', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c740a14d1c5c45d1a0959b0e24ac460b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.414 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
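The two processutils lines above show Nova probing the cached base image with `qemu-img info --output=json` under a prlimit wrapper. A minimal sketch of consuming that JSON — the payload below is hypothetical (field names follow qemu-img's JSON schema; values are illustrative, not taken from this run):

```python
import json

# Hypothetical qemu-img info --output=json payload for the cached base image;
# keys match qemu-img's JSON output, values are made up for illustration.
sample = '''{
  "virtual-size": 117440512,
  "filename": "/var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968",
  "format": "raw",
  "actual-size": 21430272
}'''

info = json.loads(sample)
virtual_gb = info["virtual-size"] / 1024 ** 3  # what Nova compares against flavor root_gb
```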
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.415 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.416 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.417 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.450 2 DEBUG nova.storage.rbd_utils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] rbd image a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.454 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:05.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
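The radosgw `beast:` access line above (repeated throughout this capture for haproxy health checks) has a fixed layout. A sketch of extracting its fields; the regex is an assumption inferred from this one sample, not radosgw's documented log format:

```python
import re

# Regex inferred from the beast access line above; treat the layout as an
# assumption, not a documented contract.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<size>\d+)'
)

line = ('beast: 0x7f1025df76f0: 192.168.122.102 - anonymous '
        '[02/Oct/2025:12:22:05.790 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')
m = BEAST_RE.search(line)
```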
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.917 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:05 np0005465987 nova_compute[230713]: 2025-10-02 12:22:05.988 2 DEBUG nova.storage.rbd_utils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] resizing rbd image a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
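The import/resize pair above shows Nova pushing the flat base image into the `vms` RBD pool, then growing the clone to the flavor's 1 GiB root disk. A sketch that only builds the argv list logged by processutils (helper names are hypothetical; nothing is executed):

```python
def rbd_import_cmd(pool, base_path, image_name,
                   ceph_user="openstack", conf="/etc/ceph/ceph.conf"):
    # Mirrors the argv shown in the processutils line above.
    return ["rbd", "import", "--pool", pool, base_path, image_name,
            "--image-format=2", "--id", ceph_user, "--conf", conf]

cmd = rbd_import_cmd(
    "vms",
    "/var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968",
    "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk")

# The subsequent resize (nova.storage.rbd_utils.resize) goes through librbd's
# Image.resize() rather than the CLI; 1073741824 == 1 GiB, the flavor root_gb.
new_size_bytes = 1 * 1024 ** 3
```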
Oct  2 08:22:06 np0005465987 nova_compute[230713]: 2025-10-02 12:22:06.112 2 DEBUG nova.objects.instance [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lazy-loading 'migration_context' on Instance uuid a6738e6f-93ec-43f8-bb0b-f6a930d3ab63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:22:06 np0005465987 nova_compute[230713]: 2025-10-02 12:22:06.129 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:22:06 np0005465987 nova_compute[230713]: 2025-10-02 12:22:06.130 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Ensure instance console log exists: /var/lib/nova/instances/a6738e6f-93ec-43f8-bb0b-f6a930d3ab63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:22:06 np0005465987 nova_compute[230713]: 2025-10-02 12:22:06.130 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:06 np0005465987 nova_compute[230713]: 2025-10-02 12:22:06.130 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:06 np0005465987 nova_compute[230713]: 2025-10-02 12:22:06.131 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:06 np0005465987 nova_compute[230713]: 2025-10-02 12:22:06.279 2 DEBUG nova.network.neutron [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Successfully created port: 042b4629-5e7e-473e-b7e7-bc0b6a831261 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:22:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:06.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:07 np0005465987 nova_compute[230713]: 2025-10-02 12:22:07.600 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:07.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:08 np0005465987 nova_compute[230713]: 2025-10-02 12:22:08.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:08.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:08 np0005465987 nova_compute[230713]: 2025-10-02 12:22:08.758 2 DEBUG nova.network.neutron [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Successfully updated port: 042b4629-5e7e-473e-b7e7-bc0b6a831261 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:22:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:08 np0005465987 nova_compute[230713]: 2025-10-02 12:22:08.770 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Acquiring lock "refresh_cache-a6738e6f-93ec-43f8-bb0b-f6a930d3ab63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:08 np0005465987 nova_compute[230713]: 2025-10-02 12:22:08.770 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Acquired lock "refresh_cache-a6738e6f-93ec-43f8-bb0b-f6a930d3ab63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:08 np0005465987 nova_compute[230713]: 2025-10-02 12:22:08.771 2 DEBUG nova.network.neutron [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:22:08 np0005465987 nova_compute[230713]: 2025-10-02 12:22:08.890 2 DEBUG nova.compute.manager [req-f1d6cdc3-3f90-489e-a9fc-6779777b79f1 req-34e663c6-7fbb-4045-a413-be97e201ab3c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Received event network-changed-042b4629-5e7e-473e-b7e7-bc0b6a831261 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:08 np0005465987 nova_compute[230713]: 2025-10-02 12:22:08.890 2 DEBUG nova.compute.manager [req-f1d6cdc3-3f90-489e-a9fc-6779777b79f1 req-34e663c6-7fbb-4045-a413-be97e201ab3c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Refreshing instance network info cache due to event network-changed-042b4629-5e7e-473e-b7e7-bc0b6a831261. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:22:08 np0005465987 nova_compute[230713]: 2025-10-02 12:22:08.890 2 DEBUG oslo_concurrency.lockutils [req-f1d6cdc3-3f90-489e-a9fc-6779777b79f1 req-34e663c6-7fbb-4045-a413-be97e201ab3c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a6738e6f-93ec-43f8-bb0b-f6a930d3ab63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:08 np0005465987 nova_compute[230713]: 2025-10-02 12:22:08.985 2 DEBUG nova.network.neutron [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:22:09 np0005465987 nova_compute[230713]: 2025-10-02 12:22:09.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 e281: 3 total, 3 up, 3 in
Oct  2 08:22:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:09.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.349 2 DEBUG nova.network.neutron [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Updating instance_info_cache with network_info: [{"id": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "address": "fa:16:3e:e2:cd:69", "network": {"id": "d5f671ae-bb65-4932-84ce-cef4210e4599", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-212680174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c740a14d1c5c45d1a0959b0e24ac460b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap042b4629-5e", "ovs_interfaceid": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.371 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Releasing lock "refresh_cache-a6738e6f-93ec-43f8-bb0b-f6a930d3ab63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.371 2 DEBUG nova.compute.manager [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Instance network_info: |[{"id": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "address": "fa:16:3e:e2:cd:69", "network": {"id": "d5f671ae-bb65-4932-84ce-cef4210e4599", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-212680174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c740a14d1c5c45d1a0959b0e24ac460b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap042b4629-5e", "ovs_interfaceid": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.371 2 DEBUG oslo_concurrency.lockutils [req-f1d6cdc3-3f90-489e-a9fc-6779777b79f1 req-34e663c6-7fbb-4045-a413-be97e201ab3c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a6738e6f-93ec-43f8-bb0b-f6a930d3ab63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.372 2 DEBUG nova.network.neutron [req-f1d6cdc3-3f90-489e-a9fc-6779777b79f1 req-34e663c6-7fbb-4045-a413-be97e201ab3c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Refreshing network info cache for port 042b4629-5e7e-473e-b7e7-bc0b6a831261 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.374 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Start _get_guest_xml network_info=[{"id": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "address": "fa:16:3e:e2:cd:69", "network": {"id": "d5f671ae-bb65-4932-84ce-cef4210e4599", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-212680174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c740a14d1c5c45d1a0959b0e24ac460b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap042b4629-5e", "ovs_interfaceid": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
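The `network_info` structure passed into `_get_guest_xml` above is a list of VIF dicts. A sketch of pulling out what the guest XML generation needs (tap device, MAC, fixed IPs), using a trimmed copy of the payload from the log:

```python
# Trimmed copy of the network_info entry logged above; fields not needed for
# this illustration are omitted.
network_info = [{
    "id": "042b4629-5e7e-473e-b7e7-bc0b6a831261",
    "address": "fa:16:3e:e2:cd:69",
    "devname": "tap042b4629-5e",
    "type": "ovs",
    "network": {
        "bridge": "br-int",
        "subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.10", "type": "fixed"}],
        }],
        "meta": {"mtu": 1442, "tunneled": True},
    },
}]

vif = network_info[0]
fixed_ips = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"] if ip["type"] == "fixed"]
```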
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.380 2 WARNING nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.384 2 DEBUG nova.virt.libvirt.host [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.385 2 DEBUG nova.virt.libvirt.host [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.392 2 DEBUG nova.virt.libvirt.host [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.393 2 DEBUG nova.virt.libvirt.host [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.394 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.394 2 DEBUG nova.virt.hardware [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.394 2 DEBUG nova.virt.hardware [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.395 2 DEBUG nova.virt.hardware [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.395 2 DEBUG nova.virt.hardware [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.395 2 DEBUG nova.virt.hardware [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.395 2 DEBUG nova.virt.hardware [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.395 2 DEBUG nova.virt.hardware [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.396 2 DEBUG nova.virt.hardware [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.396 2 DEBUG nova.virt.hardware [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.396 2 DEBUG nova.virt.hardware [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.396 2 DEBUG nova.virt.hardware [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
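The hardware.py lines above walk from "Build topologies for 1 vcpu(s)" to the single result `VirtCPUTopology(cores=1,sockets=1,threads=1)`. A simplified sketch of that enumeration (not Nova's actual implementation, which also applies flavor/image preferences and sorting):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Enumerate (sockets, cores, threads) tuples whose product equals vcpus,
    bounded by the per-dimension maxima logged above. Simplified vs Nova."""
    out = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(vcpus // s, max_cores) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // (s * c)
            if t <= max_threads:
                out.append((s, c, t))
    return out
```

For the 1-vCPU m1.nano flavor in this log, the only candidate is (1, 1, 1), matching the "Got 1 possible topologies" line.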
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.399 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:10.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:22:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/72021545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.826 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
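The `ceph mon dump --format=json` calls above are how Nova discovers monitor addresses for the RBD connection info. A sketch of the parse; the payload is a hypothetical trimmed sample (real output carries more fields such as fsid and features, and addresses vary by Ceph release):

```python
import json

# Hypothetical trimmed 'ceph mon dump --format=json' payload; only the pieces
# needed to derive monitor hosts are shown, values are illustrative.
sample = '''{
  "epoch": 3,
  "mons": [
    {"name": "compute-0", "public_addr": "192.168.122.100:6789/0"},
    {"name": "compute-1", "public_addr": "192.168.122.101:6789/0"},
    {"name": "compute-2", "public_addr": "192.168.122.102:6789/0"}
  ]
}'''

mon_hosts = [m["public_addr"].rsplit("/", 1)[0]   # strip the trailing nonce
             for m in json.loads(sample)["mons"]]
```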
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.860 2 DEBUG nova.storage.rbd_utils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] rbd image a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:10 np0005465987 nova_compute[230713]: 2025-10-02 12:22:10.865 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:22:11 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1675394595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.303 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.304 2 DEBUG nova.virt.libvirt.vif [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-689099190',display_name='tempest-ServersNegativeTestJSON-server-689099190',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-689099190',id=91,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c740a14d1c5c45d1a0959b0e24ac460b',ramdisk_id='',reservation_id='r-dtpaacyd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-462972452',owner_user_name='tempest-ServersNegativeTestJ
SON-462972452-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:05Z,user_data=None,user_id='4146a31af09c4e6a8aee251f2fec4f98',uuid=a6738e6f-93ec-43f8-bb0b-f6a930d3ab63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "address": "fa:16:3e:e2:cd:69", "network": {"id": "d5f671ae-bb65-4932-84ce-cef4210e4599", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-212680174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c740a14d1c5c45d1a0959b0e24ac460b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap042b4629-5e", "ovs_interfaceid": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.305 2 DEBUG nova.network.os_vif_util [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Converting VIF {"id": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "address": "fa:16:3e:e2:cd:69", "network": {"id": "d5f671ae-bb65-4932-84ce-cef4210e4599", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-212680174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c740a14d1c5c45d1a0959b0e24ac460b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap042b4629-5e", "ovs_interfaceid": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.306 2 DEBUG nova.network.os_vif_util [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:cd:69,bridge_name='br-int',has_traffic_filtering=True,id=042b4629-5e7e-473e-b7e7-bc0b6a831261,network=Network(d5f671ae-bb65-4932-84ce-cef4210e4599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap042b4629-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.306 2 DEBUG nova.objects.instance [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lazy-loading 'pci_devices' on Instance uuid a6738e6f-93ec-43f8-bb0b-f6a930d3ab63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.328 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  <uuid>a6738e6f-93ec-43f8-bb0b-f6a930d3ab63</uuid>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  <name>instance-0000005b</name>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServersNegativeTestJSON-server-689099190</nova:name>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:22:10</nova:creationTime>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <nova:user uuid="4146a31af09c4e6a8aee251f2fec4f98">tempest-ServersNegativeTestJSON-462972452-project-member</nova:user>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <nova:project uuid="c740a14d1c5c45d1a0959b0e24ac460b">tempest-ServersNegativeTestJSON-462972452</nova:project>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <nova:port uuid="042b4629-5e7e-473e-b7e7-bc0b6a831261">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <entry name="serial">a6738e6f-93ec-43f8-bb0b-f6a930d3ab63</entry>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <entry name="uuid">a6738e6f-93ec-43f8-bb0b-f6a930d3ab63</entry>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk.config">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:22:11 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:e2:cd:69"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <target dev="tap042b4629-5e"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/a6738e6f-93ec-43f8-bb0b-f6a930d3ab63/console.log" append="off"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:22:11 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:22:11 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:22:11 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:22:11 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.329 2 DEBUG nova.compute.manager [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Preparing to wait for external event network-vif-plugged-042b4629-5e7e-473e-b7e7-bc0b6a831261 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.329 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Acquiring lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.329 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.330 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.330 2 DEBUG nova.virt.libvirt.vif [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-689099190',display_name='tempest-ServersNegativeTestJSON-server-689099190',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-689099190',id=91,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c740a14d1c5c45d1a0959b0e24ac460b',ramdisk_id='',reservation_id='r-dtpaacyd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-462972452',owner_user_name='tempest-ServersNeg
ativeTestJSON-462972452-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:05Z,user_data=None,user_id='4146a31af09c4e6a8aee251f2fec4f98',uuid=a6738e6f-93ec-43f8-bb0b-f6a930d3ab63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "address": "fa:16:3e:e2:cd:69", "network": {"id": "d5f671ae-bb65-4932-84ce-cef4210e4599", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-212680174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c740a14d1c5c45d1a0959b0e24ac460b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap042b4629-5e", "ovs_interfaceid": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.331 2 DEBUG nova.network.os_vif_util [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Converting VIF {"id": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "address": "fa:16:3e:e2:cd:69", "network": {"id": "d5f671ae-bb65-4932-84ce-cef4210e4599", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-212680174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c740a14d1c5c45d1a0959b0e24ac460b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap042b4629-5e", "ovs_interfaceid": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.331 2 DEBUG nova.network.os_vif_util [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:cd:69,bridge_name='br-int',has_traffic_filtering=True,id=042b4629-5e7e-473e-b7e7-bc0b6a831261,network=Network(d5f671ae-bb65-4932-84ce-cef4210e4599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap042b4629-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.332 2 DEBUG os_vif [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:cd:69,bridge_name='br-int',has_traffic_filtering=True,id=042b4629-5e7e-473e-b7e7-bc0b6a831261,network=Network(d5f671ae-bb65-4932-84ce-cef4210e4599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap042b4629-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:22:11 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap042b4629-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.337 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap042b4629-5e, col_values=(('external_ids', {'iface-id': '042b4629-5e7e-473e-b7e7-bc0b6a831261', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:cd:69', 'vm-uuid': 'a6738e6f-93ec-43f8-bb0b-f6a930d3ab63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:11 np0005465987 NetworkManager[44910]: <info>  [1759407731.3394] manager: (tap042b4629-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.347 2 INFO os_vif [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:cd:69,bridge_name='br-int',has_traffic_filtering=True,id=042b4629-5e7e-473e-b7e7-bc0b6a831261,network=Network(d5f671ae-bb65-4932-84ce-cef4210e4599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap042b4629-5e')
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.430 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.431 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.432 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] No VIF found with MAC fa:16:3e:e2:cd:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.432 2 INFO nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Using config drive
Oct  2 08:22:11 np0005465987 nova_compute[230713]: 2025-10-02 12:22:11.468 2 DEBUG nova.storage.rbd_utils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] rbd image a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:22:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:11.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:12.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:12 np0005465987 nova_compute[230713]: 2025-10-02 12:22:12.507 2 INFO nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Creating config drive at /var/lib/nova/instances/a6738e6f-93ec-43f8-bb0b-f6a930d3ab63/disk.config
Oct  2 08:22:12 np0005465987 nova_compute[230713]: 2025-10-02 12:22:12.516 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6738e6f-93ec-43f8-bb0b-f6a930d3ab63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xn9kg4n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:12 np0005465987 nova_compute[230713]: 2025-10-02 12:22:12.671 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6738e6f-93ec-43f8-bb0b-f6a930d3ab63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xn9kg4n" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:12 np0005465987 nova_compute[230713]: 2025-10-02 12:22:12.717 2 DEBUG nova.storage.rbd_utils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] rbd image a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:22:12 np0005465987 nova_compute[230713]: 2025-10-02 12:22:12.721 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6738e6f-93ec-43f8-bb0b-f6a930d3ab63/disk.config a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:12 np0005465987 nova_compute[230713]: 2025-10-02 12:22:12.949 2 DEBUG oslo_concurrency.processutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6738e6f-93ec-43f8-bb0b-f6a930d3ab63/disk.config a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:12 np0005465987 nova_compute[230713]: 2025-10-02 12:22:12.950 2 INFO nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Deleting local config drive /var/lib/nova/instances/a6738e6f-93ec-43f8-bb0b-f6a930d3ab63/disk.config because it was imported into RBD.
Oct  2 08:22:12 np0005465987 NetworkManager[44910]: <info>  [1759407732.9985] manager: (tap042b4629-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/157)
Oct  2 08:22:13 np0005465987 kernel: tap042b4629-5e: entered promiscuous mode
Oct  2 08:22:13 np0005465987 nova_compute[230713]: 2025-10-02 12:22:13.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:22:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:13Z|00297|binding|INFO|Claiming lport 042b4629-5e7e-473e-b7e7-bc0b6a831261 for this chassis.
Oct  2 08:22:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:13Z|00298|binding|INFO|042b4629-5e7e-473e-b7e7-bc0b6a831261: Claiming fa:16:3e:e2:cd:69 10.100.0.10
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.014 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:cd:69 10.100.0.10'], port_security=['fa:16:3e:e2:cd:69 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a6738e6f-93ec-43f8-bb0b-f6a930d3ab63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5f671ae-bb65-4932-84ce-cef4210e4599', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c740a14d1c5c45d1a0959b0e24ac460b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0cf8cc6d-2482-45e4-b576-7d811b75025a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5feaa29-5f25-4f45-a24f-ce44451fb322, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=042b4629-5e7e-473e-b7e7-bc0b6a831261) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.015 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 042b4629-5e7e-473e-b7e7-bc0b6a831261 in datapath d5f671ae-bb65-4932-84ce-cef4210e4599 bound to our chassis
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.017 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5f671ae-bb65-4932-84ce-cef4210e4599
Oct  2 08:22:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:13Z|00299|binding|INFO|Setting lport 042b4629-5e7e-473e-b7e7-bc0b6a831261 ovn-installed in OVS
Oct  2 08:22:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:13Z|00300|binding|INFO|Setting lport 042b4629-5e7e-473e-b7e7-bc0b6a831261 up in Southbound
Oct  2 08:22:13 np0005465987 nova_compute[230713]: 2025-10-02 12:22:13.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:22:13 np0005465987 nova_compute[230713]: 2025-10-02 12:22:13.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.028 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[717ccd80-9ce3-4f5b-8d72-70fd26e6fda8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.029 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd5f671ae-b1 in ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  2 08:22:13 np0005465987 systemd-udevd[264276]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.032 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd5f671ae-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.032 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[df5deff5-5bb5-4c35-8221-d374aa556f79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.032 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d8f7af-e4f7-4023-8d33-e0d25f73981e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.045 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2f0e56-3712-4a0f-a158-23dfa96c1372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 NetworkManager[44910]: <info>  [1759407733.0477] device (tap042b4629-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:22:13 np0005465987 systemd-machined[188335]: New machine qemu-40-instance-0000005b.
Oct  2 08:22:13 np0005465987 NetworkManager[44910]: <info>  [1759407733.0489] device (tap042b4629-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.060 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[054c2377-057d-47b8-ad09-329aa20994f6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 systemd[1]: Started Virtual Machine qemu-40-instance-0000005b.
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.093 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[105e627c-cbc9-4476-b8d5-bb06a6135cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 systemd-udevd[264280]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:22:13 np0005465987 NetworkManager[44910]: <info>  [1759407733.0991] manager: (tapd5f671ae-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/158)
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.097 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0632bc24-bede-43f2-8dd7-71422d6220cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.131 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ace1d2ad-9ab7-475d-821c-319a48bc1685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.135 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a0163cca-c504-4440-86c2-47115ac9a7f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 NetworkManager[44910]: <info>  [1759407733.1580] device (tapd5f671ae-b0): carrier: link connected
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.163 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[d941dd53-7c84-4b2f-8dea-c5d7515e448f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.179 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c0897fac-badb-4d43-940b-7b3334b47f48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5f671ae-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:3a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580173, 'reachable_time': 19796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264309, 'error': None, 'target': 'ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.195 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c47831f5-0a9d-487a-8ae9-1fed6eeb4f86]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe59:3a86'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580173, 'tstamp': 580173}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264310, 'error': None, 'target': 'ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.212 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[51e2c108-f40c-4df4-91e5-35929b258c8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5f671ae-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:3a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580173, 'reachable_time': 19796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264311, 'error': None, 'target': 'ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.251 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e318e5-77c2-4c07-adc4-dbdf0d5014ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.302 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6f24fa67-3030-4e31-b1de-47da2ee3f20f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.303 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5f671ae-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.303 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.304 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5f671ae-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:22:13 np0005465987 nova_compute[230713]: 2025-10-02 12:22:13.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:22:13 np0005465987 kernel: tapd5f671ae-b0: entered promiscuous mode
Oct  2 08:22:13 np0005465987 NetworkManager[44910]: <info>  [1759407733.3069] manager: (tapd5f671ae-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.308 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5f671ae-b0, col_values=(('external_ids', {'iface-id': '18276c7d-4e7d-4b5c-a013-87c3ea8e7868'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:22:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:13Z|00301|binding|INFO|Releasing lport 18276c7d-4e7d-4b5c-a013-87c3ea8e7868 from this chassis (sb_readonly=0)
Oct  2 08:22:13 np0005465987 nova_compute[230713]: 2025-10-02 12:22:13.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.311 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d5f671ae-bb65-4932-84ce-cef4210e4599.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d5f671ae-bb65-4932-84ce-cef4210e4599.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.311 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f982a9-a4ee-4ca4-a44a-6048138d5608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.312 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-d5f671ae-bb65-4932-84ce-cef4210e4599
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/d5f671ae-bb65-4932-84ce-cef4210e4599.pid.haproxy
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID d5f671ae-bb65-4932-84ce-cef4210e4599
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  2 08:22:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:13.314 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599', 'env', 'PROCESS_TAG=haproxy-d5f671ae-bb65-4932-84ce-cef4210e4599', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d5f671ae-bb65-4932-84ce-cef4210e4599.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  2 08:22:13 np0005465987 nova_compute[230713]: 2025-10-02 12:22:13.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:22:13 np0005465987 podman[264343]: 2025-10-02 12:22:13.674309882 +0000 UTC m=+0.071897551 container create 6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:22:13 np0005465987 systemd[1]: Started libpod-conmon-6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132.scope.
Oct  2 08:22:13 np0005465987 podman[264343]: 2025-10-02 12:22:13.624933492 +0000 UTC m=+0.022521211 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:22:13 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:22:13 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e976aae6c55e7ef55fad8dd0ab9385aa88a912abe98656d5681578a5584fa96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:22:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:13 np0005465987 podman[264343]: 2025-10-02 12:22:13.773096802 +0000 UTC m=+0.170684521 container init 6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:22:13 np0005465987 podman[264343]: 2025-10-02 12:22:13.784189487 +0000 UTC m=+0.181777176 container start 6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:22:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:13.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:13 np0005465987 neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599[264358]: [NOTICE]   (264363) : New worker (264380) forked
Oct  2 08:22:13 np0005465987 neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599[264358]: [NOTICE]   (264363) : Loading success.
Oct  2 08:22:13 np0005465987 nova_compute[230713]: 2025-10-02 12:22:13.818 2 DEBUG nova.network.neutron [req-f1d6cdc3-3f90-489e-a9fc-6779777b79f1 req-34e663c6-7fbb-4045-a413-be97e201ab3c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Updated VIF entry in instance network info cache for port 042b4629-5e7e-473e-b7e7-bc0b6a831261. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:22:13 np0005465987 nova_compute[230713]: 2025-10-02 12:22:13.819 2 DEBUG nova.network.neutron [req-f1d6cdc3-3f90-489e-a9fc-6779777b79f1 req-34e663c6-7fbb-4045-a413-be97e201ab3c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Updating instance_info_cache with network_info: [{"id": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "address": "fa:16:3e:e2:cd:69", "network": {"id": "d5f671ae-bb65-4932-84ce-cef4210e4599", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-212680174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c740a14d1c5c45d1a0959b0e24ac460b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap042b4629-5e", "ovs_interfaceid": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:13 np0005465987 nova_compute[230713]: 2025-10-02 12:22:13.849 2 DEBUG oslo_concurrency.lockutils [req-f1d6cdc3-3f90-489e-a9fc-6779777b79f1 req-34e663c6-7fbb-4045-a413-be97e201ab3c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a6738e6f-93ec-43f8-bb0b-f6a930d3ab63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:14 np0005465987 nova_compute[230713]: 2025-10-02 12:22:14.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:14 np0005465987 nova_compute[230713]: 2025-10-02 12:22:14.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:14.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:14 np0005465987 nova_compute[230713]: 2025-10-02 12:22:14.517 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407734.516426, a6738e6f-93ec-43f8-bb0b-f6a930d3ab63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:14 np0005465987 nova_compute[230713]: 2025-10-02 12:22:14.518 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] VM Started (Lifecycle Event)#033[00m
Oct  2 08:22:14 np0005465987 nova_compute[230713]: 2025-10-02 12:22:14.548 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:14 np0005465987 nova_compute[230713]: 2025-10-02 12:22:14.552 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407734.5169597, a6738e6f-93ec-43f8-bb0b-f6a930d3ab63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:14 np0005465987 nova_compute[230713]: 2025-10-02 12:22:14.553 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:22:14 np0005465987 nova_compute[230713]: 2025-10-02 12:22:14.575 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:14 np0005465987 nova_compute[230713]: 2025-10-02 12:22:14.578 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:22:14 np0005465987 nova_compute[230713]: 2025-10-02 12:22:14.597 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:22:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:15.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:16 np0005465987 nova_compute[230713]: 2025-10-02 12:22:16.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:16.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:17.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.170 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.171 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.203 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.310 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.310 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.317 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.317 2 INFO nova.compute.claims [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:22:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:18.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.644 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.865 2 DEBUG nova.compute.manager [req-435dcbee-0da5-46d4-9d2d-bc5dce6ac4d8 req-85220923-82bd-4f72-aa43-ae5ed2349613 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Received event network-vif-plugged-042b4629-5e7e-473e-b7e7-bc0b6a831261 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.865 2 DEBUG oslo_concurrency.lockutils [req-435dcbee-0da5-46d4-9d2d-bc5dce6ac4d8 req-85220923-82bd-4f72-aa43-ae5ed2349613 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.866 2 DEBUG oslo_concurrency.lockutils [req-435dcbee-0da5-46d4-9d2d-bc5dce6ac4d8 req-85220923-82bd-4f72-aa43-ae5ed2349613 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.866 2 DEBUG oslo_concurrency.lockutils [req-435dcbee-0da5-46d4-9d2d-bc5dce6ac4d8 req-85220923-82bd-4f72-aa43-ae5ed2349613 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.866 2 DEBUG nova.compute.manager [req-435dcbee-0da5-46d4-9d2d-bc5dce6ac4d8 req-85220923-82bd-4f72-aa43-ae5ed2349613 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Processing event network-vif-plugged-042b4629-5e7e-473e-b7e7-bc0b6a831261 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.867 2 DEBUG nova.compute.manager [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.871 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407738.871266, a6738e6f-93ec-43f8-bb0b-f6a930d3ab63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.872 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.874 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.880 2 INFO nova.virt.libvirt.driver [-] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Instance spawned successfully.#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.881 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.916 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.921 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.922 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.923 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.923 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.924 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.924 2 DEBUG nova.virt.libvirt.driver [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.929 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:22:18 np0005465987 nova_compute[230713]: 2025-10-02 12:22:18.970 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.029 2 INFO nova.compute.manager [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Took 13.77 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.030 2 DEBUG nova.compute.manager [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.100 2 INFO nova.compute.manager [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Took 14.78 seconds to build instance.#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.118 2 DEBUG oslo_concurrency.lockutils [None req-81721e2f-647a-4e16-8bf0-860cb34a8a54 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/922221350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.173 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.180 2 DEBUG nova.compute.provider_tree [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.202 2 DEBUG nova.scheduler.client.report [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.237 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.238 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.335 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.336 2 DEBUG nova.network.neutron [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.364 2 INFO nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.393 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.538 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.540 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.541 2 INFO nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Creating image(s)#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.572 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.607 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.638 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.642 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.678 2 DEBUG nova.policy [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93e805bcb0e047ca9d45c653f5ec913d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '91d108e807094b0fa8e63a923d2269ee', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.726 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.727 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.728 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.729 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.764 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:19 np0005465987 nova_compute[230713]: 2025-10-02 12:22:19.769 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:19.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:20.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.652 2 DEBUG oslo_concurrency.lockutils [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Acquiring lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.653 2 DEBUG oslo_concurrency.lockutils [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.653 2 DEBUG oslo_concurrency.lockutils [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Acquiring lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.654 2 DEBUG oslo_concurrency.lockutils [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.654 2 DEBUG oslo_concurrency.lockutils [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.655 2 INFO nova.compute.manager [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Terminating instance#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.657 2 DEBUG nova.compute.manager [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.735 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.967s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:20 np0005465987 kernel: tap042b4629-5e (unregistering): left promiscuous mode
Oct  2 08:22:20 np0005465987 NetworkManager[44910]: <info>  [1759407740.7433] device (tap042b4629-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:22:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:20Z|00302|binding|INFO|Releasing lport 042b4629-5e7e-473e-b7e7-bc0b6a831261 from this chassis (sb_readonly=0)
Oct  2 08:22:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:20Z|00303|binding|INFO|Setting lport 042b4629-5e7e-473e-b7e7-bc0b6a831261 down in Southbound
Oct  2 08:22:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:20Z|00304|binding|INFO|Removing iface tap042b4629-5e ovn-installed in OVS
Oct  2 08:22:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:20.769 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:cd:69 10.100.0.10'], port_security=['fa:16:3e:e2:cd:69 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a6738e6f-93ec-43f8-bb0b-f6a930d3ab63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5f671ae-bb65-4932-84ce-cef4210e4599', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c740a14d1c5c45d1a0959b0e24ac460b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0cf8cc6d-2482-45e4-b576-7d811b75025a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5feaa29-5f25-4f45-a24f-ce44451fb322, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=042b4629-5e7e-473e-b7e7-bc0b6a831261) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:22:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:20.770 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 042b4629-5e7e-473e-b7e7-bc0b6a831261 in datapath d5f671ae-bb65-4932-84ce-cef4210e4599 unbound from our chassis#033[00m
Oct  2 08:22:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:20.772 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5f671ae-bb65-4932-84ce-cef4210e4599, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:22:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:20.773 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[91678c8c-119c-4414-8db2-6914a39776cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:20.774 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599 namespace which is not needed anymore#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:20 np0005465987 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct  2 08:22:20 np0005465987 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d0000005b.scope: Consumed 3.252s CPU time.
Oct  2 08:22:20 np0005465987 systemd-machined[188335]: Machine qemu-40-instance-0000005b terminated.
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.836 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] resizing rbd image 579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.892 2 INFO nova.virt.libvirt.driver [-] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Instance destroyed successfully.#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.892 2 DEBUG nova.objects.instance [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lazy-loading 'resources' on Instance uuid a6738e6f-93ec-43f8-bb0b-f6a930d3ab63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:22:20 np0005465987 neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599[264358]: [NOTICE]   (264363) : haproxy version is 2.8.14-c23fe91
Oct  2 08:22:20 np0005465987 neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599[264358]: [NOTICE]   (264363) : path to executable is /usr/sbin/haproxy
Oct  2 08:22:20 np0005465987 neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599[264358]: [WARNING]  (264363) : Exiting Master process...
Oct  2 08:22:20 np0005465987 neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599[264358]: [WARNING]  (264363) : Exiting Master process...
Oct  2 08:22:20 np0005465987 neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599[264358]: [ALERT]    (264363) : Current worker (264380) exited with code 143 (Terminated)
Oct  2 08:22:20 np0005465987 neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599[264358]: [WARNING]  (264363) : All workers exited. Exiting... (0)
Oct  2 08:22:20 np0005465987 systemd[1]: libpod-6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132.scope: Deactivated successfully.
Oct  2 08:22:20 np0005465987 podman[264604]: 2025-10-02 12:22:20.915645788 +0000 UTC m=+0.053316939 container died 6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.920 2 DEBUG nova.virt.libvirt.vif [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-689099190',display_name='tempest-ServersNegativeTestJSON-server-689099190',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-689099190',id=91,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c740a14d1c5c45d1a0959b0e24ac460b',ramdisk_id='',reservation_id='r-dtpaacyd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-462972452',owner_user_name='tempest-ServersNegativeTestJSON-462972452-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:19Z,user_data=None,user_id='4146a31af09c4e6a8aee251f2fec4f98',uuid=a6738e6f-93ec-43f8-bb0b-f6a930d3ab63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "address": "fa:16:3e:e2:cd:69", "network": {"id": "d5f671ae-bb65-4932-84ce-cef4210e4599", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-212680174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c740a14d1c5c45d1a0959b0e24ac460b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap042b4629-5e", "ovs_interfaceid": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.921 2 DEBUG nova.network.os_vif_util [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Converting VIF {"id": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "address": "fa:16:3e:e2:cd:69", "network": {"id": "d5f671ae-bb65-4932-84ce-cef4210e4599", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-212680174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c740a14d1c5c45d1a0959b0e24ac460b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap042b4629-5e", "ovs_interfaceid": "042b4629-5e7e-473e-b7e7-bc0b6a831261", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.922 2 DEBUG nova.network.os_vif_util [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:cd:69,bridge_name='br-int',has_traffic_filtering=True,id=042b4629-5e7e-473e-b7e7-bc0b6a831261,network=Network(d5f671ae-bb65-4932-84ce-cef4210e4599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap042b4629-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.922 2 DEBUG os_vif [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:cd:69,bridge_name='br-int',has_traffic_filtering=True,id=042b4629-5e7e-473e-b7e7-bc0b6a831261,network=Network(d5f671ae-bb65-4932-84ce-cef4210e4599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap042b4629-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.924 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap042b4629-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:20 np0005465987 nova_compute[230713]: 2025-10-02 12:22:20.930 2 INFO os_vif [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:cd:69,bridge_name='br-int',has_traffic_filtering=True,id=042b4629-5e7e-473e-b7e7-bc0b6a831261,network=Network(d5f671ae-bb65-4932-84ce-cef4210e4599),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap042b4629-5e')#033[00m
Oct  2 08:22:20 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132-userdata-shm.mount: Deactivated successfully.
Oct  2 08:22:20 np0005465987 systemd[1]: var-lib-containers-storage-overlay-3e976aae6c55e7ef55fad8dd0ab9385aa88a912abe98656d5681578a5584fa96-merged.mount: Deactivated successfully.
Oct  2 08:22:20 np0005465987 podman[264604]: 2025-10-02 12:22:20.954488948 +0000 UTC m=+0.092160089 container cleanup 6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:22:20 np0005465987 systemd[1]: libpod-conmon-6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132.scope: Deactivated successfully.
Oct  2 08:22:21 np0005465987 podman[264658]: 2025-10-02 12:22:21.02104281 +0000 UTC m=+0.042262654 container remove 6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 08:22:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:21.027 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d58777bc-5a26-4fe7-915a-ad534219ace9]: (4, ('Thu Oct  2 12:22:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599 (6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132)\n6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132\nThu Oct  2 12:22:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599 (6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132)\n6cc090e4a15c339675cf0e388f076816805f00e78318f4d7ff5c502696bcd132\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:21.029 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[51637f1d-e4bf-4e4b-bc2d-5b72ddb32e2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:21.030 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5f671ae-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:21 np0005465987 kernel: tapd5f671ae-b0: left promiscuous mode
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:21.054 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9331bebc-5655-4e45-ae98-509e998e044a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:21.089 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1168f468-fed0-47e6-a901-e0b26f57c373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:21.090 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1607f69f-21ee-4f65-9609-de91be459aa5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:21.105 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d899d397-d581-4fba-a649-ac2e36fee586]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580166, 'reachable_time': 36520, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264676, 'error': None, 'target': 'ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:21.107 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d5f671ae-bb65-4932-84ce-cef4210e4599 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:22:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:21.107 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd3c882-13ee-4a2a-8bee-db4404f4d75a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:21 np0005465987 systemd[1]: run-netns-ovnmeta\x2dd5f671ae\x2dbb65\x2d4932\x2d84ce\x2dcef4210e4599.mount: Deactivated successfully.
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.151 2 DEBUG nova.objects.instance [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lazy-loading 'migration_context' on Instance uuid 579516d4-d0f6-455a-9fb5-4c03edb08cfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.165 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.165 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Ensure instance console log exists: /var/lib/nova/instances/579516d4-d0f6-455a-9fb5-4c03edb08cfb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.165 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.166 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.166 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.420 2 INFO nova.virt.libvirt.driver [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Deleting instance files /var/lib/nova/instances/a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_del#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.420 2 INFO nova.virt.libvirt.driver [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Deletion of /var/lib/nova/instances/a6738e6f-93ec-43f8-bb0b-f6a930d3ab63_del complete#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.641 2 INFO nova.compute.manager [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.642 2 DEBUG oslo.service.loopingcall [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.642 2 DEBUG nova.compute.manager [-] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.643 2 DEBUG nova.network.neutron [-] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.682 2 DEBUG nova.network.neutron [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Successfully created port: 0ed64e76-c243-4659-b377-9fbb6676af64 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.798 2 DEBUG nova.compute.manager [req-bf40eb1a-ef37-41f3-849e-e998b45f4b38 req-265120fc-ac74-4a29-8bf1-9ae5daf32518 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Received event network-vif-plugged-042b4629-5e7e-473e-b7e7-bc0b6a831261 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.798 2 DEBUG oslo_concurrency.lockutils [req-bf40eb1a-ef37-41f3-849e-e998b45f4b38 req-265120fc-ac74-4a29-8bf1-9ae5daf32518 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.799 2 DEBUG oslo_concurrency.lockutils [req-bf40eb1a-ef37-41f3-849e-e998b45f4b38 req-265120fc-ac74-4a29-8bf1-9ae5daf32518 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.799 2 DEBUG oslo_concurrency.lockutils [req-bf40eb1a-ef37-41f3-849e-e998b45f4b38 req-265120fc-ac74-4a29-8bf1-9ae5daf32518 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.799 2 DEBUG nova.compute.manager [req-bf40eb1a-ef37-41f3-849e-e998b45f4b38 req-265120fc-ac74-4a29-8bf1-9ae5daf32518 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] No waiting events found dispatching network-vif-plugged-042b4629-5e7e-473e-b7e7-bc0b6a831261 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:22:21 np0005465987 nova_compute[230713]: 2025-10-02 12:22:21.799 2 WARNING nova.compute.manager [req-bf40eb1a-ef37-41f3-849e-e998b45f4b38 req-265120fc-ac74-4a29-8bf1-9ae5daf32518 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Received unexpected event network-vif-plugged-042b4629-5e7e-473e-b7e7-bc0b6a831261 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:22:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:21.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:22.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:22 np0005465987 nova_compute[230713]: 2025-10-02 12:22:22.785 2 DEBUG nova.network.neutron [-] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:22 np0005465987 nova_compute[230713]: 2025-10-02 12:22:22.819 2 INFO nova.compute.manager [-] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Took 1.18 seconds to deallocate network for instance.#033[00m
Oct  2 08:22:22 np0005465987 nova_compute[230713]: 2025-10-02 12:22:22.903 2 DEBUG oslo_concurrency.lockutils [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:22 np0005465987 nova_compute[230713]: 2025-10-02 12:22:22.904 2 DEBUG oslo_concurrency.lockutils [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.003 2 DEBUG oslo_concurrency.processutils [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.160 2 DEBUG nova.network.neutron [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Successfully updated port: 0ed64e76-c243-4659-b377-9fbb6676af64 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.203 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "refresh_cache-579516d4-d0f6-455a-9fb5-4c03edb08cfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.203 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquired lock "refresh_cache-579516d4-d0f6-455a-9fb5-4c03edb08cfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.204 2 DEBUG nova.network.neutron [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:22:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:23 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2198849570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.445 2 DEBUG oslo_concurrency.processutils [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.451 2 DEBUG nova.compute.provider_tree [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.468 2 DEBUG nova.scheduler.client.report [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.511 2 DEBUG oslo_concurrency.lockutils [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.524 2 DEBUG nova.network.neutron [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.543 2 INFO nova.scheduler.client.report [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Deleted allocations for instance a6738e6f-93ec-43f8-bb0b-f6a930d3ab63#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.619 2 DEBUG oslo_concurrency.lockutils [None req-4fd7f633-e607-4212-9809-2b151c23b12b 4146a31af09c4e6a8aee251f2fec4f98 c740a14d1c5c45d1a0959b0e24ac460b - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:23.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.920 2 DEBUG nova.compute.manager [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Received event network-vif-unplugged-042b4629-5e7e-473e-b7e7-bc0b6a831261 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.921 2 DEBUG oslo_concurrency.lockutils [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.921 2 DEBUG oslo_concurrency.lockutils [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.921 2 DEBUG oslo_concurrency.lockutils [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.921 2 DEBUG nova.compute.manager [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] No waiting events found dispatching network-vif-unplugged-042b4629-5e7e-473e-b7e7-bc0b6a831261 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.922 2 WARNING nova.compute.manager [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Received unexpected event network-vif-unplugged-042b4629-5e7e-473e-b7e7-bc0b6a831261 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.922 2 DEBUG nova.compute.manager [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Received event network-vif-plugged-042b4629-5e7e-473e-b7e7-bc0b6a831261 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.922 2 DEBUG oslo_concurrency.lockutils [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.922 2 DEBUG oslo_concurrency.lockutils [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.922 2 DEBUG oslo_concurrency.lockutils [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6738e6f-93ec-43f8-bb0b-f6a930d3ab63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.923 2 DEBUG nova.compute.manager [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] No waiting events found dispatching network-vif-plugged-042b4629-5e7e-473e-b7e7-bc0b6a831261 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.923 2 WARNING nova.compute.manager [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Received unexpected event network-vif-plugged-042b4629-5e7e-473e-b7e7-bc0b6a831261 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.923 2 DEBUG nova.compute.manager [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Received event network-vif-deleted-042b4629-5e7e-473e-b7e7-bc0b6a831261 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.923 2 DEBUG nova.compute.manager [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Received event network-changed-0ed64e76-c243-4659-b377-9fbb6676af64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.924 2 DEBUG nova.compute.manager [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Refreshing instance network info cache due to event network-changed-0ed64e76-c243-4659-b377-9fbb6676af64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:22:23 np0005465987 nova_compute[230713]: 2025-10-02 12:22:23.924 2 DEBUG oslo_concurrency.lockutils [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-579516d4-d0f6-455a-9fb5-4c03edb08cfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:24 np0005465987 nova_compute[230713]: 2025-10-02 12:22:24.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:24.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.081 2 DEBUG nova.network.neutron [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Updating instance_info_cache with network_info: [{"id": "0ed64e76-c243-4659-b377-9fbb6676af64", "address": "fa:16:3e:0f:b7:01", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed64e76-c2", "ovs_interfaceid": "0ed64e76-c243-4659-b377-9fbb6676af64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.107 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Releasing lock "refresh_cache-579516d4-d0f6-455a-9fb5-4c03edb08cfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.107 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Instance network_info: |[{"id": "0ed64e76-c243-4659-b377-9fbb6676af64", "address": "fa:16:3e:0f:b7:01", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed64e76-c2", "ovs_interfaceid": "0ed64e76-c243-4659-b377-9fbb6676af64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.108 2 DEBUG oslo_concurrency.lockutils [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-579516d4-d0f6-455a-9fb5-4c03edb08cfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.108 2 DEBUG nova.network.neutron [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Refreshing network info cache for port 0ed64e76-c243-4659-b377-9fbb6676af64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.111 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Start _get_guest_xml network_info=[{"id": "0ed64e76-c243-4659-b377-9fbb6676af64", "address": "fa:16:3e:0f:b7:01", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed64e76-c2", "ovs_interfaceid": "0ed64e76-c243-4659-b377-9fbb6676af64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.115 2 WARNING nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.119 2 DEBUG nova.virt.libvirt.host [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.119 2 DEBUG nova.virt.libvirt.host [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.130 2 DEBUG nova.virt.libvirt.host [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.130 2 DEBUG nova.virt.libvirt.host [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.131 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.132 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.132 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.132 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.133 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.133 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.133 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.133 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.134 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.134 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.134 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.134 2 DEBUG nova.virt.hardware [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.137 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:22:25 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/193472829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.581 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.618 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.623 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:25.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:25 np0005465987 nova_compute[230713]: 2025-10-02 12:22:25.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:22:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:22:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1046833467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.035 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.037 2 DEBUG nova.virt.libvirt.vif [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-478094513',display_name='tempest-ListServersNegativeTestJSON-server-478094513-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-478094513-3',id=96,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='91d108e807094b0fa8e63a923d2269ee',ramdisk_id='',reservation_id='r-jgpyr0ft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1913538533',owner_user_name='tempest
-ListServersNegativeTestJSON-1913538533-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:19Z,user_data=None,user_id='93e805bcb0e047ca9d45c653f5ec913d',uuid=579516d4-d0f6-455a-9fb5-4c03edb08cfb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ed64e76-c243-4659-b377-9fbb6676af64", "address": "fa:16:3e:0f:b7:01", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed64e76-c2", "ovs_interfaceid": "0ed64e76-c243-4659-b377-9fbb6676af64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.038 2 DEBUG nova.network.os_vif_util [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Converting VIF {"id": "0ed64e76-c243-4659-b377-9fbb6676af64", "address": "fa:16:3e:0f:b7:01", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed64e76-c2", "ovs_interfaceid": "0ed64e76-c243-4659-b377-9fbb6676af64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.039 2 DEBUG nova.network.os_vif_util [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:b7:01,bridge_name='br-int',has_traffic_filtering=True,id=0ed64e76-c243-4659-b377-9fbb6676af64,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed64e76-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.040 2 DEBUG nova.objects.instance [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lazy-loading 'pci_devices' on Instance uuid 579516d4-d0f6-455a-9fb5-4c03edb08cfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:22:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:26.047 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:22:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:26.048 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.059 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  <uuid>579516d4-d0f6-455a-9fb5-4c03edb08cfb</uuid>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  <name>instance-00000060</name>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <nova:name>tempest-ListServersNegativeTestJSON-server-478094513-3</nova:name>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:22:25</nova:creationTime>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <nova:user uuid="93e805bcb0e047ca9d45c653f5ec913d">tempest-ListServersNegativeTestJSON-1913538533-project-member</nova:user>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <nova:project uuid="91d108e807094b0fa8e63a923d2269ee">tempest-ListServersNegativeTestJSON-1913538533</nova:project>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <nova:port uuid="0ed64e76-c243-4659-b377-9fbb6676af64">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <entry name="serial">579516d4-d0f6-455a-9fb5-4c03edb08cfb</entry>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <entry name="uuid">579516d4-d0f6-455a-9fb5-4c03edb08cfb</entry>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk.config">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:0f:b7:01"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <target dev="tap0ed64e76-c2"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/579516d4-d0f6-455a-9fb5-4c03edb08cfb/console.log" append="off"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:22:26 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:22:26 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:22:26 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:22:26 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.062 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Preparing to wait for external event network-vif-plugged-0ed64e76-c243-4659-b377-9fbb6676af64 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.062 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.063 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.063 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.064 2 DEBUG nova.virt.libvirt.vif [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-478094513',display_name='tempest-ListServersNegativeTestJSON-server-478094513-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-478094513-3',id=96,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='91d108e807094b0fa8e63a923d2269ee',ramdisk_id='',reservation_id='r-jgpyr0ft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1913538533',owner_user_nam
e='tempest-ListServersNegativeTestJSON-1913538533-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:19Z,user_data=None,user_id='93e805bcb0e047ca9d45c653f5ec913d',uuid=579516d4-d0f6-455a-9fb5-4c03edb08cfb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ed64e76-c243-4659-b377-9fbb6676af64", "address": "fa:16:3e:0f:b7:01", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed64e76-c2", "ovs_interfaceid": "0ed64e76-c243-4659-b377-9fbb6676af64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.065 2 DEBUG nova.network.os_vif_util [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Converting VIF {"id": "0ed64e76-c243-4659-b377-9fbb6676af64", "address": "fa:16:3e:0f:b7:01", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed64e76-c2", "ovs_interfaceid": "0ed64e76-c243-4659-b377-9fbb6676af64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.066 2 DEBUG nova.network.os_vif_util [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:b7:01,bridge_name='br-int',has_traffic_filtering=True,id=0ed64e76-c243-4659-b377-9fbb6676af64,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed64e76-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.066 2 DEBUG os_vif [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:b7:01,bridge_name='br-int',has_traffic_filtering=True,id=0ed64e76-c243-4659-b377-9fbb6676af64,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed64e76-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ed64e76-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ed64e76-c2, col_values=(('external_ids', {'iface-id': '0ed64e76-c243-4659-b377-9fbb6676af64', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:b7:01', 'vm-uuid': '579516d4-d0f6-455a-9fb5-4c03edb08cfb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:26 np0005465987 NetworkManager[44910]: <info>  [1759407746.0797] manager: (tap0ed64e76-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.087 2 INFO os_vif [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:b7:01,bridge_name='br-int',has_traffic_filtering=True,id=0ed64e76-c243-4659-b377-9fbb6676af64,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed64e76-c2')#033[00m
Oct  2 08:22:26 np0005465987 podman[264783]: 2025-10-02 12:22:26.188317627 +0000 UTC m=+0.060433054 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.270 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.271 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.271 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] No VIF found with MAC fa:16:3e:0f:b7:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.271 2 INFO nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Using config drive#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.300 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.427 2 DEBUG nova.network.neutron [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Updated VIF entry in instance network info cache for port 0ed64e76-c243-4659-b377-9fbb6676af64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.428 2 DEBUG nova.network.neutron [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Updating instance_info_cache with network_info: [{"id": "0ed64e76-c243-4659-b377-9fbb6676af64", "address": "fa:16:3e:0f:b7:01", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed64e76-c2", "ovs_interfaceid": "0ed64e76-c243-4659-b377-9fbb6676af64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.452 2 DEBUG oslo_concurrency.lockutils [req-35edda68-f146-41bb-a5b3-17c034233c3b req-1b398710-4ebf-498c-957e-fca31e971b6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-579516d4-d0f6-455a-9fb5-4c03edb08cfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:26.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.745 2 INFO nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Creating config drive at /var/lib/nova/instances/579516d4-d0f6-455a-9fb5-4c03edb08cfb/disk.config#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.749 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/579516d4-d0f6-455a-9fb5-4c03edb08cfb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn7pzgvwy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.881 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/579516d4-d0f6-455a-9fb5-4c03edb08cfb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn7pzgvwy" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.916 2 DEBUG nova.storage.rbd_utils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] rbd image 579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:26 np0005465987 nova_compute[230713]: 2025-10-02 12:22:26.921 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/579516d4-d0f6-455a-9fb5-4c03edb08cfb/disk.config 579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:27 np0005465987 nova_compute[230713]: 2025-10-02 12:22:27.141 2 DEBUG oslo_concurrency.processutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/579516d4-d0f6-455a-9fb5-4c03edb08cfb/disk.config 579516d4-d0f6-455a-9fb5-4c03edb08cfb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:27 np0005465987 nova_compute[230713]: 2025-10-02 12:22:27.143 2 INFO nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Deleting local config drive /var/lib/nova/instances/579516d4-d0f6-455a-9fb5-4c03edb08cfb/disk.config because it was imported into RBD.#033[00m
Oct  2 08:22:27 np0005465987 kernel: tap0ed64e76-c2: entered promiscuous mode
Oct  2 08:22:27 np0005465987 NetworkManager[44910]: <info>  [1759407747.2141] manager: (tap0ed64e76-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/161)
Oct  2 08:22:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:27Z|00305|binding|INFO|Claiming lport 0ed64e76-c243-4659-b377-9fbb6676af64 for this chassis.
Oct  2 08:22:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:27Z|00306|binding|INFO|0ed64e76-c243-4659-b377-9fbb6676af64: Claiming fa:16:3e:0f:b7:01 10.100.0.13
Oct  2 08:22:27 np0005465987 nova_compute[230713]: 2025-10-02 12:22:27.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:27 np0005465987 systemd-udevd[264870]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.259 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:b7:01 10.100.0.13'], port_security=['fa:16:3e:0f:b7:01 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '579516d4-d0f6-455a-9fb5-4c03edb08cfb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '91d108e807094b0fa8e63a923d2269ee', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f5f828a-9c59-4be3-90d2-ef448e431573', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91243388-a708-4071-bc5f-90666534b8e0, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0ed64e76-c243-4659-b377-9fbb6676af64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.260 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0ed64e76-c243-4659-b377-9fbb6676af64 in datapath 81df0bd4-1de1-409c-8730-5d718bbb9ab0 bound to our chassis#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.261 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81df0bd4-1de1-409c-8730-5d718bbb9ab0#033[00m
Oct  2 08:22:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:27Z|00307|binding|INFO|Setting lport 0ed64e76-c243-4659-b377-9fbb6676af64 ovn-installed in OVS
Oct  2 08:22:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:27Z|00308|binding|INFO|Setting lport 0ed64e76-c243-4659-b377-9fbb6676af64 up in Southbound
Oct  2 08:22:27 np0005465987 NetworkManager[44910]: <info>  [1759407747.2734] device (tap0ed64e76-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:22:27 np0005465987 NetworkManager[44910]: <info>  [1759407747.2742] device (tap0ed64e76-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:22:27 np0005465987 nova_compute[230713]: 2025-10-02 12:22:27.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.276 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8a71f573-7790-4e90-b2d0-99d7a34aa114]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.277 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81df0bd4-11 in ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.280 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81df0bd4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.280 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[05fb0ffd-53db-4a46-9572-9c0a8705b7ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.282 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7b5a92-7a5e-4238-b25d-38a04c872d5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 systemd-machined[188335]: New machine qemu-41-instance-00000060.
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.294 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1ed1b5-771e-4fdf-bb8e-a08340da2994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 systemd[1]: Started Virtual Machine qemu-41-instance-00000060.
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.322 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7fefc1-bf28-4c6b-88ee-714c395db4b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.351 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[bf2ceddb-5183-4f0d-a119-ad4a558a1d01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 NetworkManager[44910]: <info>  [1759407747.3585] manager: (tap81df0bd4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/162)
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.357 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[87b1dcf8-56c1-48c2-be3a-4d5a99a7524e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 systemd-udevd[264875]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.393 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[820b98e9-2289-4a9c-adb9-2c96bc3576c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.396 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[557baa17-ba3f-4d11-802d-87803e928844]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 NetworkManager[44910]: <info>  [1759407747.4211] device (tap81df0bd4-10): carrier: link connected
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.425 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[7256ba32-af74-41c3-82a7-d33eb7d2828f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.441 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9a0344-86c4-4ad0-835a-f0a49b2bed91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81df0bd4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:05:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581599, 'reachable_time': 35051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264906, 'error': None, 'target': 'ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.459 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[24a269af-6aae-4956-a5ba-1d51c0cc7ecb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:5cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581599, 'tstamp': 581599}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264907, 'error': None, 'target': 'ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.473 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a3059381-1372-4d4f-aac8-2994e3f50145]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81df0bd4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:05:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581599, 'reachable_time': 35051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264908, 'error': None, 'target': 'ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.509 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c64bce30-63b4-4bba-99cf-d03dda60a3c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.580 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[74567a3e-36d6-4b7b-abf8-0c87139092c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.581 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81df0bd4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.581 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.582 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81df0bd4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:27 np0005465987 nova_compute[230713]: 2025-10-02 12:22:27.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:27 np0005465987 NetworkManager[44910]: <info>  [1759407747.5843] manager: (tap81df0bd4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/163)
Oct  2 08:22:27 np0005465987 kernel: tap81df0bd4-10: entered promiscuous mode
Oct  2 08:22:27 np0005465987 nova_compute[230713]: 2025-10-02 12:22:27.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.591 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81df0bd4-10, col_values=(('external_ids', {'iface-id': '158c9d13-b7ad-4d55-8f96-3c408ed5e2d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:27 np0005465987 nova_compute[230713]: 2025-10-02 12:22:27.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:27Z|00309|binding|INFO|Releasing lport 158c9d13-b7ad-4d55-8f96-3c408ed5e2d5 from this chassis (sb_readonly=0)
Oct  2 08:22:27 np0005465987 nova_compute[230713]: 2025-10-02 12:22:27.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:27 np0005465987 nova_compute[230713]: 2025-10-02 12:22:27.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.621 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81df0bd4-1de1-409c-8730-5d718bbb9ab0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81df0bd4-1de1-409c-8730-5d718bbb9ab0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.621 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3e5ecb-0b12-43a7-88a0-42c849a21771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.622 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-81df0bd4-1de1-409c-8730-5d718bbb9ab0
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/81df0bd4-1de1-409c-8730-5d718bbb9ab0.pid.haproxy
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 81df0bd4-1de1-409c-8730-5d718bbb9ab0
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.623 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'env', 'PROCESS_TAG=haproxy-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81df0bd4-1de1-409c-8730-5d718bbb9ab0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:22:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:27.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.949 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.950 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:27.950 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:28 np0005465987 podman[264982]: 2025-10-02 12:22:28.003419335 +0000 UTC m=+0.054855461 container create 7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:22:28 np0005465987 systemd[1]: Started libpod-conmon-7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa.scope.
Oct  2 08:22:28 np0005465987 podman[264982]: 2025-10-02 12:22:27.973553792 +0000 UTC m=+0.024989958 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:22:28 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:22:28 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efa6502e11b828b49a0aca8b6259c84e6dccbf1659ab5f0b2854c63f72fb513/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:22:28 np0005465987 podman[264982]: 2025-10-02 12:22:28.102132143 +0000 UTC m=+0.153568299 container init 7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  2 08:22:28 np0005465987 podman[264982]: 2025-10-02 12:22:28.108837918 +0000 UTC m=+0.160274044 container start 7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:22:28 np0005465987 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[264997]: [NOTICE]   (265001) : New worker (265003) forked
Oct  2 08:22:28 np0005465987 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[264997]: [NOTICE]   (265001) : Loading success.
Oct  2 08:22:28 np0005465987 nova_compute[230713]: 2025-10-02 12:22:28.226 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407748.2257903, 579516d4-d0f6-455a-9fb5-4c03edb08cfb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:28 np0005465987 nova_compute[230713]: 2025-10-02 12:22:28.226 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] VM Started (Lifecycle Event)#033[00m
Oct  2 08:22:28 np0005465987 nova_compute[230713]: 2025-10-02 12:22:28.248 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:28 np0005465987 nova_compute[230713]: 2025-10-02 12:22:28.254 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407748.22594, 579516d4-d0f6-455a-9fb5-4c03edb08cfb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:28 np0005465987 nova_compute[230713]: 2025-10-02 12:22:28.254 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:22:28 np0005465987 nova_compute[230713]: 2025-10-02 12:22:28.284 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:28 np0005465987 nova_compute[230713]: 2025-10-02 12:22:28.287 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:22:28 np0005465987 nova_compute[230713]: 2025-10-02 12:22:28.320 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:22:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:28.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.735 2 DEBUG nova.compute.manager [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Received event network-vif-plugged-0ed64e76-c243-4659-b377-9fbb6676af64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.736 2 DEBUG oslo_concurrency.lockutils [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.736 2 DEBUG oslo_concurrency.lockutils [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.736 2 DEBUG oslo_concurrency.lockutils [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.736 2 DEBUG nova.compute.manager [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Processing event network-vif-plugged-0ed64e76-c243-4659-b377-9fbb6676af64 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.736 2 DEBUG nova.compute.manager [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Received event network-vif-plugged-0ed64e76-c243-4659-b377-9fbb6676af64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.737 2 DEBUG oslo_concurrency.lockutils [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.737 2 DEBUG oslo_concurrency.lockutils [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.737 2 DEBUG oslo_concurrency.lockutils [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.737 2 DEBUG nova.compute.manager [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] No waiting events found dispatching network-vif-plugged-0ed64e76-c243-4659-b377-9fbb6676af64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.737 2 WARNING nova.compute.manager [req-4b3b1cad-36ff-49d3-a045-ee9ca885f6f8 req-f1c29906-9002-4269-a0e8-e1f9c6f0b3b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Received unexpected event network-vif-plugged-0ed64e76-c243-4659-b377-9fbb6676af64 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.738 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.741 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407749.7405188, 579516d4-d0f6-455a-9fb5-4c03edb08cfb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.741 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.743 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.745 2 INFO nova.virt.libvirt.driver [-] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Instance spawned successfully.#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.746 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.764 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.768 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.768 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.769 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.769 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.770 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.770 2 DEBUG nova.virt.libvirt.driver [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.775 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.805 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:22:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:29.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.841 2 INFO nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Took 10.30 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.842 2 DEBUG nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:29 np0005465987 nova_compute[230713]: 2025-10-02 12:22:29.934 2 INFO nova.compute.manager [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Took 11.65 seconds to build instance.#033[00m
Oct  2 08:22:30 np0005465987 nova_compute[230713]: 2025-10-02 12:22:30.156 2 DEBUG oslo_concurrency.lockutils [None req-8696be7c-0aea-4749-a57d-fcc63d09ff90 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:30.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:31 np0005465987 nova_compute[230713]: 2025-10-02 12:22:31.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:31.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:32.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:33.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:34.050 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:34 np0005465987 nova_compute[230713]: 2025-10-02 12:22:34.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:34.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:35.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:35 np0005465987 podman[265014]: 2025-10-02 12:22:35.855574089 +0000 UTC m=+0.064427245 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=multipathd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 08:22:35 np0005465987 podman[265013]: 2025-10-02 12:22:35.880774203 +0000 UTC m=+0.091283205 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct  2 08:22:35 np0005465987 podman[265012]: 2025-10-02 12:22:35.888674041 +0000 UTC m=+0.099245254 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:22:35 np0005465987 nova_compute[230713]: 2025-10-02 12:22:35.892 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407740.8903065, a6738e6f-93ec-43f8-bb0b-f6a930d3ab63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:35 np0005465987 nova_compute[230713]: 2025-10-02 12:22:35.892 2 INFO nova.compute.manager [-] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:22:35 np0005465987 nova_compute[230713]: 2025-10-02 12:22:35.923 2 DEBUG nova.compute.manager [None req-4f30da01-41a0-44b0-a2cf-7a3871829214 - - - - - -] [instance: a6738e6f-93ec-43f8-bb0b-f6a930d3ab63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:36 np0005465987 nova_compute[230713]: 2025-10-02 12:22:36.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:36.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:37.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:38.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:39 np0005465987 nova_compute[230713]: 2025-10-02 12:22:39.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:39.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.053 2 DEBUG oslo_concurrency.lockutils [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.054 2 DEBUG oslo_concurrency.lockutils [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.054 2 DEBUG oslo_concurrency.lockutils [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.055 2 DEBUG oslo_concurrency.lockutils [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.055 2 DEBUG oslo_concurrency.lockutils [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.056 2 INFO nova.compute.manager [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Terminating instance#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.057 2 DEBUG nova.compute.manager [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:22:40 np0005465987 kernel: tap0ed64e76-c2 (unregistering): left promiscuous mode
Oct  2 08:22:40 np0005465987 NetworkManager[44910]: <info>  [1759407760.1112] device (tap0ed64e76-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:22:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:40Z|00310|binding|INFO|Releasing lport 0ed64e76-c243-4659-b377-9fbb6676af64 from this chassis (sb_readonly=0)
Oct  2 08:22:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:40Z|00311|binding|INFO|Setting lport 0ed64e76-c243-4659-b377-9fbb6676af64 down in Southbound
Oct  2 08:22:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:40Z|00312|binding|INFO|Removing iface tap0ed64e76-c2 ovn-installed in OVS
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.140 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:b7:01 10.100.0.13'], port_security=['fa:16:3e:0f:b7:01 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '579516d4-d0f6-455a-9fb5-4c03edb08cfb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '91d108e807094b0fa8e63a923d2269ee', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1f5f828a-9c59-4be3-90d2-ef448e431573', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91243388-a708-4071-bc5f-90666534b8e0, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0ed64e76-c243-4659-b377-9fbb6676af64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.142 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0ed64e76-c243-4659-b377-9fbb6676af64 in datapath 81df0bd4-1de1-409c-8730-5d718bbb9ab0 unbound from our chassis#033[00m
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.144 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81df0bd4-1de1-409c-8730-5d718bbb9ab0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.145 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac72969-4db5-4c04-81cf-514055d40600]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.146 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0 namespace which is not needed anymore#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:40 np0005465987 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct  2 08:22:40 np0005465987 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000060.scope: Consumed 11.359s CPU time.
Oct  2 08:22:40 np0005465987 systemd-machined[188335]: Machine qemu-41-instance-00000060 terminated.
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.294 2 INFO nova.virt.libvirt.driver [-] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Instance destroyed successfully.#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.295 2 DEBUG nova.objects.instance [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lazy-loading 'resources' on Instance uuid 579516d4-d0f6-455a-9fb5-4c03edb08cfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:22:40 np0005465987 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[264997]: [NOTICE]   (265001) : haproxy version is 2.8.14-c23fe91
Oct  2 08:22:40 np0005465987 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[264997]: [NOTICE]   (265001) : path to executable is /usr/sbin/haproxy
Oct  2 08:22:40 np0005465987 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[264997]: [WARNING]  (265001) : Exiting Master process...
Oct  2 08:22:40 np0005465987 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[264997]: [WARNING]  (265001) : Exiting Master process...
Oct  2 08:22:40 np0005465987 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[264997]: [ALERT]    (265001) : Current worker (265003) exited with code 143 (Terminated)
Oct  2 08:22:40 np0005465987 neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0[264997]: [WARNING]  (265001) : All workers exited. Exiting... (0)
Oct  2 08:22:40 np0005465987 systemd[1]: libpod-7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa.scope: Deactivated successfully.
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.309 2 DEBUG nova.virt.libvirt.vif [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-478094513',display_name='tempest-ListServersNegativeTestJSON-server-478094513-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-478094513-3',id=96,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-10-02T12:22:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='91d108e807094b0fa8e63a923d2269ee',ramdisk_id='',reservation_id='r-jgpyr0ft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1913538533',owner_user_name='tempest-ListServersNegativeTestJSON-1913538533-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:29Z,user_data=None,user_id='93e805bcb0e047ca9d45c653f5ec913d',uuid=579516d4-d0f6-455a-9fb5-4c03edb08cfb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ed64e76-c243-4659-b377-9fbb6676af64", "address": "fa:16:3e:0f:b7:01", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed64e76-c2", "ovs_interfaceid": "0ed64e76-c243-4659-b377-9fbb6676af64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.309 2 DEBUG nova.network.os_vif_util [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Converting VIF {"id": "0ed64e76-c243-4659-b377-9fbb6676af64", "address": "fa:16:3e:0f:b7:01", "network": {"id": "81df0bd4-1de1-409c-8730-5d718bbb9ab0", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-681141919-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "91d108e807094b0fa8e63a923d2269ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed64e76-c2", "ovs_interfaceid": "0ed64e76-c243-4659-b377-9fbb6676af64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.310 2 DEBUG nova.network.os_vif_util [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:b7:01,bridge_name='br-int',has_traffic_filtering=True,id=0ed64e76-c243-4659-b377-9fbb6676af64,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed64e76-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.311 2 DEBUG os_vif [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:b7:01,bridge_name='br-int',has_traffic_filtering=True,id=0ed64e76-c243-4659-b377-9fbb6676af64,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed64e76-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:40 np0005465987 podman[265101]: 2025-10-02 12:22:40.313603848 +0000 UTC m=+0.072705043 container died 7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.314 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ed64e76-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.321 2 INFO os_vif [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:b7:01,bridge_name='br-int',has_traffic_filtering=True,id=0ed64e76-c243-4659-b377-9fbb6676af64,network=Network(81df0bd4-1de1-409c-8730-5d718bbb9ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed64e76-c2')#033[00m
Oct  2 08:22:40 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa-userdata-shm.mount: Deactivated successfully.
Oct  2 08:22:40 np0005465987 systemd[1]: var-lib-containers-storage-overlay-2efa6502e11b828b49a0aca8b6259c84e6dccbf1659ab5f0b2854c63f72fb513-merged.mount: Deactivated successfully.
Oct  2 08:22:40 np0005465987 podman[265101]: 2025-10-02 12:22:40.388150711 +0000 UTC m=+0.147251896 container cleanup 7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:22:40 np0005465987 systemd[1]: libpod-conmon-7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa.scope: Deactivated successfully.
Oct  2 08:22:40 np0005465987 podman[265154]: 2025-10-02 12:22:40.483967609 +0000 UTC m=+0.059284833 container remove 7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.489 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b896f48c-712d-42bd-a269-2529e91b729d]: (4, ('Thu Oct  2 12:22:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0 (7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa)\n7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa\nThu Oct  2 12:22:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0 (7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa)\n7109c5ef6453c7cd57a308d2638520931d8a0d7b53c536d07fff1abd9e05c5aa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.491 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5c80dd8f-e1ac-4c4c-88d8-392704a804a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.491 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81df0bd4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:40 np0005465987 kernel: tap81df0bd4-10: left promiscuous mode
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.509 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[64e1ffbb-e988-491c-9d23-50b0d79ba5b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.532 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[488ca373-ae56-4a36-bda8-ddb20a699a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.533 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[17535e2a-92a6-4d9d-a8a7-cd8ab90e7c93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:40.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.550 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[19ed076b-339a-4895-8fca-88d0aa6ce77e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581592, 'reachable_time': 40644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265172, 'error': None, 'target': 'ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.553 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81df0bd4-1de1-409c-8730-5d718bbb9ab0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:22:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:22:40.553 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e39eee-cb40-4d85-8128-6fb81062c7cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:40 np0005465987 systemd[1]: run-netns-ovnmeta\x2d81df0bd4\x2d1de1\x2d409c\x2d8730\x2d5d718bbb9ab0.mount: Deactivated successfully.
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.601 2 DEBUG nova.compute.manager [req-2adab0aa-cdc4-4496-a27b-a6669b34ca10 req-533c915c-259e-45d7-8398-e41d5a216a29 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Received event network-vif-unplugged-0ed64e76-c243-4659-b377-9fbb6676af64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.602 2 DEBUG oslo_concurrency.lockutils [req-2adab0aa-cdc4-4496-a27b-a6669b34ca10 req-533c915c-259e-45d7-8398-e41d5a216a29 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.602 2 DEBUG oslo_concurrency.lockutils [req-2adab0aa-cdc4-4496-a27b-a6669b34ca10 req-533c915c-259e-45d7-8398-e41d5a216a29 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.602 2 DEBUG oslo_concurrency.lockutils [req-2adab0aa-cdc4-4496-a27b-a6669b34ca10 req-533c915c-259e-45d7-8398-e41d5a216a29 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.603 2 DEBUG nova.compute.manager [req-2adab0aa-cdc4-4496-a27b-a6669b34ca10 req-533c915c-259e-45d7-8398-e41d5a216a29 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] No waiting events found dispatching network-vif-unplugged-0ed64e76-c243-4659-b377-9fbb6676af64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:22:40 np0005465987 nova_compute[230713]: 2025-10-02 12:22:40.603 2 DEBUG nova.compute.manager [req-2adab0aa-cdc4-4496-a27b-a6669b34ca10 req-533c915c-259e-45d7-8398-e41d5a216a29 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Received event network-vif-unplugged-0ed64e76-c243-4659-b377-9fbb6676af64 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:22:41 np0005465987 nova_compute[230713]: 2025-10-02 12:22:41.088 2 INFO nova.virt.libvirt.driver [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Deleting instance files /var/lib/nova/instances/579516d4-d0f6-455a-9fb5-4c03edb08cfb_del#033[00m
Oct  2 08:22:41 np0005465987 nova_compute[230713]: 2025-10-02 12:22:41.089 2 INFO nova.virt.libvirt.driver [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Deletion of /var/lib/nova/instances/579516d4-d0f6-455a-9fb5-4c03edb08cfb_del complete#033[00m
Oct  2 08:22:41 np0005465987 nova_compute[230713]: 2025-10-02 12:22:41.145 2 INFO nova.compute.manager [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Took 1.09 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:22:41 np0005465987 nova_compute[230713]: 2025-10-02 12:22:41.146 2 DEBUG oslo.service.loopingcall [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:22:41 np0005465987 nova_compute[230713]: 2025-10-02 12:22:41.147 2 DEBUG nova.compute.manager [-] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:22:41 np0005465987 nova_compute[230713]: 2025-10-02 12:22:41.147 2 DEBUG nova.network.neutron [-] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:22:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:41.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:41 np0005465987 nova_compute[230713]: 2025-10-02 12:22:41.927 2 DEBUG nova.network.neutron [-] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:41 np0005465987 nova_compute[230713]: 2025-10-02 12:22:41.974 2 INFO nova.compute.manager [-] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Took 0.83 seconds to deallocate network for instance.#033[00m
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.030 2 DEBUG oslo_concurrency.lockutils [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.031 2 DEBUG oslo_concurrency.lockutils [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.109 2 DEBUG oslo_concurrency.processutils [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:42 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1013036572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.515 2 DEBUG oslo_concurrency.processutils [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.524 2 DEBUG nova.compute.provider_tree [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:22:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:42.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.560 2 DEBUG nova.scheduler.client.report [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.592 2 DEBUG oslo_concurrency.lockutils [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.640 2 INFO nova.scheduler.client.report [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Deleted allocations for instance 579516d4-d0f6-455a-9fb5-4c03edb08cfb#033[00m
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.717 2 DEBUG oslo_concurrency.lockutils [None req-735c8c3d-c221-4235-9111-b762e38d1cce 93e805bcb0e047ca9d45c653f5ec913d 91d108e807094b0fa8e63a923d2269ee - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.739 2 DEBUG nova.compute.manager [req-59200d2e-4d3f-46a4-a025-ba04fa4ba7a8 req-3d3ecc85-c5c9-4a1c-8ec6-597f71762a62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Received event network-vif-plugged-0ed64e76-c243-4659-b377-9fbb6676af64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.739 2 DEBUG oslo_concurrency.lockutils [req-59200d2e-4d3f-46a4-a025-ba04fa4ba7a8 req-3d3ecc85-c5c9-4a1c-8ec6-597f71762a62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.739 2 DEBUG oslo_concurrency.lockutils [req-59200d2e-4d3f-46a4-a025-ba04fa4ba7a8 req-3d3ecc85-c5c9-4a1c-8ec6-597f71762a62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.739 2 DEBUG oslo_concurrency.lockutils [req-59200d2e-4d3f-46a4-a025-ba04fa4ba7a8 req-3d3ecc85-c5c9-4a1c-8ec6-597f71762a62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "579516d4-d0f6-455a-9fb5-4c03edb08cfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.740 2 DEBUG nova.compute.manager [req-59200d2e-4d3f-46a4-a025-ba04fa4ba7a8 req-3d3ecc85-c5c9-4a1c-8ec6-597f71762a62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] No waiting events found dispatching network-vif-plugged-0ed64e76-c243-4659-b377-9fbb6676af64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:22:42 np0005465987 nova_compute[230713]: 2025-10-02 12:22:42.740 2 WARNING nova.compute.manager [req-59200d2e-4d3f-46a4-a025-ba04fa4ba7a8 req-3d3ecc85-c5c9-4a1c-8ec6-597f71762a62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Received unexpected event network-vif-plugged-0ed64e76-c243-4659-b377-9fbb6676af64 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:22:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:43.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:43 np0005465987 nova_compute[230713]: 2025-10-02 12:22:43.951 2 DEBUG nova.compute.manager [req-10339c6a-7a2c-4e94-acf2-8c6dd5f6dbe7 req-30c5dadb-9016-44f0-b9d0-95b0314035a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Received event network-vif-deleted-0ed64e76-c243-4659-b377-9fbb6676af64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:44 np0005465987 nova_compute[230713]: 2025-10-02 12:22:44.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:44.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:45 np0005465987 nova_compute[230713]: 2025-10-02 12:22:45.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:45.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:46.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:46 np0005465987 nova_compute[230713]: 2025-10-02 12:22:46.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:47.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:47 np0005465987 nova_compute[230713]: 2025-10-02 12:22:47.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:47 np0005465987 nova_compute[230713]: 2025-10-02 12:22:47.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:47 np0005465987 nova_compute[230713]: 2025-10-02 12:22:47.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:22:47 np0005465987 nova_compute[230713]: 2025-10-02 12:22:47.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:22:48 np0005465987 nova_compute[230713]: 2025-10-02 12:22:48.118 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:48 np0005465987 nova_compute[230713]: 2025-10-02 12:22:48.119 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:48 np0005465987 nova_compute[230713]: 2025-10-02 12:22:48.119 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:22:48 np0005465987 nova_compute[230713]: 2025-10-02 12:22:48.119 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 63cd9316-0da7-4274-b2dc-3501e55e6617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:22:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:48.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:49 np0005465987 nova_compute[230713]: 2025-10-02 12:22:49.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:22:49Z|00313|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:22:49 np0005465987 nova_compute[230713]: 2025-10-02 12:22:49.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:49.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:50 np0005465987 nova_compute[230713]: 2025-10-02 12:22:50.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:50.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:50 np0005465987 nova_compute[230713]: 2025-10-02 12:22:50.713 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Updating instance_info_cache with network_info: [{"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:50 np0005465987 nova_compute[230713]: 2025-10-02 12:22:50.737 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:50 np0005465987 nova_compute[230713]: 2025-10-02 12:22:50.738 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:22:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:51.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:52.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:52 np0005465987 nova_compute[230713]: 2025-10-02 12:22:52.675 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "2c1402ac-76cb-4a35-b50e-eb00991cb113" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:52 np0005465987 nova_compute[230713]: 2025-10-02 12:22:52.676 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:52 np0005465987 nova_compute[230713]: 2025-10-02 12:22:52.694 2 DEBUG nova.compute.manager [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:22:52 np0005465987 nova_compute[230713]: 2025-10-02 12:22:52.774 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:52 np0005465987 nova_compute[230713]: 2025-10-02 12:22:52.774 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:52 np0005465987 nova_compute[230713]: 2025-10-02 12:22:52.779 2 DEBUG nova.virt.hardware [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:22:52 np0005465987 nova_compute[230713]: 2025-10-02 12:22:52.780 2 INFO nova.compute.claims [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:22:52 np0005465987 nova_compute[230713]: 2025-10-02 12:22:52.888 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:53 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/477719117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.326 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.331 2 DEBUG nova.compute.provider_tree [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.356 2 DEBUG nova.scheduler.client.report [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.401 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.401 2 DEBUG nova.compute.manager [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.468 2 DEBUG nova.compute.manager [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.469 2 DEBUG nova.network.neutron [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.497 2 INFO nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.709 2 DEBUG nova.compute.manager [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:22:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.785 2 DEBUG nova.compute.manager [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.787 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.788 2 INFO nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Creating image(s)#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.825 2 DEBUG nova.storage.rbd_utils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 2c1402ac-76cb-4a35-b50e-eb00991cb113_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.863 2 DEBUG nova.storage.rbd_utils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 2c1402ac-76cb-4a35-b50e-eb00991cb113_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:53.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.898 2 DEBUG nova.storage.rbd_utils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 2c1402ac-76cb-4a35-b50e-eb00991cb113_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.901 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.972 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.973 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.973 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.973 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:53 np0005465987 nova_compute[230713]: 2025-10-02 12:22:53.998 2 DEBUG nova.storage.rbd_utils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 2c1402ac-76cb-4a35-b50e-eb00991cb113_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:54 np0005465987 nova_compute[230713]: 2025-10-02 12:22:54.001 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 2c1402ac-76cb-4a35-b50e-eb00991cb113_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:54 np0005465987 nova_compute[230713]: 2025-10-02 12:22:54.096 2 DEBUG nova.policy [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a9f7faffac7240869a0196df1ddda7e5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:22:54 np0005465987 nova_compute[230713]: 2025-10-02 12:22:54.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:54.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:22:54 np0005465987 nova_compute[230713]: 2025-10-02 12:22:54.688 2 DEBUG nova.network.neutron [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Successfully created port: faff344b-ce2e-467a-9675-e2b441c73e07 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:22:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:22:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 29K writes, 115K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s#012Cumulative WAL: 29K writes, 9738 syncs, 3.00 writes per sync, written: 0.11 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8853 writes, 34K keys, 8853 commit groups, 1.0 writes per commit group, ingest: 39.23 MB, 0.07 MB/s#012Interval WAL: 8852 writes, 3240 syncs, 2.73 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:22:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:22:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2234734662' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:22:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:22:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2234734662' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:22:55 np0005465987 nova_compute[230713]: 2025-10-02 12:22:55.292 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407760.2910242, 579516d4-d0f6-455a-9fb5-4c03edb08cfb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:55 np0005465987 nova_compute[230713]: 2025-10-02 12:22:55.293 2 INFO nova.compute.manager [-] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:22:55 np0005465987 nova_compute[230713]: 2025-10-02 12:22:55.311 2 DEBUG nova.compute.manager [None req-1c81538a-bb65-41c0-b118-661ae9402053 - - - - - -] [instance: 579516d4-d0f6-455a-9fb5-4c03edb08cfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:55 np0005465987 nova_compute[230713]: 2025-10-02 12:22:55.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:55.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:55 np0005465987 nova_compute[230713]: 2025-10-02 12:22:55.874 2 DEBUG nova.network.neutron [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Successfully updated port: faff344b-ce2e-467a-9675-e2b441c73e07 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:22:55 np0005465987 nova_compute[230713]: 2025-10-02 12:22:55.893 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "refresh_cache-2c1402ac-76cb-4a35-b50e-eb00991cb113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:55 np0005465987 nova_compute[230713]: 2025-10-02 12:22:55.893 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquired lock "refresh_cache-2c1402ac-76cb-4a35-b50e-eb00991cb113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:55 np0005465987 nova_compute[230713]: 2025-10-02 12:22:55.893 2 DEBUG nova.network.neutron [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:22:55 np0005465987 nova_compute[230713]: 2025-10-02 12:22:55.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:56 np0005465987 nova_compute[230713]: 2025-10-02 12:22:55.999 2 DEBUG nova.compute.manager [req-e7f26ed6-3d8f-4e43-9da2-9e32325ffa98 req-06b43b80-de0b-496f-a8af-7f96b3799943 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Received event network-changed-faff344b-ce2e-467a-9675-e2b441c73e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:56 np0005465987 nova_compute[230713]: 2025-10-02 12:22:56.000 2 DEBUG nova.compute.manager [req-e7f26ed6-3d8f-4e43-9da2-9e32325ffa98 req-06b43b80-de0b-496f-a8af-7f96b3799943 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Refreshing instance network info cache due to event network-changed-faff344b-ce2e-467a-9675-e2b441c73e07. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:22:56 np0005465987 nova_compute[230713]: 2025-10-02 12:22:56.000 2 DEBUG oslo_concurrency.lockutils [req-e7f26ed6-3d8f-4e43-9da2-9e32325ffa98 req-06b43b80-de0b-496f-a8af-7f96b3799943 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-2c1402ac-76cb-4a35-b50e-eb00991cb113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:56 np0005465987 nova_compute[230713]: 2025-10-02 12:22:56.346 2 DEBUG nova.network.neutron [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:22:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:56.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:56 np0005465987 podman[265312]: 2025-10-02 12:22:56.858514271 +0000 UTC m=+0.072296870 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 08:22:56 np0005465987 nova_compute[230713]: 2025-10-02 12:22:56.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:57 np0005465987 nova_compute[230713]: 2025-10-02 12:22:57.396 2 DEBUG nova.network.neutron [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Updating instance_info_cache with network_info: [{"id": "faff344b-ce2e-467a-9675-e2b441c73e07", "address": "fa:16:3e:33:9c:0b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaff344b-ce", "ovs_interfaceid": "faff344b-ce2e-467a-9675-e2b441c73e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:57 np0005465987 nova_compute[230713]: 2025-10-02 12:22:57.523 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Releasing lock "refresh_cache-2c1402ac-76cb-4a35-b50e-eb00991cb113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:57 np0005465987 nova_compute[230713]: 2025-10-02 12:22:57.523 2 DEBUG nova.compute.manager [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Instance network_info: |[{"id": "faff344b-ce2e-467a-9675-e2b441c73e07", "address": "fa:16:3e:33:9c:0b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaff344b-ce", "ovs_interfaceid": "faff344b-ce2e-467a-9675-e2b441c73e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:22:57 np0005465987 nova_compute[230713]: 2025-10-02 12:22:57.524 2 DEBUG oslo_concurrency.lockutils [req-e7f26ed6-3d8f-4e43-9da2-9e32325ffa98 req-06b43b80-de0b-496f-a8af-7f96b3799943 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-2c1402ac-76cb-4a35-b50e-eb00991cb113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:57 np0005465987 nova_compute[230713]: 2025-10-02 12:22:57.524 2 DEBUG nova.network.neutron [req-e7f26ed6-3d8f-4e43-9da2-9e32325ffa98 req-06b43b80-de0b-496f-a8af-7f96b3799943 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Refreshing network info cache for port faff344b-ce2e-467a-9675-e2b441c73e07 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:22:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:57.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:57 np0005465987 nova_compute[230713]: 2025-10-02 12:22:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:57 np0005465987 nova_compute[230713]: 2025-10-02 12:22:57.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:22:57 np0005465987 nova_compute[230713]: 2025-10-02 12:22:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:57 np0005465987 nova_compute[230713]: 2025-10-02 12:22:57.989 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:57 np0005465987 nova_compute[230713]: 2025-10-02 12:22:57.990 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:57 np0005465987 nova_compute[230713]: 2025-10-02 12:22:57.990 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:57 np0005465987 nova_compute[230713]: 2025-10-02 12:22:57.990 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:22:57 np0005465987 nova_compute[230713]: 2025-10-02 12:22:57.990 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/885307893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.432 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
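Nova shells out to `ceph df --format=json` here to size the RBD-backed disk pool. A sketch of reading per-pool stats from that JSON — the sample payload below is illustrative and much smaller than real `ceph df` output:

```python
import json

# Illustrative sample shaped like `ceph df --format=json` output;
# the real command returns many additional fields.
sample = json.loads('''
{"stats": {"total_bytes": 32212254720,
           "total_avail_bytes": 22388998144,
           "total_used_bytes": 9823256576},
 "pools": [{"name": "vms",
            "stats": {"bytes_used": 2147483648,
                      "max_avail": 11194499072}}]}
''')

def pool_usage(df, pool_name):
    """Return (used_gib, avail_gib) for one pool, or None if absent."""
    for pool in df["pools"]:
        if pool["name"] == pool_name:
            gib = 1024 ** 3
            return (pool["stats"]["bytes_used"] / gib,
                    pool["stats"]["max_avail"] / gib)
    return None

print(pool_usage(sample, "vms"))  # (2.0, ~10.43)
```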
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.520 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.521 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:22:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:58.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.663 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.664 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4220MB free_disk=20.85171890258789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.664 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.664 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.725 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 63cd9316-0da7-4274-b2dc-3501e55e6617 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.726 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 2c1402ac-76cb-4a35-b50e-eb00991cb113 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.726 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.726 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:22:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.776 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.878 2 DEBUG nova.network.neutron [req-e7f26ed6-3d8f-4e43-9da2-9e32325ffa98 req-06b43b80-de0b-496f-a8af-7f96b3799943 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Updated VIF entry in instance network info cache for port faff344b-ce2e-467a-9675-e2b441c73e07. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.878 2 DEBUG nova.network.neutron [req-e7f26ed6-3d8f-4e43-9da2-9e32325ffa98 req-06b43b80-de0b-496f-a8af-7f96b3799943 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Updating instance_info_cache with network_info: [{"id": "faff344b-ce2e-467a-9675-e2b441c73e07", "address": "fa:16:3e:33:9c:0b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaff344b-ce", "ovs_interfaceid": "faff344b-ce2e-467a-9675-e2b441c73e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:58 np0005465987 nova_compute[230713]: 2025-10-02 12:22:58.895 2 DEBUG oslo_concurrency.lockutils [req-e7f26ed6-3d8f-4e43-9da2-9e32325ffa98 req-06b43b80-de0b-496f-a8af-7f96b3799943 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-2c1402ac-76cb-4a35-b50e-eb00991cb113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:59 np0005465987 nova_compute[230713]: 2025-10-02 12:22:59.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2726592695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:59 np0005465987 nova_compute[230713]: 2025-10-02 12:22:59.259 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:59 np0005465987 nova_compute[230713]: 2025-10-02 12:22:59.268 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:22:59 np0005465987 nova_compute[230713]: 2025-10-02 12:22:59.287 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
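Placement derives schedulable capacity from each inventory record as `(total - reserved) * allocation_ratio`. Applied to the inventory data logged above, that gives 32 schedulable VCPUs, 7167 MB of RAM, and about 17.1 GB of disk:

```python
def capacity(inv):
    """Effective schedulable capacity per placement's inventory model."""
    return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

# Inventory values copied from the log line above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    print(rc, capacity(inv))
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~17.1 (float)
```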
Oct  2 08:22:59 np0005465987 nova_compute[230713]: 2025-10-02 12:22:59.316 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:22:59 np0005465987 nova_compute[230713]: 2025-10-02 12:22:59.316 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:22:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:22:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:59.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:00 np0005465987 nova_compute[230713]: 2025-10-02 12:23:00.316 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:00 np0005465987 nova_compute[230713]: 2025-10-02 12:23:00.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:00.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:01.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.222 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 2c1402ac-76cb-4a35-b50e-eb00991cb113_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.302 2 DEBUG nova.storage.rbd_utils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] resizing rbd image 2c1402ac-76cb-4a35-b50e-eb00991cb113_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
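The resize target 1073741824 above is the flavor's root disk (root_gb=1 for m1.nano, per the flavor logged later in this trace) converted to bytes. A one-line check of that conversion:

```python
def root_gb_to_bytes(root_gb):
    """Flavor root disk size in GiB -> bytes, as passed to rbd resize."""
    return root_gb * 1024 ** 3

print(root_gb_to_bytes(1))  # 1073741824
```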
Oct  2 08:23:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:02.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.662 2 DEBUG nova.objects.instance [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c1402ac-76cb-4a35-b50e-eb00991cb113 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.681 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.681 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Ensure instance console log exists: /var/lib/nova/instances/2c1402ac-76cb-4a35-b50e-eb00991cb113/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.682 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.684 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.684 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.686 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Start _get_guest_xml network_info=[{"id": "faff344b-ce2e-467a-9675-e2b441c73e07", "address": "fa:16:3e:33:9c:0b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaff344b-ce", "ovs_interfaceid": "faff344b-ce2e-467a-9675-e2b441c73e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.690 2 WARNING nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.696 2 DEBUG nova.virt.libvirt.host [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.696 2 DEBUG nova.virt.libvirt.host [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.699 2 DEBUG nova.virt.libvirt.host [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.700 2 DEBUG nova.virt.libvirt.host [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.701 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.701 2 DEBUG nova.virt.hardware [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.701 2 DEBUG nova.virt.hardware [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.701 2 DEBUG nova.virt.hardware [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.702 2 DEBUG nova.virt.hardware [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.702 2 DEBUG nova.virt.hardware [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.702 2 DEBUG nova.virt.hardware [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.702 2 DEBUG nova.virt.hardware [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.702 2 DEBUG nova.virt.hardware [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.703 2 DEBUG nova.virt.hardware [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.703 2 DEBUG nova.virt.hardware [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.703 2 DEBUG nova.virt.hardware [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
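The topology search above enumerates (sockets, cores, threads) factorizations of the vCPU count within the 65536/65536/65536 limits; for 1 vCPU only 1:1:1 survives. A simplified sketch of that enumeration — not nova's actual algorithm, which also applies preferences and ordering:

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """All (sockets, cores, threads) triples whose product equals vcpus."""
    topos = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(vcpus // s, max_cores) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // s // c
            if t <= max_threads:
                topos.append((s, c, t))
    return topos

print(possible_topologies(1))  # [(1, 1, 1)]
print(possible_topologies(4))
```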
Oct  2 08:23:02 np0005465987 nova_compute[230713]: 2025-10-02 12:23:02.705 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:23:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:23:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:23:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4172738426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.139 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.165 2 DEBUG nova.storage.rbd_utils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 2c1402ac-76cb-4a35-b50e-eb00991cb113_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.168 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:23:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1954220450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.599 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.602 2 DEBUG nova.virt.libvirt.vif [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1587090159',display_name='tempest-DeleteServersTestJSON-server-1587090159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1587090159',id=97,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-3mfd28l8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:53Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=2c1402ac-76cb-4a35-b50e-eb00991cb113,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faff344b-ce2e-467a-9675-e2b441c73e07", "address": "fa:16:3e:33:9c:0b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaff344b-ce", "ovs_interfaceid": "faff344b-ce2e-467a-9675-e2b441c73e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.603 2 DEBUG nova.network.os_vif_util [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "faff344b-ce2e-467a-9675-e2b441c73e07", "address": "fa:16:3e:33:9c:0b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaff344b-ce", "ovs_interfaceid": "faff344b-ce2e-467a-9675-e2b441c73e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.604 2 DEBUG nova.network.os_vif_util [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:9c:0b,bridge_name='br-int',has_traffic_filtering=True,id=faff344b-ce2e-467a-9675-e2b441c73e07,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaff344b-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.606 2 DEBUG nova.objects.instance [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c1402ac-76cb-4a35-b50e-eb00991cb113 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.621 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  <uuid>2c1402ac-76cb-4a35-b50e-eb00991cb113</uuid>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  <name>instance-00000061</name>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <nova:name>tempest-DeleteServersTestJSON-server-1587090159</nova:name>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:23:02</nova:creationTime>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <nova:user uuid="a9f7faffac7240869a0196df1ddda7e5">tempest-DeleteServersTestJSON-1602490521-project-member</nova:user>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <nova:project uuid="1c2c11ebecb14f3188f35ea473c4ca02">tempest-DeleteServersTestJSON-1602490521</nova:project>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <nova:port uuid="faff344b-ce2e-467a-9675-e2b441c73e07">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <entry name="serial">2c1402ac-76cb-4a35-b50e-eb00991cb113</entry>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <entry name="uuid">2c1402ac-76cb-4a35-b50e-eb00991cb113</entry>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/2c1402ac-76cb-4a35-b50e-eb00991cb113_disk">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/2c1402ac-76cb-4a35-b50e-eb00991cb113_disk.config">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:33:9c:0b"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <target dev="tapfaff344b-ce"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/2c1402ac-76cb-4a35-b50e-eb00991cb113/console.log" append="off"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:23:03 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:23:03 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:23:03 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:23:03 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.623 2 DEBUG nova.compute.manager [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Preparing to wait for external event network-vif-plugged-faff344b-ce2e-467a-9675-e2b441c73e07 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.623 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.623 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.624 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.624 2 DEBUG nova.virt.libvirt.vif [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1587090159',display_name='tempest-DeleteServersTestJSON-server-1587090159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1587090159',id=97,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-3mfd28l8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:53Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=2c1402ac-76cb-4a35-b50e-eb00991cb113,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faff344b-ce2e-467a-9675-e2b441c73e07", "address": "fa:16:3e:33:9c:0b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaff344b-ce", "ovs_interfaceid": "faff344b-ce2e-467a-9675-e2b441c73e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.625 2 DEBUG nova.network.os_vif_util [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "faff344b-ce2e-467a-9675-e2b441c73e07", "address": "fa:16:3e:33:9c:0b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaff344b-ce", "ovs_interfaceid": "faff344b-ce2e-467a-9675-e2b441c73e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.626 2 DEBUG nova.network.os_vif_util [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:9c:0b,bridge_name='br-int',has_traffic_filtering=True,id=faff344b-ce2e-467a-9675-e2b441c73e07,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaff344b-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.626 2 DEBUG os_vif [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:9c:0b,bridge_name='br-int',has_traffic_filtering=True,id=faff344b-ce2e-467a-9675-e2b441c73e07,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaff344b-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.629 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.629 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.634 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaff344b-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.635 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfaff344b-ce, col_values=(('external_ids', {'iface-id': 'faff344b-ce2e-467a-9675-e2b441c73e07', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:9c:0b', 'vm-uuid': '2c1402ac-76cb-4a35-b50e-eb00991cb113'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:03 np0005465987 NetworkManager[44910]: <info>  [1759407783.6389] manager: (tapfaff344b-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.647 2 INFO os_vif [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:9c:0b,bridge_name='br-int',has_traffic_filtering=True,id=faff344b-ce2e-467a-9675-e2b441c73e07,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaff344b-ce')#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.741 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.742 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.743 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No VIF found with MAC fa:16:3e:33:9c:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.743 2 INFO nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Using config drive#033[00m
Oct  2 08:23:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:03 np0005465987 nova_compute[230713]: 2025-10-02 12:23:03.780 2 DEBUG nova.storage.rbd_utils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 2c1402ac-76cb-4a35-b50e-eb00991cb113_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:03.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:04 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:23:04 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:23:04 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:23:04 np0005465987 nova_compute[230713]: 2025-10-02 12:23:04.133 2 INFO nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Creating config drive at /var/lib/nova/instances/2c1402ac-76cb-4a35-b50e-eb00991cb113/disk.config#033[00m
Oct  2 08:23:04 np0005465987 nova_compute[230713]: 2025-10-02 12:23:04.138 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c1402ac-76cb-4a35-b50e-eb00991cb113/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp76b2aig7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:04 np0005465987 nova_compute[230713]: 2025-10-02 12:23:04.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:04 np0005465987 nova_compute[230713]: 2025-10-02 12:23:04.278 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c1402ac-76cb-4a35-b50e-eb00991cb113/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp76b2aig7" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:04 np0005465987 nova_compute[230713]: 2025-10-02 12:23:04.315 2 DEBUG nova.storage.rbd_utils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image 2c1402ac-76cb-4a35-b50e-eb00991cb113_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:04 np0005465987 nova_compute[230713]: 2025-10-02 12:23:04.318 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c1402ac-76cb-4a35-b50e-eb00991cb113/disk.config 2c1402ac-76cb-4a35-b50e-eb00991cb113_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:04.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:04 np0005465987 nova_compute[230713]: 2025-10-02 12:23:04.797 2 DEBUG oslo_concurrency.processutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c1402ac-76cb-4a35-b50e-eb00991cb113/disk.config 2c1402ac-76cb-4a35-b50e-eb00991cb113_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:04 np0005465987 nova_compute[230713]: 2025-10-02 12:23:04.798 2 INFO nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Deleting local config drive /var/lib/nova/instances/2c1402ac-76cb-4a35-b50e-eb00991cb113/disk.config because it was imported into RBD.#033[00m
Oct  2 08:23:04 np0005465987 kernel: tapfaff344b-ce: entered promiscuous mode
Oct  2 08:23:04 np0005465987 NetworkManager[44910]: <info>  [1759407784.8568] manager: (tapfaff344b-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Oct  2 08:23:04 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:04Z|00314|binding|INFO|Claiming lport faff344b-ce2e-467a-9675-e2b441c73e07 for this chassis.
Oct  2 08:23:04 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:04Z|00315|binding|INFO|faff344b-ce2e-467a-9675-e2b441c73e07: Claiming fa:16:3e:33:9c:0b 10.100.0.14
Oct  2 08:23:04 np0005465987 nova_compute[230713]: 2025-10-02 12:23:04.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:04.876 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:9c:0b 10.100.0.14'], port_security=['fa:16:3e:33:9c:0b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2c1402ac-76cb-4a35-b50e-eb00991cb113', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7754c79a-cca5-48c7-9169-831eaad23ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c0d053f-a096-4f8c-8162-5ef19e29b5d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45b5774e-2213-45dd-ab74-f2a3868d167c, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=faff344b-ce2e-467a-9675-e2b441c73e07) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:04.877 138325 INFO neutron.agent.ovn.metadata.agent [-] Port faff344b-ce2e-467a-9675-e2b441c73e07 in datapath 7754c79a-cca5-48c7-9169-831eaad23ccc bound to our chassis#033[00m
Oct  2 08:23:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:04.879 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7754c79a-cca5-48c7-9169-831eaad23ccc#033[00m
Oct  2 08:23:04 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:04Z|00316|binding|INFO|Setting lport faff344b-ce2e-467a-9675-e2b441c73e07 ovn-installed in OVS
Oct  2 08:23:04 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:04Z|00317|binding|INFO|Setting lport faff344b-ce2e-467a-9675-e2b441c73e07 up in Southbound
Oct  2 08:23:04 np0005465987 nova_compute[230713]: 2025-10-02 12:23:04.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:04 np0005465987 systemd-udevd[265715]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:23:04 np0005465987 nova_compute[230713]: 2025-10-02 12:23:04.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:04.892 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[21d516b0-2fff-4725-8e6e-de655fb7b7e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:04.893 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7754c79a-c1 in ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:23:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:04.896 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7754c79a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:23:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:04.896 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fac9a2db-7dca-476c-9c39-804c3e7a0f1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:04 np0005465987 systemd-machined[188335]: New machine qemu-42-instance-00000061.
Oct  2 08:23:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:04.897 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5c3378-d85d-4f26-ba78-a5a6071e7f2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:04 np0005465987 NetworkManager[44910]: <info>  [1759407784.8988] device (tapfaff344b-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:23:04 np0005465987 NetworkManager[44910]: <info>  [1759407784.8998] device (tapfaff344b-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:23:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:04.908 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[b822e73b-2ce5-463d-9438-a767e253c9be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:04 np0005465987 systemd[1]: Started Virtual Machine qemu-42-instance-00000061.
Oct  2 08:23:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:04.935 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[898af350-098f-4e1e-886a-0769e18e3d3c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:04.963 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[dc62a31f-e39e-4aee-97cb-757a97fbef7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:04.966 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f2020420-1b1c-4f0b-8b29-bb511f95f442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:04 np0005465987 NetworkManager[44910]: <info>  [1759407784.9680] manager: (tap7754c79a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/166)
Oct  2 08:23:04 np0005465987 systemd-udevd[265719]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.000 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[377686e3-ea32-4ded-9fa2-2da4a46b22e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.004 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9d7140-fa0a-4a27-b27e-94d49246dbf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:05 np0005465987 NetworkManager[44910]: <info>  [1759407785.0274] device (tap7754c79a-c0): carrier: link connected
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.034 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[bb1f1cf4-c683-4efc-bf74-0e6821844239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.049 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[11c3934f-8620-4d72-9f62-9666e7c5d125]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7754c79a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b0:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585360, 'reachable_time': 31009, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265749, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.063 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[86725661-11d3-4573-8719-4111983746e1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:b018'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585360, 'tstamp': 585360}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265750, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.082 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dbab5281-13a3-4b98-bf90-cf043fa73d26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7754c79a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b0:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585360, 'reachable_time': 31009, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265751, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.110 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1478c1cc-1ee7-400c-9342-15fe2ccb6e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.162 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad61a76-966e-4046-aaaa-d35fe46fac73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.163 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7754c79a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.164 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.164 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7754c79a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:05 np0005465987 kernel: tap7754c79a-c0: entered promiscuous mode
Oct  2 08:23:05 np0005465987 nova_compute[230713]: 2025-10-02 12:23:05.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:05 np0005465987 NetworkManager[44910]: <info>  [1759407785.1669] manager: (tap7754c79a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.168 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7754c79a-c0, col_values=(('external_ids', {'iface-id': 'b1ce5636-6283-470c-ab5e-aac212c1256d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:05Z|00318|binding|INFO|Releasing lport b1ce5636-6283-470c-ab5e-aac212c1256d from this chassis (sb_readonly=0)
Oct  2 08:23:05 np0005465987 nova_compute[230713]: 2025-10-02 12:23:05.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.183 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.185 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc266f0-6710-40a2-a607-880fbdcd94ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.186 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-7754c79a-cca5-48c7-9169-831eaad23ccc
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 7754c79a-cca5-48c7-9169-831eaad23ccc
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:23:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:05.187 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'env', 'PROCESS_TAG=haproxy-7754c79a-cca5-48c7-9169-831eaad23ccc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7754c79a-cca5-48c7-9169-831eaad23ccc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:23:05 np0005465987 podman[265819]: 2025-10-02 12:23:05.630064081 +0000 UTC m=+0.112236671 container create 67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:23:05 np0005465987 podman[265819]: 2025-10-02 12:23:05.542368696 +0000 UTC m=+0.024541266 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:23:05 np0005465987 systemd[1]: Started libpod-conmon-67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172.scope.
Oct  2 08:23:05 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:23:05 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05a39647a6f32a57aaf9f068b6352b9061014c3482ca60b15b736a584c2e684/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:23:05 np0005465987 podman[265819]: 2025-10-02 12:23:05.750050764 +0000 UTC m=+0.232223344 container init 67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:23:05 np0005465987 podman[265819]: 2025-10-02 12:23:05.755828533 +0000 UTC m=+0.238001083 container start 67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:23:05 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[265840]: [NOTICE]   (265844) : New worker (265846) forked
Oct  2 08:23:05 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[265840]: [NOTICE]   (265844) : Loading success.
Oct  2 08:23:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:23:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:05.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:23:06 np0005465987 nova_compute[230713]: 2025-10-02 12:23:06.194 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407786.1939268, 2c1402ac-76cb-4a35-b50e-eb00991cb113 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:23:06 np0005465987 nova_compute[230713]: 2025-10-02 12:23:06.194 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] VM Started (Lifecycle Event)
Oct  2 08:23:06 np0005465987 nova_compute[230713]: 2025-10-02 12:23:06.220 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:23:06 np0005465987 nova_compute[230713]: 2025-10-02 12:23:06.224 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407786.1939964, 2c1402ac-76cb-4a35-b50e-eb00991cb113 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:23:06 np0005465987 nova_compute[230713]: 2025-10-02 12:23:06.225 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] VM Paused (Lifecycle Event)
Oct  2 08:23:06 np0005465987 nova_compute[230713]: 2025-10-02 12:23:06.246 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:23:06 np0005465987 nova_compute[230713]: 2025-10-02 12:23:06.249 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:23:06 np0005465987 nova_compute[230713]: 2025-10-02 12:23:06.267 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:23:06 np0005465987 podman[265857]: 2025-10-02 12:23:06.442622194 +0000 UTC m=+0.076250791 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:23:06 np0005465987 podman[265856]: 2025-10-02 12:23:06.463106128 +0000 UTC m=+0.096749875 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, managed_by=edpm_ansible)
Oct  2 08:23:06 np0005465987 podman[265855]: 2025-10-02 12:23:06.475998243 +0000 UTC m=+0.114645897 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:23:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:06.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:06 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Oct  2 08:23:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:06.993224) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:23:06 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Oct  2 08:23:06 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407786993291, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1616, "num_deletes": 255, "total_data_size": 3349837, "memory_usage": 3393672, "flush_reason": "Manual Compaction"}
Oct  2 08:23:06 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407787007631, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 1425644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41684, "largest_seqno": 43295, "table_properties": {"data_size": 1420038, "index_size": 2746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15248, "raw_average_key_size": 21, "raw_value_size": 1407664, "raw_average_value_size": 2005, "num_data_blocks": 119, "num_entries": 702, "num_filter_entries": 702, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407668, "oldest_key_time": 1759407668, "file_creation_time": 1759407786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 14453 microseconds, and 7334 cpu microseconds.
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.007682) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 1425644 bytes OK
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.007704) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.009640) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.009661) EVENT_LOG_v1 {"time_micros": 1759407787009654, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.009683) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 3342302, prev total WAL file size 3342302, number of live WAL files 2.
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.011421) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323536' seq:72057594037927935, type:22 .. '6D6772737461740031353038' seq:0, type:0; will stop at (end)
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(1392KB)], [78(11MB)]
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407787011475, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 13070342, "oldest_snapshot_seqno": -1}
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 6895 keys, 9977289 bytes, temperature: kUnknown
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407787088809, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 9977289, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9931568, "index_size": 27369, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 176144, "raw_average_key_size": 25, "raw_value_size": 9808533, "raw_average_value_size": 1422, "num_data_blocks": 1092, "num_entries": 6895, "num_filter_entries": 6895, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759407787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.089159) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9977289 bytes
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.090916) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.8 rd, 128.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 11.1 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(16.2) write-amplify(7.0) OK, records in: 7372, records dropped: 477 output_compression: NoCompression
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.090948) EVENT_LOG_v1 {"time_micros": 1759407787090934, "job": 48, "event": "compaction_finished", "compaction_time_micros": 77438, "compaction_time_cpu_micros": 45831, "output_level": 6, "num_output_files": 1, "total_output_size": 9977289, "num_input_records": 7372, "num_output_records": 6895, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407787091529, "job": 48, "event": "table_file_deletion", "file_number": 80}
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407787095220, "job": 48, "event": "table_file_deletion", "file_number": 78}
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.011255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.095327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.095333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.095336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.095339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:23:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:07.095342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:23:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:07.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:08.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.608 2 DEBUG nova.compute.manager [req-5d28ee7c-0595-45dc-9b47-a5e9da36798e req-35e1f016-aa4f-4484-902c-b69eb85d53c6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Received event network-vif-plugged-faff344b-ce2e-467a-9675-e2b441c73e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.608 2 DEBUG oslo_concurrency.lockutils [req-5d28ee7c-0595-45dc-9b47-a5e9da36798e req-35e1f016-aa4f-4484-902c-b69eb85d53c6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.609 2 DEBUG oslo_concurrency.lockutils [req-5d28ee7c-0595-45dc-9b47-a5e9da36798e req-35e1f016-aa4f-4484-902c-b69eb85d53c6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.610 2 DEBUG oslo_concurrency.lockutils [req-5d28ee7c-0595-45dc-9b47-a5e9da36798e req-35e1f016-aa4f-4484-902c-b69eb85d53c6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.610 2 DEBUG nova.compute.manager [req-5d28ee7c-0595-45dc-9b47-a5e9da36798e req-35e1f016-aa4f-4484-902c-b69eb85d53c6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Processing event network-vif-plugged-faff344b-ce2e-467a-9675-e2b441c73e07 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.611 2 DEBUG nova.compute.manager [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.616 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407788.6159983, 2c1402ac-76cb-4a35-b50e-eb00991cb113 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.616 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] VM Resumed (Lifecycle Event)
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.619 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.624 2 INFO nova.virt.libvirt.driver [-] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Instance spawned successfully.
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.624 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.727 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.733 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.734 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.734 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.735 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.735 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.736 2 DEBUG nova.virt.libvirt.driver [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.740 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:23:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:08 np0005465987 nova_compute[230713]: 2025-10-02 12:23:08.942 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:23:09 np0005465987 nova_compute[230713]: 2025-10-02 12:23:09.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:23:09 np0005465987 nova_compute[230713]: 2025-10-02 12:23:09.221 2 INFO nova.compute.manager [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Took 15.44 seconds to spawn the instance on the hypervisor.
Oct  2 08:23:09 np0005465987 nova_compute[230713]: 2025-10-02 12:23:09.221 2 DEBUG nova.compute.manager [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:23:09 np0005465987 nova_compute[230713]: 2025-10-02 12:23:09.379 2 INFO nova.compute.manager [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Took 16.63 seconds to build instance.
Oct  2 08:23:09 np0005465987 nova_compute[230713]: 2025-10-02 12:23:09.407 2 DEBUG oslo_concurrency.lockutils [None req-74172e14-96bf-48a7-b0ac-5f1ea525335e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:23:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:09.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:10.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:10 np0005465987 nova_compute[230713]: 2025-10-02 12:23:10.688 2 DEBUG nova.compute.manager [req-a8bf958b-f0d3-4d81-b3c1-58e08d093c13 req-84bb3b32-f954-496c-9507-57226a63fa11 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Received event network-vif-plugged-faff344b-ce2e-467a-9675-e2b441c73e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:10 np0005465987 nova_compute[230713]: 2025-10-02 12:23:10.689 2 DEBUG oslo_concurrency.lockutils [req-a8bf958b-f0d3-4d81-b3c1-58e08d093c13 req-84bb3b32-f954-496c-9507-57226a63fa11 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:10 np0005465987 nova_compute[230713]: 2025-10-02 12:23:10.689 2 DEBUG oslo_concurrency.lockutils [req-a8bf958b-f0d3-4d81-b3c1-58e08d093c13 req-84bb3b32-f954-496c-9507-57226a63fa11 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:10 np0005465987 nova_compute[230713]: 2025-10-02 12:23:10.689 2 DEBUG oslo_concurrency.lockutils [req-a8bf958b-f0d3-4d81-b3c1-58e08d093c13 req-84bb3b32-f954-496c-9507-57226a63fa11 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:10 np0005465987 nova_compute[230713]: 2025-10-02 12:23:10.690 2 DEBUG nova.compute.manager [req-a8bf958b-f0d3-4d81-b3c1-58e08d093c13 req-84bb3b32-f954-496c-9507-57226a63fa11 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] No waiting events found dispatching network-vif-plugged-faff344b-ce2e-467a-9675-e2b441c73e07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:10 np0005465987 nova_compute[230713]: 2025-10-02 12:23:10.690 2 WARNING nova.compute.manager [req-a8bf958b-f0d3-4d81-b3c1-58e08d093c13 req-84bb3b32-f954-496c-9507-57226a63fa11 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Received unexpected event network-vif-plugged-faff344b-ce2e-467a-9675-e2b441c73e07 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:23:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:23:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:23:11 np0005465987 nova_compute[230713]: 2025-10-02 12:23:11.818 2 DEBUG nova.objects.instance [None req-042a2adf-1d88-4135-9baf-0904b547fd6d a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c1402ac-76cb-4a35-b50e-eb00991cb113 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:11 np0005465987 nova_compute[230713]: 2025-10-02 12:23:11.846 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407791.846536, 2c1402ac-76cb-4a35-b50e-eb00991cb113 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:11 np0005465987 nova_compute[230713]: 2025-10-02 12:23:11.847 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:23:11 np0005465987 nova_compute[230713]: 2025-10-02 12:23:11.865 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:11 np0005465987 nova_compute[230713]: 2025-10-02 12:23:11.869 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:23:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:11.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:11 np0005465987 nova_compute[230713]: 2025-10-02 12:23:11.922 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  2 08:23:12 np0005465987 kernel: tapfaff344b-ce (unregistering): left promiscuous mode
Oct  2 08:23:12 np0005465987 NetworkManager[44910]: <info>  [1759407792.3790] device (tapfaff344b-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:12 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:12Z|00319|binding|INFO|Releasing lport faff344b-ce2e-467a-9675-e2b441c73e07 from this chassis (sb_readonly=0)
Oct  2 08:23:12 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:12Z|00320|binding|INFO|Setting lport faff344b-ce2e-467a-9675-e2b441c73e07 down in Southbound
Oct  2 08:23:12 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:12Z|00321|binding|INFO|Removing iface tapfaff344b-ce ovn-installed in OVS
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.400 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:9c:0b 10.100.0.14'], port_security=['fa:16:3e:33:9c:0b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2c1402ac-76cb-4a35-b50e-eb00991cb113', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7754c79a-cca5-48c7-9169-831eaad23ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c0d053f-a096-4f8c-8162-5ef19e29b5d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45b5774e-2213-45dd-ab74-f2a3868d167c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=faff344b-ce2e-467a-9675-e2b441c73e07) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.403 138325 INFO neutron.agent.ovn.metadata.agent [-] Port faff344b-ce2e-467a-9675-e2b441c73e07 in datapath 7754c79a-cca5-48c7-9169-831eaad23ccc unbound from our chassis#033[00m
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.405 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7754c79a-cca5-48c7-9169-831eaad23ccc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.407 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a488fc58-790f-4c36-bcb7-3dfb495fcb05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.407 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc namespace which is not needed anymore#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:12 np0005465987 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000061.scope: Deactivated successfully.
Oct  2 08:23:12 np0005465987 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000061.scope: Consumed 4.352s CPU time.
Oct  2 08:23:12 np0005465987 systemd-machined[188335]: Machine qemu-42-instance-00000061 terminated.
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.525 2 DEBUG nova.compute.manager [None req-042a2adf-1d88-4135-9baf-0904b547fd6d a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:12.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:12 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[265840]: [NOTICE]   (265844) : haproxy version is 2.8.14-c23fe91
Oct  2 08:23:12 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[265840]: [NOTICE]   (265844) : path to executable is /usr/sbin/haproxy
Oct  2 08:23:12 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[265840]: [WARNING]  (265844) : Exiting Master process...
Oct  2 08:23:12 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[265840]: [WARNING]  (265844) : Exiting Master process...
Oct  2 08:23:12 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[265840]: [ALERT]    (265844) : Current worker (265846) exited with code 143 (Terminated)
Oct  2 08:23:12 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[265840]: [WARNING]  (265844) : All workers exited. Exiting... (0)
Oct  2 08:23:12 np0005465987 systemd[1]: libpod-67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172.scope: Deactivated successfully.
Oct  2 08:23:12 np0005465987 podman[266000]: 2025-10-02 12:23:12.613045893 +0000 UTC m=+0.075853900 container died 67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:23:12 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172-userdata-shm.mount: Deactivated successfully.
Oct  2 08:23:12 np0005465987 systemd[1]: var-lib-containers-storage-overlay-a05a39647a6f32a57aaf9f068b6352b9061014c3482ca60b15b736a584c2e684-merged.mount: Deactivated successfully.
Oct  2 08:23:12 np0005465987 podman[266000]: 2025-10-02 12:23:12.662728801 +0000 UTC m=+0.125536778 container cleanup 67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:23:12 np0005465987 systemd[1]: libpod-conmon-67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172.scope: Deactivated successfully.
Oct  2 08:23:12 np0005465987 podman[266033]: 2025-10-02 12:23:12.768251937 +0000 UTC m=+0.066808561 container remove 67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.776 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cf23ad39-b378-42e3-8fb6-245204537b3b]: (4, ('Thu Oct  2 12:23:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc (67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172)\n67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172\nThu Oct  2 12:23:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc (67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172)\n67094ed21fedf13883e14bcf078c357b834e780a07f56dc4cbeb8e8dac0b6172\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.779 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[df5ce63d-7d98-4e52-82d6-9a5cbb9b8caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.780 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7754c79a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:12 np0005465987 kernel: tap7754c79a-c0: left promiscuous mode
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.821 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[97e63776-4240-4388-9756-92e14c0375f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.867 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[871f7d3d-2007-493d-a3c7-140e685f4c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.869 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[48c21fd8-9b45-4025-8014-274626a5c007]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.897 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[edae8571-8f94-4591-a0cd-15d02408586e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585353, 'reachable_time': 35728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266051, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:12 np0005465987 systemd[1]: run-netns-ovnmeta\x2d7754c79a\x2dcca5\x2d48c7\x2d9169\x2d831eaad23ccc.mount: Deactivated successfully.
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.902 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:23:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:12.902 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[bc151ce6-891c-4e5f-bbad-2062571abe40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.990 2 DEBUG nova.compute.manager [req-2ee82124-6339-4563-ad37-ee118f7855d6 req-9e11d264-0c77-4793-a897-0dcc75906cd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Received event network-vif-unplugged-faff344b-ce2e-467a-9675-e2b441c73e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.990 2 DEBUG oslo_concurrency.lockutils [req-2ee82124-6339-4563-ad37-ee118f7855d6 req-9e11d264-0c77-4793-a897-0dcc75906cd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.991 2 DEBUG oslo_concurrency.lockutils [req-2ee82124-6339-4563-ad37-ee118f7855d6 req-9e11d264-0c77-4793-a897-0dcc75906cd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.991 2 DEBUG oslo_concurrency.lockutils [req-2ee82124-6339-4563-ad37-ee118f7855d6 req-9e11d264-0c77-4793-a897-0dcc75906cd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.991 2 DEBUG nova.compute.manager [req-2ee82124-6339-4563-ad37-ee118f7855d6 req-9e11d264-0c77-4793-a897-0dcc75906cd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] No waiting events found dispatching network-vif-unplugged-faff344b-ce2e-467a-9675-e2b441c73e07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.992 2 WARNING nova.compute.manager [req-2ee82124-6339-4563-ad37-ee118f7855d6 req-9e11d264-0c77-4793-a897-0dcc75906cd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Received unexpected event network-vif-unplugged-faff344b-ce2e-467a-9675-e2b441c73e07 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.992 2 DEBUG nova.compute.manager [req-2ee82124-6339-4563-ad37-ee118f7855d6 req-9e11d264-0c77-4793-a897-0dcc75906cd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Received event network-vif-plugged-faff344b-ce2e-467a-9675-e2b441c73e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.992 2 DEBUG oslo_concurrency.lockutils [req-2ee82124-6339-4563-ad37-ee118f7855d6 req-9e11d264-0c77-4793-a897-0dcc75906cd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.992 2 DEBUG oslo_concurrency.lockutils [req-2ee82124-6339-4563-ad37-ee118f7855d6 req-9e11d264-0c77-4793-a897-0dcc75906cd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.992 2 DEBUG oslo_concurrency.lockutils [req-2ee82124-6339-4563-ad37-ee118f7855d6 req-9e11d264-0c77-4793-a897-0dcc75906cd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.993 2 DEBUG nova.compute.manager [req-2ee82124-6339-4563-ad37-ee118f7855d6 req-9e11d264-0c77-4793-a897-0dcc75906cd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] No waiting events found dispatching network-vif-plugged-faff344b-ce2e-467a-9675-e2b441c73e07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:12 np0005465987 nova_compute[230713]: 2025-10-02 12:23:12.993 2 WARNING nova.compute.manager [req-2ee82124-6339-4563-ad37-ee118f7855d6 req-9e11d264-0c77-4793-a897-0dcc75906cd6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Received unexpected event network-vif-plugged-faff344b-ce2e-467a-9675-e2b441c73e07 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 08:23:13 np0005465987 nova_compute[230713]: 2025-10-02 12:23:13.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:13.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.387 2 DEBUG oslo_concurrency.lockutils [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "2c1402ac-76cb-4a35-b50e-eb00991cb113" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.387 2 DEBUG oslo_concurrency.lockutils [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.388 2 DEBUG oslo_concurrency.lockutils [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.388 2 DEBUG oslo_concurrency.lockutils [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.388 2 DEBUG oslo_concurrency.lockutils [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.390 2 INFO nova.compute.manager [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Terminating instance#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.391 2 DEBUG nova.compute.manager [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.399 2 INFO nova.virt.libvirt.driver [-] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Instance destroyed successfully.#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.399 2 DEBUG nova.objects.instance [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'resources' on Instance uuid 2c1402ac-76cb-4a35-b50e-eb00991cb113 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.421 2 DEBUG nova.virt.libvirt.vif [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1587090159',display_name='tempest-DeleteServersTestJSON-server-1587090159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1587090159',id=97,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:23:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-3mfd28l8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:23:12Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=2c1402ac-76cb-4a35-b50e-eb00991cb113,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "faff344b-ce2e-467a-9675-e2b441c73e07", "address": "fa:16:3e:33:9c:0b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaff344b-ce", "ovs_interfaceid": "faff344b-ce2e-467a-9675-e2b441c73e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.422 2 DEBUG nova.network.os_vif_util [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "faff344b-ce2e-467a-9675-e2b441c73e07", "address": "fa:16:3e:33:9c:0b", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaff344b-ce", "ovs_interfaceid": "faff344b-ce2e-467a-9675-e2b441c73e07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.423 2 DEBUG nova.network.os_vif_util [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:9c:0b,bridge_name='br-int',has_traffic_filtering=True,id=faff344b-ce2e-467a-9675-e2b441c73e07,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaff344b-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.424 2 DEBUG os_vif [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:9c:0b,bridge_name='br-int',has_traffic_filtering=True,id=faff344b-ce2e-467a-9675-e2b441c73e07,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaff344b-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaff344b-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:14 np0005465987 nova_compute[230713]: 2025-10-02 12:23:14.437 2 INFO os_vif [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:9c:0b,bridge_name='br-int',has_traffic_filtering=True,id=faff344b-ce2e-467a-9675-e2b441c73e07,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaff344b-ce')#033[00m
Oct  2 08:23:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:14.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:15 np0005465987 nova_compute[230713]: 2025-10-02 12:23:15.201 2 INFO nova.virt.libvirt.driver [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Deleting instance files /var/lib/nova/instances/2c1402ac-76cb-4a35-b50e-eb00991cb113_del#033[00m
Oct  2 08:23:15 np0005465987 nova_compute[230713]: 2025-10-02 12:23:15.203 2 INFO nova.virt.libvirt.driver [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Deletion of /var/lib/nova/instances/2c1402ac-76cb-4a35-b50e-eb00991cb113_del complete#033[00m
Oct  2 08:23:15 np0005465987 nova_compute[230713]: 2025-10-02 12:23:15.266 2 INFO nova.compute.manager [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:23:15 np0005465987 nova_compute[230713]: 2025-10-02 12:23:15.267 2 DEBUG oslo.service.loopingcall [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:23:15 np0005465987 nova_compute[230713]: 2025-10-02 12:23:15.267 2 DEBUG nova.compute.manager [-] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:23:15 np0005465987 nova_compute[230713]: 2025-10-02 12:23:15.267 2 DEBUG nova.network.neutron [-] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:23:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:15.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:16 np0005465987 nova_compute[230713]: 2025-10-02 12:23:16.362 2 DEBUG nova.network.neutron [-] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:16 np0005465987 nova_compute[230713]: 2025-10-02 12:23:16.387 2 INFO nova.compute.manager [-] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Took 1.12 seconds to deallocate network for instance.#033[00m
Oct  2 08:23:16 np0005465987 nova_compute[230713]: 2025-10-02 12:23:16.439 2 DEBUG oslo_concurrency.lockutils [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:16 np0005465987 nova_compute[230713]: 2025-10-02 12:23:16.440 2 DEBUG oslo_concurrency.lockutils [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:16 np0005465987 nova_compute[230713]: 2025-10-02 12:23:16.515 2 DEBUG oslo_concurrency.processutils [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:16 np0005465987 nova_compute[230713]: 2025-10-02 12:23:16.557 2 DEBUG nova.compute.manager [req-ba0bc559-8207-41a3-9b59-95440a3cf185 req-4ba5a163-f1cf-4064-a448-4467c4797aed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Received event network-vif-deleted-faff344b-ce2e-467a-9675-e2b441c73e07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:16.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:23:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/515492389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:23:17 np0005465987 nova_compute[230713]: 2025-10-02 12:23:17.006 2 DEBUG oslo_concurrency.processutils [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:17 np0005465987 nova_compute[230713]: 2025-10-02 12:23:17.014 2 DEBUG nova.compute.provider_tree [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:23:17 np0005465987 nova_compute[230713]: 2025-10-02 12:23:17.039 2 DEBUG nova.scheduler.client.report [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:23:17 np0005465987 nova_compute[230713]: 2025-10-02 12:23:17.064 2 DEBUG oslo_concurrency.lockutils [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:17 np0005465987 nova_compute[230713]: 2025-10-02 12:23:17.096 2 INFO nova.scheduler.client.report [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Deleted allocations for instance 2c1402ac-76cb-4a35-b50e-eb00991cb113#033[00m
Oct  2 08:23:17 np0005465987 nova_compute[230713]: 2025-10-02 12:23:17.161 2 DEBUG oslo_concurrency.lockutils [None req-bc52b871-a217-4d2b-9293-07a9603fa8cb a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "2c1402ac-76cb-4a35-b50e-eb00991cb113" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:17.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:18.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:19 np0005465987 nova_compute[230713]: 2025-10-02 12:23:19.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:19 np0005465987 nova_compute[230713]: 2025-10-02 12:23:19.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:19.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:20.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:20 np0005465987 nova_compute[230713]: 2025-10-02 12:23:20.680 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:20 np0005465987 nova_compute[230713]: 2025-10-02 12:23:20.680 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:20 np0005465987 nova_compute[230713]: 2025-10-02 12:23:20.707 2 DEBUG nova.compute.manager [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:23:20 np0005465987 nova_compute[230713]: 2025-10-02 12:23:20.783 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:20 np0005465987 nova_compute[230713]: 2025-10-02 12:23:20.783 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:20 np0005465987 nova_compute[230713]: 2025-10-02 12:23:20.793 2 DEBUG nova.virt.hardware [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:23:20 np0005465987 nova_compute[230713]: 2025-10-02 12:23:20.794 2 INFO nova.compute.claims [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:23:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e282 e282: 3 total, 3 up, 3 in
Oct  2 08:23:20 np0005465987 nova_compute[230713]: 2025-10-02 12:23:20.948 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:23:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/770534632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.366 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.372 2 DEBUG nova.compute.provider_tree [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.393 2 DEBUG nova.scheduler.client.report [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.418 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.419 2 DEBUG nova.compute.manager [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.472 2 DEBUG nova.compute.manager [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.473 2 DEBUG nova.network.neutron [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.508 2 INFO nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.533 2 DEBUG nova.compute.manager [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.618 2 DEBUG nova.compute.manager [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.620 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.620 2 INFO nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Creating image(s)#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.655 2 DEBUG nova.storage.rbd_utils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e1e08b12-a046-4fbd-a025-dff040cca766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.691 2 DEBUG nova.storage.rbd_utils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e1e08b12-a046-4fbd-a025-dff040cca766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.722 2 DEBUG nova.storage.rbd_utils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e1e08b12-a046-4fbd-a025-dff040cca766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.726 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.805 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.806 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.807 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.807 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.836 2 DEBUG nova.storage.rbd_utils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e1e08b12-a046-4fbd-a025-dff040cca766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:21 np0005465987 nova_compute[230713]: 2025-10-02 12:23:21.843 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e1e08b12-a046-4fbd-a025-dff040cca766_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:21.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:22 np0005465987 nova_compute[230713]: 2025-10-02 12:23:22.451 2 DEBUG nova.policy [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a9f7faffac7240869a0196df1ddda7e5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:23:22 np0005465987 nova_compute[230713]: 2025-10-02 12:23:22.574 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e1e08b12-a046-4fbd-a025-dff040cca766_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.732s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:23:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:22.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:23:22 np0005465987 nova_compute[230713]: 2025-10-02 12:23:22.683 2 DEBUG nova.storage.rbd_utils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] resizing rbd image e1e08b12-a046-4fbd-a025-dff040cca766_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:23:23 np0005465987 nova_compute[230713]: 2025-10-02 12:23:23.440 2 DEBUG nova.objects.instance [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'migration_context' on Instance uuid e1e08b12-a046-4fbd-a025-dff040cca766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:23 np0005465987 nova_compute[230713]: 2025-10-02 12:23:23.454 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:23:23 np0005465987 nova_compute[230713]: 2025-10-02 12:23:23.454 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Ensure instance console log exists: /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:23:23 np0005465987 nova_compute[230713]: 2025-10-02 12:23:23.454 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:23 np0005465987 nova_compute[230713]: 2025-10-02 12:23:23.455 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:23 np0005465987 nova_compute[230713]: 2025-10-02 12:23:23.455 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:23.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:24 np0005465987 nova_compute[230713]: 2025-10-02 12:23:24.090 2 DEBUG nova.network.neutron [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Successfully created port: a399f412-7fee-426a-9fed-b593373c0a90 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:23:24 np0005465987 nova_compute[230713]: 2025-10-02 12:23:24.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:24 np0005465987 nova_compute[230713]: 2025-10-02 12:23:24.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:24.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:25 np0005465987 nova_compute[230713]: 2025-10-02 12:23:25.585 2 DEBUG nova.network.neutron [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Successfully updated port: a399f412-7fee-426a-9fed-b593373c0a90 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:23:25 np0005465987 nova_compute[230713]: 2025-10-02 12:23:25.600 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:23:25 np0005465987 nova_compute[230713]: 2025-10-02 12:23:25.600 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquired lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:23:25 np0005465987 nova_compute[230713]: 2025-10-02 12:23:25.601 2 DEBUG nova.network.neutron [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:23:25 np0005465987 nova_compute[230713]: 2025-10-02 12:23:25.739 2 DEBUG nova.compute.manager [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-changed-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:25 np0005465987 nova_compute[230713]: 2025-10-02 12:23:25.739 2 DEBUG nova.compute.manager [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Refreshing instance network info cache due to event network-changed-a399f412-7fee-426a-9fed-b593373c0a90. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:23:25 np0005465987 nova_compute[230713]: 2025-10-02 12:23:25.740 2 DEBUG oslo_concurrency.lockutils [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:23:25 np0005465987 nova_compute[230713]: 2025-10-02 12:23:25.801 2 DEBUG nova.network.neutron [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:23:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:25.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:26.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:26 np0005465987 nova_compute[230713]: 2025-10-02 12:23:26.970 2 DEBUG nova.network.neutron [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updating instance_info_cache with network_info: [{"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.020 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Releasing lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.020 2 DEBUG nova.compute.manager [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Instance network_info: |[{"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.021 2 DEBUG oslo_concurrency.lockutils [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.022 2 DEBUG nova.network.neutron [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Refreshing network info cache for port a399f412-7fee-426a-9fed-b593373c0a90 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.027 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Start _get_guest_xml network_info=[{"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.034 2 WARNING nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.043 2 DEBUG nova.virt.libvirt.host [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.044 2 DEBUG nova.virt.libvirt.host [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.051 2 DEBUG nova.virt.libvirt.host [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.052 2 DEBUG nova.virt.libvirt.host [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.054 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.054 2 DEBUG nova.virt.hardware [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.055 2 DEBUG nova.virt.hardware [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.055 2 DEBUG nova.virt.hardware [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.055 2 DEBUG nova.virt.hardware [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.055 2 DEBUG nova.virt.hardware [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.056 2 DEBUG nova.virt.hardware [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.056 2 DEBUG nova.virt.hardware [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.056 2 DEBUG nova.virt.hardware [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.057 2 DEBUG nova.virt.hardware [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.057 2 DEBUG nova.virt.hardware [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.057 2 DEBUG nova.virt.hardware [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.061 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:23:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/393264518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.501 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.538 2 DEBUG nova.storage.rbd_utils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e1e08b12-a046-4fbd-a025-dff040cca766_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.544 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.582 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407792.525426, 2c1402ac-76cb-4a35-b50e-eb00991cb113 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.583 2 INFO nova.compute.manager [-] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.609 2 DEBUG nova.compute.manager [None req-f743ffd5-0096-484d-b25a-9bb1bc6f730c - - - - - -] [instance: 2c1402ac-76cb-4a35-b50e-eb00991cb113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:27 np0005465987 podman[266341]: 2025-10-02 12:23:27.865932592 +0000 UTC m=+0.080113117 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct  2 08:23:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:23:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:27.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:23:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:23:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/138954372' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:23:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:27.950 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:27.950 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:27.951 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.960 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.962 2 DEBUG nova.virt.libvirt.vif [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-850761329',display_name='tempest-DeleteServersTestJSON-server-850761329',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-850761329',id=99,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-biy5z6a6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-16024
90521-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:21Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=e1e08b12-a046-4fbd-a025-dff040cca766,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.962 2 DEBUG nova.network.os_vif_util [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.963 2 DEBUG nova.network.os_vif_util [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.964 2 DEBUG nova.objects.instance [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1e08b12-a046-4fbd-a025-dff040cca766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.979 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  <uuid>e1e08b12-a046-4fbd-a025-dff040cca766</uuid>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  <name>instance-00000063</name>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <nova:name>tempest-DeleteServersTestJSON-server-850761329</nova:name>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:23:27</nova:creationTime>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <nova:user uuid="a9f7faffac7240869a0196df1ddda7e5">tempest-DeleteServersTestJSON-1602490521-project-member</nova:user>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <nova:project uuid="1c2c11ebecb14f3188f35ea473c4ca02">tempest-DeleteServersTestJSON-1602490521</nova:project>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <nova:port uuid="a399f412-7fee-426a-9fed-b593373c0a90">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <entry name="serial">e1e08b12-a046-4fbd-a025-dff040cca766</entry>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <entry name="uuid">e1e08b12-a046-4fbd-a025-dff040cca766</entry>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/e1e08b12-a046-4fbd-a025-dff040cca766_disk">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/e1e08b12-a046-4fbd-a025-dff040cca766_disk.config">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:cd:48:e8"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <target dev="tapa399f412-7f"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/console.log" append="off"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:23:27 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:23:27 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:23:27 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:23:27 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.980 2 DEBUG nova.compute.manager [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Preparing to wait for external event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.981 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.981 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.982 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.983 2 DEBUG nova.virt.libvirt.vif [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-850761329',display_name='tempest-DeleteServersTestJSON-server-850761329',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-850761329',id=99,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-biy5z6a6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTest
JSON-1602490521-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:21Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=e1e08b12-a046-4fbd-a025-dff040cca766,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.984 2 DEBUG nova.network.os_vif_util [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.985 2 DEBUG nova.network.os_vif_util [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.986 2 DEBUG os_vif [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.988 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.993 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa399f412-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.993 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa399f412-7f, col_values=(('external_ids', {'iface-id': 'a399f412-7fee-426a-9fed-b593373c0a90', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:48:e8', 'vm-uuid': 'e1e08b12-a046-4fbd-a025-dff040cca766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:27 np0005465987 NetworkManager[44910]: <info>  [1759407807.9973] manager: (tapa399f412-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Oct  2 08:23:27 np0005465987 nova_compute[230713]: 2025-10-02 12:23:27.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.002 2 INFO os_vif [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f')#033[00m
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.072 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.072 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.073 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] No VIF found with MAC fa:16:3e:cd:48:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.074 2 INFO nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Using config drive#033[00m
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.110 2 DEBUG nova.storage.rbd_utils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e1e08b12-a046-4fbd-a025-dff040cca766_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.519 2 INFO nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Creating config drive at /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/disk.config#033[00m
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.527 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzkg2_pf8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:28.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.671 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzkg2_pf8" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.705 2 DEBUG nova.storage.rbd_utils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] rbd image e1e08b12-a046-4fbd-a025-dff040cca766_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.708 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/disk.config e1e08b12-a046-4fbd-a025-dff040cca766_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.977 2 DEBUG oslo_concurrency.processutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/disk.config e1e08b12-a046-4fbd-a025-dff040cca766_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:28 np0005465987 nova_compute[230713]: 2025-10-02 12:23:28.978 2 INFO nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Deleting local config drive /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/disk.config because it was imported into RBD.#033[00m
Oct  2 08:23:29 np0005465987 NetworkManager[44910]: <info>  [1759407809.0348] manager: (tapa399f412-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/169)
Oct  2 08:23:29 np0005465987 kernel: tapa399f412-7f: entered promiscuous mode
Oct  2 08:23:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:29Z|00322|binding|INFO|Claiming lport a399f412-7fee-426a-9fed-b593373c0a90 for this chassis.
Oct  2 08:23:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:29Z|00323|binding|INFO|a399f412-7fee-426a-9fed-b593373c0a90: Claiming fa:16:3e:cd:48:e8 10.100.0.3
Oct  2 08:23:29 np0005465987 nova_compute[230713]: 2025-10-02 12:23:29.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.055 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:48:e8 10.100.0.3'], port_security=['fa:16:3e:cd:48:e8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e1e08b12-a046-4fbd-a025-dff040cca766', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7754c79a-cca5-48c7-9169-831eaad23ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c0d053f-a096-4f8c-8162-5ef19e29b5d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45b5774e-2213-45dd-ab74-f2a3868d167c, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=a399f412-7fee-426a-9fed-b593373c0a90) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.057 138325 INFO neutron.agent.ovn.metadata.agent [-] Port a399f412-7fee-426a-9fed-b593373c0a90 in datapath 7754c79a-cca5-48c7-9169-831eaad23ccc bound to our chassis#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.059 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7754c79a-cca5-48c7-9169-831eaad23ccc#033[00m
Oct  2 08:23:29 np0005465987 systemd-udevd[266436]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:23:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:29Z|00324|binding|INFO|Setting lport a399f412-7fee-426a-9fed-b593373c0a90 ovn-installed in OVS
Oct  2 08:23:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:29Z|00325|binding|INFO|Setting lport a399f412-7fee-426a-9fed-b593373c0a90 up in Southbound
Oct  2 08:23:29 np0005465987 nova_compute[230713]: 2025-10-02 12:23:29.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.071 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5e626cd3-414d-4a9c-b63c-0e1d70f8e124]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.072 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7754c79a-c1 in ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:23:29 np0005465987 systemd-machined[188335]: New machine qemu-43-instance-00000063.
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.075 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7754c79a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.075 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5389ef8c-5790-4bd2-a923-316a8bc547f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.076 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6f940ca1-b81a-4af8-ab8b-2ff911641e1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 NetworkManager[44910]: <info>  [1759407809.0810] device (tapa399f412-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:23:29 np0005465987 NetworkManager[44910]: <info>  [1759407809.0819] device (tapa399f412-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:23:29 np0005465987 systemd[1]: Started Virtual Machine qemu-43-instance-00000063.
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.087 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[303db3dc-974d-492a-823c-692d04cd7352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.119 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[85ff5c26-3dc6-4f32-9e7a-511891ea1ebc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.151 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[333112bb-37ac-4244-a389-485d7b7f8f97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.157 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8e656997-2f11-4a91-a34c-60edeacac4d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 NetworkManager[44910]: <info>  [1759407809.1591] manager: (tap7754c79a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/170)
Oct  2 08:23:29 np0005465987 systemd-udevd[266440]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.202 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a89188-da43-4f14-a71e-067fc575523c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.206 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[c5462450-1104-4e27-88b8-8d17638e91e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 nova_compute[230713]: 2025-10-02 12:23:29.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:29 np0005465987 NetworkManager[44910]: <info>  [1759407809.2342] device (tap7754c79a-c0): carrier: link connected
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.248 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec0716e-4f29-4dcd-aaa0-cbf8debca53b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.266 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbfdfc2-6a4c-4987-aa15-3a2130482310]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7754c79a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b0:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587780, 'reachable_time': 38125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266469, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.288 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a41a4d51-8d3e-4f78-8c40-215a1c893a9c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:b018'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587780, 'tstamp': 587780}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266470, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.313 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[aef45255-3a6a-4888-9eab-fb2b29579036]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7754c79a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b0:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587780, 'reachable_time': 38125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266471, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.363 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b4311ae9-fa54-4646-b68e-ef4648bf9703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.445 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[79951226-edc4-4b7d-99a5-e49a00f295df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.447 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7754c79a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.447 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.447 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7754c79a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:29 np0005465987 nova_compute[230713]: 2025-10-02 12:23:29.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:29 np0005465987 NetworkManager[44910]: <info>  [1759407809.4503] manager: (tap7754c79a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Oct  2 08:23:29 np0005465987 kernel: tap7754c79a-c0: entered promiscuous mode
Oct  2 08:23:29 np0005465987 nova_compute[230713]: 2025-10-02 12:23:29.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.456 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7754c79a-c0, col_values=(('external_ids', {'iface-id': 'b1ce5636-6283-470c-ab5e-aac212c1256d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:29Z|00326|binding|INFO|Releasing lport b1ce5636-6283-470c-ab5e-aac212c1256d from this chassis (sb_readonly=0)
Oct  2 08:23:29 np0005465987 nova_compute[230713]: 2025-10-02 12:23:29.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:29 np0005465987 nova_compute[230713]: 2025-10-02 12:23:29.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.493 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.494 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7b256d5b-931a-446a-b836-2b0b06fd36f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.495 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-7754c79a-cca5-48c7-9169-831eaad23ccc
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/7754c79a-cca5-48c7-9169-831eaad23ccc.pid.haproxy
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 7754c79a-cca5-48c7-9169-831eaad23ccc
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:23:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:29.495 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'env', 'PROCESS_TAG=haproxy-7754c79a-cca5-48c7-9169-831eaad23ccc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7754c79a-cca5-48c7-9169-831eaad23ccc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:23:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:29.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:29 np0005465987 podman[266521]: 2025-10-02 12:23:29.945335157 +0000 UTC m=+0.110030791 container create 01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:23:29 np0005465987 podman[266521]: 2025-10-02 12:23:29.865490169 +0000 UTC m=+0.030185823 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:23:30 np0005465987 systemd[1]: Started libpod-conmon-01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad.scope.
Oct  2 08:23:30 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:23:30 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa1a39f7b7b87e129e480695969919561a4a629b96db25a7d43501f5499bae48/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:23:30 np0005465987 podman[266521]: 2025-10-02 12:23:30.068234761 +0000 UTC m=+0.232930415 container init 01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  2 08:23:30 np0005465987 podman[266521]: 2025-10-02 12:23:30.074149574 +0000 UTC m=+0.238845208 container start 01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:23:30 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[266560]: [NOTICE]   (266564) : New worker (266566) forked
Oct  2 08:23:30 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[266560]: [NOTICE]   (266564) : Loading success.
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.432 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407810.4311666, e1e08b12-a046-4fbd-a025-dff040cca766 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.432 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] VM Started (Lifecycle Event)#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.466 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.471 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407810.4314005, e1e08b12-a046-4fbd-a025-dff040cca766 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.471 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.511 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.516 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:23:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:30.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.744 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.815 2 DEBUG nova.compute.manager [req-99bd1c9c-cbf1-4d66-aab5-2cb997b6774c req-0740185d-5795-4229-ae3c-996236a8ee8b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.815 2 DEBUG oslo_concurrency.lockutils [req-99bd1c9c-cbf1-4d66-aab5-2cb997b6774c req-0740185d-5795-4229-ae3c-996236a8ee8b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.815 2 DEBUG oslo_concurrency.lockutils [req-99bd1c9c-cbf1-4d66-aab5-2cb997b6774c req-0740185d-5795-4229-ae3c-996236a8ee8b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.816 2 DEBUG oslo_concurrency.lockutils [req-99bd1c9c-cbf1-4d66-aab5-2cb997b6774c req-0740185d-5795-4229-ae3c-996236a8ee8b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.816 2 DEBUG nova.compute.manager [req-99bd1c9c-cbf1-4d66-aab5-2cb997b6774c req-0740185d-5795-4229-ae3c-996236a8ee8b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Processing event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.817 2 DEBUG nova.compute.manager [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.820 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407810.8199368, e1e08b12-a046-4fbd-a025-dff040cca766 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.821 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.822 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.825 2 INFO nova.virt.libvirt.driver [-] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Instance spawned successfully.#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.825 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.839 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.843 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.850 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.850 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.851 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.851 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.851 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.851 2 DEBUG nova.virt.libvirt.driver [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.860 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.935 2 INFO nova.compute.manager [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Took 9.32 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:23:30 np0005465987 nova_compute[230713]: 2025-10-02 12:23:30.935 2 DEBUG nova.compute.manager [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:31 np0005465987 nova_compute[230713]: 2025-10-02 12:23:31.017 2 INFO nova.compute.manager [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Took 10.25 seconds to build instance.#033[00m
Oct  2 08:23:31 np0005465987 nova_compute[230713]: 2025-10-02 12:23:31.033 2 DEBUG oslo_concurrency.lockutils [None req-56b59e6b-3621-48e5-a83d-cc2342069c4e a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:23:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 8569 writes, 43K keys, 8569 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s#012Cumulative WAL: 8569 writes, 8569 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1805 writes, 8801 keys, 1805 commit groups, 1.0 writes per commit group, ingest: 17.26 MB, 0.03 MB/s#012Interval WAL: 1805 writes, 1805 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     48.5      1.08              0.16        24    0.045       0      0       0.0       0.0#012  L6      1/0    9.52 MB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   3.9     96.6     79.8      2.56              0.60        23    0.111    126K    13K       0.0       0.0#012 Sum      1/0    9.52 MB   0.0      0.2     0.1      0.2       0.3      0.1       0.0   4.9     67.9     70.5      3.64              0.76        47    0.077    126K    13K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.3     52.6     52.5      1.31              0.23        12    0.109     41K   3043       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   0.0     96.6     79.8      2.56              0.60        23    0.111    126K    13K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     48.6      1.08              0.16        23    0.047       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.051, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.25 GB write, 0.09 MB/s write, 0.24 GB read, 0.08 MB/s read, 3.6 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 1.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9644871f0#2 capacity: 304.00 MB usage: 27.66 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.00018 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1609,26.68 MB,8.77638%) FilterBlock(47,360.80 KB,0.115902%) IndexBlock(47,640.83 KB,0.205858%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 08:23:31 np0005465987 nova_compute[230713]: 2025-10-02 12:23:31.382 2 DEBUG nova.network.neutron [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updated VIF entry in instance network info cache for port a399f412-7fee-426a-9fed-b593373c0a90. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:23:31 np0005465987 nova_compute[230713]: 2025-10-02 12:23:31.382 2 DEBUG nova.network.neutron [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updating instance_info_cache with network_info: [{"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:31 np0005465987 nova_compute[230713]: 2025-10-02 12:23:31.409 2 DEBUG oslo_concurrency.lockutils [req-b7fd1f85-dc9c-4354-a7f1-32cec3552e3d req-5b1a2c9f-dd8f-4883-b49d-d5c4b13966e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:23:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:31.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:32 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:32Z|00327|binding|INFO|Releasing lport b1ce5636-6283-470c-ab5e-aac212c1256d from this chassis (sb_readonly=0)
Oct  2 08:23:32 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:32Z|00328|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:23:32 np0005465987 nova_compute[230713]: 2025-10-02 12:23:32.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:32.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:32.737 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:32 np0005465987 nova_compute[230713]: 2025-10-02 12:23:32.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:32.740 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:23:32 np0005465987 nova_compute[230713]: 2025-10-02 12:23:32.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:33 np0005465987 nova_compute[230713]: 2025-10-02 12:23:33.618 2 DEBUG nova.compute.manager [req-2e30a5c0-36aa-4b19-a8e6-cda52d2ffac0 req-7df51ee9-87e8-4e74-98d0-1a224d986e37 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:33 np0005465987 nova_compute[230713]: 2025-10-02 12:23:33.618 2 DEBUG oslo_concurrency.lockutils [req-2e30a5c0-36aa-4b19-a8e6-cda52d2ffac0 req-7df51ee9-87e8-4e74-98d0-1a224d986e37 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:33 np0005465987 nova_compute[230713]: 2025-10-02 12:23:33.619 2 DEBUG oslo_concurrency.lockutils [req-2e30a5c0-36aa-4b19-a8e6-cda52d2ffac0 req-7df51ee9-87e8-4e74-98d0-1a224d986e37 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:33 np0005465987 nova_compute[230713]: 2025-10-02 12:23:33.619 2 DEBUG oslo_concurrency.lockutils [req-2e30a5c0-36aa-4b19-a8e6-cda52d2ffac0 req-7df51ee9-87e8-4e74-98d0-1a224d986e37 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:33 np0005465987 nova_compute[230713]: 2025-10-02 12:23:33.619 2 DEBUG nova.compute.manager [req-2e30a5c0-36aa-4b19-a8e6-cda52d2ffac0 req-7df51ee9-87e8-4e74-98d0-1a224d986e37 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] No waiting events found dispatching network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:33 np0005465987 nova_compute[230713]: 2025-10-02 12:23:33.619 2 WARNING nova.compute.manager [req-2e30a5c0-36aa-4b19-a8e6-cda52d2ffac0 req-7df51ee9-87e8-4e74-98d0-1a224d986e37 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received unexpected event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 for instance with vm_state active and task_state resize_prep.#033[00m
Oct  2 08:23:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:33.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:34 np0005465987 nova_compute[230713]: 2025-10-02 12:23:34.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:34.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:23:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:35.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:23:36 np0005465987 nova_compute[230713]: 2025-10-02 12:23:36.281 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:23:36 np0005465987 nova_compute[230713]: 2025-10-02 12:23:36.282 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquired lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:23:36 np0005465987 nova_compute[230713]: 2025-10-02 12:23:36.282 2 DEBUG nova.network.neutron [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:23:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e283 e283: 3 total, 3 up, 3 in
Oct  2 08:23:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:23:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3837116973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:23:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:36.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:36.742 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:36 np0005465987 podman[266577]: 2025-10-02 12:23:36.879058072 +0000 UTC m=+0.075474629 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:23:36 np0005465987 podman[266576]: 2025-10-02 12:23:36.899808574 +0000 UTC m=+0.101378632 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 08:23:36 np0005465987 podman[266575]: 2025-10-02 12:23:36.900266577 +0000 UTC m=+0.107415549 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:23:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e284 e284: 3 total, 3 up, 3 in
Oct  2 08:23:37 np0005465987 nova_compute[230713]: 2025-10-02 12:23:37.841 2 DEBUG nova.network.neutron [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updating instance_info_cache with network_info: [{"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:37 np0005465987 nova_compute[230713]: 2025-10-02 12:23:37.879 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Releasing lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:23:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:37.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:38 np0005465987 nova_compute[230713]: 2025-10-02 12:23:38.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:38 np0005465987 nova_compute[230713]: 2025-10-02 12:23:38.018 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Oct  2 08:23:38 np0005465987 nova_compute[230713]: 2025-10-02 12:23:38.019 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Creating file /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/1328eb1ce8854394be9f7550326d7a17.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Oct  2 08:23:38 np0005465987 nova_compute[230713]: 2025-10-02 12:23:38.019 2 DEBUG oslo_concurrency.processutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/1328eb1ce8854394be9f7550326d7a17.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:38 np0005465987 nova_compute[230713]: 2025-10-02 12:23:38.411 2 DEBUG oslo_concurrency.processutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/1328eb1ce8854394be9f7550326d7a17.tmp" returned: 1 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:38 np0005465987 nova_compute[230713]: 2025-10-02 12:23:38.413 2 DEBUG oslo_concurrency.processutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766/1328eb1ce8854394be9f7550326d7a17.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Oct  2 08:23:38 np0005465987 nova_compute[230713]: 2025-10-02 12:23:38.413 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Creating directory /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Oct  2 08:23:38 np0005465987 nova_compute[230713]: 2025-10-02 12:23:38.414 2 DEBUG oslo_concurrency.processutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:38.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:38 np0005465987 nova_compute[230713]: 2025-10-02 12:23:38.676 2 DEBUG oslo_concurrency.processutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/e1e08b12-a046-4fbd-a025-dff040cca766" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:38 np0005465987 nova_compute[230713]: 2025-10-02 12:23:38.682 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:23:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e285 e285: 3 total, 3 up, 3 in
Oct  2 08:23:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:39 np0005465987 nova_compute[230713]: 2025-10-02 12:23:39.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:39.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:40.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:41 np0005465987 nova_compute[230713]: 2025-10-02 12:23:41.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:41.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:42.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:43 np0005465987 nova_compute[230713]: 2025-10-02 12:23:43.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:43.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:44 np0005465987 nova_compute[230713]: 2025-10-02 12:23:44.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:44.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:44 np0005465987 nova_compute[230713]: 2025-10-02 12:23:44.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:44 np0005465987 nova_compute[230713]: 2025-10-02 12:23:44.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:23:45 np0005465987 nova_compute[230713]: 2025-10-02 12:23:45.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:45Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cd:48:e8 10.100.0.3
Oct  2 08:23:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:45Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:48:e8 10.100.0.3
Oct  2 08:23:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:45.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:46 np0005465987 nova_compute[230713]: 2025-10-02 12:23:46.079 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:46.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e286 e286: 3 total, 3 up, 3 in
Oct  2 08:23:47 np0005465987 nova_compute[230713]: 2025-10-02 12:23:47.029 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:47.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:47 np0005465987 nova_compute[230713]: 2025-10-02 12:23:47.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:47 np0005465987 nova_compute[230713]: 2025-10-02 12:23:47.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:23:47 np0005465987 nova_compute[230713]: 2025-10-02 12:23:47.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:23:48 np0005465987 nova_compute[230713]: 2025-10-02 12:23:48.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:48 np0005465987 nova_compute[230713]: 2025-10-02 12:23:48.136 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:23:48 np0005465987 nova_compute[230713]: 2025-10-02 12:23:48.136 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:23:48 np0005465987 nova_compute[230713]: 2025-10-02 12:23:48.136 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:23:48 np0005465987 nova_compute[230713]: 2025-10-02 12:23:48.137 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 63cd9316-0da7-4274-b2dc-3501e55e6617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:48.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:48 np0005465987 nova_compute[230713]: 2025-10-02 12:23:48.739 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:23:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:49 np0005465987 nova_compute[230713]: 2025-10-02 12:23:49.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:49 np0005465987 nova_compute[230713]: 2025-10-02 12:23:49.737 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Updating instance_info_cache with network_info: [{"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:49 np0005465987 nova_compute[230713]: 2025-10-02 12:23:49.766 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:23:49 np0005465987 nova_compute[230713]: 2025-10-02 12:23:49.767 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:23:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:49.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:50.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:51 np0005465987 kernel: tapa399f412-7f (unregistering): left promiscuous mode
Oct  2 08:23:51 np0005465987 NetworkManager[44910]: <info>  [1759407831.0401] device (tapa399f412-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:51Z|00329|binding|INFO|Releasing lport a399f412-7fee-426a-9fed-b593373c0a90 from this chassis (sb_readonly=0)
Oct  2 08:23:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:51Z|00330|binding|INFO|Setting lport a399f412-7fee-426a-9fed-b593373c0a90 down in Southbound
Oct  2 08:23:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:23:51Z|00331|binding|INFO|Removing iface tapa399f412-7f ovn-installed in OVS
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.064 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:48:e8 10.100.0.3'], port_security=['fa:16:3e:cd:48:e8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e1e08b12-a046-4fbd-a025-dff040cca766', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7754c79a-cca5-48c7-9169-831eaad23ccc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c2c11ebecb14f3188f35ea473c4ca02', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c0d053f-a096-4f8c-8162-5ef19e29b5d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45b5774e-2213-45dd-ab74-f2a3868d167c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=a399f412-7fee-426a-9fed-b593373c0a90) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.065 138325 INFO neutron.agent.ovn.metadata.agent [-] Port a399f412-7fee-426a-9fed-b593373c0a90 in datapath 7754c79a-cca5-48c7-9169-831eaad23ccc unbound from our chassis#033[00m
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.066 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7754c79a-cca5-48c7-9169-831eaad23ccc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.067 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2710564c-1398-42e6-9c01-e55a286057f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.067 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc namespace which is not needed anymore#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:51 np0005465987 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000063.scope: Deactivated successfully.
Oct  2 08:23:51 np0005465987 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000063.scope: Consumed 15.174s CPU time.
Oct  2 08:23:51 np0005465987 systemd-machined[188335]: Machine qemu-43-instance-00000063 terminated.
Oct  2 08:23:51 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[266560]: [NOTICE]   (266564) : haproxy version is 2.8.14-c23fe91
Oct  2 08:23:51 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[266560]: [NOTICE]   (266564) : path to executable is /usr/sbin/haproxy
Oct  2 08:23:51 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[266560]: [WARNING]  (266564) : Exiting Master process...
Oct  2 08:23:51 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[266560]: [ALERT]    (266564) : Current worker (266566) exited with code 143 (Terminated)
Oct  2 08:23:51 np0005465987 neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc[266560]: [WARNING]  (266564) : All workers exited. Exiting... (0)
Oct  2 08:23:51 np0005465987 systemd[1]: libpod-01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad.scope: Deactivated successfully.
Oct  2 08:23:51 np0005465987 podman[266662]: 2025-10-02 12:23:51.239654282 +0000 UTC m=+0.045586837 container died 01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:23:51 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad-userdata-shm.mount: Deactivated successfully.
Oct  2 08:23:51 np0005465987 systemd[1]: var-lib-containers-storage-overlay-fa1a39f7b7b87e129e480695969919561a4a629b96db25a7d43501f5499bae48-merged.mount: Deactivated successfully.
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:51 np0005465987 podman[266662]: 2025-10-02 12:23:51.283332594 +0000 UTC m=+0.089265129 container cleanup 01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:51 np0005465987 systemd[1]: libpod-conmon-01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad.scope: Deactivated successfully.
Oct  2 08:23:51 np0005465987 podman[266702]: 2025-10-02 12:23:51.346428912 +0000 UTC m=+0.041218136 container remove 01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.351 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[85b924ff-ad03-47c6-a3a2-efebf2b72461]: (4, ('Thu Oct  2 12:23:51 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc (01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad)\n01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad\nThu Oct  2 12:23:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc (01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad)\n01b8e1043dc493c38349b9299f23b015da5aa75bf06026fa9e9d52fca08613ad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.354 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb9717b-f22b-47f5-908d-d6c3c8bdb822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.355 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7754c79a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:51 np0005465987 kernel: tap7754c79a-c0: left promiscuous mode
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.374 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[371ae159-9a62-410a-a4ab-f901fd21ef68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.382 2 DEBUG nova.compute.manager [req-f617b69a-342d-4ec4-82df-8ad14ff05b32 req-71627f87-efec-4181-b551-220845caf64b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-vif-unplugged-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.382 2 DEBUG oslo_concurrency.lockutils [req-f617b69a-342d-4ec4-82df-8ad14ff05b32 req-71627f87-efec-4181-b551-220845caf64b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.382 2 DEBUG oslo_concurrency.lockutils [req-f617b69a-342d-4ec4-82df-8ad14ff05b32 req-71627f87-efec-4181-b551-220845caf64b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.382 2 DEBUG oslo_concurrency.lockutils [req-f617b69a-342d-4ec4-82df-8ad14ff05b32 req-71627f87-efec-4181-b551-220845caf64b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.383 2 DEBUG nova.compute.manager [req-f617b69a-342d-4ec4-82df-8ad14ff05b32 req-71627f87-efec-4181-b551-220845caf64b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] No waiting events found dispatching network-vif-unplugged-a399f412-7fee-426a-9fed-b593373c0a90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.383 2 WARNING nova.compute.manager [req-f617b69a-342d-4ec4-82df-8ad14ff05b32 req-71627f87-efec-4181-b551-220845caf64b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received unexpected event network-vif-unplugged-a399f412-7fee-426a-9fed-b593373c0a90 for instance with vm_state active and task_state resize_migrating.#033[00m
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.396 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[235496ce-6bd4-4509-bdee-b0f042067cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.397 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2ea068-8112-4946-a1aa-af94b66c958f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.420 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[21b8802b-c2a2-46ff-8053-85173244d770]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587771, 'reachable_time': 44143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266721, 'error': None, 'target': 'ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.423 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7754c79a-cca5-48c7-9169-831eaad23ccc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:23:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:23:51.424 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[a11b42f1-10f1-4387-9a3d-55ad6a7b656f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:51 np0005465987 systemd[1]: run-netns-ovnmeta\x2d7754c79a\x2dcca5\x2d48c7\x2d9169\x2d831eaad23ccc.mount: Deactivated successfully.
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.756 2 INFO nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.759 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.766 2 INFO nova.virt.libvirt.driver [-] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Instance destroyed successfully.#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.767 2 DEBUG nova.virt.libvirt.vif [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-850761329',display_name='tempest-DeleteServersTestJSON-server-850761329',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-850761329',id=99,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:23:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-biy5z6a6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model
='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:23:35Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=e1e08b12-a046-4fbd-a025-dff040cca766,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-484493292-network", "vif_mac": "fa:16:3e:cd:48:e8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.768 2 DEBUG nova.network.os_vif_util [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-484493292-network", "vif_mac": "fa:16:3e:cd:48:e8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.769 2 DEBUG nova.network.os_vif_util [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.770 2 DEBUG os_vif [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.773 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa399f412-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.780 2 INFO os_vif [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f')#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.786 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.786 2 DEBUG nova.virt.libvirt.driver [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:23:51 np0005465987 nova_compute[230713]: 2025-10-02 12:23:51.943 2 DEBUG neutronclient.v2_0.client [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port a399f412-7fee-426a-9fed-b593373c0a90 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:23:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:51.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:52 np0005465987 nova_compute[230713]: 2025-10-02 12:23:52.102 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:52 np0005465987 nova_compute[230713]: 2025-10-02 12:23:52.103 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:52 np0005465987 nova_compute[230713]: 2025-10-02 12:23:52.104 2 DEBUG oslo_concurrency.lockutils [None req-1abc568b-689a-46ec-b157-be59e0e448e7 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:52.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:53 np0005465987 nova_compute[230713]: 2025-10-02 12:23:53.751 2 DEBUG nova.compute.manager [req-4b478065-79a7-47b8-af5d-9e4a31338838 req-23febb75-7fb5-49ca-be95-c443d47b003d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:53 np0005465987 nova_compute[230713]: 2025-10-02 12:23:53.752 2 DEBUG oslo_concurrency.lockutils [req-4b478065-79a7-47b8-af5d-9e4a31338838 req-23febb75-7fb5-49ca-be95-c443d47b003d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:53 np0005465987 nova_compute[230713]: 2025-10-02 12:23:53.752 2 DEBUG oslo_concurrency.lockutils [req-4b478065-79a7-47b8-af5d-9e4a31338838 req-23febb75-7fb5-49ca-be95-c443d47b003d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:53 np0005465987 nova_compute[230713]: 2025-10-02 12:23:53.752 2 DEBUG oslo_concurrency.lockutils [req-4b478065-79a7-47b8-af5d-9e4a31338838 req-23febb75-7fb5-49ca-be95-c443d47b003d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:53 np0005465987 nova_compute[230713]: 2025-10-02 12:23:53.752 2 DEBUG nova.compute.manager [req-4b478065-79a7-47b8-af5d-9e4a31338838 req-23febb75-7fb5-49ca-be95-c443d47b003d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] No waiting events found dispatching network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:53 np0005465987 nova_compute[230713]: 2025-10-02 12:23:53.753 2 WARNING nova.compute.manager [req-4b478065-79a7-47b8-af5d-9e4a31338838 req-23febb75-7fb5-49ca-be95-c443d47b003d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received unexpected event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:23:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:53 np0005465987 nova_compute[230713]: 2025-10-02 12:23:53.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:53.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:54 np0005465987 nova_compute[230713]: 2025-10-02 12:23:54.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:23:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:54.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:23:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:55.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:55 np0005465987 nova_compute[230713]: 2025-10-02 12:23:55.974 2 DEBUG nova.compute.manager [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-changed-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:55 np0005465987 nova_compute[230713]: 2025-10-02 12:23:55.974 2 DEBUG nova.compute.manager [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Refreshing instance network info cache due to event network-changed-a399f412-7fee-426a-9fed-b593373c0a90. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:23:55 np0005465987 nova_compute[230713]: 2025-10-02 12:23:55.974 2 DEBUG oslo_concurrency.lockutils [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:23:55 np0005465987 nova_compute[230713]: 2025-10-02 12:23:55.974 2 DEBUG oslo_concurrency.lockutils [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:23:55 np0005465987 nova_compute[230713]: 2025-10-02 12:23:55.974 2 DEBUG nova.network.neutron [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Refreshing network info cache for port a399f412-7fee-426a-9fed-b593373c0a90 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:23:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:56.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:56 np0005465987 nova_compute[230713]: 2025-10-02 12:23:56.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:56 np0005465987 nova_compute[230713]: 2025-10-02 12:23:56.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:56 np0005465987 nova_compute[230713]: 2025-10-02 12:23:56.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:56 np0005465987 nova_compute[230713]: 2025-10-02 12:23:56.967 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:56 np0005465987 nova_compute[230713]: 2025-10-02 12:23:56.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:23:56 np0005465987 nova_compute[230713]: 2025-10-02 12:23:56.997 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.005124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407837005160, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 851, "num_deletes": 257, "total_data_size": 1476239, "memory_usage": 1507776, "flush_reason": "Manual Compaction"}
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407837011878, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 972642, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43300, "largest_seqno": 44146, "table_properties": {"data_size": 968677, "index_size": 1681, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9264, "raw_average_key_size": 19, "raw_value_size": 960429, "raw_average_value_size": 2017, "num_data_blocks": 73, "num_entries": 476, "num_filter_entries": 476, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407788, "oldest_key_time": 1759407788, "file_creation_time": 1759407837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 6822 microseconds, and 2910 cpu microseconds.
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.011939) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 972642 bytes OK
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.011968) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.013726) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.013741) EVENT_LOG_v1 {"time_micros": 1759407837013736, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.013760) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1471813, prev total WAL file size 1471813, number of live WAL files 2.
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.014472) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323534' seq:72057594037927935, type:22 .. '6C6F676D0031353036' seq:0, type:0; will stop at (end)
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(949KB)], [81(9743KB)]
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407837014499, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 10949931, "oldest_snapshot_seqno": -1}
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 6841 keys, 10814512 bytes, temperature: kUnknown
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407837076338, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 10814512, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10767874, "index_size": 28414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 176078, "raw_average_key_size": 25, "raw_value_size": 10644563, "raw_average_value_size": 1555, "num_data_blocks": 1133, "num_entries": 6841, "num_filter_entries": 6841, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759407837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.076551) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10814512 bytes
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.077542) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.9 rd, 174.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.5 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(22.4) write-amplify(11.1) OK, records in: 7371, records dropped: 530 output_compression: NoCompression
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.077557) EVENT_LOG_v1 {"time_micros": 1759407837077550, "job": 50, "event": "compaction_finished", "compaction_time_micros": 61903, "compaction_time_cpu_micros": 31146, "output_level": 6, "num_output_files": 1, "total_output_size": 10814512, "num_input_records": 7371, "num_output_records": 6841, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407837077801, "job": 50, "event": "table_file_deletion", "file_number": 83}
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407837079329, "job": 50, "event": "table_file_deletion", "file_number": 81}
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.014366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.079462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.079469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.079473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.079476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:23:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:23:57.079479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:23:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:57.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:23:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:58.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:23:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:58 np0005465987 podman[266722]: 2025-10-02 12:23:58.840105086 +0000 UTC m=+0.058088841 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:23:58 np0005465987 nova_compute[230713]: 2025-10-02 12:23:58.995 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.020 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.021 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.021 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.021 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.021 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e287 e287: 3 total, 3 up, 3 in
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:23:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3772221847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.434 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.577 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.577 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.582 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.582 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.809 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.811 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4263MB free_disk=20.841617584228516GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.811 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.812 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.859 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Migration for instance e1e08b12-a046-4fbd-a025-dff040cca766 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.887 2 INFO nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updating resource usage from migration c0e29430-3f1f-4f18-bd83-d892eb2d6811#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.888 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Starting to track outgoing migration c0e29430-3f1f-4f18-bd83-d892eb2d6811 with flavor cef129e5-cce4-4465-9674-03d3559e8a14 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.931 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 63cd9316-0da7-4274-b2dc-3501e55e6617 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.932 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Migration c0e29430-3f1f-4f18-bd83-d892eb2d6811 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.932 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:23:59 np0005465987 nova_compute[230713]: 2025-10-02 12:23:59.932 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:23:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:23:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:59.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:00 np0005465987 nova_compute[230713]: 2025-10-02 12:24:00.074 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:24:00 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/76348305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:24:00 np0005465987 nova_compute[230713]: 2025-10-02 12:24:00.518 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:00 np0005465987 nova_compute[230713]: 2025-10-02 12:24:00.524 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:24:00 np0005465987 nova_compute[230713]: 2025-10-02 12:24:00.548 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:24:00 np0005465987 nova_compute[230713]: 2025-10-02 12:24:00.574 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:24:00 np0005465987 nova_compute[230713]: 2025-10-02 12:24:00.574 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:00 np0005465987 nova_compute[230713]: 2025-10-02 12:24:00.595 2 DEBUG nova.network.neutron [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updated VIF entry in instance network info cache for port a399f412-7fee-426a-9fed-b593373c0a90. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:24:00 np0005465987 nova_compute[230713]: 2025-10-02 12:24:00.596 2 DEBUG nova.network.neutron [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updating instance_info_cache with network_info: [{"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:24:00 np0005465987 nova_compute[230713]: 2025-10-02 12:24:00.614 2 DEBUG oslo_concurrency.lockutils [req-0e787e3c-b9c4-473b-ace0-5364588aa154 req-3410c0d5-8811-42e4-9364-8f277c692122 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:24:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:00.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:01 np0005465987 nova_compute[230713]: 2025-10-02 12:24:01.545 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:01 np0005465987 nova_compute[230713]: 2025-10-02 12:24:01.546 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:01 np0005465987 nova_compute[230713]: 2025-10-02 12:24:01.546 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:24:01 np0005465987 nova_compute[230713]: 2025-10-02 12:24:01.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:01 np0005465987 nova_compute[230713]: 2025-10-02 12:24:01.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:01.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:02 np0005465987 nova_compute[230713]: 2025-10-02 12:24:02.254 2 DEBUG nova.compute.manager [req-aea282db-b4a7-4c90-91f1-1636ebf71dc7 req-6d3f617c-38be-47f2-b449-bc4501b41bbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:24:02 np0005465987 nova_compute[230713]: 2025-10-02 12:24:02.255 2 DEBUG oslo_concurrency.lockutils [req-aea282db-b4a7-4c90-91f1-1636ebf71dc7 req-6d3f617c-38be-47f2-b449-bc4501b41bbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:02 np0005465987 nova_compute[230713]: 2025-10-02 12:24:02.255 2 DEBUG oslo_concurrency.lockutils [req-aea282db-b4a7-4c90-91f1-1636ebf71dc7 req-6d3f617c-38be-47f2-b449-bc4501b41bbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:02 np0005465987 nova_compute[230713]: 2025-10-02 12:24:02.256 2 DEBUG oslo_concurrency.lockutils [req-aea282db-b4a7-4c90-91f1-1636ebf71dc7 req-6d3f617c-38be-47f2-b449-bc4501b41bbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:02 np0005465987 nova_compute[230713]: 2025-10-02 12:24:02.256 2 DEBUG nova.compute.manager [req-aea282db-b4a7-4c90-91f1-1636ebf71dc7 req-6d3f617c-38be-47f2-b449-bc4501b41bbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] No waiting events found dispatching network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:24:02 np0005465987 nova_compute[230713]: 2025-10-02 12:24:02.256 2 WARNING nova.compute.manager [req-aea282db-b4a7-4c90-91f1-1636ebf71dc7 req-6d3f617c-38be-47f2-b449-bc4501b41bbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received unexpected event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 for instance with vm_state resized and task_state None.#033[00m
Oct  2 08:24:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e288 e288: 3 total, 3 up, 3 in
Oct  2 08:24:02 np0005465987 nova_compute[230713]: 2025-10-02 12:24:02.457 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:02 np0005465987 nova_compute[230713]: 2025-10-02 12:24:02.458 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:02 np0005465987 nova_compute[230713]: 2025-10-02 12:24:02.459 2 DEBUG nova.compute.manager [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Going to confirm migration 13 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Oct  2 08:24:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:02.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:03 np0005465987 nova_compute[230713]: 2025-10-02 12:24:03.789 2 DEBUG neutronclient.v2_0.client [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port a399f412-7fee-426a-9fed-b593373c0a90 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:24:03 np0005465987 nova_compute[230713]: 2025-10-02 12:24:03.790 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:24:03 np0005465987 nova_compute[230713]: 2025-10-02 12:24:03.790 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquired lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:24:03 np0005465987 nova_compute[230713]: 2025-10-02 12:24:03.791 2 DEBUG nova.network.neutron [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:24:03 np0005465987 nova_compute[230713]: 2025-10-02 12:24:03.791 2 DEBUG nova.objects.instance [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'info_cache' on Instance uuid e1e08b12-a046-4fbd-a025-dff040cca766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:24:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:03.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:04 np0005465987 nova_compute[230713]: 2025-10-02 12:24:04.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:04 np0005465987 nova_compute[230713]: 2025-10-02 12:24:04.521 2 DEBUG nova.compute.manager [req-d52f5a9f-1479-484b-b477-3238f5b5dd39 req-31b26d69-095f-475e-af1f-64bf469b3686 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:24:04 np0005465987 nova_compute[230713]: 2025-10-02 12:24:04.522 2 DEBUG oslo_concurrency.lockutils [req-d52f5a9f-1479-484b-b477-3238f5b5dd39 req-31b26d69-095f-475e-af1f-64bf469b3686 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:04 np0005465987 nova_compute[230713]: 2025-10-02 12:24:04.522 2 DEBUG oslo_concurrency.lockutils [req-d52f5a9f-1479-484b-b477-3238f5b5dd39 req-31b26d69-095f-475e-af1f-64bf469b3686 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:04 np0005465987 nova_compute[230713]: 2025-10-02 12:24:04.522 2 DEBUG oslo_concurrency.lockutils [req-d52f5a9f-1479-484b-b477-3238f5b5dd39 req-31b26d69-095f-475e-af1f-64bf469b3686 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:04 np0005465987 nova_compute[230713]: 2025-10-02 12:24:04.523 2 DEBUG nova.compute.manager [req-d52f5a9f-1479-484b-b477-3238f5b5dd39 req-31b26d69-095f-475e-af1f-64bf469b3686 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] No waiting events found dispatching network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:24:04 np0005465987 nova_compute[230713]: 2025-10-02 12:24:04.523 2 WARNING nova.compute.manager [req-d52f5a9f-1479-484b-b477-3238f5b5dd39 req-31b26d69-095f-475e-af1f-64bf469b3686 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Received unexpected event network-vif-plugged-a399f412-7fee-426a-9fed-b593373c0a90 for instance with vm_state resized and task_state deleting.#033[00m
Oct  2 08:24:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:04.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:05 np0005465987 nova_compute[230713]: 2025-10-02 12:24:05.510 2 DEBUG nova.network.neutron [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Updating instance_info_cache with network_info: [{"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:24:05 np0005465987 nova_compute[230713]: 2025-10-02 12:24:05.529 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Releasing lock "refresh_cache-e1e08b12-a046-4fbd-a025-dff040cca766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:24:05 np0005465987 nova_compute[230713]: 2025-10-02 12:24:05.530 2 DEBUG nova.objects.instance [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lazy-loading 'migration_context' on Instance uuid e1e08b12-a046-4fbd-a025-dff040cca766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:24:05 np0005465987 nova_compute[230713]: 2025-10-02 12:24:05.625 2 DEBUG nova.storage.rbd_utils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] removing snapshot(nova-resize) on rbd image(e1e08b12-a046-4fbd-a025-dff040cca766_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:24:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:05.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.295 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407831.2939079, e1e08b12-a046-4fbd-a025-dff040cca766 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.296 2 INFO nova.compute.manager [-] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:24:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e289 e289: 3 total, 3 up, 3 in
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.326 2 DEBUG nova.compute.manager [None req-105cc518-7f6a-461b-afc4-cb50b3c0a609 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.330 2 DEBUG nova.compute.manager [None req-105cc518-7f6a-461b-afc4-cb50b3c0a609 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.347 2 DEBUG nova.virt.libvirt.vif [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-850761329',display_name='tempest-DeleteServersTestJSON-server-850761329',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-850761329',id=99,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:24:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c2c11ebecb14f3188f35ea473c4ca02',ramdisk_id='',reservation_id='r-biy5z6a6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1602490521',owner_user_name='tempest-DeleteServersTestJSON-1602490521-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:24:02Z,user_data=None,user_id='a9f7faffac7240869a0196df1ddda7e5',uuid=e1e08b12-a046-4fbd-a025-dff040cca766,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.347 2 DEBUG nova.network.os_vif_util [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converting VIF {"id": "a399f412-7fee-426a-9fed-b593373c0a90", "address": "fa:16:3e:cd:48:e8", "network": {"id": "7754c79a-cca5-48c7-9169-831eaad23ccc", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-484493292-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c2c11ebecb14f3188f35ea473c4ca02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa399f412-7f", "ovs_interfaceid": "a399f412-7fee-426a-9fed-b593373c0a90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.348 2 DEBUG nova.network.os_vif_util [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.348 2 DEBUG os_vif [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.350 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa399f412-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.350 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.353 2 INFO os_vif [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:48:e8,bridge_name='br-int',has_traffic_filtering=True,id=a399f412-7fee-426a-9fed-b593373c0a90,network=Network(7754c79a-cca5-48c7-9169-831eaad23ccc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa399f412-7f')#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.353 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.354 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.364 2 INFO nova.compute.manager [None req-105cc518-7f6a-461b-afc4-cb50b3c0a609 - - - - - -] [instance: e1e08b12-a046-4fbd-a025-dff040cca766] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.488 2 DEBUG oslo_concurrency.processutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:06.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:24:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2470768981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.919 2 DEBUG oslo_concurrency.processutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.925 2 DEBUG nova.compute.provider_tree [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:24:06 np0005465987 nova_compute[230713]: 2025-10-02 12:24:06.947 2 DEBUG nova.scheduler.client.report [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:24:07 np0005465987 nova_compute[230713]: 2025-10-02 12:24:07.004 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:07 np0005465987 nova_compute[230713]: 2025-10-02 12:24:07.108 2 INFO nova.scheduler.client.report [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Deleted allocation for migration c0e29430-3f1f-4f18-bd83-d892eb2d6811#033[00m
Oct  2 08:24:07 np0005465987 nova_compute[230713]: 2025-10-02 12:24:07.157 2 DEBUG oslo_concurrency.lockutils [None req-033734b9-3251-4aa2-a16a-fddfd3a161d4 a9f7faffac7240869a0196df1ddda7e5 1c2c11ebecb14f3188f35ea473c4ca02 - - default default] Lock "e1e08b12-a046-4fbd-a025-dff040cca766" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:07 np0005465987 podman[266847]: 2025-10-02 12:24:07.846162862 +0000 UTC m=+0.050489651 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:24:07 np0005465987 podman[266846]: 2025-10-02 12:24:07.861568285 +0000 UTC m=+0.066948154 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:24:07 np0005465987 podman[266845]: 2025-10-02 12:24:07.869072432 +0000 UTC m=+0.079867970 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct  2 08:24:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:07.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:08.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:09 np0005465987 nova_compute[230713]: 2025-10-02 12:24:09.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:09 np0005465987 nova_compute[230713]: 2025-10-02 12:24:09.976 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:09.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:10.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:11 np0005465987 nova_compute[230713]: 2025-10-02 12:24:11.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:11.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e290 e290: 3 total, 3 up, 3 in
Oct  2 08:24:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:12.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:24:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:24:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:24:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:13.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:14 np0005465987 nova_compute[230713]: 2025-10-02 12:24:14.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:14.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:24:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/10777761' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:24:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:24:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/10777761' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:24:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:15.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:16.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:16 np0005465987 nova_compute[230713]: 2025-10-02 12:24:16.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:18.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:18.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:19 np0005465987 nova_compute[230713]: 2025-10-02 12:24:19.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.267770) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407859267833, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 560, "num_deletes": 253, "total_data_size": 755924, "memory_usage": 767632, "flush_reason": "Manual Compaction"}
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407859275352, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 497678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44151, "largest_seqno": 44706, "table_properties": {"data_size": 494701, "index_size": 949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7504, "raw_average_key_size": 19, "raw_value_size": 488510, "raw_average_value_size": 1295, "num_data_blocks": 41, "num_entries": 377, "num_filter_entries": 377, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407838, "oldest_key_time": 1759407838, "file_creation_time": 1759407859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 7630 microseconds, and 3583 cpu microseconds.
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.275407) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 497678 bytes OK
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.275430) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.277872) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.277897) EVENT_LOG_v1 {"time_micros": 1759407859277889, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.277921) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 752651, prev total WAL file size 752651, number of live WAL files 2.
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.278878) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(486KB)], [84(10MB)]
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407859278964, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 11312190, "oldest_snapshot_seqno": -1}
Oct  2 08:24:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:24:19Z|00332|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 6696 keys, 9424802 bytes, temperature: kUnknown
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407859351492, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 9424802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9380507, "index_size": 26455, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 173848, "raw_average_key_size": 25, "raw_value_size": 9260984, "raw_average_value_size": 1383, "num_data_blocks": 1042, "num_entries": 6696, "num_filter_entries": 6696, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759407859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.351778) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9424802 bytes
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.353109) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.3 rd, 130.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.3 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(41.7) write-amplify(18.9) OK, records in: 7218, records dropped: 522 output_compression: NoCompression
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.353140) EVENT_LOG_v1 {"time_micros": 1759407859353120, "job": 52, "event": "compaction_finished", "compaction_time_micros": 72355, "compaction_time_cpu_micros": 49126, "output_level": 6, "num_output_files": 1, "total_output_size": 9424802, "num_input_records": 7218, "num_output_records": 6696, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407859353359, "job": 52, "event": "table_file_deletion", "file_number": 86}
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407859355746, "job": 52, "event": "table_file_deletion", "file_number": 84}
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.278649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.355903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.355911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.355914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.355918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:19 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:24:19.355921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:19 np0005465987 nova_compute[230713]: 2025-10-02 12:24:19.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:20.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:24:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:24:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:21 np0005465987 nova_compute[230713]: 2025-10-02 12:24:21.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:22.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:22.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:24.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:24 np0005465987 nova_compute[230713]: 2025-10-02 12:24:24.030 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:24 np0005465987 nova_compute[230713]: 2025-10-02 12:24:24.053 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Triggering sync for uuid 63cd9316-0da7-4274-b2dc-3501e55e6617 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:24:24 np0005465987 nova_compute[230713]: 2025-10-02 12:24:24.053 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "63cd9316-0da7-4274-b2dc-3501e55e6617" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:24 np0005465987 nova_compute[230713]: 2025-10-02 12:24:24.054 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:24 np0005465987 nova_compute[230713]: 2025-10-02 12:24:24.076 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:24 np0005465987 nova_compute[230713]: 2025-10-02 12:24:24.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:24.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:26.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:26.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:26 np0005465987 nova_compute[230713]: 2025-10-02 12:24:26.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:27 np0005465987 nova_compute[230713]: 2025-10-02 12:24:27.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:24:27.950 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:24:27.950 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:24:27.951 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:28.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:28.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:29 np0005465987 nova_compute[230713]: 2025-10-02 12:24:29.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:29 np0005465987 nova_compute[230713]: 2025-10-02 12:24:29.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:29 np0005465987 podman[267089]: 2025-10-02 12:24:29.871827314 +0000 UTC m=+0.083975083 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:24:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:30.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:30.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:31 np0005465987 nova_compute[230713]: 2025-10-02 12:24:31.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:32.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:32.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:24:33.165 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:24:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:24:33.166 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:24:33 np0005465987 nova_compute[230713]: 2025-10-02 12:24:33.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:34 np0005465987 nova_compute[230713]: 2025-10-02 12:24:34.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:34.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:34 np0005465987 nova_compute[230713]: 2025-10-02 12:24:34.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:34.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:36.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:36 np0005465987 ovn_controller[129172]: 2025-10-02T12:24:36Z|00333|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:24:36 np0005465987 nova_compute[230713]: 2025-10-02 12:24:36.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:36.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:36 np0005465987 nova_compute[230713]: 2025-10-02 12:24:36.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:37 np0005465987 nova_compute[230713]: 2025-10-02 12:24:37.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:38.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:38.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:38 np0005465987 podman[267109]: 2025-10-02 12:24:38.850759993 +0000 UTC m=+0.058870722 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 08:24:38 np0005465987 podman[267115]: 2025-10-02 12:24:38.87494888 +0000 UTC m=+0.072627931 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:24:38 np0005465987 podman[267108]: 2025-10-02 12:24:38.88368047 +0000 UTC m=+0.097466615 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 08:24:39 np0005465987 nova_compute[230713]: 2025-10-02 12:24:39.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:40.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:40.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:41 np0005465987 nova_compute[230713]: 2025-10-02 12:24:41.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:42.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:42.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:24:43.168 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:24:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:44.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:44 np0005465987 nova_compute[230713]: 2025-10-02 12:24:44.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:44.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:46.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:46.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:46 np0005465987 nova_compute[230713]: 2025-10-02 12:24:46.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:47 np0005465987 nova_compute[230713]: 2025-10-02 12:24:47.989 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:48.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:48.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:49 np0005465987 nova_compute[230713]: 2025-10-02 12:24:49.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:49 np0005465987 nova_compute[230713]: 2025-10-02 12:24:49.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:49 np0005465987 nova_compute[230713]: 2025-10-02 12:24:49.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:24:50 np0005465987 nova_compute[230713]: 2025-10-02 12:24:50.005 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:24:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:50.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:50.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:50 np0005465987 nova_compute[230713]: 2025-10-02 12:24:50.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:51 np0005465987 nova_compute[230713]: 2025-10-02 12:24:51.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:51 np0005465987 nova_compute[230713]: 2025-10-02 12:24:51.998 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:52.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:52.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:54.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.112 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.113 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.131 2 DEBUG nova.compute.manager [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.275 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.276 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.283 2 DEBUG nova.virt.hardware [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.283 2 INFO nova.compute.claims [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.368 2 DEBUG nova.scheduler.client.report [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.385 2 DEBUG nova.scheduler.client.report [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.385 2 DEBUG nova.compute.provider_tree [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.408 2 DEBUG nova.scheduler.client.report [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.433 2 DEBUG nova.scheduler.client.report [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.496 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:54.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:24:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2609753080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.936 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.941 2 DEBUG nova.compute.provider_tree [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:24:54 np0005465987 nova_compute[230713]: 2025-10-02 12:24:54.968 2 DEBUG nova.scheduler.client.report [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.007 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.008 2 DEBUG nova.compute.manager [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.075 2 DEBUG nova.compute.manager [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.076 2 DEBUG nova.network.neutron [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.103 2 INFO nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.121 2 DEBUG nova.compute.manager [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.220 2 DEBUG nova.compute.manager [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.222 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.223 2 INFO nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Creating image(s)#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.259 2 DEBUG nova.storage.rbd_utils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] rbd image e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.300 2 DEBUG nova.storage.rbd_utils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] rbd image e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.342 2 DEBUG nova.storage.rbd_utils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] rbd image e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.347 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.382 2 DEBUG nova.policy [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '39945f35886a49cb93285686405e043a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5bedc1d8125465b92c98f9ebeb80ddb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.418 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.419 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.420 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.420 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.448 2 DEBUG nova.storage.rbd_utils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] rbd image e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.451 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.791 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.863 2 DEBUG nova.storage.rbd_utils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] resizing rbd image e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.988 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:55 np0005465987 nova_compute[230713]: 2025-10-02 12:24:55.995 2 DEBUG nova.objects.instance [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lazy-loading 'migration_context' on Instance uuid e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:24:56 np0005465987 nova_compute[230713]: 2025-10-02 12:24:56.015 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:24:56 np0005465987 nova_compute[230713]: 2025-10-02 12:24:56.016 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Ensure instance console log exists: /var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:24:56 np0005465987 nova_compute[230713]: 2025-10-02 12:24:56.017 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:56 np0005465987 nova_compute[230713]: 2025-10-02 12:24:56.017 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:56 np0005465987 nova_compute[230713]: 2025-10-02 12:24:56.018 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:24:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:56.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:24:56 np0005465987 nova_compute[230713]: 2025-10-02 12:24:56.238 2 DEBUG nova.network.neutron [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Successfully created port: 79000dd0-8403-4378-a1a4-87315f5019fb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:24:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:24:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:56.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:24:56 np0005465987 nova_compute[230713]: 2025-10-02 12:24:56.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:57 np0005465987 nova_compute[230713]: 2025-10-02 12:24:57.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:58.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:58 np0005465987 nova_compute[230713]: 2025-10-02 12:24:58.248 2 DEBUG nova.network.neutron [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Successfully updated port: 79000dd0-8403-4378-a1a4-87315f5019fb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:24:58 np0005465987 nova_compute[230713]: 2025-10-02 12:24:58.448 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:24:58 np0005465987 nova_compute[230713]: 2025-10-02 12:24:58.449 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquired lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:24:58 np0005465987 nova_compute[230713]: 2025-10-02 12:24:58.449 2 DEBUG nova.network.neutron [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:24:58 np0005465987 nova_compute[230713]: 2025-10-02 12:24:58.496 2 DEBUG nova.compute.manager [req-e3e8de70-1568-48cd-95a2-94e93d257351 req-b322431b-b48e-41fe-8b7c-45a689f4bbf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-changed-79000dd0-8403-4378-a1a4-87315f5019fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:24:58 np0005465987 nova_compute[230713]: 2025-10-02 12:24:58.496 2 DEBUG nova.compute.manager [req-e3e8de70-1568-48cd-95a2-94e93d257351 req-b322431b-b48e-41fe-8b7c-45a689f4bbf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Refreshing instance network info cache due to event network-changed-79000dd0-8403-4378-a1a4-87315f5019fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:24:58 np0005465987 nova_compute[230713]: 2025-10-02 12:24:58.496 2 DEBUG oslo_concurrency.lockutils [req-e3e8de70-1568-48cd-95a2-94e93d257351 req-b322431b-b48e-41fe-8b7c-45a689f4bbf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:24:58 np0005465987 nova_compute[230713]: 2025-10-02 12:24:58.716 2 DEBUG nova.network.neutron [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:24:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:24:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:58.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:58 np0005465987 nova_compute[230713]: 2025-10-02 12:24:58.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:58 np0005465987 nova_compute[230713]: 2025-10-02 12:24:58.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.019 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.019 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.020 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.020 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.020 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:24:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2713298449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.466 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.626 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.627 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:24:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e291 e291: 3 total, 3 up, 3 in
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.808 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.810 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4243MB free_disk=20.820556640625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.810 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.810 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.918 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 63cd9316-0da7-4274-b2dc-3501e55e6617 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.919 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.919 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:24:59 np0005465987 nova_compute[230713]: 2025-10-02 12:24:59.919 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.021 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:00.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:25:00 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1937089493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.509 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.517 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.548 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.633 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.634 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:00.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.738 2 DEBUG nova.network.neutron [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Updating instance_info_cache with network_info: [{"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.831 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Releasing lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.831 2 DEBUG nova.compute.manager [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Instance network_info: |[{"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.832 2 DEBUG oslo_concurrency.lockutils [req-e3e8de70-1568-48cd-95a2-94e93d257351 req-b322431b-b48e-41fe-8b7c-45a689f4bbf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.833 2 DEBUG nova.network.neutron [req-e3e8de70-1568-48cd-95a2-94e93d257351 req-b322431b-b48e-41fe-8b7c-45a689f4bbf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Refreshing network info cache for port 79000dd0-8403-4378-a1a4-87315f5019fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:25:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e292 e292: 3 total, 3 up, 3 in
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.840 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Start _get_guest_xml network_info=[{"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.847 2 WARNING nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.851 2 DEBUG nova.virt.libvirt.host [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.852 2 DEBUG nova.virt.libvirt.host [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.857 2 DEBUG nova.virt.libvirt.host [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.857 2 DEBUG nova.virt.libvirt.host [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.859 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.859 2 DEBUG nova.virt.hardware [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.859 2 DEBUG nova.virt.hardware [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.860 2 DEBUG nova.virt.hardware [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.860 2 DEBUG nova.virt.hardware [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.860 2 DEBUG nova.virt.hardware [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.860 2 DEBUG nova.virt.hardware [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.860 2 DEBUG nova.virt.hardware [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.860 2 DEBUG nova.virt.hardware [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.861 2 DEBUG nova.virt.hardware [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.861 2 DEBUG nova.virt.hardware [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.861 2 DEBUG nova.virt.hardware [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:25:00 np0005465987 nova_compute[230713]: 2025-10-02 12:25:00.864 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:00 np0005465987 podman[267402]: 2025-10-02 12:25:00.87364541 +0000 UTC m=+0.083335156 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:25:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:25:01 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4110028255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.308 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.341 2 DEBUG nova.storage.rbd_utils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] rbd image e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.345 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.634 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:25:01 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2996827895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.756 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.759 2 DEBUG nova.virt.libvirt.vif [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1922630973',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1922630973',id=103,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIOQDlxJeDw1n2lNeT07GFQw8tac43jR2Y/OIPYODq3fJ2v4BDr9CzOuh+zzaReBehEO76GEvBdX/fypn1I4RDJoLYBYiQL/9PaGBzrrf1rNcTZY0TsVr6HL2UpV8dqNmw==',key_name='tempest-keypair-1483715208',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5bedc1d8125465b92c98f9ebeb80ddb',ramdisk_id='',reservation_id='r-ibanttiz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-833632879',owner_user_name='tempest-TaggedAttachmentsTest-833632879-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:24:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39945f35886a49cb93285686405e043a',uuid=e69e3ed0-aca2-475e-9c98-e8d075ea4ce3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.759 2 DEBUG nova.network.os_vif_util [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converting VIF {"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.761 2 DEBUG nova.network.os_vif_util [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:fc:f5,bridge_name='br-int',has_traffic_filtering=True,id=79000dd0-8403-4378-a1a4-87315f5019fb,network=Network(27c9fe46-eb26-48b9-9738-9fa4cca1ca98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79000dd0-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.762 2 DEBUG nova.objects.instance [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lazy-loading 'pci_devices' on Instance uuid e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.791 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  <uuid>e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</uuid>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  <name>instance-00000067</name>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <nova:name>tempest-device-tagging-server-1922630973</nova:name>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:25:00</nova:creationTime>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <nova:user uuid="39945f35886a49cb93285686405e043a">tempest-TaggedAttachmentsTest-833632879-project-member</nova:user>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <nova:project uuid="d5bedc1d8125465b92c98f9ebeb80ddb">tempest-TaggedAttachmentsTest-833632879</nova:project>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <nova:port uuid="79000dd0-8403-4378-a1a4-87315f5019fb">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <entry name="serial">e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</entry>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <entry name="uuid">e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</entry>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk.config">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:93:fc:f5"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <target dev="tap79000dd0-84"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/console.log" append="off"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:25:01 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:25:01 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:25:01 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:25:01 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.793 2 DEBUG nova.compute.manager [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Preparing to wait for external event network-vif-plugged-79000dd0-8403-4378-a1a4-87315f5019fb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.794 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.794 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.795 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.795 2 DEBUG nova.virt.libvirt.vif [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1922630973',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1922630973',id=103,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIOQDlxJeDw1n2lNeT07GFQw8tac43jR2Y/OIPYODq3fJ2v4BDr9CzOuh+zzaReBehEO76GEvBdX/fypn1I4RDJoLYBYiQL/9PaGBzrrf1rNcTZY0TsVr6HL2UpV8dqNmw==',key_name='tempest-keypair-1483715208',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5bedc1d8125465b92c98f9ebeb80ddb',ramdisk_id='',reservation_id='r-ibanttiz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-833632879',owner_user_name='tempest-TaggedAttachmentsTest-833632879-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:24:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39945f35886a49cb93285686405e043a',uuid=e69e3ed0-aca2-475e-9c98-e8d075ea4ce3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.796 2 DEBUG nova.network.os_vif_util [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converting VIF {"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.796 2 DEBUG nova.network.os_vif_util [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:fc:f5,bridge_name='br-int',has_traffic_filtering=True,id=79000dd0-8403-4378-a1a4-87315f5019fb,network=Network(27c9fe46-eb26-48b9-9738-9fa4cca1ca98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79000dd0-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.796 2 DEBUG os_vif [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:fc:f5,bridge_name='br-int',has_traffic_filtering=True,id=79000dd0-8403-4378-a1a4-87315f5019fb,network=Network(27c9fe46-eb26-48b9-9738-9fa4cca1ca98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79000dd0-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.798 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.798 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.802 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79000dd0-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.802 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79000dd0-84, col_values=(('external_ids', {'iface-id': '79000dd0-8403-4378-a1a4-87315f5019fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:fc:f5', 'vm-uuid': 'e69e3ed0-aca2-475e-9c98-e8d075ea4ce3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:25:01 np0005465987 NetworkManager[44910]: <info>  [1759407901.8404] manager: (tap79000dd0-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.847 2 INFO os_vif [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:fc:f5,bridge_name='br-int',has_traffic_filtering=True,id=79000dd0-8403-4378-a1a4-87315f5019fb,network=Network(27c9fe46-eb26-48b9-9738-9fa4cca1ca98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79000dd0-84')
Oct  2 08:25:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e293 e293: 3 total, 3 up, 3 in
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:25:01 np0005465987 nova_compute[230713]: 2025-10-02 12:25:01.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:25:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:02.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.081 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.081 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.081 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] No VIF found with MAC fa:16:3e:93:fc:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.082 2 INFO nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Using config drive
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.105 2 DEBUG nova.storage.rbd_utils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] rbd image e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.543 2 DEBUG nova.network.neutron [req-e3e8de70-1568-48cd-95a2-94e93d257351 req-b322431b-b48e-41fe-8b7c-45a689f4bbf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Updated VIF entry in instance network info cache for port 79000dd0-8403-4378-a1a4-87315f5019fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.544 2 DEBUG nova.network.neutron [req-e3e8de70-1568-48cd-95a2-94e93d257351 req-b322431b-b48e-41fe-8b7c-45a689f4bbf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Updating instance_info_cache with network_info: [{"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.644 2 DEBUG oslo_concurrency.lockutils [req-e3e8de70-1568-48cd-95a2-94e93d257351 req-b322431b-b48e-41fe-8b7c-45a689f4bbf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.689 2 INFO nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Creating config drive at /var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/disk.config
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.701 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbzobnmgr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:25:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:02.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.844 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbzobnmgr" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.889 2 DEBUG nova.storage.rbd_utils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] rbd image e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:25:02 np0005465987 nova_compute[230713]: 2025-10-02 12:25:02.894 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/disk.config e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.200 2 DEBUG oslo_concurrency.processutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/disk.config e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.202 2 INFO nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Deleting local config drive /var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/disk.config because it was imported into RBD.
Oct  2 08:25:03 np0005465987 kernel: tap79000dd0-84: entered promiscuous mode
Oct  2 08:25:03 np0005465987 NetworkManager[44910]: <info>  [1759407903.2424] manager: (tap79000dd0-84): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Oct  2 08:25:03 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:03Z|00334|binding|INFO|Claiming lport 79000dd0-8403-4378-a1a4-87315f5019fb for this chassis.
Oct  2 08:25:03 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:03Z|00335|binding|INFO|79000dd0-8403-4378-a1a4-87315f5019fb: Claiming fa:16:3e:93:fc:f5 10.100.0.7
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.256 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:fc:f5 10.100.0.7'], port_security=['fa:16:3e:93:fc:f5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e69e3ed0-aca2-475e-9c98-e8d075ea4ce3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27c9fe46-eb26-48b9-9738-9fa4cca1ca98', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5bedc1d8125465b92c98f9ebeb80ddb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f10d3d11-b21b-40a8-b011-788ac820525d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7b68bc8-fcf2-4b38-a87f-245c078df7f7, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=79000dd0-8403-4378-a1a4-87315f5019fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.257 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 79000dd0-8403-4378-a1a4-87315f5019fb in datapath 27c9fe46-eb26-48b9-9738-9fa4cca1ca98 bound to our chassis
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.258 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 27c9fe46-eb26-48b9-9738-9fa4cca1ca98
Oct  2 08:25:03 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:03Z|00336|binding|INFO|Setting lport 79000dd0-8403-4378-a1a4-87315f5019fb ovn-installed in OVS
Oct  2 08:25:03 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:03Z|00337|binding|INFO|Setting lport 79000dd0-8403-4378-a1a4-87315f5019fb up in Southbound
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.273 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e08f4747-9cd9-4e07-a5dc-7e7afb4c677e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.274 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap27c9fe46-e1 in ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.276 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap27c9fe46-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.276 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc14cb4-a416-41e9-9f43-801c10085a05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.277 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f8babb-ff95-4da2-83e7-814d2ea8c48f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 systemd-udevd[267558]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.288 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1e67a9-1462-4a43-81a4-b5b468b9d5ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 systemd-machined[188335]: New machine qemu-44-instance-00000067.
Oct  2 08:25:03 np0005465987 NetworkManager[44910]: <info>  [1759407903.3061] device (tap79000dd0-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:25:03 np0005465987 NetworkManager[44910]: <info>  [1759407903.3080] device (tap79000dd0-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:25:03 np0005465987 systemd[1]: Started Virtual Machine qemu-44-instance-00000067.
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.313 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[692d800d-92bb-4032-b099-5cc354e1322a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.348 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[0276dd29-cdc8-42e2-bebe-429e6ac90bc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.352 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8eda22-782c-424d-8e66-adc89da0b9b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 NetworkManager[44910]: <info>  [1759407903.3535] manager: (tap27c9fe46-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/174)
Oct  2 08:25:03 np0005465987 systemd-udevd[267561]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.385 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[833709ac-4847-49ed-b6b0-690ed8338a01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.388 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe16ceb-53ba-4ec9-93c6-ec53276b8927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 NetworkManager[44910]: <info>  [1759407903.4112] device (tap27c9fe46-e0): carrier: link connected
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.416 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[0e298350-32ff-477b-80d6-3a4387e25857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.440 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0626b45c-0baf-4d98-a9fa-32edf8b2b5fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27c9fe46-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:70:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597198, 'reachable_time': 42493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267589, 'error': None, 'target': 'ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.459 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1da614db-b5a0-4210-b2dc-5802272f44d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:7005'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597198, 'tstamp': 597198}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267590, 'error': None, 'target': 'ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.475 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a771f350-476f-45fb-b0e5-757647ab8cb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27c9fe46-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:70:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597198, 'reachable_time': 42493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267591, 'error': None, 'target': 'ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.511 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fe116c0a-175d-4a1b-800a-4ed5ef8f829d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.580 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[95fa16af-8223-4afb-aa33-c122701688d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.581 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27c9fe46-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.581 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.582 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27c9fe46-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:25:03 np0005465987 kernel: tap27c9fe46-e0: entered promiscuous mode
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:25:03 np0005465987 NetworkManager[44910]: <info>  [1759407903.5857] manager: (tap27c9fe46-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.588 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap27c9fe46-e0, col_values=(('external_ids', {'iface-id': 'dd2034d9-8959-495d-af8e-a08a6bc18b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.593 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/27c9fe46-eb26-48b9-9738-9fa4cca1ca98.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/27c9fe46-eb26-48b9-9738-9fa4cca1ca98.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.594 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[185bbfcd-bf98-4790-b9d9-1ff66f403938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.595 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-27c9fe46-eb26-48b9-9738-9fa4cca1ca98
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/27c9fe46-eb26-48b9-9738-9fa4cca1ca98.pid.haproxy
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 27c9fe46-eb26-48b9-9738-9fa4cca1ca98
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  2 08:25:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:03.595 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98', 'env', 'PROCESS_TAG=haproxy-27c9fe46-eb26-48b9-9738-9fa4cca1ca98', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/27c9fe46-eb26-48b9-9738-9fa4cca1ca98.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  2 08:25:03 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:03Z|00338|binding|INFO|Releasing lport dd2034d9-8959-495d-af8e-a08a6bc18b56 from this chassis (sb_readonly=0)
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.685 2 DEBUG nova.compute.manager [req-74fb946c-0cca-447f-a675-8dbb5e8ab9f0 req-99925ed6-d8e3-4553-87a7-87a73c0d026a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-vif-plugged-79000dd0-8403-4378-a1a4-87315f5019fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.686 2 DEBUG oslo_concurrency.lockutils [req-74fb946c-0cca-447f-a675-8dbb5e8ab9f0 req-99925ed6-d8e3-4553-87a7-87a73c0d026a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.686 2 DEBUG oslo_concurrency.lockutils [req-74fb946c-0cca-447f-a675-8dbb5e8ab9f0 req-99925ed6-d8e3-4553-87a7-87a73c0d026a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.687 2 DEBUG oslo_concurrency.lockutils [req-74fb946c-0cca-447f-a675-8dbb5e8ab9f0 req-99925ed6-d8e3-4553-87a7-87a73c0d026a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:25:03 np0005465987 nova_compute[230713]: 2025-10-02 12:25:03.687 2 DEBUG nova.compute.manager [req-74fb946c-0cca-447f-a675-8dbb5e8ab9f0 req-99925ed6-d8e3-4553-87a7-87a73c0d026a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Processing event network-vif-plugged-79000dd0-8403-4378-a1a4-87315f5019fb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 08:25:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:04 np0005465987 podman[267661]: 2025-10-02 12:25:04.046005489 +0000 UTC m=+0.064544919 container create cd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:25:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:04.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:04 np0005465987 systemd[1]: Started libpod-conmon-cd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6.scope.
Oct  2 08:25:04 np0005465987 podman[267661]: 2025-10-02 12:25:04.008225559 +0000 UTC m=+0.026765039 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:25:04 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:25:04 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335ae2d8bc275134815651c707ba108fd87c6b9278691f651c601d8c6a79c030/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:25:04 np0005465987 podman[267661]: 2025-10-02 12:25:04.120891541 +0000 UTC m=+0.139430951 container init cd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:25:04 np0005465987 podman[267661]: 2025-10-02 12:25:04.127698208 +0000 UTC m=+0.146237608 container start cd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 08:25:04 np0005465987 neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98[267677]: [NOTICE]   (267681) : New worker (267683) forked
Oct  2 08:25:04 np0005465987 neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98[267677]: [NOTICE]   (267681) : Loading success.
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.453 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407904.452838, e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.453 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] VM Started (Lifecycle Event)#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.455 2 DEBUG nova.compute.manager [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.458 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.462 2 INFO nova.virt.libvirt.driver [-] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Instance spawned successfully.#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.462 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.608 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.611 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.659 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.659 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.660 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.660 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.660 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.661 2 DEBUG nova.virt.libvirt.driver [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:25:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:04.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.744 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.745 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407904.4530635, e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:25:04 np0005465987 nova_compute[230713]: 2025-10-02 12:25:04.745 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.071 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.075 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759407904.4579246, e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.075 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.333 2 INFO nova.compute.manager [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Took 10.11 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.333 2 DEBUG nova.compute.manager [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.385 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.391 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.476 2 INFO nova.compute.manager [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Took 11.29 seconds to build instance.#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.751 2 DEBUG oslo_concurrency.lockutils [None req-bbdd4698-e10a-4eef-bb2a-3d99a2614b76 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.928 2 DEBUG nova.compute.manager [req-f1ae1955-756d-43c0-9fe6-475f1a702a18 req-848a31cb-8598-4ad1-807b-a3360b34adb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-vif-plugged-79000dd0-8403-4378-a1a4-87315f5019fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.929 2 DEBUG oslo_concurrency.lockutils [req-f1ae1955-756d-43c0-9fe6-475f1a702a18 req-848a31cb-8598-4ad1-807b-a3360b34adb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.929 2 DEBUG oslo_concurrency.lockutils [req-f1ae1955-756d-43c0-9fe6-475f1a702a18 req-848a31cb-8598-4ad1-807b-a3360b34adb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.929 2 DEBUG oslo_concurrency.lockutils [req-f1ae1955-756d-43c0-9fe6-475f1a702a18 req-848a31cb-8598-4ad1-807b-a3360b34adb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.930 2 DEBUG nova.compute.manager [req-f1ae1955-756d-43c0-9fe6-475f1a702a18 req-848a31cb-8598-4ad1-807b-a3360b34adb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] No waiting events found dispatching network-vif-plugged-79000dd0-8403-4378-a1a4-87315f5019fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:25:05 np0005465987 nova_compute[230713]: 2025-10-02 12:25:05.930 2 WARNING nova.compute.manager [req-f1ae1955-756d-43c0-9fe6-475f1a702a18 req-848a31cb-8598-4ad1-807b-a3360b34adb1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received unexpected event network-vif-plugged-79000dd0-8403-4378-a1a4-87315f5019fb for instance with vm_state active and task_state None.#033[00m
Oct  2 08:25:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:06.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:06.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:06 np0005465987 nova_compute[230713]: 2025-10-02 12:25:06.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e294 e294: 3 total, 3 up, 3 in
Oct  2 08:25:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:08.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:08.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:08 np0005465987 nova_compute[230713]: 2025-10-02 12:25:08.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:09 np0005465987 nova_compute[230713]: 2025-10-02 12:25:09.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:09 np0005465987 podman[267693]: 2025-10-02 12:25:09.852784325 +0000 UTC m=+0.062692368 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:25:09 np0005465987 podman[267694]: 2025-10-02 12:25:09.855765857 +0000 UTC m=+0.066514933 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:25:09 np0005465987 podman[267692]: 2025-10-02 12:25:09.876576639 +0000 UTC m=+0.096794265 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:25:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:10.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:10.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:11 np0005465987 nova_compute[230713]: 2025-10-02 12:25:11.793 2 DEBUG nova.compute.manager [req-8589789b-3793-4e79-8fa5-6826d1752f5d req-8ac4de2a-395a-4515-9c3b-8db7d8a246b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-changed-79000dd0-8403-4378-a1a4-87315f5019fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:11 np0005465987 nova_compute[230713]: 2025-10-02 12:25:11.795 2 DEBUG nova.compute.manager [req-8589789b-3793-4e79-8fa5-6826d1752f5d req-8ac4de2a-395a-4515-9c3b-8db7d8a246b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Refreshing instance network info cache due to event network-changed-79000dd0-8403-4378-a1a4-87315f5019fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:25:11 np0005465987 nova_compute[230713]: 2025-10-02 12:25:11.795 2 DEBUG oslo_concurrency.lockutils [req-8589789b-3793-4e79-8fa5-6826d1752f5d req-8ac4de2a-395a-4515-9c3b-8db7d8a246b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:25:11 np0005465987 nova_compute[230713]: 2025-10-02 12:25:11.795 2 DEBUG oslo_concurrency.lockutils [req-8589789b-3793-4e79-8fa5-6826d1752f5d req-8ac4de2a-395a-4515-9c3b-8db7d8a246b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:25:11 np0005465987 nova_compute[230713]: 2025-10-02 12:25:11.796 2 DEBUG nova.network.neutron [req-8589789b-3793-4e79-8fa5-6826d1752f5d req-8ac4de2a-395a-4515-9c3b-8db7d8a246b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Refreshing network info cache for port 79000dd0-8403-4378-a1a4-87315f5019fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:25:11 np0005465987 nova_compute[230713]: 2025-10-02 12:25:11.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:12.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:12.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:14.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:14 np0005465987 nova_compute[230713]: 2025-10-02 12:25:14.293 2 DEBUG nova.network.neutron [req-8589789b-3793-4e79-8fa5-6826d1752f5d req-8ac4de2a-395a-4515-9c3b-8db7d8a246b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Updated VIF entry in instance network info cache for port 79000dd0-8403-4378-a1a4-87315f5019fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:25:14 np0005465987 nova_compute[230713]: 2025-10-02 12:25:14.295 2 DEBUG nova.network.neutron [req-8589789b-3793-4e79-8fa5-6826d1752f5d req-8ac4de2a-395a-4515-9c3b-8db7d8a246b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Updating instance_info_cache with network_info: [{"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:25:14 np0005465987 nova_compute[230713]: 2025-10-02 12:25:14.396 2 DEBUG oslo_concurrency.lockutils [req-8589789b-3793-4e79-8fa5-6826d1752f5d req-8ac4de2a-395a-4515-9c3b-8db7d8a246b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:25:14 np0005465987 nova_compute[230713]: 2025-10-02 12:25:14.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:14.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:16.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:16.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:16 np0005465987 nova_compute[230713]: 2025-10-02 12:25:16.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:17Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:fc:f5 10.100.0.7
Oct  2 08:25:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:17Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:fc:f5 10.100.0.7
Oct  2 08:25:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:18.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:18.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:19 np0005465987 nova_compute[230713]: 2025-10-02 12:25:19.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:25:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:20.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:25:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:20.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:21 np0005465987 nova_compute[230713]: 2025-10-02 12:25:21.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:21 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:25:21 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:25:21 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:25:21 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:25:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:22.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:22.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:25:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:25:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:25:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e295 e295: 3 total, 3 up, 3 in
Oct  2 08:25:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:24.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:24 np0005465987 nova_compute[230713]: 2025-10-02 12:25:24.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:24.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:26.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:26.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:26 np0005465987 nova_compute[230713]: 2025-10-02 12:25:26.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e296 e296: 3 total, 3 up, 3 in
Oct  2 08:25:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:27.952 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:27.953 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:27.954 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:28.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:28 np0005465987 nova_compute[230713]: 2025-10-02 12:25:28.165 2 DEBUG oslo_concurrency.lockutils [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "interface-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:28 np0005465987 nova_compute[230713]: 2025-10-02 12:25:28.165 2 DEBUG oslo_concurrency.lockutils [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "interface-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:28 np0005465987 nova_compute[230713]: 2025-10-02 12:25:28.167 2 DEBUG nova.objects.instance [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lazy-loading 'flavor' on Instance uuid e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:28.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:28 np0005465987 nova_compute[230713]: 2025-10-02 12:25:28.791 2 DEBUG nova.objects.instance [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lazy-loading 'pci_requests' on Instance uuid e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e297 e297: 3 total, 3 up, 3 in
Oct  2 08:25:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:29 np0005465987 nova_compute[230713]: 2025-10-02 12:25:29.224 2 DEBUG nova.network.neutron [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:25:29 np0005465987 nova_compute[230713]: 2025-10-02 12:25:29.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:29 np0005465987 nova_compute[230713]: 2025-10-02 12:25:29.788 2 DEBUG nova.policy [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '39945f35886a49cb93285686405e043a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5bedc1d8125465b92c98f9ebeb80ddb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:25:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:30.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:30.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:30 np0005465987 nova_compute[230713]: 2025-10-02 12:25:30.978 2 DEBUG nova.network.neutron [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Successfully created port: a5a2116f-132d-4969-bd1c-ce09c2b93097 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:25:31 np0005465987 podman[268007]: 2025-10-02 12:25:31.840472432 +0000 UTC m=+0.057458234 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct  2 08:25:31 np0005465987 nova_compute[230713]: 2025-10-02 12:25:31.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:32.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:32.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:34.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:34 np0005465987 nova_compute[230713]: 2025-10-02 12:25:34.273 2 DEBUG nova.network.neutron [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Successfully updated port: a5a2116f-132d-4969-bd1c-ce09c2b93097 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:25:34 np0005465987 nova_compute[230713]: 2025-10-02 12:25:34.383 2 DEBUG oslo_concurrency.lockutils [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:25:34 np0005465987 nova_compute[230713]: 2025-10-02 12:25:34.384 2 DEBUG oslo_concurrency.lockutils [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquired lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:25:34 np0005465987 nova_compute[230713]: 2025-10-02 12:25:34.384 2 DEBUG nova.network.neutron [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:25:34 np0005465987 nova_compute[230713]: 2025-10-02 12:25:34.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:34.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:36.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:36.186 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:25:36 np0005465987 nova_compute[230713]: 2025-10-02 12:25:36.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:36.189 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:25:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:25:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:25:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:36.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:36 np0005465987 nova_compute[230713]: 2025-10-02 12:25:36.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e298 e298: 3 total, 3 up, 3 in
Oct  2 08:25:37 np0005465987 nova_compute[230713]: 2025-10-02 12:25:37.844 2 DEBUG nova.compute.manager [req-b3a11f0f-9b4d-4249-b94f-c1f8c56c67d0 req-4efe2c49-6b00-4694-9b84-560ff5d755d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-changed-a5a2116f-132d-4969-bd1c-ce09c2b93097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:37 np0005465987 nova_compute[230713]: 2025-10-02 12:25:37.845 2 DEBUG nova.compute.manager [req-b3a11f0f-9b4d-4249-b94f-c1f8c56c67d0 req-4efe2c49-6b00-4694-9b84-560ff5d755d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Refreshing instance network info cache due to event network-changed-a5a2116f-132d-4969-bd1c-ce09c2b93097. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:25:37 np0005465987 nova_compute[230713]: 2025-10-02 12:25:37.846 2 DEBUG oslo_concurrency.lockutils [req-b3a11f0f-9b4d-4249-b94f-c1f8c56c67d0 req-4efe2c49-6b00-4694-9b84-560ff5d755d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:25:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:38.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.729 2 DEBUG nova.network.neutron [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Updating instance_info_cache with network_info: [{"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.762 2 DEBUG oslo_concurrency.lockutils [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Releasing lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.764 2 DEBUG oslo_concurrency.lockutils [req-b3a11f0f-9b4d-4249-b94f-c1f8c56c67d0 req-4efe2c49-6b00-4694-9b84-560ff5d755d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.765 2 DEBUG nova.network.neutron [req-b3a11f0f-9b4d-4249-b94f-c1f8c56c67d0 req-4efe2c49-6b00-4694-9b84-560ff5d755d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Refreshing network info cache for port a5a2116f-132d-4969-bd1c-ce09c2b93097 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.768 2 DEBUG nova.virt.libvirt.vif [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1922630973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1922630973',id=103,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIOQDlxJeDw1n2lNeT07GFQw8tac43jR2Y/OIPYODq3fJ2v4BDr9CzOuh+zzaReBehEO76GEvBdX/fypn1I4RDJoLYBYiQL/9PaGBzrrf1rNcTZY0TsVr6HL2UpV8dqNmw==',key_name='tempest-keypair-1483715208',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:25:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d5bedc1d8125465b92c98f9ebeb80ddb',ramdisk_id='',reservation_id='r-ibanttiz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-833632879',owner_user_name='tempest-TaggedAttachmentsTest-833632879-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:25:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39945f35886a49cb93285686405e043a',uuid=e69e3ed0-aca2-475e-9c98-e8d075ea4ce3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.769 2 DEBUG nova.network.os_vif_util [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converting VIF {"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.770 2 DEBUG nova.network.os_vif_util [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:4b:b9,bridge_name='br-int',has_traffic_filtering=True,id=a5a2116f-132d-4969-bd1c-ce09c2b93097,network=Network(e62bfe80-48ab-4aba-960a-acf615f1e469),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5a2116f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.771 2 DEBUG os_vif [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:4b:b9,bridge_name='br-int',has_traffic_filtering=True,id=a5a2116f-132d-4969-bd1c-ce09c2b93097,network=Network(e62bfe80-48ab-4aba-960a-acf615f1e469),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5a2116f-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.772 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.773 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.776 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5a2116f-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.777 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa5a2116f-13, col_values=(('external_ids', {'iface-id': 'a5a2116f-132d-4969-bd1c-ce09c2b93097', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:4b:b9', 'vm-uuid': 'e69e3ed0-aca2-475e-9c98-e8d075ea4ce3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:38 np0005465987 NetworkManager[44910]: <info>  [1759407938.7801] manager: (tapa5a2116f-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Oct  2 08:25:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:38.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.789 2 INFO os_vif [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:4b:b9,bridge_name='br-int',has_traffic_filtering=True,id=a5a2116f-132d-4969-bd1c-ce09c2b93097,network=Network(e62bfe80-48ab-4aba-960a-acf615f1e469),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5a2116f-13')#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.790 2 DEBUG nova.virt.libvirt.vif [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1922630973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1922630973',id=103,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIOQDlxJeDw1n2lNeT07GFQw8tac43jR2Y/OIPYODq3fJ2v4BDr9CzOuh+zzaReBehEO76GEvBdX/fypn1I4RDJoLYBYiQL/9PaGBzrrf1rNcTZY0TsVr6HL2UpV8dqNmw==',key_name='tempest-keypair-1483715208',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:25:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d5bedc1d8125465b92c98f9ebeb80ddb',ramdisk_id='',reservation_id='r-ibanttiz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-833632879',owner_user_name='tempest-TaggedAttachmentsTest-833632879-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:25:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39945f35886a49cb93285686405e043a',uuid=e69e3ed0-aca2-475e-9c98-e8d075ea4ce3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.790 2 DEBUG nova.network.os_vif_util [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converting VIF {"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.791 2 DEBUG nova.network.os_vif_util [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:4b:b9,bridge_name='br-int',has_traffic_filtering=True,id=a5a2116f-132d-4969-bd1c-ce09c2b93097,network=Network(e62bfe80-48ab-4aba-960a-acf615f1e469),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5a2116f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.794 2 DEBUG nova.virt.libvirt.guest [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] attach device xml: <interface type="ethernet">
Oct  2 08:25:38 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:c7:4b:b9"/>
Oct  2 08:25:38 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:25:38 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:25:38 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:25:38 np0005465987 nova_compute[230713]:  <target dev="tapa5a2116f-13"/>
Oct  2 08:25:38 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:25:38 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:25:38 np0005465987 kernel: tapa5a2116f-13: entered promiscuous mode
Oct  2 08:25:38 np0005465987 NetworkManager[44910]: <info>  [1759407938.8083] manager: (tapa5a2116f-13): new Tun device (/org/freedesktop/NetworkManager/Devices/177)
Oct  2 08:25:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:38 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:38Z|00339|binding|INFO|Claiming lport a5a2116f-132d-4969-bd1c-ce09c2b93097 for this chassis.
Oct  2 08:25:38 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:38Z|00340|binding|INFO|a5a2116f-132d-4969-bd1c-ce09c2b93097: Claiming fa:16:3e:c7:4b:b9 10.10.10.157
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:38 np0005465987 systemd-udevd[268081]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:25:38 np0005465987 NetworkManager[44910]: <info>  [1759407938.8623] device (tapa5a2116f-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:25:38 np0005465987 NetworkManager[44910]: <info>  [1759407938.8630] device (tapa5a2116f-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.867 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:4b:b9 10.10.10.157'], port_security=['fa:16:3e:c7:4b:b9 10.10.10.157'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.157/24', 'neutron:device_id': 'e69e3ed0-aca2-475e-9c98-e8d075ea4ce3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e62bfe80-48ab-4aba-960a-acf615f1e469', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5bedc1d8125465b92c98f9ebeb80ddb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4217d4cc-2d8f-43a2-b88c-089959cf0d76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a42af78-d20d-4ab0-870c-f0bfb76579fa, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=a5a2116f-132d-4969-bd1c-ce09c2b93097) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.868 138325 INFO neutron.agent.ovn.metadata.agent [-] Port a5a2116f-132d-4969-bd1c-ce09c2b93097 in datapath e62bfe80-48ab-4aba-960a-acf615f1e469 bound to our chassis#033[00m
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.870 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e62bfe80-48ab-4aba-960a-acf615f1e469#033[00m
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.885 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[11b3f2a9-7f45-470f-932d-ee8e53b552cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.886 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape62bfe80-41 in ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.888 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape62bfe80-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.888 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bd5548-e1ec-4d76-9724-d563ac11d6db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.889 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1190d736-8eea-4fa6-942b-b0b65fe69f1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:38 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:38Z|00341|binding|INFO|Setting lport a5a2116f-132d-4969-bd1c-ce09c2b93097 ovn-installed in OVS
Oct  2 08:25:38 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:38Z|00342|binding|INFO|Setting lport a5a2116f-132d-4969-bd1c-ce09c2b93097 up in Southbound
Oct  2 08:25:38 np0005465987 nova_compute[230713]: 2025-10-02 12:25:38.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.900 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4f325b-3efc-4bd5-99cf-94fdc0473361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.916 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5e823e-1ae1-4fe0-b710-a264de9aa471]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.946 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb4a2c3-99e3-40c9-8218-4d1c4d49dece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.952 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b62ab6-1167-4481-aa35-6dc00fd0c171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:38 np0005465987 NetworkManager[44910]: <info>  [1759407938.9537] manager: (tape62bfe80-40): new Veth device (/org/freedesktop/NetworkManager/Devices/178)
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.990 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[68292953-911f-493d-9eb8-191e11797021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:38.993 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3e22c3-d2d8-4118-90ba-4d68d2594240]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:39 np0005465987 NetworkManager[44910]: <info>  [1759407939.0172] device (tape62bfe80-40): carrier: link connected
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.023 2 DEBUG nova.virt.libvirt.driver [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.023 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9733225a-2a55-402f-ac68-b42594286f6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.023 2 DEBUG nova.virt.libvirt.driver [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.024 2 DEBUG nova.virt.libvirt.driver [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] No VIF found with MAC fa:16:3e:93:fc:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.040 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e966af14-5a80-4525-bf02-29a4d8a6edc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape62bfe80-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:f0:75'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600759, 'reachable_time': 23971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268108, 'error': None, 'target': 'ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.055 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe0dd97-f40a-41f9-af1f-fc7fa71d9bee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:f075'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600759, 'tstamp': 600759}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268109, 'error': None, 'target': 'ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.075 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b7e89e-bf89-491f-88fc-0e7b48ce54fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape62bfe80-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:f0:75'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600759, 'reachable_time': 23971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268110, 'error': None, 'target': 'ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.107 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a779e361-06cf-41cb-a79b-09088da30ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.160 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6e05204c-606f-4727-98b1-a6f1e1436704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.162 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape62bfe80-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.162 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.162 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape62bfe80-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:39 np0005465987 NetworkManager[44910]: <info>  [1759407939.1652] manager: (tape62bfe80-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Oct  2 08:25:39 np0005465987 kernel: tape62bfe80-40: entered promiscuous mode
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.167 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape62bfe80-40, col_values=(('external_ids', {'iface-id': '2a2d3e39-e72b-4b59-9629-300f7c238d63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.168 2 DEBUG nova.virt.libvirt.guest [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:25:39 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:  <nova:name>tempest-device-tagging-server-1922630973</nova:name>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:25:39</nova:creationTime>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:25:39 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:    <nova:user uuid="39945f35886a49cb93285686405e043a">tempest-TaggedAttachmentsTest-833632879-project-member</nova:user>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:    <nova:project uuid="d5bedc1d8125465b92c98f9ebeb80ddb">tempest-TaggedAttachmentsTest-833632879</nova:project>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:    <nova:port uuid="79000dd0-8403-4378-a1a4-87315f5019fb">
Oct  2 08:25:39 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:    <nova:port uuid="a5a2116f-132d-4969-bd1c-ce09c2b93097">
Oct  2 08:25:39 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.10.10.157" ipVersion="4"/>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:25:39 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:25:39 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:25:39 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:25:39 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:39Z|00343|binding|INFO|Releasing lport 2a2d3e39-e72b-4b59-9629-300f7c238d63 from this chassis (sb_readonly=0)
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.185 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e62bfe80-48ab-4aba-960a-acf615f1e469.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e62bfe80-48ab-4aba-960a-acf615f1e469.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.186 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1298563e-f128-4baa-9389-33f202cb558d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.187 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-e62bfe80-48ab-4aba-960a-acf615f1e469
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/e62bfe80-48ab-4aba-960a-acf615f1e469.pid.haproxy
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID e62bfe80-48ab-4aba-960a-acf615f1e469
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:25:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:39.187 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469', 'env', 'PROCESS_TAG=haproxy-e62bfe80-48ab-4aba-960a-acf615f1e469', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e62bfe80-48ab-4aba-960a-acf615f1e469.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.221 2 DEBUG oslo_concurrency.lockutils [None req-a40b6686-20e8-4519-9ddc-ac1bc0cc8fff 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "interface-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:39 np0005465987 podman[268143]: 2025-10-02 12:25:39.526144852 +0000 UTC m=+0.050031269 container create f11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:25:39 np0005465987 systemd[1]: Started libpod-conmon-f11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f.scope.
Oct  2 08:25:39 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:25:39 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bce8f8544c821bfd3c88e9743141380ea6e2ba3538eabcb1ed51c7ad8d117da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:25:39 np0005465987 podman[268143]: 2025-10-02 12:25:39.499769386 +0000 UTC m=+0.023655823 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:25:39 np0005465987 podman[268143]: 2025-10-02 12:25:39.602280718 +0000 UTC m=+0.126167155 container init f11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:25:39 np0005465987 podman[268143]: 2025-10-02 12:25:39.609541648 +0000 UTC m=+0.133428065 container start f11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:25:39 np0005465987 neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469[268158]: [NOTICE]   (268162) : New worker (268164) forked
Oct  2 08:25:39 np0005465987 neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469[268158]: [NOTICE]   (268162) : Loading success.
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.914 2 DEBUG nova.compute.manager [req-09748818-c9b9-451a-a7f0-2b0bcbe252ce req-45c969a6-20e8-4008-9538-ba32cd429020 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-vif-plugged-a5a2116f-132d-4969-bd1c-ce09c2b93097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.915 2 DEBUG oslo_concurrency.lockutils [req-09748818-c9b9-451a-a7f0-2b0bcbe252ce req-45c969a6-20e8-4008-9538-ba32cd429020 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.915 2 DEBUG oslo_concurrency.lockutils [req-09748818-c9b9-451a-a7f0-2b0bcbe252ce req-45c969a6-20e8-4008-9538-ba32cd429020 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.915 2 DEBUG oslo_concurrency.lockutils [req-09748818-c9b9-451a-a7f0-2b0bcbe252ce req-45c969a6-20e8-4008-9538-ba32cd429020 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.916 2 DEBUG nova.compute.manager [req-09748818-c9b9-451a-a7f0-2b0bcbe252ce req-45c969a6-20e8-4008-9538-ba32cd429020 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] No waiting events found dispatching network-vif-plugged-a5a2116f-132d-4969-bd1c-ce09c2b93097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:25:39 np0005465987 nova_compute[230713]: 2025-10-02 12:25:39.916 2 WARNING nova.compute.manager [req-09748818-c9b9-451a-a7f0-2b0bcbe252ce req-45c969a6-20e8-4008-9538-ba32cd429020 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received unexpected event network-vif-plugged-a5a2116f-132d-4969-bd1c-ce09c2b93097 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:25:39 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:39Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:4b:b9 10.10.10.157
Oct  2 08:25:39 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:39Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:4b:b9 10.10.10.157
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.068 2 DEBUG oslo_concurrency.lockutils [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.069 2 DEBUG oslo_concurrency.lockutils [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.090 2 DEBUG nova.objects.instance [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lazy-loading 'flavor' on Instance uuid e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.121 2 DEBUG oslo_concurrency.lockutils [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:40.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.298 2 DEBUG oslo_concurrency.lockutils [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.299 2 DEBUG oslo_concurrency.lockutils [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.299 2 INFO nova.compute.manager [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Attaching volume 739adda1-476e-45a8-9f90-438ba66c8243 to /dev/vdb#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.465 2 DEBUG os_brick.utils [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.466 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.478 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.478 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[daeb5e01-6aec-41d2-88d2-5e7cd7acca64]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.479 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.486 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.487 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb4f219-ca1c-46a8-b19f-0d244a546208]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.488 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.495 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.495 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[24328ae7-204b-4264-9252-91a86d92c05b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.496 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[dfa69d22-46a6-49b7-81f7-d1bf25cdcad1]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.497 2 DEBUG oslo_concurrency.processutils [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.528 2 DEBUG oslo_concurrency.processutils [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.530 2 DEBUG os_brick.initiator.connectors.lightos [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.530 2 DEBUG os_brick.initiator.connectors.lightos [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.530 2 DEBUG os_brick.initiator.connectors.lightos [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.531 2 DEBUG os_brick.utils [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:25:40 np0005465987 nova_compute[230713]: 2025-10-02 12:25:40.531 2 DEBUG nova.virt.block_device [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Updating existing volume attachment record: 4ca4093c-a4f7-424a-beb7-6a4a3f2da4cf _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:25:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:40.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:40 np0005465987 podman[268182]: 2025-10-02 12:25:40.883916387 +0000 UTC m=+0.087412017 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:25:40 np0005465987 podman[268181]: 2025-10-02 12:25:40.887538397 +0000 UTC m=+0.084276122 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:25:40 np0005465987 podman[268180]: 2025-10-02 12:25:40.924485934 +0000 UTC m=+0.129696422 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:25:41 np0005465987 nova_compute[230713]: 2025-10-02 12:25:41.422 2 DEBUG nova.objects.instance [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lazy-loading 'flavor' on Instance uuid e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:41 np0005465987 nova_compute[230713]: 2025-10-02 12:25:41.452 2 DEBUG nova.virt.libvirt.driver [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Attempting to attach volume 739adda1-476e-45a8-9f90-438ba66c8243 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 08:25:41 np0005465987 nova_compute[230713]: 2025-10-02 12:25:41.455 2 DEBUG nova.virt.libvirt.guest [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:25:41 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:25:41 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-739adda1-476e-45a8-9f90-438ba66c8243">
Oct  2 08:25:41 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:25:41 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:25:41 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:25:41 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:25:41 np0005465987 nova_compute[230713]:  <auth username="openstack">
Oct  2 08:25:41 np0005465987 nova_compute[230713]:    <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:25:41 np0005465987 nova_compute[230713]:  </auth>
Oct  2 08:25:41 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:25:41 np0005465987 nova_compute[230713]:  <serial>739adda1-476e-45a8-9f90-438ba66c8243</serial>
Oct  2 08:25:41 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:25:41 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:25:41 np0005465987 nova_compute[230713]: 2025-10-02 12:25:41.579 2 DEBUG nova.virt.libvirt.driver [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:25:41 np0005465987 nova_compute[230713]: 2025-10-02 12:25:41.580 2 DEBUG nova.virt.libvirt.driver [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:25:41 np0005465987 nova_compute[230713]: 2025-10-02 12:25:41.580 2 DEBUG nova.virt.libvirt.driver [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] No VIF found with MAC fa:16:3e:93:fc:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:25:41 np0005465987 nova_compute[230713]: 2025-10-02 12:25:41.723 2 DEBUG nova.network.neutron [req-b3a11f0f-9b4d-4249-b94f-c1f8c56c67d0 req-4efe2c49-6b00-4694-9b84-560ff5d755d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Updated VIF entry in instance network info cache for port a5a2116f-132d-4969-bd1c-ce09c2b93097. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:25:41 np0005465987 nova_compute[230713]: 2025-10-02 12:25:41.724 2 DEBUG nova.network.neutron [req-b3a11f0f-9b4d-4249-b94f-c1f8c56c67d0 req-4efe2c49-6b00-4694-9b84-560ff5d755d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Updating instance_info_cache with network_info: [{"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:25:41 np0005465987 nova_compute[230713]: 2025-10-02 12:25:41.745 2 DEBUG oslo_concurrency.lockutils [req-b3a11f0f-9b4d-4249-b94f-c1f8c56c67d0 req-4efe2c49-6b00-4694-9b84-560ff5d755d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:25:41 np0005465987 nova_compute[230713]: 2025-10-02 12:25:41.784 2 DEBUG oslo_concurrency.lockutils [None req-d7c6afab-b6e8-4482-b1e2-c513bef6e694 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:42 np0005465987 nova_compute[230713]: 2025-10-02 12:25:42.069 2 DEBUG nova.compute.manager [req-6f11ee68-f32a-4669-ad53-a9ae4fc1337d req-f6f8e173-1072-40f1-a67d-27773eecf554 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-vif-plugged-a5a2116f-132d-4969-bd1c-ce09c2b93097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:42 np0005465987 nova_compute[230713]: 2025-10-02 12:25:42.070 2 DEBUG oslo_concurrency.lockutils [req-6f11ee68-f32a-4669-ad53-a9ae4fc1337d req-f6f8e173-1072-40f1-a67d-27773eecf554 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:42 np0005465987 nova_compute[230713]: 2025-10-02 12:25:42.070 2 DEBUG oslo_concurrency.lockutils [req-6f11ee68-f32a-4669-ad53-a9ae4fc1337d req-f6f8e173-1072-40f1-a67d-27773eecf554 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:42 np0005465987 nova_compute[230713]: 2025-10-02 12:25:42.071 2 DEBUG oslo_concurrency.lockutils [req-6f11ee68-f32a-4669-ad53-a9ae4fc1337d req-f6f8e173-1072-40f1-a67d-27773eecf554 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:42 np0005465987 nova_compute[230713]: 2025-10-02 12:25:42.071 2 DEBUG nova.compute.manager [req-6f11ee68-f32a-4669-ad53-a9ae4fc1337d req-f6f8e173-1072-40f1-a67d-27773eecf554 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] No waiting events found dispatching network-vif-plugged-a5a2116f-132d-4969-bd1c-ce09c2b93097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:25:42 np0005465987 nova_compute[230713]: 2025-10-02 12:25:42.072 2 WARNING nova.compute.manager [req-6f11ee68-f32a-4669-ad53-a9ae4fc1337d req-f6f8e173-1072-40f1-a67d-27773eecf554 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received unexpected event network-vif-plugged-a5a2116f-132d-4969-bd1c-ce09c2b93097 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:25:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:42.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:25:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:42.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:25:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:43.036 138432 DEBUG eventlet.wsgi.server [-] (138432) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct  2 08:25:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:43.037 138432 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Oct  2 08:25:43 np0005465987 ovn_metadata_agent[138320]: Accept: */*#015
Oct  2 08:25:43 np0005465987 ovn_metadata_agent[138320]: Connection: close#015
Oct  2 08:25:43 np0005465987 ovn_metadata_agent[138320]: Content-Type: text/plain#015
Oct  2 08:25:43 np0005465987 ovn_metadata_agent[138320]: Host: 169.254.169.254#015
Oct  2 08:25:43 np0005465987 ovn_metadata_agent[138320]: User-Agent: curl/7.84.0#015
Oct  2 08:25:43 np0005465987 ovn_metadata_agent[138320]: X-Forwarded-For: 10.100.0.7#015
Oct  2 08:25:43 np0005465987 ovn_metadata_agent[138320]: X-Ovn-Network-Id: 27c9fe46-eb26-48b9-9738-9fa4cca1ca98 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct  2 08:25:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:43Z|00344|binding|INFO|Releasing lport 2a2d3e39-e72b-4b59-9629-300f7c238d63 from this chassis (sb_readonly=0)
Oct  2 08:25:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:43Z|00345|binding|INFO|Releasing lport dd2034d9-8959-495d-af8e-a08a6bc18b56 from this chassis (sb_readonly=0)
Oct  2 08:25:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:43Z|00346|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:25:43 np0005465987 nova_compute[230713]: 2025-10-02 12:25:43.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:43.642 138432 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct  2 08:25:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:43.643 138432 INFO eventlet.wsgi.server [-] 10.100.0.7,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1918 time: 0.6060531#033[00m
Oct  2 08:25:43 np0005465987 haproxy-metadata-proxy-27c9fe46-eb26-48b9-9738-9fa4cca1ca98[267683]: 10.100.0.7:55820 [02/Oct/2025:12:25:43.035] listener listener/metadata 0/0/0/608/608 200 1902 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Oct  2 08:25:43 np0005465987 nova_compute[230713]: 2025-10-02 12:25:43.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:44.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.285 2 DEBUG oslo_concurrency.lockutils [None req-e86b717c-763c-462e-9d2c-f9b3d558ebce 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.286 2 DEBUG oslo_concurrency.lockutils [None req-e86b717c-763c-462e-9d2c-f9b3d558ebce 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.300 2 INFO nova.compute.manager [None req-e86b717c-763c-462e-9d2c-f9b3d558ebce 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Detaching volume 739adda1-476e-45a8-9f90-438ba66c8243#033[00m
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.463 2 INFO nova.virt.block_device [None req-e86b717c-763c-462e-9d2c-f9b3d558ebce 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Attempting to driver detach volume 739adda1-476e-45a8-9f90-438ba66c8243 from mountpoint /dev/vdb#033[00m
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.470 2 DEBUG nova.virt.libvirt.driver [None req-e86b717c-763c-462e-9d2c-f9b3d558ebce 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Attempting to detach device vdb from instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.471 2 DEBUG nova.virt.libvirt.guest [None req-e86b717c-763c-462e-9d2c-f9b3d558ebce 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:25:44 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-739adda1-476e-45a8-9f90-438ba66c8243">
Oct  2 08:25:44 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:  <serial>739adda1-476e-45a8-9f90-438ba66c8243</serial>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct  2 08:25:44 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:25:44 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.477 2 INFO nova.virt.libvirt.driver [None req-e86b717c-763c-462e-9d2c-f9b3d558ebce 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Successfully detached device vdb from instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 from the persistent domain config.#033[00m
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.477 2 DEBUG nova.virt.libvirt.driver [None req-e86b717c-763c-462e-9d2c-f9b3d558ebce 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.478 2 DEBUG nova.virt.libvirt.guest [None req-e86b717c-763c-462e-9d2c-f9b3d558ebce 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:25:44 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-739adda1-476e-45a8-9f90-438ba66c8243">
Oct  2 08:25:44 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:  <serial>739adda1-476e-45a8-9f90-438ba66c8243</serial>
Oct  2 08:25:44 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct  2 08:25:44 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:25:44 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.575 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Received event <DeviceRemovedEvent: 1759407944.5755877, e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.577 2 DEBUG nova.virt.libvirt.driver [None req-e86b717c-763c-462e-9d2c-f9b3d558ebce 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.579 2 INFO nova.virt.libvirt.driver [None req-e86b717c-763c-462e-9d2c-f9b3d558ebce 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Successfully detached device vdb from instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 from the live domain config.#033[00m
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.781 2 DEBUG nova.objects.instance [None req-e86b717c-763c-462e-9d2c-f9b3d558ebce 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lazy-loading 'flavor' on Instance uuid e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:44.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:44 np0005465987 nova_compute[230713]: 2025-10-02 12:25:44.888 2 DEBUG oslo_concurrency.lockutils [None req-e86b717c-763c-462e-9d2c-f9b3d558ebce 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.598 2 DEBUG oslo_concurrency.lockutils [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "interface-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-a5a2116f-132d-4969-bd1c-ce09c2b93097" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.599 2 DEBUG oslo_concurrency.lockutils [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "interface-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-a5a2116f-132d-4969-bd1c-ce09c2b93097" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.633 2 DEBUG nova.objects.instance [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lazy-loading 'flavor' on Instance uuid e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.658 2 DEBUG nova.virt.libvirt.vif [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1922630973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1922630973',id=103,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIOQDlxJeDw1n2lNeT07GFQw8tac43jR2Y/OIPYODq3fJ2v4BDr9CzOuh+zzaReBehEO76GEvBdX/fypn1I4RDJoLYBYiQL/9PaGBzrrf1rNcTZY0TsVr6HL2UpV8dqNmw==',key_name='tempest-keypair-1483715208',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:25:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5bedc1d8125465b92c98f9ebeb80ddb',ramdisk_id='',reservation_id='r-ibanttiz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-833632879',owner_user_name='tempest-TaggedAttachmentsTest-833632879-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:25:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39945f35886a49cb93285686405e043a',uuid=e69e3ed0-aca2-475e-9c98-e8d075ea4ce3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.658 2 DEBUG nova.network.os_vif_util [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converting VIF {"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.659 2 DEBUG nova.network.os_vif_util [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:4b:b9,bridge_name='br-int',has_traffic_filtering=True,id=a5a2116f-132d-4969-bd1c-ce09c2b93097,network=Network(e62bfe80-48ab-4aba-960a-acf615f1e469),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5a2116f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.661 2 DEBUG nova.virt.libvirt.guest [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c7:4b:b9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa5a2116f-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.664 2 DEBUG nova.virt.libvirt.guest [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c7:4b:b9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa5a2116f-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.666 2 DEBUG nova.virt.libvirt.driver [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Attempting to detach device tapa5a2116f-13 from instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.666 2 DEBUG nova.virt.libvirt.guest [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:c7:4b:b9"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <target dev="tapa5a2116f-13"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:25:45 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.671 2 DEBUG nova.virt.libvirt.guest [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c7:4b:b9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa5a2116f-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.674 2 DEBUG nova.virt.libvirt.guest [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c7:4b:b9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa5a2116f-13"/></interface>not found in domain: <domain type='kvm' id='44'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <name>instance-00000067</name>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <uuid>e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</uuid>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:name>tempest-device-tagging-server-1922630973</nova:name>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:25:39</nova:creationTime>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:user uuid="39945f35886a49cb93285686405e043a">tempest-TaggedAttachmentsTest-833632879-project-member</nova:user>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:project uuid="d5bedc1d8125465b92c98f9ebeb80ddb">tempest-TaggedAttachmentsTest-833632879</nova:project>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:port uuid="79000dd0-8403-4378-a1a4-87315f5019fb">
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:port uuid="a5a2116f-132d-4969-bd1c-ce09c2b93097">
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.10.10.157" ipVersion="4"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:25:45 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <entry name='serial'>e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</entry>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <entry name='uuid'>e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</entry>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk' index='2'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk.config' index='1'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:93:fc:f5'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target dev='tap79000dd0-84'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:c7:4b:b9'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target dev='tapa5a2116f-13'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='net1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/console.log' append='off'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/console.log' append='off'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c10,c473</label>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c10,c473</imagelabel>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:25:45 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:25:45 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.676 2 INFO nova.virt.libvirt.driver [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Successfully detached device tapa5a2116f-13 from instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 from the persistent domain config.
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.676 2 DEBUG nova.virt.libvirt.driver [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] (1/8): Attempting to detach device tapa5a2116f-13 with device alias net1 from instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.677 2 DEBUG nova.virt.libvirt.guest [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:c7:4b:b9"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <target dev="tapa5a2116f-13"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:25:45 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  2 08:25:45 np0005465987 kernel: tapa5a2116f-13 (unregistering): left promiscuous mode
Oct  2 08:25:45 np0005465987 NetworkManager[44910]: <info>  [1759407945.7257] device (tapa5a2116f-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.740 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Received event <DeviceRemovedEvent: 1759407945.7397208, e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.741 2 DEBUG nova.virt.libvirt.driver [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Start waiting for the detach event from libvirt for device tapa5a2116f-13 with device alias net1 for instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.741 2 DEBUG nova.virt.libvirt.guest [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c7:4b:b9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa5a2116f-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.770 2 DEBUG nova.virt.libvirt.guest [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c7:4b:b9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa5a2116f-13"/></interface> not found in domain: <domain type='kvm' id='44'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <name>instance-00000067</name>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <uuid>e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</uuid>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:name>tempest-device-tagging-server-1922630973</nova:name>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:25:39</nova:creationTime>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:user uuid="39945f35886a49cb93285686405e043a">tempest-TaggedAttachmentsTest-833632879-project-member</nova:user>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:project uuid="d5bedc1d8125465b92c98f9ebeb80ddb">tempest-TaggedAttachmentsTest-833632879</nova:project>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:port uuid="79000dd0-8403-4378-a1a4-87315f5019fb">
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:port uuid="a5a2116f-132d-4969-bd1c-ce09c2b93097">
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.10.10.157" ipVersion="4"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:25:45 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <entry name='serial'>e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</entry>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <entry name='uuid'>e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</entry>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk' index='2'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk.config' index='1'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:25:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:45Z|00347|binding|INFO|Releasing lport a5a2116f-132d-4969-bd1c-ce09c2b93097 from this chassis (sb_readonly=0)
Oct  2 08:25:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:45Z|00348|binding|INFO|Setting lport a5a2116f-132d-4969-bd1c-ce09c2b93097 down in Southbound
Oct  2 08:25:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:45Z|00349|binding|INFO|Removing iface tapa5a2116f-13 ovn-installed in OVS
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:93:fc:f5'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target dev='tap79000dd0-84'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/console.log' append='off'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/console.log' append='off'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c10,c473</label>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c10,c473</imagelabel>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:25:45 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:25:45 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.771 2 INFO nova.virt.libvirt.driver [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Successfully detached device tapa5a2116f-13 from instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 from the live domain config.#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.771 2 DEBUG nova.virt.libvirt.vif [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1922630973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1922630973',id=103,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIOQDlxJeDw1n2lNeT07GFQw8tac43jR2Y/OIPYODq3fJ2v4BDr9CzOuh+zzaReBehEO76GEvBdX/fypn1I4RDJoLYBYiQL/9PaGBzrrf1rNcTZY0TsVr6HL2UpV8dqNmw==',key_name='tempest-keypair-1483715208',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:25:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5bedc1d8125465b92c98f9ebeb80ddb',ramdisk_id='',reservation_id='r-ibanttiz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-833632879',owner_user_name='tempest-TaggedAttachmentsTest-833632879-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:25:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39945f35886a49cb93285686405e043a',uuid=e69e3ed0-aca2-475e-9c98-e8d075ea4ce3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.771 2 DEBUG nova.network.os_vif_util [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converting VIF {"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.772 2 DEBUG nova.network.os_vif_util [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:4b:b9,bridge_name='br-int',has_traffic_filtering=True,id=a5a2116f-132d-4969-bd1c-ce09c2b93097,network=Network(e62bfe80-48ab-4aba-960a-acf615f1e469),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5a2116f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.772 2 DEBUG os_vif [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:4b:b9,bridge_name='br-int',has_traffic_filtering=True,id=a5a2116f-132d-4969-bd1c-ce09c2b93097,network=Network(e62bfe80-48ab-4aba-960a-acf615f1e469),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5a2116f-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.776 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5a2116f-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:25:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:45.787 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:4b:b9 10.10.10.157'], port_security=['fa:16:3e:c7:4b:b9 10.10.10.157'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.157/24', 'neutron:device_id': 'e69e3ed0-aca2-475e-9c98-e8d075ea4ce3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e62bfe80-48ab-4aba-960a-acf615f1e469', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5bedc1d8125465b92c98f9ebeb80ddb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4217d4cc-2d8f-43a2-b88c-089959cf0d76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a42af78-d20d-4ab0-870c-f0bfb76579fa, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=a5a2116f-132d-4969-bd1c-ce09c2b93097) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:25:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:45.788 138325 INFO neutron.agent.ovn.metadata.agent [-] Port a5a2116f-132d-4969-bd1c-ce09c2b93097 in datapath e62bfe80-48ab-4aba-960a-acf615f1e469 unbound from our chassis#033[00m
Oct  2 08:25:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:45.790 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e62bfe80-48ab-4aba-960a-acf615f1e469, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:45.791 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[487f2118-3a8b-4e11-9371-91e335a727f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:45.791 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469 namespace which is not needed anymore#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.794 2 INFO os_vif [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:4b:b9,bridge_name='br-int',has_traffic_filtering=True,id=a5a2116f-132d-4969-bd1c-ce09c2b93097,network=Network(e62bfe80-48ab-4aba-960a-acf615f1e469),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5a2116f-13')#033[00m
Oct  2 08:25:45 np0005465987 nova_compute[230713]: 2025-10-02 12:25:45.795 2 DEBUG nova.virt.libvirt.guest [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:name>tempest-device-tagging-server-1922630973</nova:name>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:25:45</nova:creationTime>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:user uuid="39945f35886a49cb93285686405e043a">tempest-TaggedAttachmentsTest-833632879-project-member</nova:user>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:project uuid="d5bedc1d8125465b92c98f9ebeb80ddb">tempest-TaggedAttachmentsTest-833632879</nova:project>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    <nova:port uuid="79000dd0-8403-4378-a1a4-87315f5019fb">
Oct  2 08:25:45 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:25:45 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:25:45 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:25:45 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:25:45 np0005465987 neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469[268158]: [NOTICE]   (268162) : haproxy version is 2.8.14-c23fe91
Oct  2 08:25:45 np0005465987 neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469[268158]: [NOTICE]   (268162) : path to executable is /usr/sbin/haproxy
Oct  2 08:25:45 np0005465987 neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469[268158]: [WARNING]  (268162) : Exiting Master process...
Oct  2 08:25:45 np0005465987 neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469[268158]: [WARNING]  (268162) : Exiting Master process...
Oct  2 08:25:45 np0005465987 neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469[268158]: [ALERT]    (268162) : Current worker (268164) exited with code 143 (Terminated)
Oct  2 08:25:45 np0005465987 neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469[268158]: [WARNING]  (268162) : All workers exited. Exiting... (0)
Oct  2 08:25:45 np0005465987 podman[268289]: 2025-10-02 12:25:45.92427857 +0000 UTC m=+0.053352960 container stop f11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:25:45 np0005465987 systemd[1]: libpod-f11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f.scope: Deactivated successfully.
Oct  2 08:25:45 np0005465987 podman[268289]: 2025-10-02 12:25:45.954933355 +0000 UTC m=+0.084007755 container died f11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:25:45 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f-userdata-shm.mount: Deactivated successfully.
Oct  2 08:25:45 np0005465987 systemd[1]: var-lib-containers-storage-overlay-8bce8f8544c821bfd3c88e9743141380ea6e2ba3538eabcb1ed51c7ad8d117da-merged.mount: Deactivated successfully.
Oct  2 08:25:46 np0005465987 podman[268289]: 2025-10-02 12:25:46.01251373 +0000 UTC m=+0.141588110 container cleanup f11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:25:46 np0005465987 systemd[1]: libpod-conmon-f11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f.scope: Deactivated successfully.
Oct  2 08:25:46 np0005465987 podman[268320]: 2025-10-02 12:25:46.076212404 +0000 UTC m=+0.041214566 container remove f11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:25:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:46.081 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0e51d6-aea4-44a6-85fc-882b1846fddd]: (4, ('Thu Oct  2 12:25:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469 (f11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f)\nf11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f\nThu Oct  2 12:25:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469 (f11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f)\nf11f159fb30582c30c0c785e1dd2cd028962ac8c0c6b035671b11852835b324f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:46.083 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ddb8d2-c8e3-4c0e-a058-b65afa3f0570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:46.084 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape62bfe80-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:46 np0005465987 nova_compute[230713]: 2025-10-02 12:25:46.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:46 np0005465987 kernel: tape62bfe80-40: left promiscuous mode
Oct  2 08:25:46 np0005465987 nova_compute[230713]: 2025-10-02 12:25:46.093 2 DEBUG nova.compute.manager [req-2016c8cf-44c6-45c3-9f85-f14ae3ad8a69 req-0457c635-4039-4508-9fe8-d62473e7dfa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-vif-unplugged-a5a2116f-132d-4969-bd1c-ce09c2b93097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:46 np0005465987 nova_compute[230713]: 2025-10-02 12:25:46.093 2 DEBUG oslo_concurrency.lockutils [req-2016c8cf-44c6-45c3-9f85-f14ae3ad8a69 req-0457c635-4039-4508-9fe8-d62473e7dfa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:46 np0005465987 nova_compute[230713]: 2025-10-02 12:25:46.094 2 DEBUG oslo_concurrency.lockutils [req-2016c8cf-44c6-45c3-9f85-f14ae3ad8a69 req-0457c635-4039-4508-9fe8-d62473e7dfa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:46 np0005465987 nova_compute[230713]: 2025-10-02 12:25:46.094 2 DEBUG oslo_concurrency.lockutils [req-2016c8cf-44c6-45c3-9f85-f14ae3ad8a69 req-0457c635-4039-4508-9fe8-d62473e7dfa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:46 np0005465987 nova_compute[230713]: 2025-10-02 12:25:46.094 2 DEBUG nova.compute.manager [req-2016c8cf-44c6-45c3-9f85-f14ae3ad8a69 req-0457c635-4039-4508-9fe8-d62473e7dfa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] No waiting events found dispatching network-vif-unplugged-a5a2116f-132d-4969-bd1c-ce09c2b93097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:25:46 np0005465987 nova_compute[230713]: 2025-10-02 12:25:46.095 2 WARNING nova.compute.manager [req-2016c8cf-44c6-45c3-9f85-f14ae3ad8a69 req-0457c635-4039-4508-9fe8-d62473e7dfa9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received unexpected event network-vif-unplugged-a5a2116f-132d-4969-bd1c-ce09c2b93097 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:25:46 np0005465987 nova_compute[230713]: 2025-10-02 12:25:46.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:46.102 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[35f27d77-7af0-4875-bafc-167b588ea6e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:46.139 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[07323d1f-ff1a-42f4-b667-139c22c4bd60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:46.140 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe5e94f-e08f-4d6f-a236-6714ca62c597]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:46.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:46.155 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7e68031b-63bc-4954-85f5-7aab39e0109e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600751, 'reachable_time': 32037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268335, 'error': None, 'target': 'ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:46 np0005465987 systemd[1]: run-netns-ovnmeta\x2de62bfe80\x2d48ab\x2d4aba\x2d960a\x2dacf615f1e469.mount: Deactivated successfully.
Oct  2 08:25:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:46.160 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e62bfe80-48ab-4aba-960a-acf615f1e469 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:25:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:46.160 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c7f2be-7f40-4e55-aef3-e04ffb3541a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:46.190 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:46.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:47 np0005465987 nova_compute[230713]: 2025-10-02 12:25:47.261 2 DEBUG oslo_concurrency.lockutils [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:25:47 np0005465987 nova_compute[230713]: 2025-10-02 12:25:47.261 2 DEBUG oslo_concurrency.lockutils [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquired lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:25:47 np0005465987 nova_compute[230713]: 2025-10-02 12:25:47.262 2 DEBUG nova.network.neutron [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:25:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:48.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:48 np0005465987 nova_compute[230713]: 2025-10-02 12:25:48.368 2 DEBUG nova.compute.manager [req-a6ddde47-c27f-42b4-afda-633529f1017e req-5507f06f-f1b2-4b82-b558-1983d1b1b32b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-vif-plugged-a5a2116f-132d-4969-bd1c-ce09c2b93097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:48 np0005465987 nova_compute[230713]: 2025-10-02 12:25:48.368 2 DEBUG oslo_concurrency.lockutils [req-a6ddde47-c27f-42b4-afda-633529f1017e req-5507f06f-f1b2-4b82-b558-1983d1b1b32b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:48 np0005465987 nova_compute[230713]: 2025-10-02 12:25:48.369 2 DEBUG oslo_concurrency.lockutils [req-a6ddde47-c27f-42b4-afda-633529f1017e req-5507f06f-f1b2-4b82-b558-1983d1b1b32b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:48 np0005465987 nova_compute[230713]: 2025-10-02 12:25:48.369 2 DEBUG oslo_concurrency.lockutils [req-a6ddde47-c27f-42b4-afda-633529f1017e req-5507f06f-f1b2-4b82-b558-1983d1b1b32b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:48 np0005465987 nova_compute[230713]: 2025-10-02 12:25:48.369 2 DEBUG nova.compute.manager [req-a6ddde47-c27f-42b4-afda-633529f1017e req-5507f06f-f1b2-4b82-b558-1983d1b1b32b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] No waiting events found dispatching network-vif-plugged-a5a2116f-132d-4969-bd1c-ce09c2b93097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:25:48 np0005465987 nova_compute[230713]: 2025-10-02 12:25:48.369 2 WARNING nova.compute.manager [req-a6ddde47-c27f-42b4-afda-633529f1017e req-5507f06f-f1b2-4b82-b558-1983d1b1b32b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received unexpected event network-vif-plugged-a5a2116f-132d-4969-bd1c-ce09c2b93097 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:25:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:48.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:48 np0005465987 nova_compute[230713]: 2025-10-02 12:25:48.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.017 2 DEBUG nova.compute.manager [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-vif-deleted-a5a2116f-132d-4969-bd1c-ce09c2b93097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.017 2 INFO nova.compute.manager [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Neutron deleted interface a5a2116f-132d-4969-bd1c-ce09c2b93097; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.017 2 DEBUG nova.network.neutron [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Updating instance_info_cache with network_info: [{"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.063 2 DEBUG nova.objects.instance [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lazy-loading 'system_metadata' on Instance uuid e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.096 2 DEBUG nova.objects.instance [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lazy-loading 'flavor' on Instance uuid e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.131 2 DEBUG nova.virt.libvirt.vif [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1922630973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1922630973',id=103,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIOQDlxJeDw1n2lNeT07GFQw8tac43jR2Y/OIPYODq3fJ2v4BDr9CzOuh+zzaReBehEO76GEvBdX/fypn1I4RDJoLYBYiQL/9PaGBzrrf1rNcTZY0TsVr6HL2UpV8dqNmw==',key_name='tempest-keypair-1483715208',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:25:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5bedc1d8125465b92c98f9ebeb80ddb',ramdisk_id='',reservation_id='r-ibanttiz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-833632879',owner_user_name='tempest-TaggedAttachmentsTest-833632879-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:25:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39945f35886a49cb93285686405e043a',uuid=e69e3ed0-aca2-475e-9c98-e8d075ea4ce3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": 
"10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.131 2 DEBUG nova.network.os_vif_util [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converting VIF {"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.132 2 DEBUG nova.network.os_vif_util [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:4b:b9,bridge_name='br-int',has_traffic_filtering=True,id=a5a2116f-132d-4969-bd1c-ce09c2b93097,network=Network(e62bfe80-48ab-4aba-960a-acf615f1e469),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5a2116f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.134 2 DEBUG nova.virt.libvirt.guest [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c7:4b:b9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa5a2116f-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.137 2 DEBUG nova.virt.libvirt.guest [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c7:4b:b9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa5a2116f-13"/></interface>not found in domain: <domain type='kvm' id='44'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <name>instance-00000067</name>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <uuid>e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</uuid>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:name>tempest-device-tagging-server-1922630973</nova:name>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:25:45</nova:creationTime>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:user uuid="39945f35886a49cb93285686405e043a">tempest-TaggedAttachmentsTest-833632879-project-member</nova:user>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:project uuid="d5bedc1d8125465b92c98f9ebeb80ddb">tempest-TaggedAttachmentsTest-833632879</nova:project>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:port uuid="79000dd0-8403-4378-a1a4-87315f5019fb">
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:25:49 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <entry name='serial'>e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</entry>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <entry name='uuid'>e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</entry>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk' index='2'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk.config' index='1'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:93:fc:f5'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target dev='tap79000dd0-84'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/console.log' append='off'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/console.log' append='off'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c10,c473</label>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c10,c473</imagelabel>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:25:49 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:25:49 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.138 2 DEBUG nova.virt.libvirt.guest [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c7:4b:b9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa5a2116f-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.142 2 DEBUG nova.virt.libvirt.guest [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c7:4b:b9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa5a2116f-13"/></interface>not found in domain: <domain type='kvm' id='44'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <name>instance-00000067</name>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <uuid>e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</uuid>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:name>tempest-device-tagging-server-1922630973</nova:name>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:25:45</nova:creationTime>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:user uuid="39945f35886a49cb93285686405e043a">tempest-TaggedAttachmentsTest-833632879-project-member</nova:user>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:project uuid="d5bedc1d8125465b92c98f9ebeb80ddb">tempest-TaggedAttachmentsTest-833632879</nova:project>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:port uuid="79000dd0-8403-4378-a1a4-87315f5019fb">
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:25:49 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <entry name='serial'>e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</entry>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <entry name='uuid'>e69e3ed0-aca2-475e-9c98-e8d075ea4ce3</entry>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk' index='2'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_disk.config' index='1'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:93:fc:f5'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target dev='tap79000dd0-84'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/console.log' append='off'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <source path='/dev/pts/1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3/console.log' append='off'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c10,c473</label>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c10,c473</imagelabel>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:25:49 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:25:49 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.143 2 WARNING nova.virt.libvirt.driver [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Detaching interface fa:16:3e:c7:4b:b9 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapa5a2116f-13' not found.#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.143 2 DEBUG nova.virt.libvirt.vif [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1922630973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1922630973',id=103,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIOQDlxJeDw1n2lNeT07GFQw8tac43jR2Y/OIPYODq3fJ2v4BDr9CzOuh+zzaReBehEO76GEvBdX/fypn1I4RDJoLYBYiQL/9PaGBzrrf1rNcTZY0TsVr6HL2UpV8dqNmw==',key_name='tempest-keypair-1483715208',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:25:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5bedc1d8125465b92c98f9ebeb80ddb',ramdisk_id='',reservation_id='r-ibanttiz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-833632879',owner_user_name='tempest-TaggedAttachmentsTest-833632879-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:25:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39945f35886a49cb93285686405e043a',uuid=e69e3ed0-aca2-475e-9c98-e8d075ea4ce3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": 
"10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.144 2 DEBUG nova.network.os_vif_util [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converting VIF {"id": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "address": "fa:16:3e:c7:4b:b9", "network": {"id": "e62bfe80-48ab-4aba-960a-acf615f1e469", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-484594234", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.157", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5a2116f-13", "ovs_interfaceid": "a5a2116f-132d-4969-bd1c-ce09c2b93097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.144 2 DEBUG nova.network.os_vif_util [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:4b:b9,bridge_name='br-int',has_traffic_filtering=True,id=a5a2116f-132d-4969-bd1c-ce09c2b93097,network=Network(e62bfe80-48ab-4aba-960a-acf615f1e469),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5a2116f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.145 2 DEBUG os_vif [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:4b:b9,bridge_name='br-int',has_traffic_filtering=True,id=a5a2116f-132d-4969-bd1c-ce09c2b93097,network=Network(e62bfe80-48ab-4aba-960a-acf615f1e469),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5a2116f-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.146 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5a2116f-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.149 2 INFO os_vif [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:4b:b9,bridge_name='br-int',has_traffic_filtering=True,id=a5a2116f-132d-4969-bd1c-ce09c2b93097,network=Network(e62bfe80-48ab-4aba-960a-acf615f1e469),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5a2116f-13')#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.150 2 DEBUG nova.virt.libvirt.guest [req-09205515-0eeb-4ad2-9d30-3948a4b3cc46 req-a92f5ef1-c353-4755-9e5c-3c913f4794f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:name>tempest-device-tagging-server-1922630973</nova:name>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:25:49</nova:creationTime>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:user uuid="39945f35886a49cb93285686405e043a">tempest-TaggedAttachmentsTest-833632879-project-member</nova:user>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:project uuid="d5bedc1d8125465b92c98f9ebeb80ddb">tempest-TaggedAttachmentsTest-833632879</nova:project>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    <nova:port uuid="79000dd0-8403-4378-a1a4-87315f5019fb">
Oct  2 08:25:49 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:25:49 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:25:49 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:25:49 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.658 2 INFO nova.network.neutron [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Port a5a2116f-132d-4969-bd1c-ce09c2b93097 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.660 2 DEBUG nova.network.neutron [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Updating instance_info_cache with network_info: [{"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.697 2 DEBUG oslo_concurrency.lockutils [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Releasing lock "refresh_cache-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:25:49 np0005465987 nova_compute[230713]: 2025-10-02 12:25:49.777 2 DEBUG oslo_concurrency.lockutils [None req-2300c6b8-906e-406b-a5a2-640f4279a1e0 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "interface-e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-a5a2116f-132d-4969-bd1c-ce09c2b93097" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:50.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:50 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:50Z|00350|binding|INFO|Releasing lport dd2034d9-8959-495d-af8e-a08a6bc18b56 from this chassis (sb_readonly=0)
Oct  2 08:25:50 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:50Z|00351|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:25:50 np0005465987 nova_compute[230713]: 2025-10-02 12:25:50.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:50 np0005465987 nova_compute[230713]: 2025-10-02 12:25:50.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:25:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:50.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:25:50 np0005465987 nova_compute[230713]: 2025-10-02 12:25:50.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:50 np0005465987 nova_compute[230713]: 2025-10-02 12:25:50.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:25:50 np0005465987 nova_compute[230713]: 2025-10-02 12:25:50.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.216 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.216 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.217 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.217 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 63cd9316-0da7-4274-b2dc-3501e55e6617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.360 2 DEBUG oslo_concurrency.lockutils [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.361 2 DEBUG oslo_concurrency.lockutils [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.361 2 DEBUG oslo_concurrency.lockutils [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.362 2 DEBUG oslo_concurrency.lockutils [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.362 2 DEBUG oslo_concurrency.lockutils [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.363 2 INFO nova.compute.manager [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Terminating instance#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.365 2 DEBUG nova.compute.manager [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:25:51 np0005465987 kernel: tap79000dd0-84 (unregistering): left promiscuous mode
Oct  2 08:25:51 np0005465987 NetworkManager[44910]: <info>  [1759407951.4454] device (tap79000dd0-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:25:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:51Z|00352|binding|INFO|Releasing lport 79000dd0-8403-4378-a1a4-87315f5019fb from this chassis (sb_readonly=0)
Oct  2 08:25:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:51Z|00353|binding|INFO|Setting lport 79000dd0-8403-4378-a1a4-87315f5019fb down in Southbound
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:51Z|00354|binding|INFO|Removing iface tap79000dd0-84 ovn-installed in OVS
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:51 np0005465987 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct  2 08:25:51 np0005465987 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000067.scope: Consumed 15.567s CPU time.
Oct  2 08:25:51 np0005465987 systemd-machined[188335]: Machine qemu-44-instance-00000067 terminated.
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.524 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:fc:f5 10.100.0.7'], port_security=['fa:16:3e:93:fc:f5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e69e3ed0-aca2-475e-9c98-e8d075ea4ce3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27c9fe46-eb26-48b9-9738-9fa4cca1ca98', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5bedc1d8125465b92c98f9ebeb80ddb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f10d3d11-b21b-40a8-b011-788ac820525d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7b68bc8-fcf2-4b38-a87f-245c078df7f7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=79000dd0-8403-4378-a1a4-87315f5019fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.525 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 79000dd0-8403-4378-a1a4-87315f5019fb in datapath 27c9fe46-eb26-48b9-9738-9fa4cca1ca98 unbound from our chassis#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.526 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27c9fe46-eb26-48b9-9738-9fa4cca1ca98, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.527 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bea02cea-f76e-4c2b-af99-3bccf3619bff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.528 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98 namespace which is not needed anymore#033[00m
Oct  2 08:25:51 np0005465987 kernel: tap79000dd0-84: entered promiscuous mode
Oct  2 08:25:51 np0005465987 NetworkManager[44910]: <info>  [1759407951.5842] manager: (tap79000dd0-84): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Oct  2 08:25:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:51Z|00355|binding|INFO|Claiming lport 79000dd0-8403-4378-a1a4-87315f5019fb for this chassis.
Oct  2 08:25:51 np0005465987 kernel: tap79000dd0-84 (unregistering): left promiscuous mode
Oct  2 08:25:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:51Z|00356|binding|INFO|79000dd0-8403-4378-a1a4-87315f5019fb: Claiming fa:16:3e:93:fc:f5 10.100.0.7
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:51 np0005465987 systemd-udevd[268339]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.642 2 INFO nova.virt.libvirt.driver [-] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Instance destroyed successfully.#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.642 2 DEBUG nova.objects.instance [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lazy-loading 'resources' on Instance uuid e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:51Z|00357|binding|INFO|Setting lport 79000dd0-8403-4378-a1a4-87315f5019fb ovn-installed in OVS
Oct  2 08:25:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:51Z|00358|if_status|INFO|Dropped 3 log messages in last 444 seconds (most recently, 444 seconds ago) due to excessive rate
Oct  2 08:25:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:51Z|00359|if_status|INFO|Not setting lport 79000dd0-8403-4378-a1a4-87315f5019fb down as sb is readonly
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:25:51Z|00360|binding|INFO|Releasing lport 79000dd0-8403-4378-a1a4-87315f5019fb from this chassis (sb_readonly=0)
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.652 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:fc:f5 10.100.0.7'], port_security=['fa:16:3e:93:fc:f5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e69e3ed0-aca2-475e-9c98-e8d075ea4ce3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27c9fe46-eb26-48b9-9738-9fa4cca1ca98', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5bedc1d8125465b92c98f9ebeb80ddb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f10d3d11-b21b-40a8-b011-788ac820525d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7b68bc8-fcf2-4b38-a87f-245c078df7f7, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=79000dd0-8403-4378-a1a4-87315f5019fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.660 2 DEBUG nova.virt.libvirt.vif [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1922630973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1922630973',id=103,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIOQDlxJeDw1n2lNeT07GFQw8tac43jR2Y/OIPYODq3fJ2v4BDr9CzOuh+zzaReBehEO76GEvBdX/fypn1I4RDJoLYBYiQL/9PaGBzrrf1rNcTZY0TsVr6HL2UpV8dqNmw==',key_name='tempest-keypair-1483715208',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:25:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5bedc1d8125465b92c98f9ebeb80ddb',ramdisk_id='',reservation_id='r-ibanttiz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bu
s='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-833632879',owner_user_name='tempest-TaggedAttachmentsTest-833632879-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:25:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39945f35886a49cb93285686405e043a',uuid=e69e3ed0-aca2-475e-9c98-e8d075ea4ce3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.660 2 DEBUG nova.network.os_vif_util [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converting VIF {"id": "79000dd0-8403-4378-a1a4-87315f5019fb", "address": "fa:16:3e:93:fc:f5", "network": {"id": "27c9fe46-eb26-48b9-9738-9fa4cca1ca98", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-719247213-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5bedc1d8125465b92c98f9ebeb80ddb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79000dd0-84", "ovs_interfaceid": "79000dd0-8403-4378-a1a4-87315f5019fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.661 2 DEBUG nova.network.os_vif_util [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:fc:f5,bridge_name='br-int',has_traffic_filtering=True,id=79000dd0-8403-4378-a1a4-87315f5019fb,network=Network(27c9fe46-eb26-48b9-9738-9fa4cca1ca98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79000dd0-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.661 2 DEBUG os_vif [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:fc:f5,bridge_name='br-int',has_traffic_filtering=True,id=79000dd0-8403-4378-a1a4-87315f5019fb,network=Network(27c9fe46-eb26-48b9-9738-9fa4cca1ca98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79000dd0-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.663 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79000dd0-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.670 2 INFO os_vif [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:fc:f5,bridge_name='br-int',has_traffic_filtering=True,id=79000dd0-8403-4378-a1a4-87315f5019fb,network=Network(27c9fe46-eb26-48b9-9738-9fa4cca1ca98),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79000dd0-84')#033[00m
Oct  2 08:25:51 np0005465987 neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98[267677]: [NOTICE]   (267681) : haproxy version is 2.8.14-c23fe91
Oct  2 08:25:51 np0005465987 neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98[267677]: [NOTICE]   (267681) : path to executable is /usr/sbin/haproxy
Oct  2 08:25:51 np0005465987 neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98[267677]: [ALERT]    (267681) : Current worker (267683) exited with code 143 (Terminated)
Oct  2 08:25:51 np0005465987 neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98[267677]: [WARNING]  (267681) : All workers exited. Exiting... (0)
Oct  2 08:25:51 np0005465987 systemd[1]: libpod-cd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6.scope: Deactivated successfully.
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.698 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:fc:f5 10.100.0.7'], port_security=['fa:16:3e:93:fc:f5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e69e3ed0-aca2-475e-9c98-e8d075ea4ce3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27c9fe46-eb26-48b9-9738-9fa4cca1ca98', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5bedc1d8125465b92c98f9ebeb80ddb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f10d3d11-b21b-40a8-b011-788ac820525d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7b68bc8-fcf2-4b38-a87f-245c078df7f7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=79000dd0-8403-4378-a1a4-87315f5019fb) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:25:51 np0005465987 podman[268361]: 2025-10-02 12:25:51.700834034 +0000 UTC m=+0.053975307 container died cd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:25:51 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6-userdata-shm.mount: Deactivated successfully.
Oct  2 08:25:51 np0005465987 systemd[1]: var-lib-containers-storage-overlay-335ae2d8bc275134815651c707ba108fd87c6b9278691f651c601d8c6a79c030-merged.mount: Deactivated successfully.
Oct  2 08:25:51 np0005465987 podman[268361]: 2025-10-02 12:25:51.73663143 +0000 UTC m=+0.089772703 container cleanup cd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:25:51 np0005465987 systemd[1]: libpod-conmon-cd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6.scope: Deactivated successfully.
Oct  2 08:25:51 np0005465987 podman[268409]: 2025-10-02 12:25:51.800584511 +0000 UTC m=+0.042223003 container remove cd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.806 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8c29ec4f-74d5-4efd-abe1-cf19ed755250]: (4, ('Thu Oct  2 12:25:51 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98 (cd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6)\ncd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6\nThu Oct  2 12:25:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98 (cd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6)\ncd4a6e4a9b76437a910819ac01d4487354a9efc94c401091dc958ffc84f691a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.808 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9df99306-fffa-4788-b7f3-fab425ade23d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.809 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27c9fe46-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:51 np0005465987 kernel: tap27c9fe46-e0: left promiscuous mode
Oct  2 08:25:51 np0005465987 nova_compute[230713]: 2025-10-02 12:25:51.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.828 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[67990505-57da-470e-8cfc-6a1bb6ba9ee5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.859 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cb2f1141-0d18-4b94-8cb6-126f9883af07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.860 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8ced78ce-5ba8-445d-af64-2dae143cc056]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.878 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1023ba12-297d-44d1-8cda-95ebe921e3f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597191, 'reachable_time': 43809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268425, 'error': None, 'target': 'ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.880 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-27c9fe46-eb26-48b9-9738-9fa4cca1ca98 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.881 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1b04e0-d7a8-47d7-89fe-092d16ec864f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.881 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 79000dd0-8403-4378-a1a4-87315f5019fb in datapath 27c9fe46-eb26-48b9-9738-9fa4cca1ca98 unbound from our chassis#033[00m
Oct  2 08:25:51 np0005465987 systemd[1]: run-netns-ovnmeta\x2d27c9fe46\x2deb26\x2d48b9\x2d9738\x2d9fa4cca1ca98.mount: Deactivated successfully.
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.883 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27c9fe46-eb26-48b9-9738-9fa4cca1ca98, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.884 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a076be-caf1-4481-95a8-d4b131cd2584]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.884 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 79000dd0-8403-4378-a1a4-87315f5019fb in datapath 27c9fe46-eb26-48b9-9738-9fa4cca1ca98 unbound from our chassis#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.885 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27c9fe46-eb26-48b9-9738-9fa4cca1ca98, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:25:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:25:51.886 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c9506ad4-c517-4417-adc6-3436411d4092]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:52 np0005465987 nova_compute[230713]: 2025-10-02 12:25:52.160 2 DEBUG nova.compute.manager [req-6f3cd13e-26ba-4b3b-a0aa-a30163e79a6a req-885ba179-bedc-4240-849e-8fc34f7267de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-vif-unplugged-79000dd0-8403-4378-a1a4-87315f5019fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:52 np0005465987 nova_compute[230713]: 2025-10-02 12:25:52.161 2 DEBUG oslo_concurrency.lockutils [req-6f3cd13e-26ba-4b3b-a0aa-a30163e79a6a req-885ba179-bedc-4240-849e-8fc34f7267de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:52 np0005465987 nova_compute[230713]: 2025-10-02 12:25:52.161 2 DEBUG oslo_concurrency.lockutils [req-6f3cd13e-26ba-4b3b-a0aa-a30163e79a6a req-885ba179-bedc-4240-849e-8fc34f7267de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:52 np0005465987 nova_compute[230713]: 2025-10-02 12:25:52.161 2 DEBUG oslo_concurrency.lockutils [req-6f3cd13e-26ba-4b3b-a0aa-a30163e79a6a req-885ba179-bedc-4240-849e-8fc34f7267de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:52 np0005465987 nova_compute[230713]: 2025-10-02 12:25:52.162 2 DEBUG nova.compute.manager [req-6f3cd13e-26ba-4b3b-a0aa-a30163e79a6a req-885ba179-bedc-4240-849e-8fc34f7267de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] No waiting events found dispatching network-vif-unplugged-79000dd0-8403-4378-a1a4-87315f5019fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:25:52 np0005465987 nova_compute[230713]: 2025-10-02 12:25:52.162 2 DEBUG nova.compute.manager [req-6f3cd13e-26ba-4b3b-a0aa-a30163e79a6a req-885ba179-bedc-4240-849e-8fc34f7267de d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-vif-unplugged-79000dd0-8403-4378-a1a4-87315f5019fb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:25:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:52.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:53 np0005465987 nova_compute[230713]: 2025-10-02 12:25:53.076 2 INFO nova.virt.libvirt.driver [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Deleting instance files /var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_del#033[00m
Oct  2 08:25:53 np0005465987 nova_compute[230713]: 2025-10-02 12:25:53.077 2 INFO nova.virt.libvirt.driver [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Deletion of /var/lib/nova/instances/e69e3ed0-aca2-475e-9c98-e8d075ea4ce3_del complete#033[00m
Oct  2 08:25:53 np0005465987 nova_compute[230713]: 2025-10-02 12:25:53.123 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Updating instance_info_cache with network_info: [{"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:25:53 np0005465987 nova_compute[230713]: 2025-10-02 12:25:53.181 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-63cd9316-0da7-4274-b2dc-3501e55e6617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:25:53 np0005465987 nova_compute[230713]: 2025-10-02 12:25:53.181 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:25:53 np0005465987 nova_compute[230713]: 2025-10-02 12:25:53.189 2 INFO nova.compute.manager [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Took 1.82 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:25:53 np0005465987 nova_compute[230713]: 2025-10-02 12:25:53.189 2 DEBUG oslo.service.loopingcall [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:25:53 np0005465987 nova_compute[230713]: 2025-10-02 12:25:53.190 2 DEBUG nova.compute.manager [-] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:25:53 np0005465987 nova_compute[230713]: 2025-10-02 12:25:53.190 2 DEBUG nova.network.neutron [-] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:25:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:54.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:54 np0005465987 nova_compute[230713]: 2025-10-02 12:25:54.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:54.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:55 np0005465987 nova_compute[230713]: 2025-10-02 12:25:55.173 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:56.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:56 np0005465987 nova_compute[230713]: 2025-10-02 12:25:56.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:56.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:57 np0005465987 nova_compute[230713]: 2025-10-02 12:25:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:57 np0005465987 nova_compute[230713]: 2025-10-02 12:25:57.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:25:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:58.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:25:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:25:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:58.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:59 np0005465987 nova_compute[230713]: 2025-10-02 12:25:59.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:59 np0005465987 nova_compute[230713]: 2025-10-02 12:25:59.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:00.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:00.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:00 np0005465987 nova_compute[230713]: 2025-10-02 12:26:00.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:00 np0005465987 nova_compute[230713]: 2025-10-02 12:26:00.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:01 np0005465987 nova_compute[230713]: 2025-10-02 12:26:01.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:02.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:02.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:02 np0005465987 podman[268427]: 2025-10-02 12:26:02.875967914 +0000 UTC m=+0.086650637 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack 
Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:26:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:04.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:04 np0005465987 nova_compute[230713]: 2025-10-02 12:26:04.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:04.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:06.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:06 np0005465987 nova_compute[230713]: 2025-10-02 12:26:06.272 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:06 np0005465987 nova_compute[230713]: 2025-10-02 12:26:06.273 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:06 np0005465987 nova_compute[230713]: 2025-10-02 12:26:06.273 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:06 np0005465987 nova_compute[230713]: 2025-10-02 12:26:06.273 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:26:06 np0005465987 nova_compute[230713]: 2025-10-02 12:26:06.273 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:06 np0005465987 nova_compute[230713]: 2025-10-02 12:26:06.639 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407951.6391418, e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:26:06 np0005465987 nova_compute[230713]: 2025-10-02 12:26:06.640 2 INFO nova.compute.manager [-] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:26:06 np0005465987 nova_compute[230713]: 2025-10-02 12:26:06.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:26:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/627826706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:26:06 np0005465987 nova_compute[230713]: 2025-10-02 12:26:06.732 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:06.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:06 np0005465987 nova_compute[230713]: 2025-10-02 12:26:06.934 2 DEBUG nova.compute.manager [None req-21964992-46ff-48f5-8e9b-f758a8f9b2b9 - - - - - -] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:26:07 np0005465987 nova_compute[230713]: 2025-10-02 12:26:07.075 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:26:07 np0005465987 nova_compute[230713]: 2025-10-02 12:26:07.075 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:26:07 np0005465987 nova_compute[230713]: 2025-10-02 12:26:07.227 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:26:07 np0005465987 nova_compute[230713]: 2025-10-02 12:26:07.228 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4268MB free_disk=20.851421356201172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:26:07 np0005465987 nova_compute[230713]: 2025-10-02 12:26:07.228 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:07 np0005465987 nova_compute[230713]: 2025-10-02 12:26:07.228 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:08.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:08 np0005465987 nova_compute[230713]: 2025-10-02 12:26:08.201 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 63cd9316-0da7-4274-b2dc-3501e55e6617 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:26:08 np0005465987 nova_compute[230713]: 2025-10-02 12:26:08.201 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:26:08 np0005465987 nova_compute[230713]: 2025-10-02 12:26:08.201 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:26:08 np0005465987 nova_compute[230713]: 2025-10-02 12:26:08.202 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:26:08 np0005465987 nova_compute[230713]: 2025-10-02 12:26:08.754 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:08.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:08 np0005465987 nova_compute[230713]: 2025-10-02 12:26:08.882 2 DEBUG nova.compute.manager [req-ff4d213e-34ee-477d-aace-faecf7a14c54 req-322b6e79-32da-464c-bfcc-09559650fccd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-vif-plugged-79000dd0-8403-4378-a1a4-87315f5019fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:08 np0005465987 nova_compute[230713]: 2025-10-02 12:26:08.883 2 DEBUG oslo_concurrency.lockutils [req-ff4d213e-34ee-477d-aace-faecf7a14c54 req-322b6e79-32da-464c-bfcc-09559650fccd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:08 np0005465987 nova_compute[230713]: 2025-10-02 12:26:08.883 2 DEBUG oslo_concurrency.lockutils [req-ff4d213e-34ee-477d-aace-faecf7a14c54 req-322b6e79-32da-464c-bfcc-09559650fccd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:08 np0005465987 nova_compute[230713]: 2025-10-02 12:26:08.884 2 DEBUG oslo_concurrency.lockutils [req-ff4d213e-34ee-477d-aace-faecf7a14c54 req-322b6e79-32da-464c-bfcc-09559650fccd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:08 np0005465987 nova_compute[230713]: 2025-10-02 12:26:08.884 2 DEBUG nova.compute.manager [req-ff4d213e-34ee-477d-aace-faecf7a14c54 req-322b6e79-32da-464c-bfcc-09559650fccd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] No waiting events found dispatching network-vif-plugged-79000dd0-8403-4378-a1a4-87315f5019fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:26:08 np0005465987 nova_compute[230713]: 2025-10-02 12:26:08.884 2 WARNING nova.compute.manager [req-ff4d213e-34ee-477d-aace-faecf7a14c54 req-322b6e79-32da-464c-bfcc-09559650fccd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received unexpected event network-vif-plugged-79000dd0-8403-4378-a1a4-87315f5019fb for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:26:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:26:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4083597903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.247 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.255 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.306 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.414 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.415 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.568 2 DEBUG nova.network.neutron [-] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.668 2 INFO nova.compute.manager [-] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Took 16.48 seconds to deallocate network for instance.#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.709 2 DEBUG nova.compute.manager [req-b4d61081-72cc-41b7-aeed-9f8894ac047e req-402c30da-913e-4211-913f-36b3f7ca64a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Received event network-vif-deleted-79000dd0-8403-4378-a1a4-87315f5019fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.709 2 INFO nova.compute.manager [req-b4d61081-72cc-41b7-aeed-9f8894ac047e req-402c30da-913e-4211-913f-36b3f7ca64a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Neutron deleted interface 79000dd0-8403-4378-a1a4-87315f5019fb; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.709 2 DEBUG nova.network.neutron [req-b4d61081-72cc-41b7-aeed-9f8894ac047e req-402c30da-913e-4211-913f-36b3f7ca64a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.841 2 DEBUG oslo_concurrency.lockutils [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.841 2 DEBUG oslo_concurrency.lockutils [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.890 2 DEBUG nova.compute.manager [req-b4d61081-72cc-41b7-aeed-9f8894ac047e req-402c30da-913e-4211-913f-36b3f7ca64a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e69e3ed0-aca2-475e-9c98-e8d075ea4ce3] Detach interface failed, port_id=79000dd0-8403-4378-a1a4-87315f5019fb, reason: Instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:26:09 np0005465987 nova_compute[230713]: 2025-10-02 12:26:09.945 2 DEBUG oslo_concurrency.processutils [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:10.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:26:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1562785191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:26:10 np0005465987 nova_compute[230713]: 2025-10-02 12:26:10.409 2 DEBUG oslo_concurrency.processutils [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:10 np0005465987 nova_compute[230713]: 2025-10-02 12:26:10.415 2 DEBUG nova.compute.provider_tree [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:26:10 np0005465987 nova_compute[230713]: 2025-10-02 12:26:10.478 2 DEBUG nova.scheduler.client.report [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:26:10 np0005465987 nova_compute[230713]: 2025-10-02 12:26:10.614 2 DEBUG oslo_concurrency.lockutils [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:10 np0005465987 nova_compute[230713]: 2025-10-02 12:26:10.798 2 INFO nova.scheduler.client.report [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Deleted allocations for instance e69e3ed0-aca2-475e-9c98-e8d075ea4ce3#033[00m
Oct  2 08:26:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:10.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:11 np0005465987 nova_compute[230713]: 2025-10-02 12:26:11.128 2 DEBUG oslo_concurrency.lockutils [None req-ccd1d8eb-f8bd-4603-af6e-72d9116be73a 39945f35886a49cb93285686405e043a d5bedc1d8125465b92c98f9ebeb80ddb - - default default] Lock "e69e3ed0-aca2-475e-9c98-e8d075ea4ce3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 19.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:11 np0005465987 nova_compute[230713]: 2025-10-02 12:26:11.415 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:11 np0005465987 nova_compute[230713]: 2025-10-02 12:26:11.415 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:26:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e299 e299: 3 total, 3 up, 3 in
Oct  2 08:26:11 np0005465987 nova_compute[230713]: 2025-10-02 12:26:11.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:11 np0005465987 podman[268515]: 2025-10-02 12:26:11.840532178 +0000 UTC m=+0.053338530 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:26:11 np0005465987 podman[268514]: 2025-10-02 12:26:11.860303312 +0000 UTC m=+0.077304779 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:26:11 np0005465987 podman[268516]: 2025-10-02 12:26:11.873333451 +0000 UTC m=+0.082276796 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:26:11 np0005465987 nova_compute[230713]: 2025-10-02 12:26:11.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:12.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:12.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:14.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:14 np0005465987 nova_compute[230713]: 2025-10-02 12:26:14.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:14.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:16.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:16 np0005465987 nova_compute[230713]: 2025-10-02 12:26:16.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:16.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e300 e300: 3 total, 3 up, 3 in
Oct  2 08:26:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:18.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:18.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:19 np0005465987 nova_compute[230713]: 2025-10-02 12:26:19.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:20.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:20.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:21 np0005465987 nova_compute[230713]: 2025-10-02 12:26:21.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:22.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e301 e301: 3 total, 3 up, 3 in
Oct  2 08:26:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:22.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:26:23Z|00361|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:26:23 np0005465987 nova_compute[230713]: 2025-10-02 12:26:23.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:24.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:24 np0005465987 nova_compute[230713]: 2025-10-02 12:26:24.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:24.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:26.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:26 np0005465987 nova_compute[230713]: 2025-10-02 12:26:26.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:26.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:26:27Z|00362|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:26:27 np0005465987 nova_compute[230713]: 2025-10-02 12:26:27.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:26:27Z|00363|binding|INFO|Releasing lport 1befa812-080f-4694-ba8b-9130fe81621d from this chassis (sb_readonly=0)
Oct  2 08:26:27 np0005465987 nova_compute[230713]: 2025-10-02 12:26:27.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 e302: 3 total, 3 up, 3 in
Oct  2 08:26:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:27.953 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:27.953 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:27.954 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:28.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:26:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:28.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.521 2 DEBUG oslo_concurrency.lockutils [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "63cd9316-0da7-4274-b2dc-3501e55e6617" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.522 2 DEBUG oslo_concurrency.lockutils [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.522 2 DEBUG oslo_concurrency.lockutils [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.522 2 DEBUG oslo_concurrency.lockutils [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.522 2 DEBUG oslo_concurrency.lockutils [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.523 2 INFO nova.compute.manager [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Terminating instance#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.524 2 DEBUG nova.compute.manager [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:29 np0005465987 kernel: tap442bf395-51 (unregistering): left promiscuous mode
Oct  2 08:26:29 np0005465987 NetworkManager[44910]: <info>  [1759407989.5892] device (tap442bf395-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:26:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:26:29Z|00364|binding|INFO|Releasing lport 442bf395-5165-4681-8962-72823f5bdbd0 from this chassis (sb_readonly=0)
Oct  2 08:26:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:26:29Z|00365|binding|INFO|Setting lport 442bf395-5165-4681-8962-72823f5bdbd0 down in Southbound
Oct  2 08:26:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:26:29Z|00366|binding|INFO|Removing iface tap442bf395-51 ovn-installed in OVS
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:29 np0005465987 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000058.scope: Deactivated successfully.
Oct  2 08:26:29 np0005465987 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000058.scope: Consumed 27.611s CPU time.
Oct  2 08:26:29 np0005465987 systemd-machined[188335]: Machine qemu-39-instance-00000058 terminated.
Oct  2 08:26:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:29.755 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:e2:a1 10.100.0.6'], port_security=['fa:16:3e:72:e2:a1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '63cd9316-0da7-4274-b2dc-3501e55e6617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4035a600-4a5e-41ee-a619-d81e2c993b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '10fff81da7a54740a53a0771ce916329', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3fd9677a-ce06-4783-a778-d114830d9fab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5dc7931-b785-4336-99b8-936a17be87c3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=442bf395-5165-4681-8962-72823f5bdbd0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:26:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:29.757 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 442bf395-5165-4681-8962-72823f5bdbd0 in datapath 4035a600-4a5e-41ee-a619-d81e2c993b79 unbound from our chassis#033[00m
Oct  2 08:26:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:29.758 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4035a600-4a5e-41ee-a619-d81e2c993b79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:26:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:29.759 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[02d11513-bba1-4e86-b5e9-7c7218ba7a77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:29.759 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 namespace which is not needed anymore#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.766 2 INFO nova.virt.libvirt.driver [-] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Instance destroyed successfully.#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.768 2 DEBUG nova.objects.instance [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lazy-loading 'resources' on Instance uuid 63cd9316-0da7-4274-b2dc-3501e55e6617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.840 2 DEBUG nova.virt.libvirt.vif [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:21:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1359566530',display_name='tempest-ServerActionsTestOtherB-server-1359566530',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1359566530',id=88,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:21:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='10fff81da7a54740a53a0771ce916329',ramdisk_id='',reservation_id='r-576mlh6f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1686489955',owner_user_name='tempest-ServerActionsTestOtherB-1686489955-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:21:25Z,user_data=None,user_id='25468893d71641a385711fd2982bb00b',uuid=63cd9316-0da7-4274-b2dc-3501e55e6617,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.841 2 DEBUG nova.network.os_vif_util [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converting VIF {"id": "442bf395-5165-4681-8962-72823f5bdbd0", "address": "fa:16:3e:72:e2:a1", "network": {"id": "4035a600-4a5e-41ee-a619-d81e2c993b79", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2080271662-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "10fff81da7a54740a53a0771ce916329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442bf395-51", "ovs_interfaceid": "442bf395-5165-4681-8962-72823f5bdbd0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.842 2 DEBUG nova.network.os_vif_util [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:e2:a1,bridge_name='br-int',has_traffic_filtering=True,id=442bf395-5165-4681-8962-72823f5bdbd0,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442bf395-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.843 2 DEBUG os_vif [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:e2:a1,bridge_name='br-int',has_traffic_filtering=True,id=442bf395-5165-4681-8962-72823f5bdbd0,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442bf395-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.846 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap442bf395-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:26:29 np0005465987 nova_compute[230713]: 2025-10-02 12:26:29.858 2 INFO os_vif [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:e2:a1,bridge_name='br-int',has_traffic_filtering=True,id=442bf395-5165-4681-8962-72823f5bdbd0,network=Network(4035a600-4a5e-41ee-a619-d81e2c993b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442bf395-51')#033[00m
Oct  2 08:26:29 np0005465987 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[262064]: [NOTICE]   (262068) : haproxy version is 2.8.14-c23fe91
Oct  2 08:26:29 np0005465987 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[262064]: [NOTICE]   (262068) : path to executable is /usr/sbin/haproxy
Oct  2 08:26:29 np0005465987 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[262064]: [WARNING]  (262068) : Exiting Master process...
Oct  2 08:26:29 np0005465987 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[262064]: [WARNING]  (262068) : Exiting Master process...
Oct  2 08:26:29 np0005465987 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[262064]: [ALERT]    (262068) : Current worker (262070) exited with code 143 (Terminated)
Oct  2 08:26:29 np0005465987 neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79[262064]: [WARNING]  (262068) : All workers exited. Exiting... (0)
Oct  2 08:26:29 np0005465987 systemd[1]: libpod-54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e.scope: Deactivated successfully.
Oct  2 08:26:29 np0005465987 podman[268611]: 2025-10-02 12:26:29.927610075 +0000 UTC m=+0.061998738 container died 54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:26:29 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e-userdata-shm.mount: Deactivated successfully.
Oct  2 08:26:29 np0005465987 systemd[1]: var-lib-containers-storage-overlay-5041ebbf2a70033c3cfc3923be61d1a6fffca1f94c5d340e5fd50df40ce80e08-merged.mount: Deactivated successfully.
Oct  2 08:26:29 np0005465987 podman[268611]: 2025-10-02 12:26:29.963262287 +0000 UTC m=+0.097650920 container cleanup 54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:26:29 np0005465987 systemd[1]: libpod-conmon-54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e.scope: Deactivated successfully.
Oct  2 08:26:30 np0005465987 podman[268661]: 2025-10-02 12:26:30.019520215 +0000 UTC m=+0.034985884 container remove 54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:26:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:30.025 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fb59c233-3a7b-4bb5-90b2-8788812c4ff0]: (4, ('Thu Oct  2 12:26:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e)\n54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e\nThu Oct  2 12:26:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 (54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e)\n54a35aa80dc1a2001233470cec98e307f124e57e8eded5a71ed2353bbb64c48e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:30.026 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[18a439e9-91de-49fa-9753-d8220494f44d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:30.027 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4035a600-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:30 np0005465987 nova_compute[230713]: 2025-10-02 12:26:30.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:30 np0005465987 kernel: tap4035a600-40: left promiscuous mode
Oct  2 08:26:30 np0005465987 nova_compute[230713]: 2025-10-02 12:26:30.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:30.044 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3b516516-de86-4dd7-9f9f-5848b4320975]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:30.081 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[12ccc928-13e0-489d-8f6b-883909d2f90b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:30.082 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e934da4d-e982-4a2f-b518-a9eabb5149c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:30.097 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7c354d-c525-4ed5-80de-472ae6a82209]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566732, 'reachable_time': 15783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268676, 'error': None, 'target': 'ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:30 np0005465987 systemd[1]: run-netns-ovnmeta\x2d4035a600\x2d4a5e\x2d41ee\x2da619\x2dd81e2c993b79.mount: Deactivated successfully.
Oct  2 08:26:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:30.099 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4035a600-4a5e-41ee-a619-d81e2c993b79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:26:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:30.100 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[84c4f741-2f2e-49ab-abea-42676400c692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:26:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:30.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:30 np0005465987 nova_compute[230713]: 2025-10-02 12:26:30.409 2 INFO nova.virt.libvirt.driver [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Deleting instance files /var/lib/nova/instances/63cd9316-0da7-4274-b2dc-3501e55e6617_del#033[00m
Oct  2 08:26:30 np0005465987 nova_compute[230713]: 2025-10-02 12:26:30.410 2 INFO nova.virt.libvirt.driver [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Deletion of /var/lib/nova/instances/63cd9316-0da7-4274-b2dc-3501e55e6617_del complete#033[00m
Oct  2 08:26:30 np0005465987 nova_compute[230713]: 2025-10-02 12:26:30.762 2 INFO nova.compute.manager [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Took 1.24 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:26:30 np0005465987 nova_compute[230713]: 2025-10-02 12:26:30.763 2 DEBUG oslo.service.loopingcall [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:26:30 np0005465987 nova_compute[230713]: 2025-10-02 12:26:30.763 2 DEBUG nova.compute.manager [-] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:26:30 np0005465987 nova_compute[230713]: 2025-10-02 12:26:30.764 2 DEBUG nova.network.neutron [-] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:26:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:30.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:32.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:32 np0005465987 nova_compute[230713]: 2025-10-02 12:26:32.261 2 DEBUG nova.network.neutron [-] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:26:32 np0005465987 nova_compute[230713]: 2025-10-02 12:26:32.275 2 INFO nova.compute.manager [-] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Took 1.51 seconds to deallocate network for instance.#033[00m
Oct  2 08:26:32 np0005465987 nova_compute[230713]: 2025-10-02 12:26:32.317 2 DEBUG oslo_concurrency.lockutils [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:32 np0005465987 nova_compute[230713]: 2025-10-02 12:26:32.318 2 DEBUG oslo_concurrency.lockutils [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:32 np0005465987 nova_compute[230713]: 2025-10-02 12:26:32.669 2 DEBUG oslo_concurrency.processutils [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:32 np0005465987 nova_compute[230713]: 2025-10-02 12:26:32.700 2 DEBUG nova.compute.manager [req-2d670f41-cb46-44b6-ab7c-26f5a3038a8d req-11d60326-baef-422b-9a15-a5b7c3c73b1f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Received event network-vif-deleted-442bf395-5165-4681-8962-72823f5bdbd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:32 np0005465987 nova_compute[230713]: 2025-10-02 12:26:32.702 2 DEBUG nova.compute.manager [req-b4e7f532-acda-4fc4-adf2-60574f3653d9 req-c6e8b51f-7d01-4c49-a6fa-db35b15a1ca9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Received event network-vif-unplugged-442bf395-5165-4681-8962-72823f5bdbd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:32 np0005465987 nova_compute[230713]: 2025-10-02 12:26:32.702 2 DEBUG oslo_concurrency.lockutils [req-b4e7f532-acda-4fc4-adf2-60574f3653d9 req-c6e8b51f-7d01-4c49-a6fa-db35b15a1ca9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:32 np0005465987 nova_compute[230713]: 2025-10-02 12:26:32.702 2 DEBUG oslo_concurrency.lockutils [req-b4e7f532-acda-4fc4-adf2-60574f3653d9 req-c6e8b51f-7d01-4c49-a6fa-db35b15a1ca9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:32 np0005465987 nova_compute[230713]: 2025-10-02 12:26:32.703 2 DEBUG oslo_concurrency.lockutils [req-b4e7f532-acda-4fc4-adf2-60574f3653d9 req-c6e8b51f-7d01-4c49-a6fa-db35b15a1ca9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:32 np0005465987 nova_compute[230713]: 2025-10-02 12:26:32.703 2 DEBUG nova.compute.manager [req-b4e7f532-acda-4fc4-adf2-60574f3653d9 req-c6e8b51f-7d01-4c49-a6fa-db35b15a1ca9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] No waiting events found dispatching network-vif-unplugged-442bf395-5165-4681-8962-72823f5bdbd0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:26:32 np0005465987 nova_compute[230713]: 2025-10-02 12:26:32.703 2 WARNING nova.compute.manager [req-b4e7f532-acda-4fc4-adf2-60574f3653d9 req-c6e8b51f-7d01-4c49-a6fa-db35b15a1ca9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Received unexpected event network-vif-unplugged-442bf395-5165-4681-8962-72823f5bdbd0 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:26:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:32.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:26:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2143074098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:26:33 np0005465987 nova_compute[230713]: 2025-10-02 12:26:33.110 2 DEBUG oslo_concurrency.processutils [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:33 np0005465987 nova_compute[230713]: 2025-10-02 12:26:33.118 2 DEBUG nova.compute.provider_tree [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:26:33 np0005465987 nova_compute[230713]: 2025-10-02 12:26:33.193 2 DEBUG nova.scheduler.client.report [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:26:33 np0005465987 nova_compute[230713]: 2025-10-02 12:26:33.292 2 DEBUG oslo_concurrency.lockutils [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:33 np0005465987 nova_compute[230713]: 2025-10-02 12:26:33.376 2 INFO nova.scheduler.client.report [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Deleted allocations for instance 63cd9316-0da7-4274-b2dc-3501e55e6617#033[00m
Oct  2 08:26:33 np0005465987 nova_compute[230713]: 2025-10-02 12:26:33.671 2 DEBUG oslo_concurrency.lockutils [None req-a57b2b42-a63b-4811-b001-c7dfeaa22016 25468893d71641a385711fd2982bb00b 10fff81da7a54740a53a0771ce916329 - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:33 np0005465987 podman[268700]: 2025-10-02 12:26:33.833385128 +0000 UTC m=+0.049749231 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:26:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:34.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:34 np0005465987 nova_compute[230713]: 2025-10-02 12:26:34.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:34.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:34 np0005465987 nova_compute[230713]: 2025-10-02 12:26:34.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:35 np0005465987 nova_compute[230713]: 2025-10-02 12:26:35.023 2 DEBUG nova.compute.manager [req-78112a01-75e5-4cc0-a8f1-b5dcc9344b5e req-f813af68-0e3e-4587-9d75-ba328467cacf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Received event network-vif-plugged-442bf395-5165-4681-8962-72823f5bdbd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:26:35 np0005465987 nova_compute[230713]: 2025-10-02 12:26:35.024 2 DEBUG oslo_concurrency.lockutils [req-78112a01-75e5-4cc0-a8f1-b5dcc9344b5e req-f813af68-0e3e-4587-9d75-ba328467cacf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:35 np0005465987 nova_compute[230713]: 2025-10-02 12:26:35.024 2 DEBUG oslo_concurrency.lockutils [req-78112a01-75e5-4cc0-a8f1-b5dcc9344b5e req-f813af68-0e3e-4587-9d75-ba328467cacf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:35 np0005465987 nova_compute[230713]: 2025-10-02 12:26:35.024 2 DEBUG oslo_concurrency.lockutils [req-78112a01-75e5-4cc0-a8f1-b5dcc9344b5e req-f813af68-0e3e-4587-9d75-ba328467cacf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "63cd9316-0da7-4274-b2dc-3501e55e6617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:35 np0005465987 nova_compute[230713]: 2025-10-02 12:26:35.024 2 DEBUG nova.compute.manager [req-78112a01-75e5-4cc0-a8f1-b5dcc9344b5e req-f813af68-0e3e-4587-9d75-ba328467cacf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] No waiting events found dispatching network-vif-plugged-442bf395-5165-4681-8962-72823f5bdbd0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:26:35 np0005465987 nova_compute[230713]: 2025-10-02 12:26:35.025 2 WARNING nova.compute.manager [req-78112a01-75e5-4cc0-a8f1-b5dcc9344b5e req-f813af68-0e3e-4587-9d75-ba328467cacf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Received unexpected event network-vif-plugged-442bf395-5165-4681-8962-72823f5bdbd0 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:26:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:36.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:36.433 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:26:36 np0005465987 nova_compute[230713]: 2025-10-02 12:26:36.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:36 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:36.436 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:26:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:26:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:26:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:26:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:26:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:36.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:37 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:26:37 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:26:37 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:26:37 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:26:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:38.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:38.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:39 np0005465987 nova_compute[230713]: 2025-10-02 12:26:39.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:39 np0005465987 nova_compute[230713]: 2025-10-02 12:26:39.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:40.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:40 np0005465987 nova_compute[230713]: 2025-10-02 12:26:40.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:40.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:42.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:42 np0005465987 podman[268852]: 2025-10-02 12:26:42.862390086 +0000 UTC m=+0.073126614 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:26:42 np0005465987 podman[268853]: 2025-10-02 12:26:42.862611972 +0000 UTC m=+0.067225652 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:26:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:42.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:42 np0005465987 podman[268851]: 2025-10-02 12:26:42.934617225 +0000 UTC m=+0.146321770 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct  2 08:26:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:44.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:26:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:26:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:26:44.439 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:44 np0005465987 nova_compute[230713]: 2025-10-02 12:26:44.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:44 np0005465987 nova_compute[230713]: 2025-10-02 12:26:44.763 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407989.7615447, 63cd9316-0da7-4274-b2dc-3501e55e6617 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:26:44 np0005465987 nova_compute[230713]: 2025-10-02 12:26:44.763 2 INFO nova.compute.manager [-] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:26:44 np0005465987 nova_compute[230713]: 2025-10-02 12:26:44.789 2 DEBUG nova.compute.manager [None req-ea5a5a76-8539-43fe-a30f-102720496903 - - - - - -] [instance: 63cd9316-0da7-4274-b2dc-3501e55e6617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:26:44 np0005465987 nova_compute[230713]: 2025-10-02 12:26:44.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:44.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:46.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:46.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:48.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:48.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:48 np0005465987 nova_compute[230713]: 2025-10-02 12:26:48.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:49 np0005465987 nova_compute[230713]: 2025-10-02 12:26:49.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:49 np0005465987 nova_compute[230713]: 2025-10-02 12:26:49.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:50.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:50.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:51 np0005465987 nova_compute[230713]: 2025-10-02 12:26:51.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:51 np0005465987 nova_compute[230713]: 2025-10-02 12:26:51.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:26:52 np0005465987 nova_compute[230713]: 2025-10-02 12:26:52.022 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:26:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:52.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:52.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:54.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:54 np0005465987 nova_compute[230713]: 2025-10-02 12:26:54.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:54.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:54 np0005465987 nova_compute[230713]: 2025-10-02 12:26:54.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:55 np0005465987 nova_compute[230713]: 2025-10-02 12:26:55.014 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:26:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2132485405' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:26:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:26:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2132485405' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:26:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:56.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:56.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:58.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:26:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:26:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:58.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:26:58 np0005465987 nova_compute[230713]: 2025-10-02 12:26:58.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:59 np0005465987 nova_compute[230713]: 2025-10-02 12:26:59.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:59 np0005465987 nova_compute[230713]: 2025-10-02 12:26:59.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:59 np0005465987 nova_compute[230713]: 2025-10-02 12:26:59.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:00.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:00.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:00 np0005465987 nova_compute[230713]: 2025-10-02 12:27:00.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:00 np0005465987 nova_compute[230713]: 2025-10-02 12:27:00.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:01 np0005465987 nova_compute[230713]: 2025-10-02 12:27:01.009 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:01 np0005465987 nova_compute[230713]: 2025-10-02 12:27:01.009 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:01 np0005465987 nova_compute[230713]: 2025-10-02 12:27:01.010 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:01 np0005465987 nova_compute[230713]: 2025-10-02 12:27:01.010 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:27:01 np0005465987 nova_compute[230713]: 2025-10-02 12:27:01.010 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:27:01 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3932410540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:27:01 np0005465987 nova_compute[230713]: 2025-10-02 12:27:01.431 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:01 np0005465987 nova_compute[230713]: 2025-10-02 12:27:01.609 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:27:01 np0005465987 nova_compute[230713]: 2025-10-02 12:27:01.610 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4512MB free_disk=20.90090560913086GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:27:01 np0005465987 nova_compute[230713]: 2025-10-02 12:27:01.610 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:01 np0005465987 nova_compute[230713]: 2025-10-02 12:27:01.611 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:02.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:02.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:03 np0005465987 nova_compute[230713]: 2025-10-02 12:27:03.151 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:27:03 np0005465987 nova_compute[230713]: 2025-10-02 12:27:03.152 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:27:03 np0005465987 nova_compute[230713]: 2025-10-02 12:27:03.187 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:27:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/414385312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:27:03 np0005465987 nova_compute[230713]: 2025-10-02 12:27:03.662 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:03 np0005465987 nova_compute[230713]: 2025-10-02 12:27:03.668 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:27:03 np0005465987 nova_compute[230713]: 2025-10-02 12:27:03.703 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:27:03 np0005465987 nova_compute[230713]: 2025-10-02 12:27:03.754 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:27:03 np0005465987 nova_compute[230713]: 2025-10-02 12:27:03.754 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:04.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:04 np0005465987 nova_compute[230713]: 2025-10-02 12:27:04.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:04 np0005465987 nova_compute[230713]: 2025-10-02 12:27:04.755 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:04 np0005465987 nova_compute[230713]: 2025-10-02 12:27:04.755 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:04 np0005465987 nova_compute[230713]: 2025-10-02 12:27:04.756 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:27:04 np0005465987 podman[269012]: 2025-10-02 12:27:04.829577569 +0000 UTC m=+0.052436875 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:27:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:04.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:04 np0005465987 nova_compute[230713]: 2025-10-02 12:27:04.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:05 np0005465987 nova_compute[230713]: 2025-10-02 12:27:05.641 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:05 np0005465987 nova_compute[230713]: 2025-10-02 12:27:05.641 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:05 np0005465987 nova_compute[230713]: 2025-10-02 12:27:05.682 2 DEBUG nova.compute.manager [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:27:05 np0005465987 nova_compute[230713]: 2025-10-02 12:27:05.762 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:05 np0005465987 nova_compute[230713]: 2025-10-02 12:27:05.762 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:05 np0005465987 nova_compute[230713]: 2025-10-02 12:27:05.766 2 DEBUG nova.virt.hardware [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:27:05 np0005465987 nova_compute[230713]: 2025-10-02 12:27:05.767 2 INFO nova.compute.claims [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:27:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:06.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:27:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:06.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:27:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:27:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:08.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:27:08 np0005465987 nova_compute[230713]: 2025-10-02 12:27:08.643 2 DEBUG oslo_concurrency.processutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:27:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:08.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:27:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/742325516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.194 2 DEBUG oslo_concurrency.processutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.201 2 DEBUG nova.compute.provider_tree [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.234 2 DEBUG nova.scheduler.client.report [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.272 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.274 2 DEBUG nova.compute.manager [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.324 2 DEBUG nova.compute.manager [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.325 2 DEBUG nova.network.neutron [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.348 2 INFO nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.370 2 DEBUG nova.compute.manager [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.427 2 INFO nova.virt.block_device [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Booting with volume 9c6d2b3e-2a88-49f0-aefb-0738951478c4 at /dev/vda
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.628 2 DEBUG os_brick.utils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.629 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.645 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.646 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[78449cc2-8904-42ef-9e91-96d4286f4338]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.647 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.658 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.658 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[ffad2b91-f8e8-40cb-9516-acd01a4b2b27]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.659 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.669 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.670 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf6761a-b382-468c-b3a6-6ee4e8864db0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.671 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[85a2a485-0428-4195-8a2e-23f94aa96cdb]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.672 2 DEBUG oslo_concurrency.processutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.701 2 DEBUG nova.policy [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '22d56fcd2a4b4851bfd126ae4548ee9b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5533aaac08cd4856af72ef4992bb5e76', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.705 2 DEBUG oslo_concurrency.processutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.706 2 DEBUG os_brick.initiator.connectors.lightos [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.706 2 DEBUG os_brick.initiator.connectors.lightos [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.707 2 DEBUG os_brick.initiator.connectors.lightos [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.707 2 DEBUG os_brick.utils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.707 2 DEBUG nova.virt.block_device [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Updating existing volume attachment record: 7b590e49-9c73-46d4-a23e-e0b73ddfbf08 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct  2 08:27:09 np0005465987 nova_compute[230713]: 2025-10-02 12:27:09.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:10.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:10.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:11 np0005465987 nova_compute[230713]: 2025-10-02 12:27:11.391 2 DEBUG nova.network.neutron [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Successfully created port: 2131d2f6-11eb-40e4-98d9-fded3f4db535 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:27:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:27:11 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1725604606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:27:12 np0005465987 nova_compute[230713]: 2025-10-02 12:27:12.000 2 DEBUG nova.compute.manager [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:27:12 np0005465987 nova_compute[230713]: 2025-10-02 12:27:12.003 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:27:12 np0005465987 nova_compute[230713]: 2025-10-02 12:27:12.003 2 INFO nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Creating image(s)
Oct  2 08:27:12 np0005465987 nova_compute[230713]: 2025-10-02 12:27:12.004 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct  2 08:27:12 np0005465987 nova_compute[230713]: 2025-10-02 12:27:12.005 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Ensure instance console log exists: /var/lib/nova/instances/db5a79b4-af1c-4c62-a8d8-bfc6aab11a19/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:27:12 np0005465987 nova_compute[230713]: 2025-10-02 12:27:12.005 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:27:12 np0005465987 nova_compute[230713]: 2025-10-02 12:27:12.006 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:27:12 np0005465987 nova_compute[230713]: 2025-10-02 12:27:12.006 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:27:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:27:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:12.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:27:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:12.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:13 np0005465987 podman[269062]: 2025-10-02 12:27:13.84856517 +0000 UTC m=+0.056294441 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd)
Oct  2 08:27:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:13 np0005465987 podman[269061]: 2025-10-02 12:27:13.855538573 +0000 UTC m=+0.063206072 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:27:13 np0005465987 podman[269060]: 2025-10-02 12:27:13.909457747 +0000 UTC m=+0.115417318 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 08:27:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:14.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:14 np0005465987 nova_compute[230713]: 2025-10-02 12:27:14.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:14.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:14 np0005465987 nova_compute[230713]: 2025-10-02 12:27:14.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:15 np0005465987 nova_compute[230713]: 2025-10-02 12:27:15.857 2 DEBUG nova.network.neutron [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Successfully updated port: 2131d2f6-11eb-40e4-98d9-fded3f4db535 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:27:16 np0005465987 nova_compute[230713]: 2025-10-02 12:27:16.208 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:27:16 np0005465987 nova_compute[230713]: 2025-10-02 12:27:16.208 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquired lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:27:16 np0005465987 nova_compute[230713]: 2025-10-02 12:27:16.208 2 DEBUG nova.network.neutron [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:27:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:16.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:16 np0005465987 nova_compute[230713]: 2025-10-02 12:27:16.534 2 DEBUG nova.network.neutron [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:27:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:16.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:17 np0005465987 nova_compute[230713]: 2025-10-02 12:27:17.749 2 DEBUG nova.network.neutron [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Updating instance_info_cache with network_info: [{"id": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "address": "fa:16:3e:f2:67:0c", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2131d2f6-11", "ovs_interfaceid": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:27:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:27:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:18.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:27:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:27:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:18.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.752 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Releasing lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.752 2 DEBUG nova.compute.manager [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Instance network_info: |[{"id": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "address": "fa:16:3e:f2:67:0c", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2131d2f6-11", "ovs_interfaceid": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.755 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Start _get_guest_xml network_info=[{"id": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "address": "fa:16:3e:f2:67:0c", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2131d2f6-11", "ovs_interfaceid": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '7b590e49-9c73-46d4-a23e-e0b73ddfbf08', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9c6d2b3e-2a88-49f0-aefb-0738951478c4', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9c6d2b3e-2a88-49f0-aefb-0738951478c4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'db5a79b4-af1c-4c62-a8d8-bfc6aab11a19', 'attached_at': '', 'detached_at': '', 'volume_id': '9c6d2b3e-2a88-49f0-aefb-0738951478c4', 'serial': '9c6d2b3e-2a88-49f0-aefb-0738951478c4', 'multiattach': True}, 'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.759 2 WARNING nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.764 2 DEBUG nova.virt.libvirt.host [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.765 2 DEBUG nova.virt.libvirt.host [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.767 2 DEBUG nova.compute.manager [req-28ee88c2-7102-46e8-8d32-fb8d569dc6b9 req-f293d01e-0ba2-48c3-9bfc-27dceee280ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Received event network-changed-2131d2f6-11eb-40e4-98d9-fded3f4db535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.767 2 DEBUG nova.compute.manager [req-28ee88c2-7102-46e8-8d32-fb8d569dc6b9 req-f293d01e-0ba2-48c3-9bfc-27dceee280ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Refreshing instance network info cache due to event network-changed-2131d2f6-11eb-40e4-98d9-fded3f4db535. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.768 2 DEBUG oslo_concurrency.lockutils [req-28ee88c2-7102-46e8-8d32-fb8d569dc6b9 req-f293d01e-0ba2-48c3-9bfc-27dceee280ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.768 2 DEBUG oslo_concurrency.lockutils [req-28ee88c2-7102-46e8-8d32-fb8d569dc6b9 req-f293d01e-0ba2-48c3-9bfc-27dceee280ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.768 2 DEBUG nova.network.neutron [req-28ee88c2-7102-46e8-8d32-fb8d569dc6b9 req-f293d01e-0ba2-48c3-9bfc-27dceee280ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Refreshing network info cache for port 2131d2f6-11eb-40e4-98d9-fded3f4db535 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.773 2 DEBUG nova.virt.libvirt.host [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.773 2 DEBUG nova.virt.libvirt.host [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.774 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.774 2 DEBUG nova.virt.hardware [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.775 2 DEBUG nova.virt.hardware [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.775 2 DEBUG nova.virt.hardware [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.775 2 DEBUG nova.virt.hardware [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.775 2 DEBUG nova.virt.hardware [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.775 2 DEBUG nova.virt.hardware [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.776 2 DEBUG nova.virt.hardware [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.776 2 DEBUG nova.virt.hardware [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.776 2 DEBUG nova.virt.hardware [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.776 2 DEBUG nova.virt.hardware [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.776 2 DEBUG nova.virt.hardware [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.803 2 DEBUG nova.storage.rbd_utils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image db5a79b4-af1c-4c62-a8d8-bfc6aab11a19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.808 2 DEBUG oslo_concurrency.processutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:19 np0005465987 nova_compute[230713]: 2025-10-02 12:27:19.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:27:20 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1016150118' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.266 2 DEBUG oslo_concurrency.processutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:20.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.334 2 DEBUG nova.virt.libvirt.vif [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-49052235',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-49052235',id=107,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-t2d6d8nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:09Z,user_data=None,user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=db5a79b4-af1c-4c62-a8d8-bfc6aab11a19,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "address": "fa:16:3e:f2:67:0c", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2131d2f6-11", "ovs_interfaceid": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.335 2 DEBUG nova.network.os_vif_util [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "address": "fa:16:3e:f2:67:0c", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2131d2f6-11", "ovs_interfaceid": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.336 2 DEBUG nova.network.os_vif_util [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:67:0c,bridge_name='br-int',has_traffic_filtering=True,id=2131d2f6-11eb-40e4-98d9-fded3f4db535,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2131d2f6-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.337 2 DEBUG nova.objects.instance [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'pci_devices' on Instance uuid db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.375 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  <uuid>db5a79b4-af1c-4c62-a8d8-bfc6aab11a19</uuid>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  <name>instance-0000006b</name>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <nova:name>tempest-AttachVolumeMultiAttachTest-server-49052235</nova:name>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:27:19</nova:creationTime>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <nova:user uuid="22d56fcd2a4b4851bfd126ae4548ee9b">tempest-AttachVolumeMultiAttachTest-1564585024-project-member</nova:user>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <nova:project uuid="5533aaac08cd4856af72ef4992bb5e76">tempest-AttachVolumeMultiAttachTest-1564585024</nova:project>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <nova:port uuid="2131d2f6-11eb-40e4-98d9-fded3f4db535">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <entry name="serial">db5a79b4-af1c-4c62-a8d8-bfc6aab11a19</entry>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <entry name="uuid">db5a79b4-af1c-4c62-a8d8-bfc6aab11a19</entry>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/db5a79b4-af1c-4c62-a8d8-bfc6aab11a19_disk.config">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="volumes/volume-9c6d2b3e-2a88-49f0-aefb-0738951478c4">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <serial>9c6d2b3e-2a88-49f0-aefb-0738951478c4</serial>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <shareable/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:f2:67:0c"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <target dev="tap2131d2f6-11"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/db5a79b4-af1c-4c62-a8d8-bfc6aab11a19/console.log" append="off"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:27:20 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:27:20 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:27:20 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:27:20 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.375 2 DEBUG nova.compute.manager [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Preparing to wait for external event network-vif-plugged-2131d2f6-11eb-40e4-98d9-fded3f4db535 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.375 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.376 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.376 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.376 2 DEBUG nova.virt.libvirt.vif [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-49052235',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-49052235',id=107,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-t2d6d8nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=
2025-10-02T12:27:09Z,user_data=None,user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=db5a79b4-af1c-4c62-a8d8-bfc6aab11a19,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "address": "fa:16:3e:f2:67:0c", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2131d2f6-11", "ovs_interfaceid": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.377 2 DEBUG nova.network.os_vif_util [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "address": "fa:16:3e:f2:67:0c", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2131d2f6-11", "ovs_interfaceid": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.377 2 DEBUG nova.network.os_vif_util [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:67:0c,bridge_name='br-int',has_traffic_filtering=True,id=2131d2f6-11eb-40e4-98d9-fded3f4db535,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2131d2f6-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.378 2 DEBUG os_vif [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:67:0c,bridge_name='br-int',has_traffic_filtering=True,id=2131d2f6-11eb-40e4-98d9-fded3f4db535,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2131d2f6-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.378 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.379 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2131d2f6-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2131d2f6-11, col_values=(('external_ids', {'iface-id': '2131d2f6-11eb-40e4-98d9-fded3f4db535', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:67:0c', 'vm-uuid': 'db5a79b4-af1c-4c62-a8d8-bfc6aab11a19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:20 np0005465987 NetworkManager[44910]: <info>  [1759408040.3835] manager: (tap2131d2f6-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.389 2 INFO os_vif [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:67:0c,bridge_name='br-int',has_traffic_filtering=True,id=2131d2f6-11eb-40e4-98d9-fded3f4db535,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2131d2f6-11')#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.464 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.465 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.465 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No VIF found with MAC fa:16:3e:f2:67:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.465 2 INFO nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Using config drive#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.489 2 DEBUG nova.storage.rbd_utils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image db5a79b4-af1c-4c62-a8d8-bfc6aab11a19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:20.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.990 2 INFO nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Creating config drive at /var/lib/nova/instances/db5a79b4-af1c-4c62-a8d8-bfc6aab11a19/disk.config#033[00m
Oct  2 08:27:20 np0005465987 nova_compute[230713]: 2025-10-02 12:27:20.996 2 DEBUG oslo_concurrency.processutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db5a79b4-af1c-4c62-a8d8-bfc6aab11a19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvuok3ejl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:21 np0005465987 nova_compute[230713]: 2025-10-02 12:27:21.129 2 DEBUG oslo_concurrency.processutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db5a79b4-af1c-4c62-a8d8-bfc6aab11a19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvuok3ejl" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:21 np0005465987 nova_compute[230713]: 2025-10-02 12:27:21.160 2 DEBUG nova.storage.rbd_utils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image db5a79b4-af1c-4c62-a8d8-bfc6aab11a19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:21 np0005465987 nova_compute[230713]: 2025-10-02 12:27:21.165 2 DEBUG oslo_concurrency.processutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db5a79b4-af1c-4c62-a8d8-bfc6aab11a19/disk.config db5a79b4-af1c-4c62-a8d8-bfc6aab11a19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:22 np0005465987 nova_compute[230713]: 2025-10-02 12:27:22.031 2 DEBUG nova.network.neutron [req-28ee88c2-7102-46e8-8d32-fb8d569dc6b9 req-f293d01e-0ba2-48c3-9bfc-27dceee280ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Updated VIF entry in instance network info cache for port 2131d2f6-11eb-40e4-98d9-fded3f4db535. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:27:22 np0005465987 nova_compute[230713]: 2025-10-02 12:27:22.031 2 DEBUG nova.network.neutron [req-28ee88c2-7102-46e8-8d32-fb8d569dc6b9 req-f293d01e-0ba2-48c3-9bfc-27dceee280ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Updating instance_info_cache with network_info: [{"id": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "address": "fa:16:3e:f2:67:0c", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2131d2f6-11", "ovs_interfaceid": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:27:22 np0005465987 nova_compute[230713]: 2025-10-02 12:27:22.099 2 DEBUG oslo_concurrency.processutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db5a79b4-af1c-4c62-a8d8-bfc6aab11a19/disk.config db5a79b4-af1c-4c62-a8d8-bfc6aab11a19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.934s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:22 np0005465987 nova_compute[230713]: 2025-10-02 12:27:22.100 2 INFO nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Deleting local config drive /var/lib/nova/instances/db5a79b4-af1c-4c62-a8d8-bfc6aab11a19/disk.config because it was imported into RBD.#033[00m
Oct  2 08:27:22 np0005465987 kernel: tap2131d2f6-11: entered promiscuous mode
Oct  2 08:27:22 np0005465987 NetworkManager[44910]: <info>  [1759408042.1543] manager: (tap2131d2f6-11): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Oct  2 08:27:22 np0005465987 nova_compute[230713]: 2025-10-02 12:27:22.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:27:22Z|00367|binding|INFO|Claiming lport 2131d2f6-11eb-40e4-98d9-fded3f4db535 for this chassis.
Oct  2 08:27:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:27:22Z|00368|binding|INFO|2131d2f6-11eb-40e4-98d9-fded3f4db535: Claiming fa:16:3e:f2:67:0c 10.100.0.8
Oct  2 08:27:22 np0005465987 systemd-udevd[269237]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:27:22 np0005465987 systemd-machined[188335]: New machine qemu-45-instance-0000006b.
Oct  2 08:27:22 np0005465987 NetworkManager[44910]: <info>  [1759408042.1936] device (tap2131d2f6-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:27:22 np0005465987 NetworkManager[44910]: <info>  [1759408042.1944] device (tap2131d2f6-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:27:22 np0005465987 nova_compute[230713]: 2025-10-02 12:27:22.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:22 np0005465987 systemd[1]: Started Virtual Machine qemu-45-instance-0000006b.
Oct  2 08:27:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:27:22Z|00369|binding|INFO|Setting lport 2131d2f6-11eb-40e4-98d9-fded3f4db535 ovn-installed in OVS
Oct  2 08:27:22 np0005465987 nova_compute[230713]: 2025-10-02 12:27:22.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:27:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:22.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:27:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:22.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:23 np0005465987 nova_compute[230713]: 2025-10-02 12:27:23.112 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408043.111314, db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:27:23 np0005465987 nova_compute[230713]: 2025-10-02 12:27:23.113 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] VM Started (Lifecycle Event)#033[00m
Oct  2 08:27:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:24.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:24 np0005465987 nova_compute[230713]: 2025-10-02 12:27:24.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:27:24Z|00370|binding|INFO|Setting lport 2131d2f6-11eb-40e4-98d9-fded3f4db535 up in Southbound
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.778 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:67:0c 10.100.0.8'], port_security=['fa:16:3e:f2:67:0c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'db5a79b4-af1c-4c62-a8d8-bfc6aab11a19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-585473f8-52e4-4e55-96df-8a236d361126', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5533aaac08cd4856af72ef4992bb5e76', 'neutron:revision_number': '2', 'neutron:security_group_ids': '29622381-99e2-43f7-9de1-ba82c9e0ff23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec297f04-3bda-490f-87d3-1f684caf96fd, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=2131d2f6-11eb-40e4-98d9-fded3f4db535) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.779 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 2131d2f6-11eb-40e4-98d9-fded3f4db535 in datapath 585473f8-52e4-4e55-96df-8a236d361126 bound to our chassis#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.781 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 585473f8-52e4-4e55-96df-8a236d361126#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.791 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[87ed5019-bce9-4e18-8f81-80775a2cfa6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.792 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap585473f8-51 in ovnmeta-585473f8-52e4-4e55-96df-8a236d361126 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.794 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap585473f8-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.794 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4134d5-eeca-47ac-959d-41ce70041822]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.794 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9f827bff-6205-4335-9305-939c57aab36f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.804 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3550a0-1499-4cb4-8145-24cc08794790]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.834 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9059a056-e17b-4e97-8e43-4aab23924a0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.870 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1092a207-04ee-4a2e-8f31-ff4206765a5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.875 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3c6d7a-6b4c-49f7-8c6e-e5172d33a54a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:24 np0005465987 NetworkManager[44910]: <info>  [1759408044.8762] manager: (tap585473f8-50): new Veth device (/org/freedesktop/NetworkManager/Devices/183)
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.912 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ca93e692-338b-4ef4-b7fe-f6f26a83c280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.917 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4cd94a-3ca4-4200-9505-21fde2fbd480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:27:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:24.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:27:24 np0005465987 NetworkManager[44910]: <info>  [1759408044.9390] device (tap585473f8-50): carrier: link connected
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.945 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[68fccb0d-9608-43d6-ac49-01e643081642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.961 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2760fdb2-bd3e-4239-8f19-7c8e23a94081]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap585473f8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:8e:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611351, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269313, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.974 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bff20bf0-05fb-4bff-96b5-8a8b5b83abb4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:8e12'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611351, 'tstamp': 611351}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269314, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:24.991 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e33bc9ad-c6c9-4aa1-9d67-ea15accfa380]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap585473f8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:8e:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611351, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269315, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:25.021 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd18ee1-f394-4680-bf94-a047dd7ec4be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:25.081 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc6299e-4a16-4ca8-b144-5e64beabbdee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:25.083 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap585473f8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:25.083 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:25.083 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap585473f8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:25 np0005465987 nova_compute[230713]: 2025-10-02 12:27:25.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:25 np0005465987 NetworkManager[44910]: <info>  [1759408045.0863] manager: (tap585473f8-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Oct  2 08:27:25 np0005465987 kernel: tap585473f8-50: entered promiscuous mode
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:25.088 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap585473f8-50, col_values=(('external_ids', {'iface-id': '02b7597d-2fc1-4c56-8603-4dcb0c716c82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:25 np0005465987 ovn_controller[129172]: 2025-10-02T12:27:25Z|00371|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct  2 08:27:25 np0005465987 nova_compute[230713]: 2025-10-02 12:27:25.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:25 np0005465987 nova_compute[230713]: 2025-10-02 12:27:25.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:25.103 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/585473f8-52e4-4e55-96df-8a236d361126.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/585473f8-52e4-4e55-96df-8a236d361126.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:25.104 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6d38045f-f32c-48fc-a3ba-327e3481e499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:25.105 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-585473f8-52e4-4e55-96df-8a236d361126
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/585473f8-52e4-4e55-96df-8a236d361126.pid.haproxy
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 585473f8-52e4-4e55-96df-8a236d361126
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:27:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:25.106 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'env', 'PROCESS_TAG=haproxy-585473f8-52e4-4e55-96df-8a236d361126', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/585473f8-52e4-4e55-96df-8a236d361126.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:27:25 np0005465987 nova_compute[230713]: 2025-10-02 12:27:25.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:25 np0005465987 podman[269349]: 2025-10-02 12:27:25.526244999 +0000 UTC m=+0.034602555 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:27:25 np0005465987 podman[269349]: 2025-10-02 12:27:25.919124256 +0000 UTC m=+0.427481822 container create dc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:27:26 np0005465987 systemd[1]: Started libpod-conmon-dc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656.scope.
Oct  2 08:27:26 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:27:26 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85c60905e44ca3a0966343daad94566577b03b90605f574f3c6817a2e785d8d7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:27:26 np0005465987 podman[269349]: 2025-10-02 12:27:26.193001197 +0000 UTC m=+0.701358723 container init dc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:27:26 np0005465987 podman[269349]: 2025-10-02 12:27:26.198044816 +0000 UTC m=+0.706402342 container start dc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.217 2 DEBUG oslo_concurrency.lockutils [req-28ee88c2-7102-46e8-8d32-fb8d569dc6b9 req-f293d01e-0ba2-48c3-9bfc-27dceee280ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:27:26 np0005465987 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[269365]: [NOTICE]   (269369) : New worker (269371) forked
Oct  2 08:27:26 np0005465987 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[269365]: [NOTICE]   (269369) : Loading success.
Oct  2 08:27:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:26.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.394 2 DEBUG nova.compute.manager [req-6dee44f6-1dc1-4948-b99f-2d14ee2a7976 req-b54300ee-336d-4c86-8473-c4fb8e545ff9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Received event network-vif-plugged-2131d2f6-11eb-40e4-98d9-fded3f4db535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.395 2 DEBUG oslo_concurrency.lockutils [req-6dee44f6-1dc1-4948-b99f-2d14ee2a7976 req-b54300ee-336d-4c86-8473-c4fb8e545ff9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.395 2 DEBUG oslo_concurrency.lockutils [req-6dee44f6-1dc1-4948-b99f-2d14ee2a7976 req-b54300ee-336d-4c86-8473-c4fb8e545ff9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.395 2 DEBUG oslo_concurrency.lockutils [req-6dee44f6-1dc1-4948-b99f-2d14ee2a7976 req-b54300ee-336d-4c86-8473-c4fb8e545ff9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.396 2 DEBUG nova.compute.manager [req-6dee44f6-1dc1-4948-b99f-2d14ee2a7976 req-b54300ee-336d-4c86-8473-c4fb8e545ff9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Processing event network-vif-plugged-2131d2f6-11eb-40e4-98d9-fded3f4db535 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.396 2 DEBUG nova.compute.manager [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.400 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.403 2 INFO nova.virt.libvirt.driver [-] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Instance spawned successfully.#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.403 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.428 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.431 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.549 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.550 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408043.1116896, db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.550 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.556 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.556 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.556 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.557 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.557 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.558 2 DEBUG nova.virt.libvirt.driver [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.589 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.593 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408046.399784, db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.593 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] VM Resumed (Lifecycle Event)
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.626 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.631 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.641 2 INFO nova.compute.manager [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Took 14.64 seconds to spawn the instance on the hypervisor.
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.641 2 DEBUG nova.compute.manager [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.689 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.765 2 INFO nova.compute.manager [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Took 21.02 seconds to build instance.
Oct  2 08:27:26 np0005465987 nova_compute[230713]: 2025-10-02 12:27:26.836 2 DEBUG oslo_concurrency.lockutils [None req-c1b14447-ccb5-4fd9-a551-0c4b4ffcb819 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:27:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:26.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:27.953 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:27:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:27.954 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:27:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:27.954 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:27:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:27:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:28.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:27:28 np0005465987 nova_compute[230713]: 2025-10-02 12:27:28.643 2 DEBUG nova.compute.manager [req-6b24f93c-0105-4fcc-a608-11251885f20b req-a00f45cd-d397-4528-8a0b-8a2c81ed4c7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Received event network-vif-plugged-2131d2f6-11eb-40e4-98d9-fded3f4db535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:27:28 np0005465987 nova_compute[230713]: 2025-10-02 12:27:28.644 2 DEBUG oslo_concurrency.lockutils [req-6b24f93c-0105-4fcc-a608-11251885f20b req-a00f45cd-d397-4528-8a0b-8a2c81ed4c7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:27:28 np0005465987 nova_compute[230713]: 2025-10-02 12:27:28.644 2 DEBUG oslo_concurrency.lockutils [req-6b24f93c-0105-4fcc-a608-11251885f20b req-a00f45cd-d397-4528-8a0b-8a2c81ed4c7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:27:28 np0005465987 nova_compute[230713]: 2025-10-02 12:27:28.645 2 DEBUG oslo_concurrency.lockutils [req-6b24f93c-0105-4fcc-a608-11251885f20b req-a00f45cd-d397-4528-8a0b-8a2c81ed4c7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:27:28 np0005465987 nova_compute[230713]: 2025-10-02 12:27:28.645 2 DEBUG nova.compute.manager [req-6b24f93c-0105-4fcc-a608-11251885f20b req-a00f45cd-d397-4528-8a0b-8a2c81ed4c7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] No waiting events found dispatching network-vif-plugged-2131d2f6-11eb-40e4-98d9-fded3f4db535 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:27:28 np0005465987 nova_compute[230713]: 2025-10-02 12:27:28.646 2 WARNING nova.compute.manager [req-6b24f93c-0105-4fcc-a608-11251885f20b req-a00f45cd-d397-4528-8a0b-8a2c81ed4c7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Received unexpected event network-vif-plugged-2131d2f6-11eb-40e4-98d9-fded3f4db535 for instance with vm_state active and task_state None.
Oct  2 08:27:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:28.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:29 np0005465987 nova_compute[230713]: 2025-10-02 12:27:29.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:27:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:30.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:27:30 np0005465987 nova_compute[230713]: 2025-10-02 12:27:30.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:27:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:30.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:27:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:32.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:32.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.818169) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408053818198, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2438, "num_deletes": 255, "total_data_size": 5521283, "memory_usage": 5601152, "flush_reason": "Manual Compaction"}
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408053835643, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 3616185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44711, "largest_seqno": 47144, "table_properties": {"data_size": 3606419, "index_size": 6132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20964, "raw_average_key_size": 20, "raw_value_size": 3586573, "raw_average_value_size": 3554, "num_data_blocks": 265, "num_entries": 1009, "num_filter_entries": 1009, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407860, "oldest_key_time": 1759407860, "file_creation_time": 1759408053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 17508 microseconds, and 6505 cpu microseconds.
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.835675) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 3616185 bytes OK
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.835691) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.836881) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.836892) EVENT_LOG_v1 {"time_micros": 1759408053836888, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.836905) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 5510586, prev total WAL file size 5510586, number of live WAL files 2.
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.838017) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(3531KB)], [87(9203KB)]
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408053838087, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 13040987, "oldest_snapshot_seqno": -1}
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 7176 keys, 11106044 bytes, temperature: kUnknown
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408053909250, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 11106044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11057162, "index_size": 29826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 184744, "raw_average_key_size": 25, "raw_value_size": 10927977, "raw_average_value_size": 1522, "num_data_blocks": 1179, "num_entries": 7176, "num_filter_entries": 7176, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759408053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.909460) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11106044 bytes
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.910902) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.1 rd, 155.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 9.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7705, records dropped: 529 output_compression: NoCompression
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.910917) EVENT_LOG_v1 {"time_micros": 1759408053910909, "job": 54, "event": "compaction_finished", "compaction_time_micros": 71221, "compaction_time_cpu_micros": 23441, "output_level": 6, "num_output_files": 1, "total_output_size": 11106044, "num_input_records": 7705, "num_output_records": 7176, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408053911580, "job": 54, "event": "table_file_deletion", "file_number": 89}
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408053913130, "job": 54, "event": "table_file_deletion", "file_number": 87}
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.837879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.913192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.913196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.913198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.913199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:27:33.913201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:34.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:34 np0005465987 nova_compute[230713]: 2025-10-02 12:27:34.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:34.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:35 np0005465987 nova_compute[230713]: 2025-10-02 12:27:35.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:35 np0005465987 podman[269380]: 2025-10-02 12:27:35.841576195 +0000 UTC m=+0.053435383 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:27:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:27:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:36.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:27:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:36.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:38.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:38.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:39 np0005465987 nova_compute[230713]: 2025-10-02 12:27:39.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:40.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:40 np0005465987 nova_compute[230713]: 2025-10-02 12:27:40.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:40.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:27:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:42.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:27:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:42.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:43 np0005465987 podman[269475]: 2025-10-02 12:27:43.985677937 +0000 UTC m=+0.056486506 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:27:44 np0005465987 podman[269476]: 2025-10-02 12:27:44.016182347 +0000 UTC m=+0.080261641 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  2 08:27:44 np0005465987 podman[269477]: 2025-10-02 12:27:44.044694532 +0000 UTC m=+0.106919285 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:27:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:44.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:44 np0005465987 nova_compute[230713]: 2025-10-02 12:27:44.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:44.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:45 np0005465987 nova_compute[230713]: 2025-10-02 12:27:45.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005465987 nova_compute[230713]: 2025-10-02 12:27:45.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:45.668 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:45.669 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:27:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:27:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:46.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:27:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:27:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:27:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:27:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:46.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:47 np0005465987 ovn_controller[129172]: 2025-10-02T12:27:47Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:67:0c 10.100.0.8
Oct  2 08:27:47 np0005465987 ovn_controller[129172]: 2025-10-02T12:27:47Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:67:0c 10.100.0.8
Oct  2 08:27:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:27:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:48.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:27:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:48.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:49 np0005465987 nova_compute[230713]: 2025-10-02 12:27:49.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:49 np0005465987 nova_compute[230713]: 2025-10-02 12:27:49.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:50.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:50 np0005465987 nova_compute[230713]: 2025-10-02 12:27:50.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:50.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:52.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:52 np0005465987 nova_compute[230713]: 2025-10-02 12:27:52.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:52 np0005465987 nova_compute[230713]: 2025-10-02 12:27:52.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:27:52 np0005465987 nova_compute[230713]: 2025-10-02 12:27:52.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:27:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:52.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:53 np0005465987 nova_compute[230713]: 2025-10-02 12:27:53.388 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:27:53 np0005465987 nova_compute[230713]: 2025-10-02 12:27:53.389 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:27:53 np0005465987 nova_compute[230713]: 2025-10-02 12:27:53.389 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:27:53 np0005465987 nova_compute[230713]: 2025-10-02 12:27:53.389 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:27:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:27:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:54.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:27:54 np0005465987 nova_compute[230713]: 2025-10-02 12:27:54.423 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "af7eb975-4b7b-464b-b407-754d3b8ced72" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:54 np0005465987 nova_compute[230713]: 2025-10-02 12:27:54.424 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:54 np0005465987 nova_compute[230713]: 2025-10-02 12:27:54.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:27:54.670 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:54 np0005465987 nova_compute[230713]: 2025-10-02 12:27:54.679 2 DEBUG nova.compute.manager [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:27:54 np0005465987 nova_compute[230713]: 2025-10-02 12:27:54.955 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:54 np0005465987 nova_compute[230713]: 2025-10-02 12:27:54.956 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:54 np0005465987 nova_compute[230713]: 2025-10-02 12:27:54.971 2 DEBUG nova.virt.hardware [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:27:54 np0005465987 nova_compute[230713]: 2025-10-02 12:27:54.971 2 INFO nova.compute.claims [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:27:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:54.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:55 np0005465987 nova_compute[230713]: 2025-10-02 12:27:55.262 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:55 np0005465987 nova_compute[230713]: 2025-10-02 12:27:55.262 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:55 np0005465987 nova_compute[230713]: 2025-10-02 12:27:55.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:55 np0005465987 nova_compute[230713]: 2025-10-02 12:27:55.434 2 DEBUG nova.compute.manager [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:27:55 np0005465987 nova_compute[230713]: 2025-10-02 12:27:55.936 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.031 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:56.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:27:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3074040757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.518 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.525 2 DEBUG nova.compute.provider_tree [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.587 2 DEBUG nova.scheduler.client.report [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.740 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Updating instance_info_cache with network_info: [{"id": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "address": "fa:16:3e:f2:67:0c", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2131d2f6-11", "ovs_interfaceid": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.751 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.752 2 DEBUG nova.compute.manager [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.756 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.764 2 DEBUG nova.virt.hardware [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.765 2 INFO nova.compute.claims [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.890 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.891 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:27:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:56.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.994 2 DEBUG nova.compute.manager [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:27:56 np0005465987 nova_compute[230713]: 2025-10-02 12:27:56.995 2 DEBUG nova.network.neutron [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.158 2 DEBUG nova.policy [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a89b71e2513413e922ee6d5d06362b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27a1729bf10548219b90df46839849f5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.220 2 INFO nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.274 2 DEBUG nova.compute.manager [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.344 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.758 2 DEBUG nova.compute.manager [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.761 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.762 2 INFO nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Creating image(s)#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.800 2 DEBUG nova.storage.rbd_utils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.844 2 DEBUG nova.storage.rbd_utils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:27:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3970481960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.874 2 DEBUG nova.storage.rbd_utils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.877 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.906 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.914 2 DEBUG nova.compute.provider_tree [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.947 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.948 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.949 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.949 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.978 2 DEBUG nova.storage.rbd_utils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:57 np0005465987 nova_compute[230713]: 2025-10-02 12:27:57.982 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 af7eb975-4b7b-464b-b407-754d3b8ced72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:58 np0005465987 nova_compute[230713]: 2025-10-02 12:27:58.015 2 DEBUG nova.scheduler.client.report [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:27:58 np0005465987 nova_compute[230713]: 2025-10-02 12:27:58.179 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:58 np0005465987 nova_compute[230713]: 2025-10-02 12:27:58.180 2 DEBUG nova.compute.manager [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:27:58 np0005465987 nova_compute[230713]: 2025-10-02 12:27:58.322 2 DEBUG nova.compute.manager [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:27:58 np0005465987 nova_compute[230713]: 2025-10-02 12:27:58.323 2 DEBUG nova.network.neutron [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:27:58 np0005465987 nova_compute[230713]: 2025-10-02 12:27:58.331 2 DEBUG nova.network.neutron [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Successfully created port: c0eae122-ee9e-4263-afd3-b9dca2b6656f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:27:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:58 np0005465987 nova_compute[230713]: 2025-10-02 12:27:58.434 2 INFO nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:27:58 np0005465987 nova_compute[230713]: 2025-10-02 12:27:58.561 2 DEBUG nova.compute.manager [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:27:58 np0005465987 nova_compute[230713]: 2025-10-02 12:27:58.734 2 DEBUG nova.policy [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '22d56fcd2a4b4851bfd126ae4548ee9b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5533aaac08cd4856af72ef4992bb5e76', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:27:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:58 np0005465987 nova_compute[230713]: 2025-10-02 12:27:58.964 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 af7eb975-4b7b-464b-b407-754d3b8ced72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.982s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:58 np0005465987 nova_compute[230713]: 2025-10-02 12:27:58.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:27:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:58.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.148 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.150 2 DEBUG nova.compute.manager [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.151 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.152 2 INFO nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Creating image(s)#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.178 2 DEBUG nova.storage.rbd_utils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.207 2 DEBUG nova.storage.rbd_utils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.235 2 DEBUG nova.storage.rbd_utils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.240 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.336 2 DEBUG nova.storage.rbd_utils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] resizing rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.381 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.383 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.384 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.384 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.414 2 DEBUG nova.storage.rbd_utils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.417 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.516 2 DEBUG nova.objects.instance [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'migration_context' on Instance uuid af7eb975-4b7b-464b-b407-754d3b8ced72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.743 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.743 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Ensure instance console log exists: /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.744 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.744 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:59 np0005465987 nova_compute[230713]: 2025-10-02 12:27:59.744 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:00.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:00 np0005465987 nova_compute[230713]: 2025-10-02 12:28:00.426 2 DEBUG nova.network.neutron [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Successfully created port: 204ddeb1-bc16-445e-a774-09893c526348 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:28:00 np0005465987 nova_compute[230713]: 2025-10-02 12:28:00.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:00 np0005465987 nova_compute[230713]: 2025-10-02 12:28:00.691 2 DEBUG nova.network.neutron [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Successfully updated port: c0eae122-ee9e-4263-afd3-b9dca2b6656f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:28:00 np0005465987 nova_compute[230713]: 2025-10-02 12:28:00.754 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "refresh_cache-af7eb975-4b7b-464b-b407-754d3b8ced72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:00 np0005465987 nova_compute[230713]: 2025-10-02 12:28:00.754 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquired lock "refresh_cache-af7eb975-4b7b-464b-b407-754d3b8ced72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:00 np0005465987 nova_compute[230713]: 2025-10-02 12:28:00.755 2 DEBUG nova.network.neutron [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:28:00 np0005465987 nova_compute[230713]: 2025-10-02 12:28:00.898 2 DEBUG nova.compute.manager [req-de8d4e15-3b4a-41f9-a3aa-6bdcddd45b7b req-5872c9a1-08bf-4565-aad2-fe7b459ee40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received event network-changed-c0eae122-ee9e-4263-afd3-b9dca2b6656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:00 np0005465987 nova_compute[230713]: 2025-10-02 12:28:00.898 2 DEBUG nova.compute.manager [req-de8d4e15-3b4a-41f9-a3aa-6bdcddd45b7b req-5872c9a1-08bf-4565-aad2-fe7b459ee40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Refreshing instance network info cache due to event network-changed-c0eae122-ee9e-4263-afd3-b9dca2b6656f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:28:00 np0005465987 nova_compute[230713]: 2025-10-02 12:28:00.898 2 DEBUG oslo_concurrency.lockutils [req-de8d4e15-3b4a-41f9-a3aa-6bdcddd45b7b req-5872c9a1-08bf-4565-aad2-fe7b459ee40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-af7eb975-4b7b-464b-b407-754d3b8ced72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:00.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:01 np0005465987 nova_compute[230713]: 2025-10-02 12:28:01.504 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:01 np0005465987 nova_compute[230713]: 2025-10-02 12:28:01.559 2 DEBUG nova.storage.rbd_utils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] resizing rbd image e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:28:01 np0005465987 nova_compute[230713]: 2025-10-02 12:28:01.831 2 DEBUG nova.network.neutron [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:28:01 np0005465987 nova_compute[230713]: 2025-10-02 12:28:01.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:01 np0005465987 nova_compute[230713]: 2025-10-02 12:28:01.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:01 np0005465987 nova_compute[230713]: 2025-10-02 12:28:01.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.067 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.067 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.068 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.068 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.068 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:02.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.386 2 DEBUG nova.objects.instance [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'migration_context' on Instance uuid e744f3b9-ccc1-43b1-b134-e6050b9487eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:02 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2427195761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.530 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.622 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.623 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Ensure instance console log exists: /var/lib/nova/instances/e744f3b9-ccc1-43b1-b134-e6050b9487eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.624 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.624 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.625 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.662 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.663 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.782 2 DEBUG nova.network.neutron [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Successfully updated port: 204ddeb1-bc16-445e-a774-09893c526348 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.826 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.827 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4219MB free_disk=20.92772674560547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.827 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.828 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.927 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.928 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquired lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:02 np0005465987 nova_compute[230713]: 2025-10-02 12:28:02.928 2 DEBUG nova.network.neutron [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:28:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:02.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:03 np0005465987 nova_compute[230713]: 2025-10-02 12:28:03.093 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:28:03 np0005465987 nova_compute[230713]: 2025-10-02 12:28:03.094 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance af7eb975-4b7b-464b-b407-754d3b8ced72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:28:03 np0005465987 nova_compute[230713]: 2025-10-02 12:28:03.094 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance e744f3b9-ccc1-43b1-b134-e6050b9487eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:28:03 np0005465987 nova_compute[230713]: 2025-10-02 12:28:03.094 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:28:03 np0005465987 nova_compute[230713]: 2025-10-02 12:28:03.094 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:28:03 np0005465987 nova_compute[230713]: 2025-10-02 12:28:03.312 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:03 np0005465987 nova_compute[230713]: 2025-10-02 12:28:03.464 2 DEBUG nova.network.neutron [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:28:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2548813839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:03 np0005465987 nova_compute[230713]: 2025-10-02 12:28:03.771 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:03 np0005465987 nova_compute[230713]: 2025-10-02 12:28:03.776 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:28:03 np0005465987 nova_compute[230713]: 2025-10-02 12:28:03.807 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:28:03 np0005465987 nova_compute[230713]: 2025-10-02 12:28:03.861 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:28:03 np0005465987 nova_compute[230713]: 2025-10-02 12:28:03.862 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:04.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:04 np0005465987 nova_compute[230713]: 2025-10-02 12:28:04.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:04 np0005465987 nova_compute[230713]: 2025-10-02 12:28:04.861 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:04 np0005465987 nova_compute[230713]: 2025-10-02 12:28:04.861 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:04 np0005465987 nova_compute[230713]: 2025-10-02 12:28:04.862 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:28:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:04.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.164 2 DEBUG nova.network.neutron [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Updating instance_info_cache with network_info: [{"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.191 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Releasing lock "refresh_cache-af7eb975-4b7b-464b-b407-754d3b8ced72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.191 2 DEBUG nova.compute.manager [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Instance network_info: |[{"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.192 2 DEBUG oslo_concurrency.lockutils [req-de8d4e15-3b4a-41f9-a3aa-6bdcddd45b7b req-5872c9a1-08bf-4565-aad2-fe7b459ee40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-af7eb975-4b7b-464b-b407-754d3b8ced72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.192 2 DEBUG nova.network.neutron [req-de8d4e15-3b4a-41f9-a3aa-6bdcddd45b7b req-5872c9a1-08bf-4565-aad2-fe7b459ee40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Refreshing network info cache for port c0eae122-ee9e-4263-afd3-b9dca2b6656f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.195 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Start _get_guest_xml network_info=[{"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.200 2 WARNING nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.206 2 DEBUG nova.virt.libvirt.host [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.207 2 DEBUG nova.virt.libvirt.host [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.209 2 DEBUG nova.virt.libvirt.host [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.210 2 DEBUG nova.virt.libvirt.host [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.211 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.211 2 DEBUG nova.virt.hardware [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.212 2 DEBUG nova.virt.hardware [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.212 2 DEBUG nova.virt.hardware [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.212 2 DEBUG nova.virt.hardware [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.213 2 DEBUG nova.virt.hardware [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.213 2 DEBUG nova.virt.hardware [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.213 2 DEBUG nova.virt.hardware [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.213 2 DEBUG nova.virt.hardware [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.214 2 DEBUG nova.virt.hardware [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.214 2 DEBUG nova.virt.hardware [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.214 2 DEBUG nova.virt.hardware [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.217 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/879769157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.690 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.717 2 DEBUG nova.storage.rbd_utils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:05 np0005465987 nova_compute[230713]: 2025-10-02 12:28:05.721 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4023743274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.250 2 DEBUG nova.compute.manager [req-eb47a346-7d4c-4ea6-989e-9ca37ef7e760 req-9e32e878-577f-4ea7-9001-880262a74127 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Received event network-changed-204ddeb1-bc16-445e-a774-09893c526348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.251 2 DEBUG nova.compute.manager [req-eb47a346-7d4c-4ea6-989e-9ca37ef7e760 req-9e32e878-577f-4ea7-9001-880262a74127 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Refreshing instance network info cache due to event network-changed-204ddeb1-bc16-445e-a774-09893c526348. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.251 2 DEBUG oslo_concurrency.lockutils [req-eb47a346-7d4c-4ea6-989e-9ca37ef7e760 req-9e32e878-577f-4ea7-9001-880262a74127 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:28:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.302 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.303 2 DEBUG nova.virt.libvirt.vif [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-375372636',display_name='tempest-ServerDiskConfigTestJSON-server-375372636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-375372636',id=109,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-5tzgb5vs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:57Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=af7eb975-4b7b-464b-b407-754d3b8ced72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.304 2 DEBUG nova.network.os_vif_util [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.304 2 DEBUG nova.network.os_vif_util [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.305 2 DEBUG nova.objects.instance [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid af7eb975-4b7b-464b-b407-754d3b8ced72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.360 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  <uuid>af7eb975-4b7b-464b-b407-754d3b8ced72</uuid>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  <name>instance-0000006d</name>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-375372636</nova:name>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:28:05</nova:creationTime>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <nova:user uuid="4a89b71e2513413e922ee6d5d06362b1">tempest-ServerDiskConfigTestJSON-1123059068-project-member</nova:user>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <nova:project uuid="27a1729bf10548219b90df46839849f5">tempest-ServerDiskConfigTestJSON-1123059068</nova:project>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <nova:port uuid="c0eae122-ee9e-4263-afd3-b9dca2b6656f">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <entry name="serial">af7eb975-4b7b-464b-b407-754d3b8ced72</entry>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <entry name="uuid">af7eb975-4b7b-464b-b407-754d3b8ced72</entry>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/af7eb975-4b7b-464b-b407-754d3b8ced72_disk">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/af7eb975-4b7b-464b-b407-754d3b8ced72_disk.config">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:37:42:f8"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <target dev="tapc0eae122-ee"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/console.log" append="off"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:28:06 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:28:06 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:28:06 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:28:06 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.361 2 DEBUG nova.compute.manager [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Preparing to wait for external event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.362 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.362 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.363 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.365 2 DEBUG nova.virt.libvirt.vif [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-375372636',display_name='tempest-ServerDiskConfigTestJSON-server-375372636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-375372636',id=109,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-5tzgb5vs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:57Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=af7eb975-4b7b-464b-b407-754d3b8ced72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.365 2 DEBUG nova.network.os_vif_util [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.368 2 DEBUG nova.network.os_vif_util [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.369 2 DEBUG os_vif [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0eae122-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.378 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc0eae122-ee, col_values=(('external_ids', {'iface-id': 'c0eae122-ee9e-4263-afd3-b9dca2b6656f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:42:f8', 'vm-uuid': 'af7eb975-4b7b-464b-b407-754d3b8ced72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:06.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:06 np0005465987 NetworkManager[44910]: <info>  [1759408086.3823] manager: (tapc0eae122-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.394 2 INFO os_vif [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee')#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.450 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.451 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.451 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No VIF found with MAC fa:16:3e:37:42:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.452 2 INFO nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Using config drive#033[00m
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.480 2 DEBUG nova.storage.rbd_utils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:06 np0005465987 podman[270144]: 2025-10-02 12:28:06.8444893 +0000 UTC m=+0.061112944 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:28:06 np0005465987 nova_compute[230713]: 2025-10-02 12:28:06.857 2 DEBUG nova.network.neutron [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Updating instance_info_cache with network_info: [{"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:06.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.071 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Releasing lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.071 2 DEBUG nova.compute.manager [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Instance network_info: |[{"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.072 2 DEBUG oslo_concurrency.lockutils [req-eb47a346-7d4c-4ea6-989e-9ca37ef7e760 req-9e32e878-577f-4ea7-9001-880262a74127 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.072 2 DEBUG nova.network.neutron [req-eb47a346-7d4c-4ea6-989e-9ca37ef7e760 req-9e32e878-577f-4ea7-9001-880262a74127 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Refreshing network info cache for port 204ddeb1-bc16-445e-a774-09893c526348 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.074 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Start _get_guest_xml network_info=[{"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.078 2 WARNING nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.091 2 DEBUG nova.virt.libvirt.host [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.092 2 DEBUG nova.virt.libvirt.host [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.106 2 DEBUG nova.virt.libvirt.host [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.106 2 DEBUG nova.virt.libvirt.host [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.108 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.108 2 DEBUG nova.virt.hardware [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.108 2 DEBUG nova.virt.hardware [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.109 2 DEBUG nova.virt.hardware [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.109 2 DEBUG nova.virt.hardware [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.109 2 DEBUG nova.virt.hardware [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.109 2 DEBUG nova.virt.hardware [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.110 2 DEBUG nova.virt.hardware [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.110 2 DEBUG nova.virt.hardware [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.110 2 DEBUG nova.virt.hardware [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.110 2 DEBUG nova.virt.hardware [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.110 2 DEBUG nova.virt.hardware [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.112 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2071055452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.533 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.565 2 DEBUG nova.storage.rbd_utils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:07 np0005465987 nova_compute[230713]: 2025-10-02 12:28:07.570 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/647394247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.007 2 INFO nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Creating config drive at /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/disk.config#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.016 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprq5um1ma execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.056 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.059 2 DEBUG nova.virt.libvirt.vif [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=110,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBEmvr2XQXDnaV0WQDbbXt57cEK6okdC4PHEYdjpQBx2HU9OQgvgRTm3sGWmsa/AInUTPV9ABsCq2lJ9PCqfb1WP51XCZeB9QBIxafEy8h788huF0550ajkopZIwmSLpiA==',key_name='tempest-keypair-425033456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-kuclzhj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_a
llocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=e744f3b9-ccc1-43b1-b134-e6050b9487eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.060 2 DEBUG nova.network.os_vif_util [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.061 2 DEBUG nova.network.os_vif_util [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:14:c2,bridge_name='br-int',has_traffic_filtering=True,id=204ddeb1-bc16-445e-a774-09893c526348,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap204ddeb1-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.062 2 DEBUG nova.objects.instance [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'pci_devices' on Instance uuid e744f3b9-ccc1-43b1-b134-e6050b9487eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.160 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  <uuid>e744f3b9-ccc1-43b1-b134-e6050b9487eb</uuid>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  <name>instance-0000006e</name>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <nova:name>multiattach-server-0</nova:name>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:28:07</nova:creationTime>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <nova:user uuid="22d56fcd2a4b4851bfd126ae4548ee9b">tempest-AttachVolumeMultiAttachTest-1564585024-project-member</nova:user>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <nova:project uuid="5533aaac08cd4856af72ef4992bb5e76">tempest-AttachVolumeMultiAttachTest-1564585024</nova:project>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <nova:port uuid="204ddeb1-bc16-445e-a774-09893c526348">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <entry name="serial">e744f3b9-ccc1-43b1-b134-e6050b9487eb</entry>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <entry name="uuid">e744f3b9-ccc1-43b1-b134-e6050b9487eb</entry>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk.config">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:9d:14:c2"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <target dev="tap204ddeb1-bc"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/e744f3b9-ccc1-43b1-b134-e6050b9487eb/console.log" append="off"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:28:08 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:28:08 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:28:08 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:28:08 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.161 2 DEBUG nova.compute.manager [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Preparing to wait for external event network-vif-plugged-204ddeb1-bc16-445e-a774-09893c526348 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.161 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.162 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.162 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.162 2 DEBUG nova.virt.libvirt.vif [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=110,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBEmvr2XQXDnaV0WQDbbXt57cEK6okdC4PHEYdjpQBx2HU9OQgvgRTm3sGWmsa/AInUTPV9ABsCq2lJ9PCqfb1WP51XCZeB9QBIxafEy8h788huF0550ajkopZIwmSLpiA==',key_name='tempest-keypair-425033456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-kuclzhj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0'
,network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=e744f3b9-ccc1-43b1-b134-e6050b9487eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.163 2 DEBUG nova.network.os_vif_util [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.163 2 DEBUG nova.network.os_vif_util [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:14:c2,bridge_name='br-int',has_traffic_filtering=True,id=204ddeb1-bc16-445e-a774-09893c526348,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap204ddeb1-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.163 2 DEBUG os_vif [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:14:c2,bridge_name='br-int',has_traffic_filtering=True,id=204ddeb1-bc16-445e-a774-09893c526348,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap204ddeb1-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.165 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.167 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap204ddeb1-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap204ddeb1-bc, col_values=(('external_ids', {'iface-id': '204ddeb1-bc16-445e-a774-09893c526348', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:14:c2', 'vm-uuid': 'e744f3b9-ccc1-43b1-b134-e6050b9487eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:08 np0005465987 NetworkManager[44910]: <info>  [1759408088.1703] manager: (tap204ddeb1-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.172 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprq5um1ma" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.198 2 DEBUG nova.storage.rbd_utils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.201 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/disk.config af7eb975-4b7b-464b-b407-754d3b8ced72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.233 2 INFO os_vif [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:14:c2,bridge_name='br-int',has_traffic_filtering=True,id=204ddeb1-bc16-445e-a774-09893c526348,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap204ddeb1-bc')#033[00m
Oct  2 08:28:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:08.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.533 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.534 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.534 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No VIF found with MAC fa:16:3e:9d:14:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.535 2 INFO nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Using config drive#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.573 2 DEBUG nova.storage.rbd_utils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.831 2 DEBUG oslo_concurrency.processutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/disk.config af7eb975-4b7b-464b-b407-754d3b8ced72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.832 2 INFO nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Deleting local config drive /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/disk.config because it was imported into RBD.#033[00m
Oct  2 08:28:08 np0005465987 NetworkManager[44910]: <info>  [1759408088.8892] manager: (tapc0eae122-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Oct  2 08:28:08 np0005465987 kernel: tapc0eae122-ee: entered promiscuous mode
Oct  2 08:28:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:08Z|00372|binding|INFO|Claiming lport c0eae122-ee9e-4263-afd3-b9dca2b6656f for this chassis.
Oct  2 08:28:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:08Z|00373|binding|INFO|c0eae122-ee9e-4263-afd3-b9dca2b6656f: Claiming fa:16:3e:37:42:f8 10.100.0.14
Oct  2 08:28:08 np0005465987 nova_compute[230713]: 2025-10-02 12:28:08.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:08.919 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:42:f8 10.100.0.14'], port_security=['fa:16:3e:37:42:f8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'af7eb975-4b7b-464b-b407-754d3b8ced72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c0eae122-ee9e-4263-afd3-b9dca2b6656f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:28:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:08.920 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c0eae122-ee9e-4263-afd3-b9dca2b6656f in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 bound to our chassis#033[00m
Oct  2 08:28:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:08.922 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 247d774d-0cc8-4ef2-a9b8-c756adae0874#033[00m
Oct  2 08:28:08 np0005465987 systemd-machined[188335]: New machine qemu-46-instance-0000006d.
Oct  2 08:28:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:08.934 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e25846b3-48af-432f-ac1f-dcddabe64307]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:08.935 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap247d774d-01 in ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:28:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:08.938 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap247d774d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:28:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:08.938 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8396fb-3bf6-4b4b-adf4-c052bd8f7cd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:08.938 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1902c7-412f-415d-b95e-0aac2770213a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:08.951 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[b352cd5d-a8b5-4c9e-845e-f225d5e445df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:08 np0005465987 systemd[1]: Started Virtual Machine qemu-46-instance-0000006d.
Oct  2 08:28:08 np0005465987 systemd-udevd[270303]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:28:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:08Z|00374|binding|INFO|Setting lport c0eae122-ee9e-4263-afd3-b9dca2b6656f ovn-installed in OVS
Oct  2 08:28:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:08Z|00375|binding|INFO|Setting lport c0eae122-ee9e-4263-afd3-b9dca2b6656f up in Southbound
Oct  2 08:28:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:08.978 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ab58b229-02e1-4c4c-82e1-b74042b743cd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:08.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:09 np0005465987 NetworkManager[44910]: <info>  [1759408089.0045] device (tapc0eae122-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:28:09 np0005465987 NetworkManager[44910]: <info>  [1759408089.0075] device (tapc0eae122-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.008 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[65bfd3d9-9db4-439d-a35f-429c1714b476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:09 np0005465987 systemd-udevd[270307]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:28:09 np0005465987 NetworkManager[44910]: <info>  [1759408089.0175] manager: (tap247d774d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/188)
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.020 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[43ec1d8f-d43a-47ad-88ee-1ca02c308822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.056 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5426c209-556d-4a3e-b86a-478057ccb570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.059 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d2ef75-d801-484c-9089-832934a07875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:09 np0005465987 NetworkManager[44910]: <info>  [1759408089.0788] device (tap247d774d-00): carrier: link connected
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.083 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf7cb0b-60d1-4035-96b1-249e478afe69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.100 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2a01df6a-b20b-4dd2-8f9a-99003adbb064]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615765, 'reachable_time': 42241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270336, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.114 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dfeeeb9d-1501-4c7d-970b-3821b278087c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:ab18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 615765, 'tstamp': 615765}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270337, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.131 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[163a2bc8-2de1-43a5-a9c7-24083bdfe503]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615765, 'reachable_time': 42241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270338, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.165 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9d7b57-b078-4dc0-9af3-38a013b8e5c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.247 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d318cb9a-b2cc-4643-bdb5-c89464896572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.248 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.248 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.248 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap247d774d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:09 np0005465987 NetworkManager[44910]: <info>  [1759408089.2509] manager: (tap247d774d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:09 np0005465987 kernel: tap247d774d-00: entered promiscuous mode
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.256 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap247d774d-00, col_values=(('external_ids', {'iface-id': '04584168-a51c-41f9-9206-d39db8a81566'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:09 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:09Z|00376|binding|INFO|Releasing lport 04584168-a51c-41f9-9206-d39db8a81566 from this chassis (sb_readonly=0)
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.280 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.281 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[10883817-014c-4972-91e4-94a13f6e6eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.281 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:28:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:09.282 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'env', 'PROCESS_TAG=haproxy-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/247d774d-0cc8-4ef2-a9b8-c756adae0874.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.416 2 DEBUG nova.network.neutron [req-de8d4e15-3b4a-41f9-a3aa-6bdcddd45b7b req-5872c9a1-08bf-4565-aad2-fe7b459ee40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Updated VIF entry in instance network info cache for port c0eae122-ee9e-4263-afd3-b9dca2b6656f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.416 2 DEBUG nova.network.neutron [req-de8d4e15-3b4a-41f9-a3aa-6bdcddd45b7b req-5872c9a1-08bf-4565-aad2-fe7b459ee40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Updating instance_info_cache with network_info: [{"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.434 2 DEBUG oslo_concurrency.lockutils [req-de8d4e15-3b4a-41f9-a3aa-6bdcddd45b7b req-5872c9a1-08bf-4565-aad2-fe7b459ee40c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-af7eb975-4b7b-464b-b407-754d3b8ced72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.569 2 INFO nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Creating config drive at /var/lib/nova/instances/e744f3b9-ccc1-43b1-b134-e6050b9487eb/disk.config#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.585 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e744f3b9-ccc1-43b1-b134-e6050b9487eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxjuw5i1w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.639 2 DEBUG nova.compute.manager [req-3293d7b5-f6c9-4a84-a3a7-49a1022264f1 req-be1fef08-d270-4e69-a2d3-2a8435ec6ad9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.640 2 DEBUG oslo_concurrency.lockutils [req-3293d7b5-f6c9-4a84-a3a7-49a1022264f1 req-be1fef08-d270-4e69-a2d3-2a8435ec6ad9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.640 2 DEBUG oslo_concurrency.lockutils [req-3293d7b5-f6c9-4a84-a3a7-49a1022264f1 req-be1fef08-d270-4e69-a2d3-2a8435ec6ad9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.641 2 DEBUG oslo_concurrency.lockutils [req-3293d7b5-f6c9-4a84-a3a7-49a1022264f1 req-be1fef08-d270-4e69-a2d3-2a8435ec6ad9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.641 2 DEBUG nova.compute.manager [req-3293d7b5-f6c9-4a84-a3a7-49a1022264f1 req-be1fef08-d270-4e69-a2d3-2a8435ec6ad9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Processing event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:09 np0005465987 podman[270418]: 2025-10-02 12:28:09.665846674 +0000 UTC m=+0.072282982 container create 3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:28:09 np0005465987 systemd[1]: Started libpod-conmon-3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114.scope.
Oct  2 08:28:09 np0005465987 podman[270418]: 2025-10-02 12:28:09.631414786 +0000 UTC m=+0.037851184 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.744 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e744f3b9-ccc1-43b1-b134-e6050b9487eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxjuw5i1w" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:09 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:28:09 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ff3edebc80bb6e8f79970cba4eaec827439a50ad0e7d955a1bfa9b6ae08bc12/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:28:09 np0005465987 podman[270418]: 2025-10-02 12:28:09.78339424 +0000 UTC m=+0.189830638 container init 3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 08:28:09 np0005465987 podman[270418]: 2025-10-02 12:28:09.793563051 +0000 UTC m=+0.199999369 container start 3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.799 2 DEBUG nova.storage.rbd_utils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] rbd image e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.804 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e744f3b9-ccc1-43b1-b134-e6050b9487eb/disk.config e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:09 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[270436]: [NOTICE]   (270456) : New worker (270461) forked
Oct  2 08:28:09 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[270436]: [NOTICE]   (270456) : Loading success.
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.911 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408089.9100566, af7eb975-4b7b-464b-b407-754d3b8ced72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.912 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] VM Started (Lifecycle Event)#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.917 2 DEBUG nova.compute.manager [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.923 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.928 2 INFO nova.virt.libvirt.driver [-] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Instance spawned successfully.#033[00m
Oct  2 08:28:09 np0005465987 nova_compute[230713]: 2025-10-02 12:28:09.928 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.091 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.102 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.103 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.105 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.106 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.107 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.108 2 DEBUG nova.virt.libvirt.driver [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.114 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.185 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.186 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408089.9104254, af7eb975-4b7b-464b-b407-754d3b8ced72 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.187 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.234 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.236 2 INFO nova.compute.manager [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Took 12.48 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.238 2 DEBUG nova.compute.manager [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.242 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408089.9210014, af7eb975-4b7b-464b-b407-754d3b8ced72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.243 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.306 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.312 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.343 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.360 2 INFO nova.compute.manager [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Took 15.46 seconds to build instance.#033[00m
Oct  2 08:28:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:10.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:10 np0005465987 nova_compute[230713]: 2025-10-02 12:28:10.387 2 DEBUG oslo_concurrency.lockutils [None req-28279b3d-7a37-403e-a8ce-a541ec77be34 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e303 e303: 3 total, 3 up, 3 in
Oct  2 08:28:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:11.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:11 np0005465987 nova_compute[230713]: 2025-10-02 12:28:11.350 2 DEBUG nova.network.neutron [req-eb47a346-7d4c-4ea6-989e-9ca37ef7e760 req-9e32e878-577f-4ea7-9001-880262a74127 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Updated VIF entry in instance network info cache for port 204ddeb1-bc16-445e-a774-09893c526348. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:28:11 np0005465987 nova_compute[230713]: 2025-10-02 12:28:11.352 2 DEBUG nova.network.neutron [req-eb47a346-7d4c-4ea6-989e-9ca37ef7e760 req-9e32e878-577f-4ea7-9001-880262a74127 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Updating instance_info_cache with network_info: [{"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:11 np0005465987 nova_compute[230713]: 2025-10-02 12:28:11.418 2 DEBUG oslo_concurrency.lockutils [req-eb47a346-7d4c-4ea6-989e-9ca37ef7e760 req-9e32e878-577f-4ea7-9001-880262a74127 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:28:11 np0005465987 nova_compute[230713]: 2025-10-02 12:28:11.954 2 DEBUG oslo_concurrency.processutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e744f3b9-ccc1-43b1-b134-e6050b9487eb/disk.config e744f3b9-ccc1-43b1-b134-e6050b9487eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:11 np0005465987 nova_compute[230713]: 2025-10-02 12:28:11.955 2 INFO nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Deleting local config drive /var/lib/nova/instances/e744f3b9-ccc1-43b1-b134-e6050b9487eb/disk.config because it was imported into RBD.#033[00m
Oct  2 08:28:11 np0005465987 virtqemud[230442]: End of file while reading data: Input/output error
Oct  2 08:28:11 np0005465987 virtqemud[230442]: End of file while reading data: Input/output error
Oct  2 08:28:12 np0005465987 nova_compute[230713]: 2025-10-02 12:28:12.018 2 DEBUG nova.compute.manager [req-8def1249-e7ac-4921-889d-8e8728a31914 req-afe5f60b-97bc-4427-b9f3-1eb91b464030 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:12 np0005465987 nova_compute[230713]: 2025-10-02 12:28:12.019 2 DEBUG oslo_concurrency.lockutils [req-8def1249-e7ac-4921-889d-8e8728a31914 req-afe5f60b-97bc-4427-b9f3-1eb91b464030 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:12 np0005465987 nova_compute[230713]: 2025-10-02 12:28:12.020 2 DEBUG oslo_concurrency.lockutils [req-8def1249-e7ac-4921-889d-8e8728a31914 req-afe5f60b-97bc-4427-b9f3-1eb91b464030 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:12 np0005465987 nova_compute[230713]: 2025-10-02 12:28:12.020 2 DEBUG oslo_concurrency.lockutils [req-8def1249-e7ac-4921-889d-8e8728a31914 req-afe5f60b-97bc-4427-b9f3-1eb91b464030 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:12 np0005465987 nova_compute[230713]: 2025-10-02 12:28:12.021 2 DEBUG nova.compute.manager [req-8def1249-e7ac-4921-889d-8e8728a31914 req-afe5f60b-97bc-4427-b9f3-1eb91b464030 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] No waiting events found dispatching network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:28:12 np0005465987 nova_compute[230713]: 2025-10-02 12:28:12.022 2 WARNING nova.compute.manager [req-8def1249-e7ac-4921-889d-8e8728a31914 req-afe5f60b-97bc-4427-b9f3-1eb91b464030 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received unexpected event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f for instance with vm_state active and task_state None.#033[00m
Oct  2 08:28:12 np0005465987 kernel: tap204ddeb1-bc: entered promiscuous mode
Oct  2 08:28:12 np0005465987 NetworkManager[44910]: <info>  [1759408092.0411] manager: (tap204ddeb1-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/190)
Oct  2 08:28:12 np0005465987 nova_compute[230713]: 2025-10-02 12:28:12.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:12 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:12Z|00377|binding|INFO|Claiming lport 204ddeb1-bc16-445e-a774-09893c526348 for this chassis.
Oct  2 08:28:12 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:12Z|00378|binding|INFO|204ddeb1-bc16-445e-a774-09893c526348: Claiming fa:16:3e:9d:14:c2 10.100.0.10
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.066 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:14:c2 10.100.0.10'], port_security=['fa:16:3e:9d:14:c2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e744f3b9-ccc1-43b1-b134-e6050b9487eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-585473f8-52e4-4e55-96df-8a236d361126', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5533aaac08cd4856af72ef4992bb5e76', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a7e36b3-799e-47d8-a152-7f7146431afe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec297f04-3bda-490f-87d3-1f684caf96fd, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=204ddeb1-bc16-445e-a774-09893c526348) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.069 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 204ddeb1-bc16-445e-a774-09893c526348 in datapath 585473f8-52e4-4e55-96df-8a236d361126 bound to our chassis#033[00m
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.073 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 585473f8-52e4-4e55-96df-8a236d361126#033[00m
Oct  2 08:28:12 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:12Z|00379|binding|INFO|Setting lport 204ddeb1-bc16-445e-a774-09893c526348 ovn-installed in OVS
Oct  2 08:28:12 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:12Z|00380|binding|INFO|Setting lport 204ddeb1-bc16-445e-a774-09893c526348 up in Southbound
Oct  2 08:28:12 np0005465987 nova_compute[230713]: 2025-10-02 12:28:12.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:12 np0005465987 nova_compute[230713]: 2025-10-02 12:28:12.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:12 np0005465987 systemd-udevd[270503]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.108 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6596471e-bc1b-4867-8ee9-12c257cb7069]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:12 np0005465987 systemd-machined[188335]: New machine qemu-47-instance-0000006e.
Oct  2 08:28:12 np0005465987 systemd[1]: Started Virtual Machine qemu-47-instance-0000006e.
Oct  2 08:28:12 np0005465987 NetworkManager[44910]: <info>  [1759408092.1303] device (tap204ddeb1-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:28:12 np0005465987 NetworkManager[44910]: <info>  [1759408092.1318] device (tap204ddeb1-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.158 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[b391e0af-b23f-4beb-983d-528db23493ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.161 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ea8877ba-8d1f-415a-bbbd-7afbf14fe5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.206 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[35dc5494-04ab-4caa-82df-c7f22805eafb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.229 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[35d4e1b7-313f-473d-8ba7-e8d79e8adecb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap585473f8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:8e:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611351, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270516, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.246 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0cccdc-d42a-4253-8524-f2c5d4b4e50f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap585473f8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611362, 'tstamp': 611362}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270517, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap585473f8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611365, 'tstamp': 611365}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270517, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.247 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap585473f8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:12 np0005465987 nova_compute[230713]: 2025-10-02 12:28:12.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:12 np0005465987 nova_compute[230713]: 2025-10-02 12:28:12.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.255 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap585473f8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.255 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.256 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap585473f8-50, col_values=(('external_ids', {'iface-id': '02b7597d-2fc1-4c56-8603-4dcb0c716c82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:12.256 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:28:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:12.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:12 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2221379576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:13.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:13 np0005465987 nova_compute[230713]: 2025-10-02 12:28:13.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:13 np0005465987 nova_compute[230713]: 2025-10-02 12:28:13.300 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408093.299349, e744f3b9-ccc1-43b1-b134-e6050b9487eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:13 np0005465987 nova_compute[230713]: 2025-10-02 12:28:13.300 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] VM Started (Lifecycle Event)#033[00m
Oct  2 08:28:13 np0005465987 nova_compute[230713]: 2025-10-02 12:28:13.466 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:13 np0005465987 nova_compute[230713]: 2025-10-02 12:28:13.470 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408093.2999928, e744f3b9-ccc1-43b1-b134-e6050b9487eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:13 np0005465987 nova_compute[230713]: 2025-10-02 12:28:13.470 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:28:13 np0005465987 nova_compute[230713]: 2025-10-02 12:28:13.643 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:13 np0005465987 nova_compute[230713]: 2025-10-02 12:28:13.646 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:28:13 np0005465987 nova_compute[230713]: 2025-10-02 12:28:13.743 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:28:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.172 2 DEBUG nova.compute.manager [req-88c3fbb4-1f5a-4041-9352-7434a3b19faa req-c1eb59c6-8157-4ca9-b660-b749ec7a44ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Received event network-vif-plugged-204ddeb1-bc16-445e-a774-09893c526348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.172 2 DEBUG oslo_concurrency.lockutils [req-88c3fbb4-1f5a-4041-9352-7434a3b19faa req-c1eb59c6-8157-4ca9-b660-b749ec7a44ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.172 2 DEBUG oslo_concurrency.lockutils [req-88c3fbb4-1f5a-4041-9352-7434a3b19faa req-c1eb59c6-8157-4ca9-b660-b749ec7a44ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.173 2 DEBUG oslo_concurrency.lockutils [req-88c3fbb4-1f5a-4041-9352-7434a3b19faa req-c1eb59c6-8157-4ca9-b660-b749ec7a44ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.173 2 DEBUG nova.compute.manager [req-88c3fbb4-1f5a-4041-9352-7434a3b19faa req-c1eb59c6-8157-4ca9-b660-b749ec7a44ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Processing event network-vif-plugged-204ddeb1-bc16-445e-a774-09893c526348 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.173 2 DEBUG nova.compute.manager [req-88c3fbb4-1f5a-4041-9352-7434a3b19faa req-c1eb59c6-8157-4ca9-b660-b749ec7a44ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Received event network-vif-plugged-204ddeb1-bc16-445e-a774-09893c526348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.174 2 DEBUG oslo_concurrency.lockutils [req-88c3fbb4-1f5a-4041-9352-7434a3b19faa req-c1eb59c6-8157-4ca9-b660-b749ec7a44ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.174 2 DEBUG oslo_concurrency.lockutils [req-88c3fbb4-1f5a-4041-9352-7434a3b19faa req-c1eb59c6-8157-4ca9-b660-b749ec7a44ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.174 2 DEBUG oslo_concurrency.lockutils [req-88c3fbb4-1f5a-4041-9352-7434a3b19faa req-c1eb59c6-8157-4ca9-b660-b749ec7a44ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.174 2 DEBUG nova.compute.manager [req-88c3fbb4-1f5a-4041-9352-7434a3b19faa req-c1eb59c6-8157-4ca9-b660-b749ec7a44ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] No waiting events found dispatching network-vif-plugged-204ddeb1-bc16-445e-a774-09893c526348 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.175 2 WARNING nova.compute.manager [req-88c3fbb4-1f5a-4041-9352-7434a3b19faa req-c1eb59c6-8157-4ca9-b660-b749ec7a44ca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Received unexpected event network-vif-plugged-204ddeb1-bc16-445e-a774-09893c526348 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.175 2 DEBUG nova.compute.manager [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.180 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.181 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408094.1804209, e744f3b9-ccc1-43b1-b134-e6050b9487eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.182 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.189 2 INFO nova.virt.libvirt.driver [-] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Instance spawned successfully.#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.190 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.218 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.222 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.248 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.250 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.252 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.252 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.252 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.255 2 DEBUG nova.virt.libvirt.driver [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.263 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.368 2 INFO nova.compute.manager [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Took 15.22 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.369 2 DEBUG nova.compute.manager [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:28:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:14.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.553 2 INFO nova.compute.manager [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Took 18.86 seconds to build instance.#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.591 2 DEBUG oslo_concurrency.lockutils [None req-91c9c14d-5926-4f77-86a9-672396a8d278 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:14 np0005465987 nova_compute[230713]: 2025-10-02 12:28:14.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:14 np0005465987 podman[270562]: 2025-10-02 12:28:14.868993839 +0000 UTC m=+0.068952729 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 08:28:14 np0005465987 podman[270561]: 2025-10-02 12:28:14.88098982 +0000 UTC m=+0.073113174 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:28:14 np0005465987 podman[270560]: 2025-10-02 12:28:14.901773142 +0000 UTC m=+0.106582766 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:28:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:15.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:15 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct  2 08:28:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:16.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:16 np0005465987 nova_compute[230713]: 2025-10-02 12:28:16.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:17.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005465987 NetworkManager[44910]: <info>  [1759408098.2069] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Oct  2 08:28:18 np0005465987 NetworkManager[44910]: <info>  [1759408098.2078] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:18Z|00381|binding|INFO|Releasing lport 04584168-a51c-41f9-9206-d39db8a81566 from this chassis (sb_readonly=0)
Oct  2 08:28:18 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:18Z|00382|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.334 2 INFO nova.compute.manager [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Rebuilding instance#033[00m
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:18.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.645 2 DEBUG nova.compute.manager [req-a297417d-b478-40ff-8e4d-341a8552c408 req-e3c2e464-9c2d-4b8c-bbdf-7ff3889995cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Received event network-changed-204ddeb1-bc16-445e-a774-09893c526348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.646 2 DEBUG nova.compute.manager [req-a297417d-b478-40ff-8e4d-341a8552c408 req-e3c2e464-9c2d-4b8c-bbdf-7ff3889995cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Refreshing instance network info cache due to event network-changed-204ddeb1-bc16-445e-a774-09893c526348. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.646 2 DEBUG oslo_concurrency.lockutils [req-a297417d-b478-40ff-8e4d-341a8552c408 req-e3c2e464-9c2d-4b8c-bbdf-7ff3889995cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.647 2 DEBUG oslo_concurrency.lockutils [req-a297417d-b478-40ff-8e4d-341a8552c408 req-e3c2e464-9c2d-4b8c-bbdf-7ff3889995cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.647 2 DEBUG nova.network.neutron [req-a297417d-b478-40ff-8e4d-341a8552c408 req-e3c2e464-9c2d-4b8c-bbdf-7ff3889995cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Refreshing network info cache for port 204ddeb1-bc16-445e-a774-09893c526348 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.803 2 DEBUG nova.objects.instance [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid af7eb975-4b7b-464b-b407-754d3b8ced72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.831 2 DEBUG nova.compute.manager [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.893 2 DEBUG nova.objects.instance [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'pci_requests' on Instance uuid af7eb975-4b7b-464b-b407-754d3b8ced72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.919 2 DEBUG nova.objects.instance [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid af7eb975-4b7b-464b-b407-754d3b8ced72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.933 2 DEBUG nova.objects.instance [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'resources' on Instance uuid af7eb975-4b7b-464b-b407-754d3b8ced72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.951 2 DEBUG nova.objects.instance [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'migration_context' on Instance uuid af7eb975-4b7b-464b-b407-754d3b8ced72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.967 2 DEBUG nova.objects.instance [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:28:18 np0005465987 nova_compute[230713]: 2025-10-02 12:28:18.970 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:28:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:19.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:19 np0005465987 nova_compute[230713]: 2025-10-02 12:28:19.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:20.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:21.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:22.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:23.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:23 np0005465987 nova_compute[230713]: 2025-10-02 12:28:23.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:23 np0005465987 nova_compute[230713]: 2025-10-02 12:28:23.564 2 DEBUG nova.network.neutron [req-a297417d-b478-40ff-8e4d-341a8552c408 req-e3c2e464-9c2d-4b8c-bbdf-7ff3889995cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Updated VIF entry in instance network info cache for port 204ddeb1-bc16-445e-a774-09893c526348. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:28:23 np0005465987 nova_compute[230713]: 2025-10-02 12:28:23.564 2 DEBUG nova.network.neutron [req-a297417d-b478-40ff-8e4d-341a8552c408 req-e3c2e464-9c2d-4b8c-bbdf-7ff3889995cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Updating instance_info_cache with network_info: [{"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:23 np0005465987 nova_compute[230713]: 2025-10-02 12:28:23.698 2 DEBUG oslo_concurrency.lockutils [req-a297417d-b478-40ff-8e4d-341a8552c408 req-e3c2e464-9c2d-4b8c-bbdf-7ff3889995cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:28:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:24.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:24 np0005465987 nova_compute[230713]: 2025-10-02 12:28:24.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:25.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e304 e304: 3 total, 3 up, 3 in
Oct  2 08:28:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:26.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:27.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:27Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:42:f8 10.100.0.14
Oct  2 08:28:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:27Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:42:f8 10.100.0.14
Oct  2 08:28:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:27.954 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:27.954 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:27.956 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:28 np0005465987 nova_compute[230713]: 2025-10-02 12:28:28.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000054s ======
Oct  2 08:28:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:28.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct  2 08:28:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:29 np0005465987 nova_compute[230713]: 2025-10-02 12:28:29.017 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:28:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:29.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:29 np0005465987 nova_compute[230713]: 2025-10-02 12:28:29.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:29 np0005465987 nova_compute[230713]: 2025-10-02 12:28:29.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:29.699 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:28:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:29.701 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:28:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:28:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:30.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:28:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:30Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:14:c2 10.100.0.10
Oct  2 08:28:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:30Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:14:c2 10.100.0.10
Oct  2 08:28:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:31.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:32.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:33.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:33 np0005465987 nova_compute[230713]: 2025-10-02 12:28:33.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 e305: 3 total, 3 up, 3 in
Oct  2 08:28:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:34 np0005465987 kernel: tapc0eae122-ee (unregistering): left promiscuous mode
Oct  2 08:28:34 np0005465987 NetworkManager[44910]: <info>  [1759408114.3007] device (tapc0eae122-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:28:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:34Z|00383|binding|INFO|Releasing lport c0eae122-ee9e-4263-afd3-b9dca2b6656f from this chassis (sb_readonly=0)
Oct  2 08:28:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:34Z|00384|binding|INFO|Setting lport c0eae122-ee9e-4263-afd3-b9dca2b6656f down in Southbound
Oct  2 08:28:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:34Z|00385|binding|INFO|Removing iface tapc0eae122-ee ovn-installed in OVS
Oct  2 08:28:34 np0005465987 nova_compute[230713]: 2025-10-02 12:28:34.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:34 np0005465987 nova_compute[230713]: 2025-10-02 12:28:34.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:34 np0005465987 nova_compute[230713]: 2025-10-02 12:28:34.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:34 np0005465987 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct  2 08:28:34 np0005465987 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006d.scope: Consumed 14.905s CPU time.
Oct  2 08:28:34 np0005465987 systemd-machined[188335]: Machine qemu-46-instance-0000006d terminated.
Oct  2 08:28:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:34.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:34.489 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:42:f8 10.100.0.14'], port_security=['fa:16:3e:37:42:f8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'af7eb975-4b7b-464b-b407-754d3b8ced72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c0eae122-ee9e-4263-afd3-b9dca2b6656f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:28:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:34.490 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c0eae122-ee9e-4263-afd3-b9dca2b6656f in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 unbound from our chassis#033[00m
Oct  2 08:28:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:34.491 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 247d774d-0cc8-4ef2-a9b8-c756adae0874, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:28:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:34.492 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c6cac962-544f-4509-96b1-71742032f04c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:34.493 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace which is not needed anymore#033[00m
Oct  2 08:28:34 np0005465987 nova_compute[230713]: 2025-10-02 12:28:34.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:34 np0005465987 nova_compute[230713]: 2025-10-02 12:28:34.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:34 np0005465987 nova_compute[230713]: 2025-10-02 12:28:34.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:34 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[270436]: [NOTICE]   (270456) : haproxy version is 2.8.14-c23fe91
Oct  2 08:28:34 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[270436]: [NOTICE]   (270456) : path to executable is /usr/sbin/haproxy
Oct  2 08:28:34 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[270436]: [WARNING]  (270456) : Exiting Master process...
Oct  2 08:28:34 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[270436]: [ALERT]    (270456) : Current worker (270461) exited with code 143 (Terminated)
Oct  2 08:28:34 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[270436]: [WARNING]  (270456) : All workers exited. Exiting... (0)
Oct  2 08:28:34 np0005465987 systemd[1]: libpod-3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114.scope: Deactivated successfully.
Oct  2 08:28:34 np0005465987 podman[270655]: 2025-10-02 12:28:34.70826118 +0000 UTC m=+0.111294994 container died 3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:28:34 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114-userdata-shm.mount: Deactivated successfully.
Oct  2 08:28:34 np0005465987 systemd[1]: var-lib-containers-storage-overlay-3ff3edebc80bb6e8f79970cba4eaec827439a50ad0e7d955a1bfa9b6ae08bc12-merged.mount: Deactivated successfully.
Oct  2 08:28:34 np0005465987 podman[270655]: 2025-10-02 12:28:34.928595637 +0000 UTC m=+0.331629471 container cleanup 3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:28:34 np0005465987 systemd[1]: libpod-conmon-3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114.scope: Deactivated successfully.
Oct  2 08:28:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:35.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:35 np0005465987 nova_compute[230713]: 2025-10-02 12:28:35.042 2 INFO nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Instance shutdown successfully after 16 seconds.#033[00m
Oct  2 08:28:35 np0005465987 nova_compute[230713]: 2025-10-02 12:28:35.049 2 INFO nova.virt.libvirt.driver [-] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Instance destroyed successfully.#033[00m
Oct  2 08:28:35 np0005465987 nova_compute[230713]: 2025-10-02 12:28:35.057 2 INFO nova.virt.libvirt.driver [-] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Instance destroyed successfully.#033[00m
Oct  2 08:28:35 np0005465987 nova_compute[230713]: 2025-10-02 12:28:35.059 2 DEBUG nova.virt.libvirt.vif [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-375372636',display_name='tempest-ServerDiskConfigTestJSON-server-375372636',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-375372636',id=109,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:28:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-5tzgb5vs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:17Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=af7eb975-4b7b-464b-b407-754d3b8ced72,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:28:35 np0005465987 nova_compute[230713]: 2025-10-02 12:28:35.059 2 DEBUG nova.network.os_vif_util [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:28:35 np0005465987 nova_compute[230713]: 2025-10-02 12:28:35.061 2 DEBUG nova.network.os_vif_util [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:28:35 np0005465987 nova_compute[230713]: 2025-10-02 12:28:35.062 2 DEBUG os_vif [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:28:35 np0005465987 nova_compute[230713]: 2025-10-02 12:28:35.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:35 np0005465987 podman[270689]: 2025-10-02 12:28:35.065175608 +0000 UTC m=+0.117571848 container remove 3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:28:35 np0005465987 nova_compute[230713]: 2025-10-02 12:28:35.066 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0eae122-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:35 np0005465987 nova_compute[230713]: 2025-10-02 12:28:35.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:35.071 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b82ce0-f6c9-4508-898c-65fab4af7a62]: (4, ('Thu Oct  2 12:28:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114)\n3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114\nThu Oct  2 12:28:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114)\n3e1488827929e2f05703766a766f5eba5f977d5c4e9f44544d2d7682efa78114\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:35 np0005465987 nova_compute[230713]: 2025-10-02 12:28:35.072 2 INFO os_vif [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee')#033[00m
Oct  2 08:28:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:35.073 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7f16faeb-f047-4ae0-995f-957eb82a9390]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:35.074 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:35 np0005465987 kernel: tap247d774d-00: left promiscuous mode
Oct  2 08:28:35 np0005465987 nova_compute[230713]: 2025-10-02 12:28:35.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:35.100 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4d80accf-bffc-4b18-9dc6-5f076b7358c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:35.132 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7c794425-75d3-4da6-aacb-b090f44b7777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:35.133 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[873bfa0f-8eaf-444a-b69d-bede9611b283]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:35.148 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c001b2b8-5ca2-4f19-997e-653bddcef6a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615757, 'reachable_time': 18053, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270722, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:35 np0005465987 systemd[1]: run-netns-ovnmeta\x2d247d774d\x2d0cc8\x2d4ef2\x2da9b8\x2dc756adae0874.mount: Deactivated successfully.
Oct  2 08:28:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:35.151 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:28:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:35.151 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[2cae5460-5aa6-488d-94b7-31b5f94203a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:36.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:36 np0005465987 nova_compute[230713]: 2025-10-02 12:28:36.464 2 DEBUG nova.compute.manager [req-ea4b2a07-bb7d-4162-8b5b-50451b78af0c req-e977c9d5-ea34-4527-be77-de3ea87d25b3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received event network-vif-unplugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:36 np0005465987 nova_compute[230713]: 2025-10-02 12:28:36.464 2 DEBUG oslo_concurrency.lockutils [req-ea4b2a07-bb7d-4162-8b5b-50451b78af0c req-e977c9d5-ea34-4527-be77-de3ea87d25b3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:36 np0005465987 nova_compute[230713]: 2025-10-02 12:28:36.465 2 DEBUG oslo_concurrency.lockutils [req-ea4b2a07-bb7d-4162-8b5b-50451b78af0c req-e977c9d5-ea34-4527-be77-de3ea87d25b3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:36 np0005465987 nova_compute[230713]: 2025-10-02 12:28:36.465 2 DEBUG oslo_concurrency.lockutils [req-ea4b2a07-bb7d-4162-8b5b-50451b78af0c req-e977c9d5-ea34-4527-be77-de3ea87d25b3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:36 np0005465987 nova_compute[230713]: 2025-10-02 12:28:36.465 2 DEBUG nova.compute.manager [req-ea4b2a07-bb7d-4162-8b5b-50451b78af0c req-e977c9d5-ea34-4527-be77-de3ea87d25b3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] No waiting events found dispatching network-vif-unplugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:28:36 np0005465987 nova_compute[230713]: 2025-10-02 12:28:36.465 2 WARNING nova.compute.manager [req-ea4b2a07-bb7d-4162-8b5b-50451b78af0c req-e977c9d5-ea34-4527-be77-de3ea87d25b3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received unexpected event network-vif-unplugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f for instance with vm_state active and task_state rebuilding.#033[00m
Oct  2 08:28:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:37.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:37 np0005465987 nova_compute[230713]: 2025-10-02 12:28:37.523 2 INFO nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Deleting instance files /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72_del#033[00m
Oct  2 08:28:37 np0005465987 nova_compute[230713]: 2025-10-02 12:28:37.523 2 INFO nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Deletion of /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72_del complete#033[00m
Oct  2 08:28:37 np0005465987 podman[270724]: 2025-10-02 12:28:37.835342573 +0000 UTC m=+0.048439175 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:28:38 np0005465987 nova_compute[230713]: 2025-10-02 12:28:38.240 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:28:38 np0005465987 nova_compute[230713]: 2025-10-02 12:28:38.240 2 INFO nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Creating image(s)#033[00m
Oct  2 08:28:38 np0005465987 nova_compute[230713]: 2025-10-02 12:28:38.263 2 DEBUG nova.storage.rbd_utils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:38 np0005465987 nova_compute[230713]: 2025-10-02 12:28:38.291 2 DEBUG nova.storage.rbd_utils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:38 np0005465987 nova_compute[230713]: 2025-10-02 12:28:38.316 2 DEBUG nova.storage.rbd_utils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:38 np0005465987 nova_compute[230713]: 2025-10-02 12:28:38.320 2 DEBUG oslo_concurrency.processutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:38 np0005465987 nova_compute[230713]: 2025-10-02 12:28:38.389 2 DEBUG oslo_concurrency.processutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:38 np0005465987 nova_compute[230713]: 2025-10-02 12:28:38.390 2 DEBUG oslo_concurrency.lockutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:38 np0005465987 nova_compute[230713]: 2025-10-02 12:28:38.391 2 DEBUG oslo_concurrency.lockutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:38 np0005465987 nova_compute[230713]: 2025-10-02 12:28:38.391 2 DEBUG oslo_concurrency.lockutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:38 np0005465987 nova_compute[230713]: 2025-10-02 12:28:38.418 2 DEBUG nova.storage.rbd_utils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:38 np0005465987 nova_compute[230713]: 2025-10-02 12:28:38.422 2 DEBUG oslo_concurrency.processutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 af7eb975-4b7b-464b-b407-754d3b8ced72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:38.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:39.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:39 np0005465987 nova_compute[230713]: 2025-10-02 12:28:39.243 2 DEBUG nova.compute.manager [req-1e1c3810-7136-4b7a-9336-60b2aef12267 req-614b7eb0-22cf-44e8-a85f-c2ce42c28b25 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:39 np0005465987 nova_compute[230713]: 2025-10-02 12:28:39.244 2 DEBUG oslo_concurrency.lockutils [req-1e1c3810-7136-4b7a-9336-60b2aef12267 req-614b7eb0-22cf-44e8-a85f-c2ce42c28b25 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:39 np0005465987 nova_compute[230713]: 2025-10-02 12:28:39.244 2 DEBUG oslo_concurrency.lockutils [req-1e1c3810-7136-4b7a-9336-60b2aef12267 req-614b7eb0-22cf-44e8-a85f-c2ce42c28b25 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:39 np0005465987 nova_compute[230713]: 2025-10-02 12:28:39.245 2 DEBUG oslo_concurrency.lockutils [req-1e1c3810-7136-4b7a-9336-60b2aef12267 req-614b7eb0-22cf-44e8-a85f-c2ce42c28b25 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:39 np0005465987 nova_compute[230713]: 2025-10-02 12:28:39.245 2 DEBUG nova.compute.manager [req-1e1c3810-7136-4b7a-9336-60b2aef12267 req-614b7eb0-22cf-44e8-a85f-c2ce42c28b25 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] No waiting events found dispatching network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:28:39 np0005465987 nova_compute[230713]: 2025-10-02 12:28:39.245 2 WARNING nova.compute.manager [req-1e1c3810-7136-4b7a-9336-60b2aef12267 req-614b7eb0-22cf-44e8-a85f-c2ce42c28b25 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received unexpected event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  2 08:28:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:39 np0005465987 nova_compute[230713]: 2025-10-02 12:28:39.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:39.718 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:40 np0005465987 nova_compute[230713]: 2025-10-02 12:28:40.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:40.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:40 np0005465987 nova_compute[230713]: 2025-10-02 12:28:40.988 2 DEBUG oslo_concurrency.processutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 af7eb975-4b7b-464b-b407-754d3b8ced72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:41.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.092 2 DEBUG nova.storage.rbd_utils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] resizing rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.674 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.675 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Ensure instance console log exists: /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.675 2 DEBUG oslo_concurrency.lockutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.675 2 DEBUG oslo_concurrency.lockutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.676 2 DEBUG oslo_concurrency.lockutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.677 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Start _get_guest_xml network_info=[{"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.682 2 WARNING nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.695 2 DEBUG nova.virt.libvirt.host [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.696 2 DEBUG nova.virt.libvirt.host [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.699 2 DEBUG nova.virt.libvirt.host [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.700 2 DEBUG nova.virt.libvirt.host [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.701 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.701 2 DEBUG nova.virt.hardware [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.701 2 DEBUG nova.virt.hardware [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.701 2 DEBUG nova.virt.hardware [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.702 2 DEBUG nova.virt.hardware [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.702 2 DEBUG nova.virt.hardware [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.702 2 DEBUG nova.virt.hardware [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.702 2 DEBUG nova.virt.hardware [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.702 2 DEBUG nova.virt.hardware [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.703 2 DEBUG nova.virt.hardware [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.703 2 DEBUG nova.virt.hardware [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.703 2 DEBUG nova.virt.hardware [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:28:41 np0005465987 nova_compute[230713]: 2025-10-02 12:28:41.703 2 DEBUG nova.objects.instance [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid af7eb975-4b7b-464b-b407-754d3b8ced72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:42 np0005465987 nova_compute[230713]: 2025-10-02 12:28:42.217 2 DEBUG oslo_concurrency.processutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:42.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:42 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4001133659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:42 np0005465987 nova_compute[230713]: 2025-10-02 12:28:42.670 2 DEBUG oslo_concurrency.processutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:42 np0005465987 nova_compute[230713]: 2025-10-02 12:28:42.695 2 DEBUG nova.storage.rbd_utils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:42 np0005465987 nova_compute[230713]: 2025-10-02 12:28:42.700 2 DEBUG oslo_concurrency.processutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:43.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:43 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/529136277' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.307 2 DEBUG oslo_concurrency.processutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.311 2 DEBUG nova.virt.libvirt.vif [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-375372636',display_name='tempest-ServerDiskConfigTestJSON-server-375372636',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-375372636',id=109,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:28:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-5tzgb5vs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:37Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=af7eb975-4b7b-464b-b407-754d3b8ced72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.312 2 DEBUG nova.network.os_vif_util [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.313 2 DEBUG nova.network.os_vif_util [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.318 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  <uuid>af7eb975-4b7b-464b-b407-754d3b8ced72</uuid>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  <name>instance-0000006d</name>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-375372636</nova:name>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:28:41</nova:creationTime>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <nova:user uuid="4a89b71e2513413e922ee6d5d06362b1">tempest-ServerDiskConfigTestJSON-1123059068-project-member</nova:user>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <nova:project uuid="27a1729bf10548219b90df46839849f5">tempest-ServerDiskConfigTestJSON-1123059068</nova:project>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="db05f54c-61f8-42d6-a1e2-da3219a77b12"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <nova:port uuid="c0eae122-ee9e-4263-afd3-b9dca2b6656f">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <entry name="serial">af7eb975-4b7b-464b-b407-754d3b8ced72</entry>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <entry name="uuid">af7eb975-4b7b-464b-b407-754d3b8ced72</entry>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/af7eb975-4b7b-464b-b407-754d3b8ced72_disk">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/af7eb975-4b7b-464b-b407-754d3b8ced72_disk.config">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:37:42:f8"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <target dev="tapc0eae122-ee"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/console.log" append="off"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:28:43 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:28:43 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:28:43 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:28:43 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.321 2 DEBUG nova.compute.manager [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Preparing to wait for external event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.322 2 DEBUG oslo_concurrency.lockutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.322 2 DEBUG oslo_concurrency.lockutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.323 2 DEBUG oslo_concurrency.lockutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.324 2 DEBUG nova.virt.libvirt.vif [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-375372636',display_name='tempest-ServerDiskConfigTestJSON-server-375372636',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-375372636',id=109,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:28:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-5tzgb5vs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:37Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=af7eb975-4b7b-464b-b407-754d3b8ced72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.325 2 DEBUG nova.network.os_vif_util [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.326 2 DEBUG nova.network.os_vif_util [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.327 2 DEBUG os_vif [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.330 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0eae122-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc0eae122-ee, col_values=(('external_ids', {'iface-id': 'c0eae122-ee9e-4263-afd3-b9dca2b6656f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:42:f8', 'vm-uuid': 'af7eb975-4b7b-464b-b407-754d3b8ced72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:43 np0005465987 NetworkManager[44910]: <info>  [1759408123.3404] manager: (tapc0eae122-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.348 2 INFO os_vif [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee')#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.990 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.991 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.992 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No VIF found with MAC fa:16:3e:37:42:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:28:43 np0005465987 nova_compute[230713]: 2025-10-02 12:28:43.992 2 INFO nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Using config drive#033[00m
Oct  2 08:28:44 np0005465987 nova_compute[230713]: 2025-10-02 12:28:44.019 2 DEBUG nova.storage.rbd_utils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:44 np0005465987 nova_compute[230713]: 2025-10-02 12:28:44.230 2 DEBUG nova.objects.instance [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid af7eb975-4b7b-464b-b407-754d3b8ced72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:44 np0005465987 nova_compute[230713]: 2025-10-02 12:28:44.401 2 DEBUG nova.objects.instance [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'keypairs' on Instance uuid af7eb975-4b7b-464b-b407-754d3b8ced72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:44.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:44 np0005465987 nova_compute[230713]: 2025-10-02 12:28:44.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:45.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:45 np0005465987 podman[270997]: 2025-10-02 12:28:45.857462347 +0000 UTC m=+0.060947299 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:28:45 np0005465987 podman[270998]: 2025-10-02 12:28:45.887632368 +0000 UTC m=+0.079970803 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:28:45 np0005465987 podman[270996]: 2025-10-02 12:28:45.897445378 +0000 UTC m=+0.106630077 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:28:46 np0005465987 nova_compute[230713]: 2025-10-02 12:28:46.419 2 INFO nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Creating config drive at /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/disk.config#033[00m
Oct  2 08:28:46 np0005465987 nova_compute[230713]: 2025-10-02 12:28:46.426 2 DEBUG oslo_concurrency.processutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq92kbg6u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:46.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:46 np0005465987 nova_compute[230713]: 2025-10-02 12:28:46.568 2 DEBUG oslo_concurrency.processutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq92kbg6u" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:46 np0005465987 nova_compute[230713]: 2025-10-02 12:28:46.598 2 DEBUG nova.storage.rbd_utils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image af7eb975-4b7b-464b-b407-754d3b8ced72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:46 np0005465987 nova_compute[230713]: 2025-10-02 12:28:46.601 2 DEBUG oslo_concurrency.processutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/disk.config af7eb975-4b7b-464b-b407-754d3b8ced72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:47.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:48 np0005465987 nova_compute[230713]: 2025-10-02 12:28:48.089 2 DEBUG oslo_concurrency.processutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/disk.config af7eb975-4b7b-464b-b407-754d3b8ced72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:48 np0005465987 nova_compute[230713]: 2025-10-02 12:28:48.090 2 INFO nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Deleting local config drive /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72/disk.config because it was imported into RBD.#033[00m
Oct  2 08:28:48 np0005465987 kernel: tapc0eae122-ee: entered promiscuous mode
Oct  2 08:28:48 np0005465987 NetworkManager[44910]: <info>  [1759408128.1454] manager: (tapc0eae122-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Oct  2 08:28:48 np0005465987 nova_compute[230713]: 2025-10-02 12:28:48.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:48Z|00386|binding|INFO|Claiming lport c0eae122-ee9e-4263-afd3-b9dca2b6656f for this chassis.
Oct  2 08:28:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:48Z|00387|binding|INFO|c0eae122-ee9e-4263-afd3-b9dca2b6656f: Claiming fa:16:3e:37:42:f8 10.100.0.14
Oct  2 08:28:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:48Z|00388|binding|INFO|Setting lport c0eae122-ee9e-4263-afd3-b9dca2b6656f ovn-installed in OVS
Oct  2 08:28:48 np0005465987 nova_compute[230713]: 2025-10-02 12:28:48.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:48 np0005465987 nova_compute[230713]: 2025-10-02 12:28:48.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:48 np0005465987 systemd-udevd[271108]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:28:48 np0005465987 systemd-machined[188335]: New machine qemu-48-instance-0000006d.
Oct  2 08:28:48 np0005465987 NetworkManager[44910]: <info>  [1759408128.1923] device (tapc0eae122-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:28:48 np0005465987 systemd[1]: Started Virtual Machine qemu-48-instance-0000006d.
Oct  2 08:28:48 np0005465987 NetworkManager[44910]: <info>  [1759408128.1931] device (tapc0eae122-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:28:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:48Z|00389|binding|INFO|Setting lport c0eae122-ee9e-4263-afd3-b9dca2b6656f up in Southbound
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.225 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:42:f8 10.100.0.14'], port_security=['fa:16:3e:37:42:f8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'af7eb975-4b7b-464b-b407-754d3b8ced72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c0eae122-ee9e-4263-afd3-b9dca2b6656f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.226 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c0eae122-ee9e-4263-afd3-b9dca2b6656f in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 bound to our chassis#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.228 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 247d774d-0cc8-4ef2-a9b8-c756adae0874#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.239 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1a284a-b435-4470-a799-be2e293535d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.240 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap247d774d-01 in ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.243 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap247d774d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.243 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc60b2e-0338-4b38-8453-2faa256a4d7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.243 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3549cd8d-1899-4e2a-b7d2-edf9830c3146]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.256 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[7e320d22-f8a5-4ddf-8d48-f5c6c51bca5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.283 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[648da25b-c3c3-4c58-9f0a-75984106cd16]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.308 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[7a42fa36-1e00-4cb8-bb0e-8c6340757ac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.314 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b26425cc-eda9-4253-be01-e82f6be7b13a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 NetworkManager[44910]: <info>  [1759408128.3161] manager: (tap247d774d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/195)
Oct  2 08:28:48 np0005465987 nova_compute[230713]: 2025-10-02 12:28:48.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.346 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[52778c40-31c0-40da-aa60-95cb136ebc72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.348 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[71d8325f-a2d4-460a-a8e1-b2e9794d3a28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 NetworkManager[44910]: <info>  [1759408128.3729] device (tap247d774d-00): carrier: link connected
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.380 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[48a0d7c3-aa68-4f3c-92dd-11cba61a74fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.398 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[391b0a4c-3672-4726-ae34-a7cf17584d26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619694, 'reachable_time': 26758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271144, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.414 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d46e7195-37ae-414a-ada7-73c9c576fa57]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:ab18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619694, 'tstamp': 619694}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271145, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.428 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4b40a4-f802-46d0-95b4-e14e0328b060]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619694, 'reachable_time': 26758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271146, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:48.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.455 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b7bf2fa8-0b18-44c0-b4a6-b1570c5ee149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.508 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[37295a46-7272-494c-b504-cedcc0fcff97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.509 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.510 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.510 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap247d774d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:48 np0005465987 nova_compute[230713]: 2025-10-02 12:28:48.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:48 np0005465987 NetworkManager[44910]: <info>  [1759408128.5127] manager: (tap247d774d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Oct  2 08:28:48 np0005465987 kernel: tap247d774d-00: entered promiscuous mode
Oct  2 08:28:48 np0005465987 nova_compute[230713]: 2025-10-02 12:28:48.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.515 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap247d774d-00, col_values=(('external_ids', {'iface-id': '04584168-a51c-41f9-9206-d39db8a81566'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:28:48Z|00390|binding|INFO|Releasing lport 04584168-a51c-41f9-9206-d39db8a81566 from this chassis (sb_readonly=0)
Oct  2 08:28:48 np0005465987 nova_compute[230713]: 2025-10-02 12:28:48.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.548 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.549 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[53a41200-be01-4ca5-8201-ff6e2eab1517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.550 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:28:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:28:48.551 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'env', 'PROCESS_TAG=haproxy-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/247d774d-0cc8-4ef2-a9b8-c756adae0874.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:28:48 np0005465987 podman[271196]: 2025-10-02 12:28:48.946162093 +0000 UTC m=+0.095314855 container create b79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:28:48 np0005465987 podman[271196]: 2025-10-02 12:28:48.886719307 +0000 UTC m=+0.035872049 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:28:48 np0005465987 systemd[1]: Started libpod-conmon-b79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2.scope.
Oct  2 08:28:49 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:28:49 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/845e228c3cef215757702f37c9113a60f6a416e74255e5f9a1fa44314c2acd1d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:28:49 np0005465987 podman[271196]: 2025-10-02 12:28:49.057256021 +0000 UTC m=+0.206408813 container init b79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 08:28:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:49.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:49 np0005465987 podman[271196]: 2025-10-02 12:28:49.064358987 +0000 UTC m=+0.213511739 container start b79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 08:28:49 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[271230]: [NOTICE]   (271234) : New worker (271236) forked
Oct  2 08:28:49 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[271230]: [NOTICE]   (271234) : Loading success.
Oct  2 08:28:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:49 np0005465987 nova_compute[230713]: 2025-10-02 12:28:49.582 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408114.5802617, af7eb975-4b7b-464b-b407-754d3b8ced72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:49 np0005465987 nova_compute[230713]: 2025-10-02 12:28:49.584 2 INFO nova.compute.manager [-] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:28:49 np0005465987 nova_compute[230713]: 2025-10-02 12:28:49.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:49 np0005465987 nova_compute[230713]: 2025-10-02 12:28:49.876 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for af7eb975-4b7b-464b-b407-754d3b8ced72 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:28:49 np0005465987 nova_compute[230713]: 2025-10-02 12:28:49.876 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408129.8755999, af7eb975-4b7b-464b-b407-754d3b8ced72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:49 np0005465987 nova_compute[230713]: 2025-10-02 12:28:49.877 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] VM Started (Lifecycle Event)#033[00m
Oct  2 08:28:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:50.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:51.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.235 2 DEBUG nova.compute.manager [req-f91dd0cb-b8ee-4742-878a-fcac02835484 req-3db37f59-6dac-4e37-be23-14a89af787bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.236 2 DEBUG oslo_concurrency.lockutils [req-f91dd0cb-b8ee-4742-878a-fcac02835484 req-3db37f59-6dac-4e37-be23-14a89af787bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.236 2 DEBUG oslo_concurrency.lockutils [req-f91dd0cb-b8ee-4742-878a-fcac02835484 req-3db37f59-6dac-4e37-be23-14a89af787bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.236 2 DEBUG oslo_concurrency.lockutils [req-f91dd0cb-b8ee-4742-878a-fcac02835484 req-3db37f59-6dac-4e37-be23-14a89af787bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.236 2 DEBUG nova.compute.manager [req-f91dd0cb-b8ee-4742-878a-fcac02835484 req-3db37f59-6dac-4e37-be23-14a89af787bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Processing event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.237 2 DEBUG nova.compute.manager [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.241 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.244 2 INFO nova.virt.libvirt.driver [-] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Instance spawned successfully.#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.245 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.263 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.264 2 DEBUG nova.compute.manager [None req-054537f3-557c-425b-95fe-89937047310e - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.268 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:51 np0005465987 nova_compute[230713]: 2025-10-02 12:28:51.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:28:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:52.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:53.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.387 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.388 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408129.8788898, af7eb975-4b7b-464b-b407-754d3b8ced72 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.389 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.395 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.396 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.396 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.397 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.398 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.398 2 DEBUG nova.virt.libvirt.driver [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.566 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.569 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408131.2404356, af7eb975-4b7b-464b-b407-754d3b8ced72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.569 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.674 2 DEBUG nova.compute.manager [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.693 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.697 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.936 2 DEBUG oslo_concurrency.lockutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.937 2 DEBUG oslo_concurrency.lockutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:53 np0005465987 nova_compute[230713]: 2025-10-02 12:28:53.937 2 DEBUG nova.objects.instance [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:28:54 np0005465987 nova_compute[230713]: 2025-10-02 12:28:54.331 2 DEBUG oslo_concurrency.lockutils [None req-330a77c4-34ff-4c50-ad97-d164181b8c56 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:54.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:54 np0005465987 nova_compute[230713]: 2025-10-02 12:28:54.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:28:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:55.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:28:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:28:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/987223863' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:28:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:28:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/987223863' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:28:55 np0005465987 nova_compute[230713]: 2025-10-02 12:28:55.391 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:55 np0005465987 nova_compute[230713]: 2025-10-02 12:28:55.392 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:28:55 np0005465987 nova_compute[230713]: 2025-10-02 12:28:55.392 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:28:55 np0005465987 nova_compute[230713]: 2025-10-02 12:28:55.654 2 DEBUG nova.compute.manager [req-d93e655d-88ef-4444-8e79-61d9399a8a2a req-8f3226cc-84f5-453b-a00c-d6c73537f05a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:55 np0005465987 nova_compute[230713]: 2025-10-02 12:28:55.655 2 DEBUG oslo_concurrency.lockutils [req-d93e655d-88ef-4444-8e79-61d9399a8a2a req-8f3226cc-84f5-453b-a00c-d6c73537f05a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:55 np0005465987 nova_compute[230713]: 2025-10-02 12:28:55.655 2 DEBUG oslo_concurrency.lockutils [req-d93e655d-88ef-4444-8e79-61d9399a8a2a req-8f3226cc-84f5-453b-a00c-d6c73537f05a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:55 np0005465987 nova_compute[230713]: 2025-10-02 12:28:55.656 2 DEBUG oslo_concurrency.lockutils [req-d93e655d-88ef-4444-8e79-61d9399a8a2a req-8f3226cc-84f5-453b-a00c-d6c73537f05a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:55 np0005465987 nova_compute[230713]: 2025-10-02 12:28:55.656 2 DEBUG nova.compute.manager [req-d93e655d-88ef-4444-8e79-61d9399a8a2a req-8f3226cc-84f5-453b-a00c-d6c73537f05a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] No waiting events found dispatching network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:28:55 np0005465987 nova_compute[230713]: 2025-10-02 12:28:55.656 2 WARNING nova.compute.manager [req-d93e655d-88ef-4444-8e79-61d9399a8a2a req-8f3226cc-84f5-453b-a00c-d6c73537f05a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received unexpected event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f for instance with vm_state active and task_state None.#033[00m
Oct  2 08:28:55 np0005465987 nova_compute[230713]: 2025-10-02 12:28:55.894 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:55 np0005465987 nova_compute[230713]: 2025-10-02 12:28:55.895 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:55 np0005465987 nova_compute[230713]: 2025-10-02 12:28:55.895 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:28:55 np0005465987 nova_compute[230713]: 2025-10-02 12:28:55.896 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:56.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:56 np0005465987 nova_compute[230713]: 2025-10-02 12:28:56.908 2 DEBUG nova.compute.manager [req-4e8662bf-08af-419d-8bf1-aadc8dadfd8c req-30b5775d-ef25-4b5f-a179-0153c29fd0fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Received event network-changed-204ddeb1-bc16-445e-a774-09893c526348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:56 np0005465987 nova_compute[230713]: 2025-10-02 12:28:56.910 2 DEBUG nova.compute.manager [req-4e8662bf-08af-419d-8bf1-aadc8dadfd8c req-30b5775d-ef25-4b5f-a179-0153c29fd0fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Refreshing instance network info cache due to event network-changed-204ddeb1-bc16-445e-a774-09893c526348. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:28:56 np0005465987 nova_compute[230713]: 2025-10-02 12:28:56.910 2 DEBUG oslo_concurrency.lockutils [req-4e8662bf-08af-419d-8bf1-aadc8dadfd8c req-30b5775d-ef25-4b5f-a179-0153c29fd0fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:56 np0005465987 nova_compute[230713]: 2025-10-02 12:28:56.911 2 DEBUG oslo_concurrency.lockutils [req-4e8662bf-08af-419d-8bf1-aadc8dadfd8c req-30b5775d-ef25-4b5f-a179-0153c29fd0fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:56 np0005465987 nova_compute[230713]: 2025-10-02 12:28:56.911 2 DEBUG nova.network.neutron [req-4e8662bf-08af-419d-8bf1-aadc8dadfd8c req-30b5775d-ef25-4b5f-a179-0153c29fd0fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Refreshing network info cache for port 204ddeb1-bc16-445e-a774-09893c526348 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:28:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:57.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:58 np0005465987 nova_compute[230713]: 2025-10-02 12:28:58.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:58 np0005465987 nova_compute[230713]: 2025-10-02 12:28:58.442 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Updating instance_info_cache with network_info: [{"id": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "address": "fa:16:3e:f2:67:0c", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2131d2f6-11", "ovs_interfaceid": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:28:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:59.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:59 np0005465987 nova_compute[230713]: 2025-10-02 12:28:59.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:00.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:01.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:01 np0005465987 nova_compute[230713]: 2025-10-02 12:29:01.134 2 DEBUG nova.network.neutron [req-4e8662bf-08af-419d-8bf1-aadc8dadfd8c req-30b5775d-ef25-4b5f-a179-0153c29fd0fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Updated VIF entry in instance network info cache for port 204ddeb1-bc16-445e-a774-09893c526348. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:29:01 np0005465987 nova_compute[230713]: 2025-10-02 12:29:01.134 2 DEBUG nova.network.neutron [req-4e8662bf-08af-419d-8bf1-aadc8dadfd8c req-30b5775d-ef25-4b5f-a179-0153c29fd0fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Updating instance_info_cache with network_info: [{"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.139 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.139 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.140 2 DEBUG oslo_concurrency.lockutils [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "af7eb975-4b7b-464b-b407-754d3b8ced72" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.140 2 DEBUG oslo_concurrency.lockutils [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.140 2 DEBUG oslo_concurrency.lockutils [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.140 2 DEBUG oslo_concurrency.lockutils [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.141 2 DEBUG oslo_concurrency.lockutils [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.142 2 INFO nova.compute.manager [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Terminating instance#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.142 2 DEBUG nova.compute.manager [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.143 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.143 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.143 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.168 2 DEBUG oslo_concurrency.lockutils [req-4e8662bf-08af-419d-8bf1-aadc8dadfd8c req-30b5775d-ef25-4b5f-a179-0153c29fd0fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.209 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:29:02 np0005465987 kernel: tapc0eae122-ee (unregistering): left promiscuous mode
Oct  2 08:29:02 np0005465987 NetworkManager[44910]: <info>  [1759408142.2491] device (tapc0eae122-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:29:02 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:02Z|00391|binding|INFO|Releasing lport c0eae122-ee9e-4263-afd3-b9dca2b6656f from this chassis (sb_readonly=0)
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:02Z|00392|binding|INFO|Setting lport c0eae122-ee9e-4263-afd3-b9dca2b6656f down in Southbound
Oct  2 08:29:02 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:02Z|00393|binding|INFO|Removing iface tapc0eae122-ee ovn-installed in OVS
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.278 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:42:f8 10.100.0.14'], port_security=['fa:16:3e:37:42:f8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'af7eb975-4b7b-464b-b407-754d3b8ced72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c0eae122-ee9e-4263-afd3-b9dca2b6656f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.279 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c0eae122-ee9e-4263-afd3-b9dca2b6656f in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 unbound from our chassis#033[00m
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.281 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 247d774d-0cc8-4ef2-a9b8-c756adae0874, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.282 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d55320-37a5-4bb0-a819-b177ec7b3785]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.283 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace which is not needed anymore#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005465987 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct  2 08:29:02 np0005465987 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006d.scope: Consumed 11.954s CPU time.
Oct  2 08:29:02 np0005465987 systemd-machined[188335]: Machine qemu-48-instance-0000006d terminated.
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.379 2 INFO nova.virt.libvirt.driver [-] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Instance destroyed successfully.#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.379 2 DEBUG nova.objects.instance [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'resources' on Instance uuid af7eb975-4b7b-464b-b407-754d3b8ced72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.426 2 DEBUG nova.virt.libvirt.vif [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-375372636',display_name='tempest-ServerDiskConfigTestJSON-server-375372636',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-375372636',id=109,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:28:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-5tzgb5vs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:28:54Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=af7eb975-4b7b-464b-b407-754d3b8ced72,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.426 2 DEBUG nova.network.os_vif_util [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "address": "fa:16:3e:37:42:f8", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0eae122-ee", "ovs_interfaceid": "c0eae122-ee9e-4263-afd3-b9dca2b6656f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.427 2 DEBUG nova.network.os_vif_util [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.427 2 DEBUG os_vif [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0eae122-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.435 2 INFO os_vif [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:42:f8,bridge_name='br-int',has_traffic_filtering=True,id=c0eae122-ee9e-4263-afd3-b9dca2b6656f,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0eae122-ee')#033[00m
Oct  2 08:29:02 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[271230]: [NOTICE]   (271234) : haproxy version is 2.8.14-c23fe91
Oct  2 08:29:02 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[271230]: [NOTICE]   (271234) : path to executable is /usr/sbin/haproxy
Oct  2 08:29:02 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[271230]: [WARNING]  (271234) : Exiting Master process...
Oct  2 08:29:02 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[271230]: [WARNING]  (271234) : Exiting Master process...
Oct  2 08:29:02 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[271230]: [ALERT]    (271234) : Current worker (271236) exited with code 143 (Terminated)
Oct  2 08:29:02 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[271230]: [WARNING]  (271234) : All workers exited. Exiting... (0)
Oct  2 08:29:02 np0005465987 systemd[1]: libpod-b79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2.scope: Deactivated successfully.
Oct  2 08:29:02 np0005465987 podman[271275]: 2025-10-02 12:29:02.45751424 +0000 UTC m=+0.066290797 container died b79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:29:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:02.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:02 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2-userdata-shm.mount: Deactivated successfully.
Oct  2 08:29:02 np0005465987 systemd[1]: var-lib-containers-storage-overlay-845e228c3cef215757702f37c9113a60f6a416e74255e5f9a1fa44314c2acd1d-merged.mount: Deactivated successfully.
Oct  2 08:29:02 np0005465987 podman[271275]: 2025-10-02 12:29:02.493184482 +0000 UTC m=+0.101961029 container cleanup b79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:29:02 np0005465987 systemd[1]: libpod-conmon-b79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2.scope: Deactivated successfully.
Oct  2 08:29:02 np0005465987 podman[271330]: 2025-10-02 12:29:02.556024412 +0000 UTC m=+0.040780814 container remove b79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.561 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c990b806-fb38-42d0-9dcb-fa8edca6afc4]: (4, ('Thu Oct  2 12:29:02 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (b79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2)\nb79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2\nThu Oct  2 12:29:02 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (b79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2)\nb79b2637a81c63d68b795d3430a34f5187f606eb276115fb99c1dad0b38ff5f2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.563 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[29cd5272-1023-4128-ab32-b8e63bfac383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.565 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005465987 kernel: tap247d774d-00: left promiscuous mode
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.585 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[56786cf9-9259-42b8-badd-cf1875ec4cea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.619 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[50fb9b08-9330-4aee-bae5-e447133a92bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.620 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[48edd139-c583-48ac-a2d3-af704f607f38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.636 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[727b5138-f818-47cd-900a-2b9cd553dc39]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619687, 'reachable_time': 39681, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271345, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.638 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:29:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:02.638 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2038c2-d85a-474c-94c5-2f8653cfa423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005465987 systemd[1]: run-netns-ovnmeta\x2d247d774d\x2d0cc8\x2d4ef2\x2da9b8\x2dc756adae0874.mount: Deactivated successfully.
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.745 2 DEBUG nova.compute.manager [req-503e06c1-b04f-46d4-b69d-8bb3650f5a06 req-aaeb58d8-2a98-4c2d-a9ec-3009b5cb0f30 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received event network-vif-unplugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.746 2 DEBUG oslo_concurrency.lockutils [req-503e06c1-b04f-46d4-b69d-8bb3650f5a06 req-aaeb58d8-2a98-4c2d-a9ec-3009b5cb0f30 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.746 2 DEBUG oslo_concurrency.lockutils [req-503e06c1-b04f-46d4-b69d-8bb3650f5a06 req-aaeb58d8-2a98-4c2d-a9ec-3009b5cb0f30 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.746 2 DEBUG oslo_concurrency.lockutils [req-503e06c1-b04f-46d4-b69d-8bb3650f5a06 req-aaeb58d8-2a98-4c2d-a9ec-3009b5cb0f30 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.746 2 DEBUG nova.compute.manager [req-503e06c1-b04f-46d4-b69d-8bb3650f5a06 req-aaeb58d8-2a98-4c2d-a9ec-3009b5cb0f30 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] No waiting events found dispatching network-vif-unplugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:02 np0005465987 nova_compute[230713]: 2025-10-02 12:29:02.746 2 DEBUG nova.compute.manager [req-503e06c1-b04f-46d4-b69d-8bb3650f5a06 req-aaeb58d8-2a98-4c2d-a9ec-3009b5cb0f30 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received event network-vif-unplugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.031 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.031 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.032 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.032 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:03.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.207 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.207 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.207 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.208 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.208 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:29:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/338760089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.698 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.836 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.836 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.839 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.839 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.842 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.842 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.994 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.995 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4071MB free_disk=20.892078399658203GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:03 np0005465987 nova_compute[230713]: 2025-10-02 12:29:03.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:04 np0005465987 nova_compute[230713]: 2025-10-02 12:29:04.455 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:29:04 np0005465987 nova_compute[230713]: 2025-10-02 12:29:04.456 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance af7eb975-4b7b-464b-b407-754d3b8ced72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:29:04 np0005465987 nova_compute[230713]: 2025-10-02 12:29:04.457 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance e744f3b9-ccc1-43b1-b134-e6050b9487eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:29:04 np0005465987 nova_compute[230713]: 2025-10-02 12:29:04.457 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:29:04 np0005465987 nova_compute[230713]: 2025-10-02 12:29:04.458 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:29:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:04.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:04 np0005465987 nova_compute[230713]: 2025-10-02 12:29:04.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:04 np0005465987 nova_compute[230713]: 2025-10-02 12:29:04.677 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:04 np0005465987 nova_compute[230713]: 2025-10-02 12:29:04.915 2 DEBUG oslo_concurrency.lockutils [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:04 np0005465987 nova_compute[230713]: 2025-10-02 12:29:04.916 2 DEBUG oslo_concurrency.lockutils [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:05.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:29:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3903986672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.111 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.116 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.168 2 INFO nova.virt.libvirt.driver [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Deleting instance files /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72_del#033[00m
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.169 2 INFO nova.virt.libvirt.driver [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Deletion of /var/lib/nova/instances/af7eb975-4b7b-464b-b407-754d3b8ced72_del complete#033[00m
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.203 2 DEBUG nova.objects.instance [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'flavor' on Instance uuid e744f3b9-ccc1-43b1-b134-e6050b9487eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.400 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.652 2 DEBUG nova.compute.manager [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.653 2 DEBUG oslo_concurrency.lockutils [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.654 2 DEBUG oslo_concurrency.lockutils [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.654 2 DEBUG oslo_concurrency.lockutils [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.655 2 DEBUG nova.compute.manager [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] No waiting events found dispatching network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.655 2 WARNING nova.compute.manager [req-8aaa4362-b731-40a3-8ee6-c01875cfe8f7 req-ffb623ae-7354-47eb-8e4e-9d010d02a8af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received unexpected event network-vif-plugged-c0eae122-ee9e-4263-afd3-b9dca2b6656f for instance with vm_state active and task_state deleting.
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.676 2 INFO nova.compute.manager [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Took 3.53 seconds to destroy the instance on the hypervisor.
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.677 2 DEBUG oslo.service.loopingcall [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.678 2 DEBUG nova.compute.manager [-] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.678 2 DEBUG nova.network.neutron [-] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.793 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.794 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:29:05 np0005465987 nova_compute[230713]: 2025-10-02 12:29:05.937 2 DEBUG oslo_concurrency.lockutils [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:29:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:29:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:06.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:29:06 np0005465987 nova_compute[230713]: 2025-10-02 12:29:06.727 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:29:06 np0005465987 nova_compute[230713]: 2025-10-02 12:29:06.728 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:29:06 np0005465987 nova_compute[230713]: 2025-10-02 12:29:06.728 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:29:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:07.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:07 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:29:07 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:29:07 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:29:07 np0005465987 nova_compute[230713]: 2025-10-02 12:29:07.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:29:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:08.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:29:08 np0005465987 podman[271524]: 2025-10-02 12:29:08.845372695 +0000 UTC m=+0.064437855 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:29:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:09.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:09 np0005465987 nova_compute[230713]: 2025-10-02 12:29:09.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:10.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:11.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.094 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.095 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.284 2 DEBUG oslo_concurrency.lockutils [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.284 2 DEBUG oslo_concurrency.lockutils [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.285 2 INFO nova.compute.manager [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Attaching volume 2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3 to /dev/vdb
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.374 2 DEBUG nova.network.neutron [-] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.415 2 DEBUG nova.compute.manager [req-aa5d02ae-c21b-4731-b540-0f389dd08309 req-405e8531-5cd9-4bb9-b7eb-138f414128bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Received event network-vif-deleted-c0eae122-ee9e-4263-afd3-b9dca2b6656f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.415 2 INFO nova.compute.manager [req-aa5d02ae-c21b-4731-b540-0f389dd08309 req-405e8531-5cd9-4bb9-b7eb-138f414128bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Neutron deleted interface c0eae122-ee9e-4263-afd3-b9dca2b6656f; detaching it from the instance and deleting it from the info cache
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.416 2 DEBUG nova.network.neutron [req-aa5d02ae-c21b-4731-b540-0f389dd08309 req-405e8531-5cd9-4bb9-b7eb-138f414128bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.559 2 DEBUG os_brick.utils [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.560 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.592 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.592 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[affa84ac-a2eb-4c8b-8088-59e142a613c2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.594 2 DEBUG nova.compute.manager [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.594 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.602 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.603 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[c82e569f-330e-419a-b50b-49a19901d9b5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.605 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.613 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.614 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[d991150a-33b1-4556-ab3f-cc894c2e81f3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.616 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[243b5b49-6cda-4bb8-bc10-c7c4abd8917e]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.617 2 DEBUG oslo_concurrency.processutils [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.652 2 DEBUG oslo_concurrency.processutils [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.655 2 DEBUG os_brick.initiator.connectors.lightos [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.655 2 DEBUG os_brick.initiator.connectors.lightos [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.656 2 DEBUG os_brick.initiator.connectors.lightos [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.656 2 DEBUG os_brick.utils [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] <== get_connector_properties: return (96ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct  2 08:29:11 np0005465987 nova_compute[230713]: 2025-10-02 12:29:11.656 2 DEBUG nova.virt.block_device [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Updating existing volume attachment record: c3babeec-00a9-4c5c-89b3-1d5c0a4bcc96 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct  2 08:29:12 np0005465987 nova_compute[230713]: 2025-10-02 12:29:12.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:12 np0005465987 nova_compute[230713]: 2025-10-02 12:29:12.465 2 INFO nova.compute.manager [-] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Took 6.79 seconds to deallocate network for instance.
Oct  2 08:29:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:12.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:12 np0005465987 nova_compute[230713]: 2025-10-02 12:29:12.693 2 DEBUG nova.compute.manager [req-aa5d02ae-c21b-4731-b540-0f389dd08309 req-405e8531-5cd9-4bb9-b7eb-138f414128bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Detach interface failed, port_id=c0eae122-ee9e-4263-afd3-b9dca2b6656f, reason: Instance af7eb975-4b7b-464b-b407-754d3b8ced72 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct  2 08:29:12 np0005465987 nova_compute[230713]: 2025-10-02 12:29:12.715 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:29:12 np0005465987 nova_compute[230713]: 2025-10-02 12:29:12.716 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:29:12 np0005465987 nova_compute[230713]: 2025-10-02 12:29:12.728 2 DEBUG nova.virt.hardware [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:29:12 np0005465987 nova_compute[230713]: 2025-10-02 12:29:12.729 2 INFO nova.compute.claims [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:29:12 np0005465987 nova_compute[230713]: 2025-10-02 12:29:12.734 2 DEBUG oslo_concurrency.lockutils [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:29:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:13.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:29:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/884128648' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:29:13 np0005465987 nova_compute[230713]: 2025-10-02 12:29:13.401 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:29:13 np0005465987 nova_compute[230713]: 2025-10-02 12:29:13.712 2 DEBUG nova.objects.instance [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'flavor' on Instance uuid e744f3b9-ccc1-43b1-b134-e6050b9487eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:29:13 np0005465987 nova_compute[230713]: 2025-10-02 12:29:13.751 2 DEBUG nova.virt.libvirt.driver [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Attempting to attach volume 2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct  2 08:29:13 np0005465987 nova_compute[230713]: 2025-10-02 12:29:13.755 2 DEBUG nova.virt.libvirt.guest [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:29:13 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:29:13 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3">
Oct  2 08:29:13 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:29:13 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:29:13 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:29:13 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:29:13 np0005465987 nova_compute[230713]:  <auth username="openstack">
Oct  2 08:29:13 np0005465987 nova_compute[230713]:    <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:29:13 np0005465987 nova_compute[230713]:  </auth>
Oct  2 08:29:13 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:29:13 np0005465987 nova_compute[230713]:  <serial>2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3</serial>
Oct  2 08:29:13 np0005465987 nova_compute[230713]:  <shareable/>
Oct  2 08:29:13 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:29:13 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct  2 08:29:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:29:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/953025965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:29:13 np0005465987 nova_compute[230713]: 2025-10-02 12:29:13.933 2 DEBUG nova.virt.libvirt.driver [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:29:13 np0005465987 nova_compute[230713]: 2025-10-02 12:29:13.933 2 DEBUG nova.virt.libvirt.driver [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:29:13 np0005465987 nova_compute[230713]: 2025-10-02 12:29:13.933 2 DEBUG nova.virt.libvirt.driver [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:29:13 np0005465987 nova_compute[230713]: 2025-10-02 12:29:13.934 2 DEBUG nova.virt.libvirt.driver [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] No VIF found with MAC fa:16:3e:9d:14:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 08:29:13 np0005465987 nova_compute[230713]: 2025-10-02 12:29:13.937 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:29:13 np0005465987 nova_compute[230713]: 2025-10-02 12:29:13.942 2 DEBUG nova.compute.provider_tree [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.023 2 DEBUG nova.scheduler.client.report [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.073 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.074 2 DEBUG nova.compute.manager [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.077 2 DEBUG oslo_concurrency.lockutils [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.215 2 DEBUG nova.compute.manager [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.216 2 DEBUG nova.network.neutron [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.260 2 INFO nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.285 2 DEBUG nova.compute.manager [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.319 2 DEBUG oslo_concurrency.processutils [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.449 2 DEBUG nova.compute.manager [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.450 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.451 2 INFO nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Creating image(s)#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.482 2 DEBUG nova.storage.rbd_utils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image fced40d2-4fc7-4939-9fbb-bdb61d750526_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:14.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.512 2 DEBUG nova.storage.rbd_utils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image fced40d2-4fc7-4939-9fbb-bdb61d750526_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.537 2 DEBUG nova.storage.rbd_utils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image fced40d2-4fc7-4939-9fbb-bdb61d750526_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.542 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.637 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.638 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.638 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.639 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.668 2 DEBUG nova.storage.rbd_utils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image fced40d2-4fc7-4939-9fbb-bdb61d750526_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.672 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 fced40d2-4fc7-4939-9fbb-bdb61d750526_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.704 2 DEBUG nova.policy [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a89b71e2513413e922ee6d5d06362b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27a1729bf10548219b90df46839849f5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.711 2 DEBUG oslo_concurrency.lockutils [None req-77f47506-ac0f-458f-a447-89a343abe26f 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:29:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2173427479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.761 2 DEBUG oslo_concurrency.processutils [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.770 2 DEBUG nova.compute.provider_tree [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.807 2 DEBUG nova.scheduler.client.report [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.881 2 DEBUG oslo_concurrency.lockutils [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.979 2 INFO nova.scheduler.client.report [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Deleted allocations for instance af7eb975-4b7b-464b-b407-754d3b8ced72#033[00m
Oct  2 08:29:14 np0005465987 nova_compute[230713]: 2025-10-02 12:29:14.992 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 fced40d2-4fc7-4939-9fbb-bdb61d750526_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:15 np0005465987 nova_compute[230713]: 2025-10-02 12:29:15.080 2 DEBUG nova.storage.rbd_utils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] resizing rbd image fced40d2-4fc7-4939-9fbb-bdb61d750526_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:29:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:29:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:15.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:29:15 np0005465987 nova_compute[230713]: 2025-10-02 12:29:15.188 2 DEBUG nova.objects.instance [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'migration_context' on Instance uuid fced40d2-4fc7-4939-9fbb-bdb61d750526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:15 np0005465987 nova_compute[230713]: 2025-10-02 12:29:15.193 2 DEBUG oslo_concurrency.lockutils [None req-2a2476ea-9c39-47e6-88f5-5dfba5ccca57 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "af7eb975-4b7b-464b-b407-754d3b8ced72" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:15 np0005465987 nova_compute[230713]: 2025-10-02 12:29:15.220 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:29:15 np0005465987 nova_compute[230713]: 2025-10-02 12:29:15.220 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Ensure instance console log exists: /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:29:15 np0005465987 nova_compute[230713]: 2025-10-02 12:29:15.221 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:15 np0005465987 nova_compute[230713]: 2025-10-02 12:29:15.221 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:15 np0005465987 nova_compute[230713]: 2025-10-02 12:29:15.221 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:15.421 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:15 np0005465987 nova_compute[230713]: 2025-10-02 12:29:15.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:15.423 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:29:15 np0005465987 nova_compute[230713]: 2025-10-02 12:29:15.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:16 np0005465987 podman[271807]: 2025-10-02 12:29:16.314123532 +0000 UTC m=+0.062565563 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:29:16 np0005465987 podman[271806]: 2025-10-02 12:29:16.337065814 +0000 UTC m=+0.079691925 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:29:16 np0005465987 podman[271805]: 2025-10-02 12:29:16.343669906 +0000 UTC m=+0.098669728 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:29:16 np0005465987 nova_compute[230713]: 2025-10-02 12:29:16.474 2 DEBUG nova.network.neutron [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Successfully created port: 7cb2acba-67d6-4041-97fe-10e2d80dcd21 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:29:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:16.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:29:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:29:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:29:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:17.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:29:17 np0005465987 nova_compute[230713]: 2025-10-02 12:29:17.378 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408142.3774717, af7eb975-4b7b-464b-b407-754d3b8ced72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:29:17 np0005465987 nova_compute[230713]: 2025-10-02 12:29:17.379 2 INFO nova.compute.manager [-] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:29:17 np0005465987 nova_compute[230713]: 2025-10-02 12:29:17.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:29:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:18.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:29:18 np0005465987 nova_compute[230713]: 2025-10-02 12:29:18.965 2 DEBUG nova.compute.manager [None req-7b472132-e6ce-4a36-99ab-3588d8038595 - - - - - -] [instance: af7eb975-4b7b-464b-b407-754d3b8ced72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:19.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:19.425 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:19 np0005465987 nova_compute[230713]: 2025-10-02 12:29:19.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:19 np0005465987 nova_compute[230713]: 2025-10-02 12:29:19.747 2 DEBUG nova.network.neutron [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Successfully updated port: 7cb2acba-67d6-4041-97fe-10e2d80dcd21 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:29:19 np0005465987 nova_compute[230713]: 2025-10-02 12:29:19.805 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:29:19 np0005465987 nova_compute[230713]: 2025-10-02 12:29:19.806 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquired lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:29:19 np0005465987 nova_compute[230713]: 2025-10-02 12:29:19.806 2 DEBUG nova.network.neutron [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:29:20 np0005465987 nova_compute[230713]: 2025-10-02 12:29:20.058 2 DEBUG nova.compute.manager [req-49e7858e-bbe6-49d1-b8e2-434216090d39 req-14c61a79-3e09-4553-81a1-bf752f243741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-changed-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:20 np0005465987 nova_compute[230713]: 2025-10-02 12:29:20.058 2 DEBUG nova.compute.manager [req-49e7858e-bbe6-49d1-b8e2-434216090d39 req-14c61a79-3e09-4553-81a1-bf752f243741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Refreshing instance network info cache due to event network-changed-7cb2acba-67d6-4041-97fe-10e2d80dcd21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:29:20 np0005465987 nova_compute[230713]: 2025-10-02 12:29:20.059 2 DEBUG oslo_concurrency.lockutils [req-49e7858e-bbe6-49d1-b8e2-434216090d39 req-14c61a79-3e09-4553-81a1-bf752f243741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:29:20 np0005465987 nova_compute[230713]: 2025-10-02 12:29:20.197 2 DEBUG nova.network.neutron [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:29:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:29:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:20.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:29:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:21.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.012 2 DEBUG oslo_concurrency.lockutils [None req-e9aff1f8-20d8-4944-90d7-71c026b7f808 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.012 2 DEBUG oslo_concurrency.lockutils [None req-e9aff1f8-20d8-4944-90d7-71c026b7f808 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.062 2 INFO nova.compute.manager [None req-e9aff1f8-20d8-4944-90d7-71c026b7f808 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Detaching volume 2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3#033[00m
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.343 2 INFO nova.virt.block_device [None req-e9aff1f8-20d8-4944-90d7-71c026b7f808 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Attempting to driver detach volume 2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3 from mountpoint /dev/vdb#033[00m
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.351 2 DEBUG nova.virt.libvirt.driver [None req-e9aff1f8-20d8-4944-90d7-71c026b7f808 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Attempting to detach device vdb from instance e744f3b9-ccc1-43b1-b134-e6050b9487eb from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.351 2 DEBUG nova.virt.libvirt.guest [None req-e9aff1f8-20d8-4944-90d7-71c026b7f808 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3">
Oct  2 08:29:22 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  <serial>2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3</serial>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  <shareable/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:29:22 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.361 2 INFO nova.virt.libvirt.driver [None req-e9aff1f8-20d8-4944-90d7-71c026b7f808 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully detached device vdb from instance e744f3b9-ccc1-43b1-b134-e6050b9487eb from the persistent domain config.#033[00m
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.362 2 DEBUG nova.virt.libvirt.driver [None req-e9aff1f8-20d8-4944-90d7-71c026b7f808 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e744f3b9-ccc1-43b1-b134-e6050b9487eb from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.362 2 DEBUG nova.virt.libvirt.guest [None req-e9aff1f8-20d8-4944-90d7-71c026b7f808 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3">
Oct  2 08:29:22 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  <serial>2a3104a8-1c9f-4fbb-9fab-8943dbcc7eb3</serial>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  <shareable/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:29:22 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:29:22 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.419 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Received event <DeviceRemovedEvent: 1759408162.4190943, e744f3b9-ccc1-43b1-b134-e6050b9487eb => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.421 2 DEBUG nova.virt.libvirt.driver [None req-e9aff1f8-20d8-4944-90d7-71c026b7f808 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e744f3b9-ccc1-43b1-b134-e6050b9487eb _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.423 2 INFO nova.virt.libvirt.driver [None req-e9aff1f8-20d8-4944-90d7-71c026b7f808 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully detached device vdb from instance e744f3b9-ccc1-43b1-b134-e6050b9487eb from the live domain config.#033[00m
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:22.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:22 np0005465987 nova_compute[230713]: 2025-10-02 12:29:22.967 2 DEBUG nova.network.neutron [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updating instance_info_cache with network_info: [{"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.034 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Releasing lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.035 2 DEBUG nova.compute.manager [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Instance network_info: |[{"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.035 2 DEBUG oslo_concurrency.lockutils [req-49e7858e-bbe6-49d1-b8e2-434216090d39 req-14c61a79-3e09-4553-81a1-bf752f243741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.035 2 DEBUG nova.network.neutron [req-49e7858e-bbe6-49d1-b8e2-434216090d39 req-14c61a79-3e09-4553-81a1-bf752f243741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Refreshing network info cache for port 7cb2acba-67d6-4041-97fe-10e2d80dcd21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.039 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Start _get_guest_xml network_info=[{"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.043 2 WARNING nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.048 2 DEBUG nova.virt.libvirt.host [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.049 2 DEBUG nova.virt.libvirt.host [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.056 2 DEBUG nova.virt.libvirt.host [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.057 2 DEBUG nova.virt.libvirt.host [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.058 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.058 2 DEBUG nova.virt.hardware [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.059 2 DEBUG nova.virt.hardware [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.059 2 DEBUG nova.virt.hardware [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.060 2 DEBUG nova.virt.hardware [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.060 2 DEBUG nova.virt.hardware [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.060 2 DEBUG nova.virt.hardware [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.060 2 DEBUG nova.virt.hardware [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.061 2 DEBUG nova.virt.hardware [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.061 2 DEBUG nova.virt.hardware [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.061 2 DEBUG nova.virt.hardware [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.061 2 DEBUG nova.virt.hardware [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.064 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:23.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.117 2 DEBUG nova.objects.instance [None req-e9aff1f8-20d8-4944-90d7-71c026b7f808 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'flavor' on Instance uuid e744f3b9-ccc1-43b1-b134-e6050b9487eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.169 2 DEBUG oslo_concurrency.lockutils [None req-e9aff1f8-20d8-4944-90d7-71c026b7f808 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:29:23 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3678077927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.594 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.622 2 DEBUG nova.storage.rbd_utils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image fced40d2-4fc7-4939-9fbb-bdb61d750526_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:23 np0005465987 nova_compute[230713]: 2025-10-02 12:29:23.626 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:29:24 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1670572574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.123 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.125 2 DEBUG nova.virt.libvirt.vif [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-475120527',display_name='tempest-ServerDiskConfigTestJSON-server-475120527',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-475120527',id=113,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-8s6rikzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:14Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=fced40d2-4fc7-4939-9fbb-bdb61d750526,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.125 2 DEBUG nova.network.os_vif_util [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.126 2 DEBUG nova.network.os_vif_util [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.127 2 DEBUG nova.objects.instance [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid fced40d2-4fc7-4939-9fbb-bdb61d750526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.151 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  <uuid>fced40d2-4fc7-4939-9fbb-bdb61d750526</uuid>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  <name>instance-00000071</name>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-475120527</nova:name>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:29:23</nova:creationTime>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <nova:user uuid="4a89b71e2513413e922ee6d5d06362b1">tempest-ServerDiskConfigTestJSON-1123059068-project-member</nova:user>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <nova:project uuid="27a1729bf10548219b90df46839849f5">tempest-ServerDiskConfigTestJSON-1123059068</nova:project>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <nova:port uuid="7cb2acba-67d6-4041-97fe-10e2d80dcd21">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <entry name="serial">fced40d2-4fc7-4939-9fbb-bdb61d750526</entry>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <entry name="uuid">fced40d2-4fc7-4939-9fbb-bdb61d750526</entry>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/fced40d2-4fc7-4939-9fbb-bdb61d750526_disk">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/fced40d2-4fc7-4939-9fbb-bdb61d750526_disk.config">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:53:9c:95"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <target dev="tap7cb2acba-67"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/console.log" append="off"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:29:24 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:29:24 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:29:24 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:29:24 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.152 2 DEBUG nova.compute.manager [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Preparing to wait for external event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.152 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.153 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.153 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.153 2 DEBUG nova.virt.libvirt.vif [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-475120527',display_name='tempest-ServerDiskConfigTestJSON-server-475120527',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-475120527',id=113,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-8s6rikzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:14Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=fced40d2-4fc7-4939-9fbb-bdb61d750526,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.154 2 DEBUG nova.network.os_vif_util [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.154 2 DEBUG nova.network.os_vif_util [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.154 2 DEBUG os_vif [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.155 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.156 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.158 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7cb2acba-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.159 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7cb2acba-67, col_values=(('external_ids', {'iface-id': '7cb2acba-67d6-4041-97fe-10e2d80dcd21', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:9c:95', 'vm-uuid': 'fced40d2-4fc7-4939-9fbb-bdb61d750526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:24 np0005465987 NetworkManager[44910]: <info>  [1759408164.1614] manager: (tap7cb2acba-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.167 2 INFO os_vif [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67')#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.232 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.232 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.232 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No VIF found with MAC fa:16:3e:53:9c:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.233 2 INFO nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Using config drive#033[00m
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.261 2 DEBUG nova.storage.rbd_utils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image fced40d2-4fc7-4939-9fbb-bdb61d750526_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:29:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:24.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:29:24 np0005465987 nova_compute[230713]: 2025-10-02 12:29:24.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:25.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:26.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:27 np0005465987 nova_compute[230713]: 2025-10-02 12:29:27.052 2 INFO nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Creating config drive at /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/disk.config#033[00m
Oct  2 08:29:27 np0005465987 nova_compute[230713]: 2025-10-02 12:29:27.056 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpegilx10a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:27.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:27 np0005465987 nova_compute[230713]: 2025-10-02 12:29:27.198 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpegilx10a" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:27 np0005465987 nova_compute[230713]: 2025-10-02 12:29:27.226 2 DEBUG nova.storage.rbd_utils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] rbd image fced40d2-4fc7-4939-9fbb-bdb61d750526_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:27 np0005465987 nova_compute[230713]: 2025-10-02 12:29:27.229 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/disk.config fced40d2-4fc7-4939-9fbb-bdb61d750526_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:27 np0005465987 nova_compute[230713]: 2025-10-02 12:29:27.628 2 DEBUG nova.network.neutron [req-49e7858e-bbe6-49d1-b8e2-434216090d39 req-14c61a79-3e09-4553-81a1-bf752f243741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updated VIF entry in instance network info cache for port 7cb2acba-67d6-4041-97fe-10e2d80dcd21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:29:27 np0005465987 nova_compute[230713]: 2025-10-02 12:29:27.629 2 DEBUG nova.network.neutron [req-49e7858e-bbe6-49d1-b8e2-434216090d39 req-14c61a79-3e09-4553-81a1-bf752f243741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updating instance_info_cache with network_info: [{"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:29:27 np0005465987 nova_compute[230713]: 2025-10-02 12:29:27.654 2 DEBUG oslo_concurrency.lockutils [req-49e7858e-bbe6-49d1-b8e2-434216090d39 req-14c61a79-3e09-4553-81a1-bf752f243741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:29:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:27.955 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:27.955 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:27.956 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:28 np0005465987 nova_compute[230713]: 2025-10-02 12:29:28.062 2 DEBUG oslo_concurrency.processutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/disk.config fced40d2-4fc7-4939-9fbb-bdb61d750526_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.833s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:28 np0005465987 nova_compute[230713]: 2025-10-02 12:29:28.063 2 INFO nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Deleting local config drive /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/disk.config because it was imported into RBD.#033[00m
Oct  2 08:29:28 np0005465987 kernel: tap7cb2acba-67: entered promiscuous mode
Oct  2 08:29:28 np0005465987 NetworkManager[44910]: <info>  [1759408168.1271] manager: (tap7cb2acba-67): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Oct  2 08:29:28 np0005465987 nova_compute[230713]: 2025-10-02 12:29:28.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:28Z|00394|binding|INFO|Claiming lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 for this chassis.
Oct  2 08:29:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:28Z|00395|binding|INFO|7cb2acba-67d6-4041-97fe-10e2d80dcd21: Claiming fa:16:3e:53:9c:95 10.100.0.10
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.184 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:9c:95 10.100.0.10'], port_security=['fa:16:3e:53:9c:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fced40d2-4fc7-4939-9fbb-bdb61d750526', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=7cb2acba-67d6-4041-97fe-10e2d80dcd21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:28Z|00396|binding|INFO|Setting lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 ovn-installed in OVS
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.186 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 7cb2acba-67d6-4041-97fe-10e2d80dcd21 in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 bound to our chassis#033[00m
Oct  2 08:29:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:28Z|00397|binding|INFO|Setting lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 up in Southbound
Oct  2 08:29:28 np0005465987 nova_compute[230713]: 2025-10-02 12:29:28.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.187 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 247d774d-0cc8-4ef2-a9b8-c756adae0874#033[00m
Oct  2 08:29:28 np0005465987 nova_compute[230713]: 2025-10-02 12:29:28.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:28 np0005465987 systemd-udevd[272028]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.198 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e32385bb-fc6a-4497-86da-33fe182ff179]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.200 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap247d774d-01 in ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.201 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap247d774d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.201 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[621d03fb-a064-4760-85ed-afcbd7ae0804]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.202 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[44074a76-8c25-4550-ba28-bc8220074898]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 NetworkManager[44910]: <info>  [1759408168.2113] device (tap7cb2acba-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:29:28 np0005465987 NetworkManager[44910]: <info>  [1759408168.2122] device (tap7cb2acba-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.211 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[2a33a03d-9159-4884-b2bf-1b79327d11c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 systemd-machined[188335]: New machine qemu-49-instance-00000071.
Oct  2 08:29:28 np0005465987 systemd[1]: Started Virtual Machine qemu-49-instance-00000071.
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.235 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0445ee8f-74fb-4107-8d16-9e5505f9e761]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.267 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[47d061d9-2c4f-4aaa-80eb-38291beae857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 NetworkManager[44910]: <info>  [1759408168.2784] manager: (tap247d774d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.277 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3eedf2-9810-42c5-b209-ed00104a1c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 systemd-udevd[272034]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.315 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a80ea1e9-4b56-4a27-bf4a-42781d6e9239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.318 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[70e0aa0f-fbda-4fc9-b391-3364cb8f7656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 NetworkManager[44910]: <info>  [1759408168.3456] device (tap247d774d-00): carrier: link connected
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.350 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d9efa8-8e19-4804-a2b1-7283329b6f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.365 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6d151280-e238-463d-8117-3285e91973a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623692, 'reachable_time': 23513, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272063, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.379 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[83f99c3c-5130-4c44-b87f-2aee4a6be169]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:ab18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 623692, 'tstamp': 623692}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272064, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.394 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dc0b1a3f-4ffd-4637-bcc7-c90dbf38f133]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623692, 'reachable_time': 23513, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272065, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.422 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[80b55999-9985-464f-8720-16f8183961b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.479 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa90fb9-6c98-491e-9b8f-736f980740a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.481 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.481 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.482 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap247d774d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:28 np0005465987 NetworkManager[44910]: <info>  [1759408168.4853] manager: (tap247d774d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Oct  2 08:29:28 np0005465987 kernel: tap247d774d-00: entered promiscuous mode
Oct  2 08:29:28 np0005465987 nova_compute[230713]: 2025-10-02 12:29:28.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.490 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap247d774d-00, col_values=(('external_ids', {'iface-id': '04584168-a51c-41f9-9206-d39db8a81566'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:28Z|00398|binding|INFO|Releasing lport 04584168-a51c-41f9-9206-d39db8a81566 from this chassis (sb_readonly=0)
Oct  2 08:29:28 np0005465987 nova_compute[230713]: 2025-10-02 12:29:28.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.494 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.495 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6e9173-d428-4087-8b42-3cdcf413c859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.496 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:28.496 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'env', 'PROCESS_TAG=haproxy-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/247d774d-0cc8-4ef2-a9b8-c756adae0874.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:29:28 np0005465987 nova_compute[230713]: 2025-10-02 12:29:28.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:28.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:28 np0005465987 podman[272139]: 2025-10-02 12:29:28.855388958 +0000 UTC m=+0.023732294 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:29:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:29:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:29.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:29:29 np0005465987 podman[272139]: 2025-10-02 12:29:29.132035975 +0000 UTC m=+0.300379291 container create 77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:29:29 np0005465987 nova_compute[230713]: 2025-10-02 12:29:29.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:29 np0005465987 systemd[1]: Started libpod-conmon-77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204.scope.
Oct  2 08:29:29 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:29:29 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ab7c1ae76e3798249e601dcf1ea96cfd0833c531af09cdd10d334b83ff2120/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:29:29 np0005465987 podman[272139]: 2025-10-02 12:29:29.27784789 +0000 UTC m=+0.446191246 container init 77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:29:29 np0005465987 podman[272139]: 2025-10-02 12:29:29.288339039 +0000 UTC m=+0.456682355 container start 77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:29:29 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[272155]: [NOTICE]   (272159) : New worker (272161) forked
Oct  2 08:29:29 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[272155]: [NOTICE]   (272159) : Loading success.
Oct  2 08:29:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:29 np0005465987 nova_compute[230713]: 2025-10-02 12:29:29.349 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408169.3486006, fced40d2-4fc7-4939-9fbb-bdb61d750526 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:29:29 np0005465987 nova_compute[230713]: 2025-10-02 12:29:29.350 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] VM Started (Lifecycle Event)#033[00m
Oct  2 08:29:29 np0005465987 nova_compute[230713]: 2025-10-02 12:29:29.583 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:29 np0005465987 nova_compute[230713]: 2025-10-02 12:29:29.588 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408169.3500233, fced40d2-4fc7-4939-9fbb-bdb61d750526 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:29:29 np0005465987 nova_compute[230713]: 2025-10-02 12:29:29.589 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:29:29 np0005465987 nova_compute[230713]: 2025-10-02 12:29:29.629 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:29 np0005465987 nova_compute[230713]: 2025-10-02 12:29:29.634 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:29:29 np0005465987 nova_compute[230713]: 2025-10-02 12:29:29.671 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:29:29 np0005465987 nova_compute[230713]: 2025-10-02 12:29:29.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.212 2 DEBUG nova.compute.manager [req-8195303d-5c96-442c-ae21-f05b3e1b4d2b req-3aa85d0e-ef17-4b56-886f-2f4ab883ac13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.212 2 DEBUG oslo_concurrency.lockutils [req-8195303d-5c96-442c-ae21-f05b3e1b4d2b req-3aa85d0e-ef17-4b56-886f-2f4ab883ac13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.213 2 DEBUG oslo_concurrency.lockutils [req-8195303d-5c96-442c-ae21-f05b3e1b4d2b req-3aa85d0e-ef17-4b56-886f-2f4ab883ac13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.213 2 DEBUG oslo_concurrency.lockutils [req-8195303d-5c96-442c-ae21-f05b3e1b4d2b req-3aa85d0e-ef17-4b56-886f-2f4ab883ac13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.213 2 DEBUG nova.compute.manager [req-8195303d-5c96-442c-ae21-f05b3e1b4d2b req-3aa85d0e-ef17-4b56-886f-2f4ab883ac13 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Processing event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.214 2 DEBUG nova.compute.manager [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.217 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408170.2173526, fced40d2-4fc7-4939-9fbb-bdb61d750526 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.218 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.219 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.223 2 INFO nova.virt.libvirt.driver [-] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Instance spawned successfully.#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.223 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.248 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.252 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.281 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.282 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.283 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.283 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.284 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.284 2 DEBUG nova.virt.libvirt.driver [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.294 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.353 2 INFO nova.compute.manager [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Took 15.90 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.354 2 DEBUG nova.compute.manager [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.423 2 INFO nova.compute.manager [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Took 17.74 seconds to build instance.#033[00m
Oct  2 08:29:30 np0005465987 nova_compute[230713]: 2025-10-02 12:29:30.458 2 DEBUG oslo_concurrency.lockutils [None req-096c8b3b-a27c-4d58-9249-9d87adff8f39 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:30.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:31.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:32 np0005465987 nova_compute[230713]: 2025-10-02 12:29:32.410 2 DEBUG nova.compute.manager [req-dc0ccb37-1f78-459a-a13f-2dbffbbba3af req-452bbb28-1bbc-462a-b3d9-e96e3c26f0e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:32 np0005465987 nova_compute[230713]: 2025-10-02 12:29:32.411 2 DEBUG oslo_concurrency.lockutils [req-dc0ccb37-1f78-459a-a13f-2dbffbbba3af req-452bbb28-1bbc-462a-b3d9-e96e3c26f0e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:32 np0005465987 nova_compute[230713]: 2025-10-02 12:29:32.412 2 DEBUG oslo_concurrency.lockutils [req-dc0ccb37-1f78-459a-a13f-2dbffbbba3af req-452bbb28-1bbc-462a-b3d9-e96e3c26f0e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:32 np0005465987 nova_compute[230713]: 2025-10-02 12:29:32.412 2 DEBUG oslo_concurrency.lockutils [req-dc0ccb37-1f78-459a-a13f-2dbffbbba3af req-452bbb28-1bbc-462a-b3d9-e96e3c26f0e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:32 np0005465987 nova_compute[230713]: 2025-10-02 12:29:32.412 2 DEBUG nova.compute.manager [req-dc0ccb37-1f78-459a-a13f-2dbffbbba3af req-452bbb28-1bbc-462a-b3d9-e96e3c26f0e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:32 np0005465987 nova_compute[230713]: 2025-10-02 12:29:32.413 2 WARNING nova.compute.manager [req-dc0ccb37-1f78-459a-a13f-2dbffbbba3af req-452bbb28-1bbc-462a-b3d9-e96e3c26f0e5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:29:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:32.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:33.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:34 np0005465987 nova_compute[230713]: 2025-10-02 12:29:34.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:34.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:34 np0005465987 nova_compute[230713]: 2025-10-02 12:29:34.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:29:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:35.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:29:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:36.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:29:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:37.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:29:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:38.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:29:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:39.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:29:39 np0005465987 nova_compute[230713]: 2025-10-02 12:29:39.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:39 np0005465987 nova_compute[230713]: 2025-10-02 12:29:39.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:39 np0005465987 podman[272171]: 2025-10-02 12:29:39.844470696 +0000 UTC m=+0.057405152 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:29:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:40.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:41.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:42.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:42 np0005465987 nova_compute[230713]: 2025-10-02 12:29:42.849 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:29:42 np0005465987 nova_compute[230713]: 2025-10-02 12:29:42.850 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquired lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:29:42 np0005465987 nova_compute[230713]: 2025-10-02 12:29:42.850 2 DEBUG nova.network.neutron [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:29:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:29:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:43.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:29:44 np0005465987 nova_compute[230713]: 2025-10-02 12:29:44.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:44.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:44 np0005465987 nova_compute[230713]: 2025-10-02 12:29:44.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:45.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:45 np0005465987 nova_compute[230713]: 2025-10-02 12:29:45.296 2 DEBUG nova.network.neutron [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updating instance_info_cache with network_info: [{"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:29:45 np0005465987 nova_compute[230713]: 2025-10-02 12:29:45.343 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Releasing lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:29:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:45Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:9c:95 10.100.0.10
Oct  2 08:29:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:45Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:9c:95 10.100.0.10
Oct  2 08:29:45 np0005465987 nova_compute[230713]: 2025-10-02 12:29:45.547 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Oct  2 08:29:45 np0005465987 nova_compute[230713]: 2025-10-02 12:29:45.548 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Creating file /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/bfff58e5d722452a91a8703c3c438a6e.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Oct  2 08:29:45 np0005465987 nova_compute[230713]: 2025-10-02 12:29:45.549 2 DEBUG oslo_concurrency.processutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/bfff58e5d722452a91a8703c3c438a6e.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:45 np0005465987 nova_compute[230713]: 2025-10-02 12:29:45.988 2 DEBUG oslo_concurrency.processutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/bfff58e5d722452a91a8703c3c438a6e.tmp" returned: 1 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:45 np0005465987 nova_compute[230713]: 2025-10-02 12:29:45.990 2 DEBUG oslo_concurrency.processutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526/bfff58e5d722452a91a8703c3c438a6e.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Oct  2 08:29:45 np0005465987 nova_compute[230713]: 2025-10-02 12:29:45.990 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Creating directory /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Oct  2 08:29:45 np0005465987 nova_compute[230713]: 2025-10-02 12:29:45.991 2 DEBUG oslo_concurrency.processutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:46 np0005465987 nova_compute[230713]: 2025-10-02 12:29:46.233 2 DEBUG oslo_concurrency.processutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/fced40d2-4fc7-4939-9fbb-bdb61d750526" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:46 np0005465987 nova_compute[230713]: 2025-10-02 12:29:46.237 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:29:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:46.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:46 np0005465987 podman[272193]: 2025-10-02 12:29:46.845688179 +0000 UTC m=+0.061476954 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:29:46 np0005465987 podman[272194]: 2025-10-02 12:29:46.845756871 +0000 UTC m=+0.057244617 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:29:46 np0005465987 podman[272192]: 2025-10-02 12:29:46.865810353 +0000 UTC m=+0.086742369 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct  2 08:29:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:47.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:48.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:48 np0005465987 kernel: tap7cb2acba-67 (unregistering): left promiscuous mode
Oct  2 08:29:48 np0005465987 NetworkManager[44910]: <info>  [1759408188.7841] device (tap7cb2acba-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:29:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:48Z|00399|binding|INFO|Releasing lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 from this chassis (sb_readonly=0)
Oct  2 08:29:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:48Z|00400|binding|INFO|Setting lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 down in Southbound
Oct  2 08:29:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:48Z|00401|binding|INFO|Removing iface tap7cb2acba-67 ovn-installed in OVS
Oct  2 08:29:48 np0005465987 nova_compute[230713]: 2025-10-02 12:29:48.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:48 np0005465987 nova_compute[230713]: 2025-10-02 12:29:48.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:48.806 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:9c:95 10.100.0.10'], port_security=['fa:16:3e:53:9c:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fced40d2-4fc7-4939-9fbb-bdb61d750526', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=7cb2acba-67d6-4041-97fe-10e2d80dcd21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:48.807 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 7cb2acba-67d6-4041-97fe-10e2d80dcd21 in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 unbound from our chassis#033[00m
Oct  2 08:29:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:48.808 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 247d774d-0cc8-4ef2-a9b8-c756adae0874, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:29:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:48.810 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdac35f-134a-48ee-a473-3673980fed54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:48.810 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace which is not needed anymore#033[00m
Oct  2 08:29:48 np0005465987 nova_compute[230713]: 2025-10-02 12:29:48.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:48 np0005465987 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000071.scope: Deactivated successfully.
Oct  2 08:29:48 np0005465987 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000071.scope: Consumed 14.432s CPU time.
Oct  2 08:29:48 np0005465987 systemd-machined[188335]: Machine qemu-49-instance-00000071 terminated.
Oct  2 08:29:48 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[272155]: [NOTICE]   (272159) : haproxy version is 2.8.14-c23fe91
Oct  2 08:29:48 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[272155]: [NOTICE]   (272159) : path to executable is /usr/sbin/haproxy
Oct  2 08:29:48 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[272155]: [WARNING]  (272159) : Exiting Master process...
Oct  2 08:29:48 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[272155]: [ALERT]    (272159) : Current worker (272161) exited with code 143 (Terminated)
Oct  2 08:29:48 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[272155]: [WARNING]  (272159) : All workers exited. Exiting... (0)
Oct  2 08:29:48 np0005465987 systemd[1]: libpod-77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204.scope: Deactivated successfully.
Oct  2 08:29:48 np0005465987 podman[272277]: 2025-10-02 12:29:48.941841145 +0000 UTC m=+0.047575791 container died 77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:29:48 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204-userdata-shm.mount: Deactivated successfully.
Oct  2 08:29:48 np0005465987 systemd[1]: var-lib-containers-storage-overlay-c0ab7c1ae76e3798249e601dcf1ea96cfd0833c531af09cdd10d334b83ff2120-merged.mount: Deactivated successfully.
Oct  2 08:29:48 np0005465987 podman[272277]: 2025-10-02 12:29:48.975829031 +0000 UTC m=+0.081563677 container cleanup 77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:29:48 np0005465987 systemd[1]: libpod-conmon-77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204.scope: Deactivated successfully.
Oct  2 08:29:49 np0005465987 kernel: tap7cb2acba-67: entered promiscuous mode
Oct  2 08:29:49 np0005465987 systemd-udevd[272257]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:29:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:49Z|00402|binding|INFO|Claiming lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 for this chassis.
Oct  2 08:29:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:49Z|00403|binding|INFO|7cb2acba-67d6-4041-97fe-10e2d80dcd21: Claiming fa:16:3e:53:9c:95 10.100.0.10
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:49 np0005465987 NetworkManager[44910]: <info>  [1759408189.0270] manager: (tap7cb2acba-67): new Tun device (/org/freedesktop/NetworkManager/Devices/201)
Oct  2 08:29:49 np0005465987 kernel: tap7cb2acba-67 (unregistering): left promiscuous mode
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.039 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:9c:95 10.100.0.10'], port_security=['fa:16:3e:53:9c:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fced40d2-4fc7-4939-9fbb-bdb61d750526', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=7cb2acba-67d6-4041-97fe-10e2d80dcd21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:49Z|00404|binding|INFO|Setting lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 ovn-installed in OVS
Oct  2 08:29:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:49Z|00405|binding|INFO|Setting lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 up in Southbound
Oct  2 08:29:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:49Z|00406|binding|INFO|Releasing lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 from this chassis (sb_readonly=1)
Oct  2 08:29:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:49Z|00407|if_status|INFO|Not setting lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 down as sb is readonly
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:49Z|00408|binding|INFO|Removing iface tap7cb2acba-67 ovn-installed in OVS
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:49Z|00409|binding|INFO|Releasing lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 from this chassis (sb_readonly=0)
Oct  2 08:29:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:49Z|00410|binding|INFO|Setting lport 7cb2acba-67d6-4041-97fe-10e2d80dcd21 down in Southbound
Oct  2 08:29:49 np0005465987 podman[272308]: 2025-10-02 12:29:49.052152313 +0000 UTC m=+0.053471204 container remove 77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.057 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:9c:95 10.100.0.10'], port_security=['fa:16:3e:53:9c:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fced40d2-4fc7-4939-9fbb-bdb61d750526', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=7cb2acba-67d6-4041-97fe-10e2d80dcd21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.057 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e0e9d4-ff96-4e36-98c1-21b5485e25fe]: (4, ('Thu Oct  2 12:29:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204)\n77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204\nThu Oct  2 12:29:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204)\n77d1f45409e9700db95c5c3dbe526bdfa2c7344476841518bd6e8c10a9af8204\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.060 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e354c5-eb02-4c18-99c6-b1146fa4d377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.061 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:49 np0005465987 kernel: tap247d774d-00: left promiscuous mode
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.082 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f9aad35d-7e5f-4754-a4ad-4a2301384723]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.116 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c28d725d-9228-4213-a3ee-ac705165eae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.117 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[859c6109-828a-4c66-9502-6530b0f236b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.132 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[58ce38ec-b767-4efa-9caf-595cede33007]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623683, 'reachable_time': 21121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272329, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:49 np0005465987 systemd[1]: run-netns-ovnmeta\x2d247d774d\x2d0cc8\x2d4ef2\x2da9b8\x2dc756adae0874.mount: Deactivated successfully.
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.134 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.134 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[be156e9e-0ce9-413e-bad5-8025560b2612]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.135 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 7cb2acba-67d6-4041-97fe-10e2d80dcd21 in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 unbound from our chassis#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.136 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 247d774d-0cc8-4ef2-a9b8-c756adae0874, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.138 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb94985-9f8f-4d05-9435-78ad10ef5cc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.139 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 7cb2acba-67d6-4041-97fe-10e2d80dcd21 in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 unbound from our chassis#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.140 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 247d774d-0cc8-4ef2-a9b8-c756adae0874, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:29:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:29:49.141 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9d540dca-fa61-4417-9441-779a652ed3a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:29:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:49.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.254 2 INFO nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.260 2 INFO nova.virt.libvirt.driver [-] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Instance destroyed successfully.#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.261 2 DEBUG nova.virt.libvirt.vif [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-475120527',display_name='tempest-ServerDiskConfigTestJSON-server-475120527',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-475120527',id=113,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-8s6rikzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_v
ideo_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:29:41Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=fced40d2-4fc7-4939-9fbb-bdb61d750526,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:53:9c:95"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.262 2 DEBUG nova.network.os_vif_util [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:53:9c:95"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.263 2 DEBUG nova.network.os_vif_util [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.263 2 DEBUG os_vif [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.265 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cb2acba-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.271 2 INFO os_vif [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67')#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.275 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.276 2 DEBUG nova.virt.libvirt.driver [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:29:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.390 2 DEBUG nova.compute.manager [req-1daa0814-95f4-4890-9341-23e6a297b4fa req-a8040f2a-33fc-4cd9-9448-a98c0d772cbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.390 2 DEBUG oslo_concurrency.lockutils [req-1daa0814-95f4-4890-9341-23e6a297b4fa req-a8040f2a-33fc-4cd9-9448-a98c0d772cbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.391 2 DEBUG oslo_concurrency.lockutils [req-1daa0814-95f4-4890-9341-23e6a297b4fa req-a8040f2a-33fc-4cd9-9448-a98c0d772cbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.391 2 DEBUG oslo_concurrency.lockutils [req-1daa0814-95f4-4890-9341-23e6a297b4fa req-a8040f2a-33fc-4cd9-9448-a98c0d772cbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.391 2 DEBUG nova.compute.manager [req-1daa0814-95f4-4890-9341-23e6a297b4fa req-a8040f2a-33fc-4cd9-9448-a98c0d772cbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.391 2 WARNING nova.compute.manager [req-1daa0814-95f4-4890-9341-23e6a297b4fa req-a8040f2a-33fc-4cd9-9448-a98c0d772cbd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_migrating.#033[00m
Oct  2 08:29:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:29:49Z|00411|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.744 2 DEBUG neutronclient.v2_0.client [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 7cb2acba-67d6-4041-97fe-10e2d80dcd21 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.870 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.871 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:49 np0005465987 nova_compute[230713]: 2025-10-02 12:29:49.871 2 DEBUG oslo_concurrency.lockutils [None req-f9fd740f-b6e4-463e-844c-bbd919ff201e 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:50.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:51.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.588 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.588 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.589 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.589 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.589 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.589 2 WARNING nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.590 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.590 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.590 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.590 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.590 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.591 2 WARNING nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.591 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.591 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.591 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.592 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.592 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.592 2 WARNING nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.592 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.592 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.593 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.593 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.593 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.593 2 WARNING nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-unplugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.594 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.594 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.594 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.594 2 DEBUG oslo_concurrency.lockutils [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.594 2 DEBUG nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:51 np0005465987 nova_compute[230713]: 2025-10-02 12:29:51.595 2 WARNING nova.compute.manager [req-aa42d8e8-49b5-4249-92d9-151b6709ab11 req-31f711bd-a4a4-4d26-bdce-1e4bb5b50a34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:29:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:52.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:53 np0005465987 nova_compute[230713]: 2025-10-02 12:29:53.000 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:53.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:54 np0005465987 nova_compute[230713]: 2025-10-02 12:29:54.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:54 np0005465987 nova_compute[230713]: 2025-10-02 12:29:54.275 2 DEBUG nova.compute.manager [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-changed-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:54 np0005465987 nova_compute[230713]: 2025-10-02 12:29:54.276 2 DEBUG nova.compute.manager [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Refreshing instance network info cache due to event network-changed-7cb2acba-67d6-4041-97fe-10e2d80dcd21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:29:54 np0005465987 nova_compute[230713]: 2025-10-02 12:29:54.276 2 DEBUG oslo_concurrency.lockutils [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:29:54 np0005465987 nova_compute[230713]: 2025-10-02 12:29:54.276 2 DEBUG oslo_concurrency.lockutils [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:29:54 np0005465987 nova_compute[230713]: 2025-10-02 12:29:54.277 2 DEBUG nova.network.neutron [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Refreshing network info cache for port 7cb2acba-67d6-4041-97fe-10e2d80dcd21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:29:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:29:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:54.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:29:54 np0005465987 nova_compute[230713]: 2025-10-02 12:29:54.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:54 np0005465987 nova_compute[230713]: 2025-10-02 12:29:54.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:54 np0005465987 nova_compute[230713]: 2025-10-02 12:29:54.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:29:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:55.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:29:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4039073892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:29:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:29:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4039073892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:29:56 np0005465987 nova_compute[230713]: 2025-10-02 12:29:56.061 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:29:56 np0005465987 nova_compute[230713]: 2025-10-02 12:29:56.061 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:29:56 np0005465987 nova_compute[230713]: 2025-10-02 12:29:56.061 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:29:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:56.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:57.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:58 np0005465987 nova_compute[230713]: 2025-10-02 12:29:58.206 2 DEBUG nova.network.neutron [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updated VIF entry in instance network info cache for port 7cb2acba-67d6-4041-97fe-10e2d80dcd21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:29:58 np0005465987 nova_compute[230713]: 2025-10-02 12:29:58.207 2 DEBUG nova.network.neutron [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updating instance_info_cache with network_info: [{"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:29:58 np0005465987 nova_compute[230713]: 2025-10-02 12:29:58.246 2 DEBUG oslo_concurrency.lockutils [req-34fa30ba-da32-443f-ac36-89f180404cc0 req-64b1597b-b273-40b4-a5b7-f18b8d19fa92 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:29:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:58.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:29:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:59.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e306 e306: 3 total, 3 up, 3 in
Oct  2 08:29:59 np0005465987 nova_compute[230713]: 2025-10-02 12:29:59.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:59 np0005465987 nova_compute[230713]: 2025-10-02 12:29:59.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:00 np0005465987 nova_compute[230713]: 2025-10-02 12:30:00.123 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Updating instance_info_cache with network_info: [{"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:00 np0005465987 nova_compute[230713]: 2025-10-02 12:30:00.159 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:00 np0005465987 nova_compute[230713]: 2025-10-02 12:30:00.159 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:30:00 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 08:30:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:00.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:01.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:01 np0005465987 nova_compute[230713]: 2025-10-02 12:30:01.481 2 DEBUG nova.compute.manager [req-fdee89ed-46f4-4ad8-822c-88b3789c0723 req-280724e0-0f11-427f-88e4-e6614241b19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:01 np0005465987 nova_compute[230713]: 2025-10-02 12:30:01.482 2 DEBUG oslo_concurrency.lockutils [req-fdee89ed-46f4-4ad8-822c-88b3789c0723 req-280724e0-0f11-427f-88e4-e6614241b19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:01 np0005465987 nova_compute[230713]: 2025-10-02 12:30:01.483 2 DEBUG oslo_concurrency.lockutils [req-fdee89ed-46f4-4ad8-822c-88b3789c0723 req-280724e0-0f11-427f-88e4-e6614241b19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:01 np0005465987 nova_compute[230713]: 2025-10-02 12:30:01.483 2 DEBUG oslo_concurrency.lockutils [req-fdee89ed-46f4-4ad8-822c-88b3789c0723 req-280724e0-0f11-427f-88e4-e6614241b19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:01 np0005465987 nova_compute[230713]: 2025-10-02 12:30:01.483 2 DEBUG nova.compute.manager [req-fdee89ed-46f4-4ad8-822c-88b3789c0723 req-280724e0-0f11-427f-88e4-e6614241b19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:01 np0005465987 nova_compute[230713]: 2025-10-02 12:30:01.484 2 WARNING nova.compute.manager [req-fdee89ed-46f4-4ad8-822c-88b3789c0723 req-280724e0-0f11-427f-88e4-e6614241b19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state active and task_state resize_finish.#033[00m
Oct  2 08:30:01 np0005465987 nova_compute[230713]: 2025-10-02 12:30:01.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:01 np0005465987 nova_compute[230713]: 2025-10-02 12:30:01.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:02.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:02 np0005465987 nova_compute[230713]: 2025-10-02 12:30:02.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:02 np0005465987 nova_compute[230713]: 2025-10-02 12:30:02.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.152 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.153 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.153 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:03.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.153 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.154 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/11603897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.560 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.693 2 DEBUG nova.compute.manager [req-8553af1f-a42f-43e9-864d-508c6f3ec8fd req-c48de254-bab1-469b-9367-38b25b36a05e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.694 2 DEBUG oslo_concurrency.lockutils [req-8553af1f-a42f-43e9-864d-508c6f3ec8fd req-c48de254-bab1-469b-9367-38b25b36a05e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.694 2 DEBUG oslo_concurrency.lockutils [req-8553af1f-a42f-43e9-864d-508c6f3ec8fd req-c48de254-bab1-469b-9367-38b25b36a05e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.694 2 DEBUG oslo_concurrency.lockutils [req-8553af1f-a42f-43e9-864d-508c6f3ec8fd req-c48de254-bab1-469b-9367-38b25b36a05e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.695 2 DEBUG nova.compute.manager [req-8553af1f-a42f-43e9-864d-508c6f3ec8fd req-c48de254-bab1-469b-9367-38b25b36a05e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] No waiting events found dispatching network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.695 2 WARNING nova.compute.manager [req-8553af1f-a42f-43e9-864d-508c6f3ec8fd req-c48de254-bab1-469b-9367-38b25b36a05e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Received unexpected event network-vif-plugged-7cb2acba-67d6-4041-97fe-10e2d80dcd21 for instance with vm_state resized and task_state None.#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.730 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.731 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.736 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.737 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.742 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.742 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.918 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.919 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4075MB free_disk=20.784637451171875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.919 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:03 np0005465987 nova_compute[230713]: 2025-10-02 12:30:03.920 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.052 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408189.0522485, fced40d2-4fc7-4939-9fbb-bdb61d750526 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.053 2 INFO nova.compute.manager [-] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.134 2 DEBUG oslo_concurrency.lockutils [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "fced40d2-4fc7-4939-9fbb-bdb61d750526" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.135 2 DEBUG oslo_concurrency.lockutils [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.135 2 DEBUG nova.compute.manager [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Going to confirm migration 15 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.136 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Migration for instance fced40d2-4fc7-4939-9fbb-bdb61d750526 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.151 2 DEBUG nova.compute.manager [None req-1a602f2d-89af-4d22-9dba-02f9bb297c1d - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.155 2 DEBUG nova.compute.manager [None req-1a602f2d-89af-4d22-9dba-02f9bb297c1d - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.257 2 INFO nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updating resource usage from migration 11ca3034-a0f1-41aa-9c5f-383e5dc24043#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.258 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Starting to track outgoing migration 11ca3034-a0f1-41aa-9c5f-383e5dc24043 with flavor cef129e5-cce4-4465-9674-03d3559e8a14 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.262 2 INFO nova.compute.manager [None req-1a602f2d-89af-4d22-9dba-02f9bb297c1d - - - - - -] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.291 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.291 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance e744f3b9-ccc1-43b1-b134-e6050b9487eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.291 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Migration 11ca3034-a0f1-41aa-9c5f-383e5dc24043 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.292 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.292 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:30:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.551 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:30:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:30:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:04.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.605 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.606 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.650 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.673 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.741 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.838 2 DEBUG neutronclient.v2_0.client [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 7cb2acba-67d6-4041-97fe-10e2d80dcd21 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.839 2 DEBUG oslo_concurrency.lockutils [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.839 2 DEBUG oslo_concurrency.lockutils [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquired lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.840 2 DEBUG nova.network.neutron [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:30:04 np0005465987 nova_compute[230713]: 2025-10-02 12:30:04.840 2 DEBUG nova.objects.instance [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'info_cache' on Instance uuid fced40d2-4fc7-4939-9fbb-bdb61d750526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2917355383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:05.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:05 np0005465987 nova_compute[230713]: 2025-10-02 12:30:05.175 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:05 np0005465987 nova_compute[230713]: 2025-10-02 12:30:05.182 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:05 np0005465987 nova_compute[230713]: 2025-10-02 12:30:05.214 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:05 np0005465987 nova_compute[230713]: 2025-10-02 12:30:05.277 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:30:05 np0005465987 nova_compute[230713]: 2025-10-02 12:30:05.277 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:06 np0005465987 nova_compute[230713]: 2025-10-02 12:30:06.278 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:06 np0005465987 nova_compute[230713]: 2025-10-02 12:30:06.279 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:06 np0005465987 nova_compute[230713]: 2025-10-02 12:30:06.280 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:06 np0005465987 nova_compute[230713]: 2025-10-02 12:30:06.280 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:30:06 np0005465987 nova_compute[230713]: 2025-10-02 12:30:06.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:06.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:06 np0005465987 nova_compute[230713]: 2025-10-02 12:30:06.937 2 DEBUG nova.network.neutron [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: fced40d2-4fc7-4939-9fbb-bdb61d750526] Updating instance_info_cache with network_info: [{"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:06 np0005465987 nova_compute[230713]: 2025-10-02 12:30:06.972 2 DEBUG oslo_concurrency.lockutils [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Releasing lock "refresh_cache-fced40d2-4fc7-4939-9fbb-bdb61d750526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:06 np0005465987 nova_compute[230713]: 2025-10-02 12:30:06.973 2 DEBUG nova.objects.instance [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'migration_context' on Instance uuid fced40d2-4fc7-4939-9fbb-bdb61d750526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:07 np0005465987 nova_compute[230713]: 2025-10-02 12:30:07.117 2 DEBUG nova.storage.rbd_utils [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] removing snapshot(nova-resize) on rbd image(fced40d2-4fc7-4939-9fbb-bdb61d750526_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:30:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:07.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e307 e307: 3 total, 3 up, 3 in
Oct  2 08:30:08 np0005465987 nova_compute[230713]: 2025-10-02 12:30:08.468 2 DEBUG nova.virt.libvirt.vif [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-475120527',display_name='tempest-ServerDiskConfigTestJSON-server-475120527',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-475120527',id=113,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-8s6rikzx',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:01Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=fced40d2-4fc7-4939-9fbb-bdb61d750526,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:30:08 np0005465987 nova_compute[230713]: 2025-10-02 12:30:08.468 2 DEBUG nova.network.os_vif_util [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "address": "fa:16:3e:53:9c:95", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7cb2acba-67", "ovs_interfaceid": "7cb2acba-67d6-4041-97fe-10e2d80dcd21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:08 np0005465987 nova_compute[230713]: 2025-10-02 12:30:08.469 2 DEBUG nova.network.os_vif_util [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:08 np0005465987 nova_compute[230713]: 2025-10-02 12:30:08.469 2 DEBUG os_vif [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:30:08 np0005465987 nova_compute[230713]: 2025-10-02 12:30:08.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:08 np0005465987 nova_compute[230713]: 2025-10-02 12:30:08.471 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7cb2acba-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:08 np0005465987 nova_compute[230713]: 2025-10-02 12:30:08.471 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:08 np0005465987 nova_compute[230713]: 2025-10-02 12:30:08.473 2 INFO os_vif [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:9c:95,bridge_name='br-int',has_traffic_filtering=True,id=7cb2acba-67d6-4041-97fe-10e2d80dcd21,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7cb2acba-67')#033[00m
Oct  2 08:30:08 np0005465987 nova_compute[230713]: 2025-10-02 12:30:08.473 2 DEBUG oslo_concurrency.lockutils [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:08 np0005465987 nova_compute[230713]: 2025-10-02 12:30:08.473 2 DEBUG oslo_concurrency.lockutils [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:08.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:08 np0005465987 nova_compute[230713]: 2025-10-02 12:30:08.709 2 DEBUG oslo_concurrency.processutils [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3492956652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:09.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:09 np0005465987 nova_compute[230713]: 2025-10-02 12:30:09.170 2 DEBUG oslo_concurrency.processutils [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:09 np0005465987 nova_compute[230713]: 2025-10-02 12:30:09.177 2 DEBUG nova.compute.provider_tree [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:09 np0005465987 nova_compute[230713]: 2025-10-02 12:30:09.243 2 DEBUG nova.scheduler.client.report [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:09 np0005465987 nova_compute[230713]: 2025-10-02 12:30:09.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:09 np0005465987 nova_compute[230713]: 2025-10-02 12:30:09.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:09 np0005465987 nova_compute[230713]: 2025-10-02 12:30:09.999 2 DEBUG oslo_concurrency.lockutils [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:10 np0005465987 nova_compute[230713]: 2025-10-02 12:30:10.290 2 INFO nova.scheduler.client.report [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Deleted allocation for migration 11ca3034-a0f1-41aa-9c5f-383e5dc24043#033[00m
Oct  2 08:30:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:10.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:10 np0005465987 nova_compute[230713]: 2025-10-02 12:30:10.660 2 DEBUG oslo_concurrency.lockutils [None req-f6e8d2f3-68e1-47f4-8abd-4efc108a812a 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "fced40d2-4fc7-4939-9fbb-bdb61d750526" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 6.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:10 np0005465987 podman[272434]: 2025-10-02 12:30:10.875939297 +0000 UTC m=+0.073452673 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:30:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:11.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000055s ======
Oct  2 08:30:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:12.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Oct  2 08:30:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e308 e308: 3 total, 3 up, 3 in
Oct  2 08:30:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:13.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.576267) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408213576309, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1935, "num_deletes": 257, "total_data_size": 4283926, "memory_usage": 4343792, "flush_reason": "Manual Compaction"}
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408213592010, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 2812386, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47150, "largest_seqno": 49079, "table_properties": {"data_size": 2804580, "index_size": 4620, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17011, "raw_average_key_size": 20, "raw_value_size": 2788564, "raw_average_value_size": 3311, "num_data_blocks": 202, "num_entries": 842, "num_filter_entries": 842, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408053, "oldest_key_time": 1759408053, "file_creation_time": 1759408213, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 15828 microseconds, and 7466 cpu microseconds.
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.592091) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 2812386 bytes OK
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.592110) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.594097) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.594109) EVENT_LOG_v1 {"time_micros": 1759408213594105, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.594124) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 4275236, prev total WAL file size 4275236, number of live WAL files 2.
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.595279) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353035' seq:72057594037927935, type:22 .. '6C6F676D0031373537' seq:0, type:0; will stop at (end)
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(2746KB)], [90(10MB)]
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408213595340, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 13918430, "oldest_snapshot_seqno": -1}
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 7487 keys, 13779403 bytes, temperature: kUnknown
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408213713654, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 13779403, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13725658, "index_size": 33921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 192374, "raw_average_key_size": 25, "raw_value_size": 13588273, "raw_average_value_size": 1814, "num_data_blocks": 1350, "num_entries": 7487, "num_filter_entries": 7487, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759408213, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.713955) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 13779403 bytes
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.715584) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.5 rd, 116.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.6 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(9.8) write-amplify(4.9) OK, records in: 8018, records dropped: 531 output_compression: NoCompression
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.715609) EVENT_LOG_v1 {"time_micros": 1759408213715597, "job": 56, "event": "compaction_finished", "compaction_time_micros": 118436, "compaction_time_cpu_micros": 44230, "output_level": 6, "num_output_files": 1, "total_output_size": 13779403, "num_input_records": 8018, "num_output_records": 7487, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408213716323, "job": 56, "event": "table_file_deletion", "file_number": 92}
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408213718966, "job": 56, "event": "table_file_deletion", "file_number": 90}
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.595103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.719010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.719014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.719016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.719017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:13 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:13.719018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:14 np0005465987 nova_compute[230713]: 2025-10-02 12:30:14.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:14.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:14 np0005465987 nova_compute[230713]: 2025-10-02 12:30:14.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:15.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:15.740 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:30:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:15.740 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:30:15 np0005465987 nova_compute[230713]: 2025-10-02 12:30:15.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:16.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:17 np0005465987 podman[272554]: 2025-10-02 12:30:17.008528554 +0000 UTC m=+0.065584106 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Oct  2 08:30:17 np0005465987 podman[272553]: 2025-10-02 12:30:17.009437839 +0000 UTC m=+0.069242267 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.license=GPLv2)
Oct  2 08:30:17 np0005465987 podman[272552]: 2025-10-02 12:30:17.099829168 +0000 UTC m=+0.160410848 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:30:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:17.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:17 np0005465987 podman[272689]: 2025-10-02 12:30:17.408542328 +0000 UTC m=+0.065672010 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 08:30:17 np0005465987 podman[272689]: 2025-10-02 12:30:17.531263628 +0000 UTC m=+0.188393280 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 08:30:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:30:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:30:18 np0005465987 nova_compute[230713]: 2025-10-02 12:30:18.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:18.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e309 e309: 3 total, 3 up, 3 in
Oct  2 08:30:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:18.744 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:18 np0005465987 nova_compute[230713]: 2025-10-02 12:30:18.960 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:30:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:30:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:19.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:19 np0005465987 nova_compute[230713]: 2025-10-02 12:30:19.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:19 np0005465987 nova_compute[230713]: 2025-10-02 12:30:19.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:30:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:30:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:30:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:20.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:21.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:22.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:23.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:24 np0005465987 nova_compute[230713]: 2025-10-02 12:30:24.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:24.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:24 np0005465987 nova_compute[230713]: 2025-10-02 12:30:24.690 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Acquiring lock "d165ea69-9e02-466c-b154-0ce6f22e0de0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:24 np0005465987 nova_compute[230713]: 2025-10-02 12:30:24.691 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:24 np0005465987 nova_compute[230713]: 2025-10-02 12:30:24.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:24 np0005465987 nova_compute[230713]: 2025-10-02 12:30:24.724 2 DEBUG nova.compute.manager [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:30:24 np0005465987 nova_compute[230713]: 2025-10-02 12:30:24.902 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:24 np0005465987 nova_compute[230713]: 2025-10-02 12:30:24.903 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:24 np0005465987 nova_compute[230713]: 2025-10-02 12:30:24.919 2 DEBUG nova.virt.hardware [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:30:24 np0005465987 nova_compute[230713]: 2025-10-02 12:30:24.933 2 INFO nova.compute.claims [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:30:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:30:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:25.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:30:25 np0005465987 nova_compute[230713]: 2025-10-02 12:30:25.316 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:25 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3446968154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:25 np0005465987 nova_compute[230713]: 2025-10-02 12:30:25.787 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:25 np0005465987 nova_compute[230713]: 2025-10-02 12:30:25.794 2 DEBUG nova.compute.provider_tree [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:25 np0005465987 nova_compute[230713]: 2025-10-02 12:30:25.813 2 DEBUG nova.scheduler.client.report [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:25 np0005465987 nova_compute[230713]: 2025-10-02 12:30:25.865 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:25 np0005465987 nova_compute[230713]: 2025-10-02 12:30:25.866 2 DEBUG nova.compute.manager [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:30:25 np0005465987 nova_compute[230713]: 2025-10-02 12:30:25.951 2 DEBUG nova.compute.manager [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:30:25 np0005465987 nova_compute[230713]: 2025-10-02 12:30:25.952 2 DEBUG nova.network.neutron [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:30:25 np0005465987 nova_compute[230713]: 2025-10-02 12:30:25.993 2 INFO nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.024 2 DEBUG nova.compute.manager [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.138 2 DEBUG nova.compute.manager [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.139 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.140 2 INFO nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Creating image(s)#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.170 2 DEBUG nova.storage.rbd_utils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] rbd image d165ea69-9e02-466c-b154-0ce6f22e0de0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.200 2 DEBUG nova.storage.rbd_utils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] rbd image d165ea69-9e02-466c-b154-0ce6f22e0de0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.238 2 DEBUG nova.storage.rbd_utils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] rbd image d165ea69-9e02-466c-b154-0ce6f22e0de0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.244 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:26 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:30:26 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.329 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.331 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.332 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.332 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.370 2 DEBUG nova.storage.rbd_utils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] rbd image d165ea69-9e02-466c-b154-0ce6f22e0de0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.374 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 d165ea69-9e02-466c-b154-0ce6f22e0de0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.405 2 DEBUG nova.policy [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '88b7b954ac234f7299c3b2da84d2ca46', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba4a52d51214482ab1572bac703ce258', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:30:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:26.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.718 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 d165ea69-9e02-466c-b154-0ce6f22e0de0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.794 2 DEBUG nova.storage.rbd_utils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] resizing rbd image d165ea69-9e02-466c-b154-0ce6f22e0de0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.935 2 DEBUG nova.objects.instance [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lazy-loading 'migration_context' on Instance uuid d165ea69-9e02-466c-b154-0ce6f22e0de0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.965 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.965 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Ensure instance console log exists: /var/lib/nova/instances/d165ea69-9e02-466c-b154-0ce6f22e0de0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.966 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.967 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:26 np0005465987 nova_compute[230713]: 2025-10-02 12:30:26.967 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:27.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:27.956 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:27.957 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:27.957 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:28.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:29 np0005465987 nova_compute[230713]: 2025-10-02 12:30:29.161 2 DEBUG nova.network.neutron [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Successfully created port: 671f489a-86a6-4bf4-8876-8d1c36d54095 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:30:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:29.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:29 np0005465987 nova_compute[230713]: 2025-10-02 12:30:29.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:29 np0005465987 nova_compute[230713]: 2025-10-02 12:30:29.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:30:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:30.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:30:30 np0005465987 nova_compute[230713]: 2025-10-02 12:30:30.847 2 DEBUG nova.network.neutron [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Successfully updated port: 671f489a-86a6-4bf4-8876-8d1c36d54095 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:30:30 np0005465987 nova_compute[230713]: 2025-10-02 12:30:30.881 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Acquiring lock "refresh_cache-d165ea69-9e02-466c-b154-0ce6f22e0de0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:30 np0005465987 nova_compute[230713]: 2025-10-02 12:30:30.881 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Acquired lock "refresh_cache-d165ea69-9e02-466c-b154-0ce6f22e0de0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:30 np0005465987 nova_compute[230713]: 2025-10-02 12:30:30.881 2 DEBUG nova.network.neutron [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:30:31 np0005465987 nova_compute[230713]: 2025-10-02 12:30:31.190 2 DEBUG nova.network.neutron [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:30:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:31.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:31 np0005465987 nova_compute[230713]: 2025-10-02 12:30:31.255 2 DEBUG nova.compute.manager [req-77e978a0-a08f-4d24-9c16-a337c4432ac6 req-c5ccbf3c-5fcc-48c4-986e-991669ae7534 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Received event network-changed-671f489a-86a6-4bf4-8876-8d1c36d54095 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:31 np0005465987 nova_compute[230713]: 2025-10-02 12:30:31.255 2 DEBUG nova.compute.manager [req-77e978a0-a08f-4d24-9c16-a337c4432ac6 req-c5ccbf3c-5fcc-48c4-986e-991669ae7534 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Refreshing instance network info cache due to event network-changed-671f489a-86a6-4bf4-8876-8d1c36d54095. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:30:31 np0005465987 nova_compute[230713]: 2025-10-02 12:30:31.256 2 DEBUG oslo_concurrency.lockutils [req-77e978a0-a08f-4d24-9c16-a337c4432ac6 req-c5ccbf3c-5fcc-48c4-986e-991669ae7534 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d165ea69-9e02-466c-b154-0ce6f22e0de0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:32.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e310 e310: 3 total, 3 up, 3 in
Oct  2 08:30:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:33.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.756 2 DEBUG nova.network.neutron [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Updating instance_info_cache with network_info: [{"id": "671f489a-86a6-4bf4-8876-8d1c36d54095", "address": "fa:16:3e:b1:d9:af", "network": {"id": "bf0eb31a-0898-42f3-9273-cdabe175e99a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1597809081-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba4a52d51214482ab1572bac703ce258", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap671f489a-86", "ovs_interfaceid": "671f489a-86a6-4bf4-8876-8d1c36d54095", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.801 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Releasing lock "refresh_cache-d165ea69-9e02-466c-b154-0ce6f22e0de0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.802 2 DEBUG nova.compute.manager [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Instance network_info: |[{"id": "671f489a-86a6-4bf4-8876-8d1c36d54095", "address": "fa:16:3e:b1:d9:af", "network": {"id": "bf0eb31a-0898-42f3-9273-cdabe175e99a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1597809081-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba4a52d51214482ab1572bac703ce258", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap671f489a-86", "ovs_interfaceid": "671f489a-86a6-4bf4-8876-8d1c36d54095", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.802 2 DEBUG oslo_concurrency.lockutils [req-77e978a0-a08f-4d24-9c16-a337c4432ac6 req-c5ccbf3c-5fcc-48c4-986e-991669ae7534 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d165ea69-9e02-466c-b154-0ce6f22e0de0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.802 2 DEBUG nova.network.neutron [req-77e978a0-a08f-4d24-9c16-a337c4432ac6 req-c5ccbf3c-5fcc-48c4-986e-991669ae7534 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Refreshing network info cache for port 671f489a-86a6-4bf4-8876-8d1c36d54095 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.805 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Start _get_guest_xml network_info=[{"id": "671f489a-86a6-4bf4-8876-8d1c36d54095", "address": "fa:16:3e:b1:d9:af", "network": {"id": "bf0eb31a-0898-42f3-9273-cdabe175e99a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1597809081-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba4a52d51214482ab1572bac703ce258", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap671f489a-86", "ovs_interfaceid": "671f489a-86a6-4bf4-8876-8d1c36d54095", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.810 2 WARNING nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.818 2 DEBUG nova.virt.libvirt.host [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.819 2 DEBUG nova.virt.libvirt.host [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.825 2 DEBUG nova.virt.libvirt.host [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.826 2 DEBUG nova.virt.libvirt.host [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.828 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.828 2 DEBUG nova.virt.hardware [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.829 2 DEBUG nova.virt.hardware [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.829 2 DEBUG nova.virt.hardware [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.829 2 DEBUG nova.virt.hardware [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.829 2 DEBUG nova.virt.hardware [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.830 2 DEBUG nova.virt.hardware [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.830 2 DEBUG nova.virt.hardware [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.830 2 DEBUG nova.virt.hardware [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.830 2 DEBUG nova.virt.hardware [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.831 2 DEBUG nova.virt.hardware [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.831 2 DEBUG nova.virt.hardware [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:30:33 np0005465987 nova_compute[230713]: 2025-10-02 12:30:33.834 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:34 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/155291311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.366 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.388 2 DEBUG nova.storage.rbd_utils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] rbd image d165ea69-9e02-466c-b154-0ce6f22e0de0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.392 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:34.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:34 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4275958504' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.818 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.821 2 DEBUG nova.virt.libvirt.vif [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1172209500',display_name='tempest-ServerAddressesTestJSON-server-1172209500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1172209500',id=116,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba4a52d51214482ab1572bac703ce258',ramdisk_id='',reservation_id='r-69ogaut1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-201681457',owner_user_name='tempest-ServerAddressesT
estJSON-201681457-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:26Z,user_data=None,user_id='88b7b954ac234f7299c3b2da84d2ca46',uuid=d165ea69-9e02-466c-b154-0ce6f22e0de0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "671f489a-86a6-4bf4-8876-8d1c36d54095", "address": "fa:16:3e:b1:d9:af", "network": {"id": "bf0eb31a-0898-42f3-9273-cdabe175e99a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1597809081-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba4a52d51214482ab1572bac703ce258", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap671f489a-86", "ovs_interfaceid": "671f489a-86a6-4bf4-8876-8d1c36d54095", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.822 2 DEBUG nova.network.os_vif_util [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Converting VIF {"id": "671f489a-86a6-4bf4-8876-8d1c36d54095", "address": "fa:16:3e:b1:d9:af", "network": {"id": "bf0eb31a-0898-42f3-9273-cdabe175e99a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1597809081-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba4a52d51214482ab1572bac703ce258", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap671f489a-86", "ovs_interfaceid": "671f489a-86a6-4bf4-8876-8d1c36d54095", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.823 2 DEBUG nova.network.os_vif_util [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:d9:af,bridge_name='br-int',has_traffic_filtering=True,id=671f489a-86a6-4bf4-8876-8d1c36d54095,network=Network(bf0eb31a-0898-42f3-9273-cdabe175e99a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap671f489a-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.824 2 DEBUG nova.objects.instance [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lazy-loading 'pci_devices' on Instance uuid d165ea69-9e02-466c-b154-0ce6f22e0de0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.977 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  <uuid>d165ea69-9e02-466c-b154-0ce6f22e0de0</uuid>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  <name>instance-00000074</name>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerAddressesTestJSON-server-1172209500</nova:name>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:30:33</nova:creationTime>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <nova:user uuid="88b7b954ac234f7299c3b2da84d2ca46">tempest-ServerAddressesTestJSON-201681457-project-member</nova:user>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <nova:project uuid="ba4a52d51214482ab1572bac703ce258">tempest-ServerAddressesTestJSON-201681457</nova:project>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <nova:port uuid="671f489a-86a6-4bf4-8876-8d1c36d54095">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <entry name="serial">d165ea69-9e02-466c-b154-0ce6f22e0de0</entry>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <entry name="uuid">d165ea69-9e02-466c-b154-0ce6f22e0de0</entry>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/d165ea69-9e02-466c-b154-0ce6f22e0de0_disk">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/d165ea69-9e02-466c-b154-0ce6f22e0de0_disk.config">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:b1:d9:af"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <target dev="tap671f489a-86"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/d165ea69-9e02-466c-b154-0ce6f22e0de0/console.log" append="off"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:30:34 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:30:34 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:30:34 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:30:34 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.979 2 DEBUG nova.compute.manager [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Preparing to wait for external event network-vif-plugged-671f489a-86a6-4bf4-8876-8d1c36d54095 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.979 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Acquiring lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.980 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.980 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.981 2 DEBUG nova.virt.libvirt.vif [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1172209500',display_name='tempest-ServerAddressesTestJSON-server-1172209500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1172209500',id=116,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba4a52d51214482ab1572bac703ce258',ramdisk_id='',reservation_id='r-69ogaut1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-201681457',owner_user_name='tempest-Server
AddressesTestJSON-201681457-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:26Z,user_data=None,user_id='88b7b954ac234f7299c3b2da84d2ca46',uuid=d165ea69-9e02-466c-b154-0ce6f22e0de0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "671f489a-86a6-4bf4-8876-8d1c36d54095", "address": "fa:16:3e:b1:d9:af", "network": {"id": "bf0eb31a-0898-42f3-9273-cdabe175e99a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1597809081-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba4a52d51214482ab1572bac703ce258", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap671f489a-86", "ovs_interfaceid": "671f489a-86a6-4bf4-8876-8d1c36d54095", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.981 2 DEBUG nova.network.os_vif_util [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Converting VIF {"id": "671f489a-86a6-4bf4-8876-8d1c36d54095", "address": "fa:16:3e:b1:d9:af", "network": {"id": "bf0eb31a-0898-42f3-9273-cdabe175e99a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1597809081-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba4a52d51214482ab1572bac703ce258", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap671f489a-86", "ovs_interfaceid": "671f489a-86a6-4bf4-8876-8d1c36d54095", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.982 2 DEBUG nova.network.os_vif_util [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:d9:af,bridge_name='br-int',has_traffic_filtering=True,id=671f489a-86a6-4bf4-8876-8d1c36d54095,network=Network(bf0eb31a-0898-42f3-9273-cdabe175e99a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap671f489a-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.982 2 DEBUG os_vif [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:d9:af,bridge_name='br-int',has_traffic_filtering=True,id=671f489a-86a6-4bf4-8876-8d1c36d54095,network=Network(bf0eb31a-0898-42f3-9273-cdabe175e99a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap671f489a-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.983 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.983 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.987 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap671f489a-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.987 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap671f489a-86, col_values=(('external_ids', {'iface-id': '671f489a-86a6-4bf4-8876-8d1c36d54095', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:d9:af', 'vm-uuid': 'd165ea69-9e02-466c-b154-0ce6f22e0de0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:34 np0005465987 NetworkManager[44910]: <info>  [1759408234.9898] manager: (tap671f489a-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:34 np0005465987 nova_compute[230713]: 2025-10-02 12:30:34.997 2 INFO os_vif [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:d9:af,bridge_name='br-int',has_traffic_filtering=True,id=671f489a-86a6-4bf4-8876-8d1c36d54095,network=Network(bf0eb31a-0898-42f3-9273-cdabe175e99a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap671f489a-86')#033[00m
Oct  2 08:30:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:30:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:35.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:30:35 np0005465987 nova_compute[230713]: 2025-10-02 12:30:35.296 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:35 np0005465987 nova_compute[230713]: 2025-10-02 12:30:35.296 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:35 np0005465987 nova_compute[230713]: 2025-10-02 12:30:35.297 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] No VIF found with MAC fa:16:3e:b1:d9:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:30:35 np0005465987 nova_compute[230713]: 2025-10-02 12:30:35.297 2 INFO nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Using config drive#033[00m
Oct  2 08:30:35 np0005465987 nova_compute[230713]: 2025-10-02 12:30:35.326 2 DEBUG nova.storage.rbd_utils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] rbd image d165ea69-9e02-466c-b154-0ce6f22e0de0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/241820598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:36.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:36 np0005465987 nova_compute[230713]: 2025-10-02 12:30:36.689 2 INFO nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Creating config drive at /var/lib/nova/instances/d165ea69-9e02-466c-b154-0ce6f22e0de0/disk.config#033[00m
Oct  2 08:30:36 np0005465987 nova_compute[230713]: 2025-10-02 12:30:36.698 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d165ea69-9e02-466c-b154-0ce6f22e0de0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp970izhcw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:36 np0005465987 nova_compute[230713]: 2025-10-02 12:30:36.764 2 DEBUG nova.network.neutron [req-77e978a0-a08f-4d24-9c16-a337c4432ac6 req-c5ccbf3c-5fcc-48c4-986e-991669ae7534 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Updated VIF entry in instance network info cache for port 671f489a-86a6-4bf4-8876-8d1c36d54095. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:30:36 np0005465987 nova_compute[230713]: 2025-10-02 12:30:36.765 2 DEBUG nova.network.neutron [req-77e978a0-a08f-4d24-9c16-a337c4432ac6 req-c5ccbf3c-5fcc-48c4-986e-991669ae7534 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Updating instance_info_cache with network_info: [{"id": "671f489a-86a6-4bf4-8876-8d1c36d54095", "address": "fa:16:3e:b1:d9:af", "network": {"id": "bf0eb31a-0898-42f3-9273-cdabe175e99a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1597809081-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba4a52d51214482ab1572bac703ce258", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap671f489a-86", "ovs_interfaceid": "671f489a-86a6-4bf4-8876-8d1c36d54095", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:36 np0005465987 nova_compute[230713]: 2025-10-02 12:30:36.806 2 DEBUG oslo_concurrency.lockutils [req-77e978a0-a08f-4d24-9c16-a337c4432ac6 req-c5ccbf3c-5fcc-48c4-986e-991669ae7534 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d165ea69-9e02-466c-b154-0ce6f22e0de0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:36 np0005465987 nova_compute[230713]: 2025-10-02 12:30:36.834 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d165ea69-9e02-466c-b154-0ce6f22e0de0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp970izhcw" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:36 np0005465987 nova_compute[230713]: 2025-10-02 12:30:36.861 2 DEBUG nova.storage.rbd_utils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] rbd image d165ea69-9e02-466c-b154-0ce6f22e0de0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:36 np0005465987 nova_compute[230713]: 2025-10-02 12:30:36.864 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d165ea69-9e02-466c-b154-0ce6f22e0de0/disk.config d165ea69-9e02-466c-b154-0ce6f22e0de0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:37 np0005465987 nova_compute[230713]: 2025-10-02 12:30:37.021 2 DEBUG oslo_concurrency.processutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d165ea69-9e02-466c-b154-0ce6f22e0de0/disk.config d165ea69-9e02-466c-b154-0ce6f22e0de0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:37 np0005465987 nova_compute[230713]: 2025-10-02 12:30:37.022 2 INFO nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Deleting local config drive /var/lib/nova/instances/d165ea69-9e02-466c-b154-0ce6f22e0de0/disk.config because it was imported into RBD.#033[00m
Oct  2 08:30:37 np0005465987 kernel: tap671f489a-86: entered promiscuous mode
Oct  2 08:30:37 np0005465987 NetworkManager[44910]: <info>  [1759408237.0651] manager: (tap671f489a-86): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Oct  2 08:30:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:30:37Z|00412|binding|INFO|Claiming lport 671f489a-86a6-4bf4-8876-8d1c36d54095 for this chassis.
Oct  2 08:30:37 np0005465987 nova_compute[230713]: 2025-10-02 12:30:37.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:30:37Z|00413|binding|INFO|671f489a-86a6-4bf4-8876-8d1c36d54095: Claiming fa:16:3e:b1:d9:af 10.100.0.6
Oct  2 08:30:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:30:37Z|00414|binding|INFO|Setting lport 671f489a-86a6-4bf4-8876-8d1c36d54095 ovn-installed in OVS
Oct  2 08:30:37 np0005465987 nova_compute[230713]: 2025-10-02 12:30:37.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:30:37Z|00415|binding|INFO|Setting lport 671f489a-86a6-4bf4-8876-8d1c36d54095 up in Southbound
Oct  2 08:30:37 np0005465987 nova_compute[230713]: 2025-10-02 12:30:37.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.093 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:d9:af 10.100.0.6'], port_security=['fa:16:3e:b1:d9:af 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd165ea69-9e02-466c-b154-0ce6f22e0de0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf0eb31a-0898-42f3-9273-cdabe175e99a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba4a52d51214482ab1572bac703ce258', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19dd29af-28c2-4be2-86ac-87aeb7db6b46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=98db9ff4-6650-4ac2-aa53-c6a046f703f2, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=671f489a-86a6-4bf4-8876-8d1c36d54095) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.095 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 671f489a-86a6-4bf4-8876-8d1c36d54095 in datapath bf0eb31a-0898-42f3-9273-cdabe175e99a bound to our chassis#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.097 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf0eb31a-0898-42f3-9273-cdabe175e99a#033[00m
Oct  2 08:30:37 np0005465987 systemd-udevd[273319]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.108 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3335e825-88da-45df-b1f2-b8fd1b239c98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.109 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf0eb31a-01 in ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:30:37 np0005465987 systemd-machined[188335]: New machine qemu-50-instance-00000074.
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.110 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf0eb31a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.110 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[42815a73-27de-465c-8d2e-e01001379471]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.111 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[45818746-46bf-4029-9342-1dff25b30910]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 NetworkManager[44910]: <info>  [1759408237.1174] device (tap671f489a-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:30:37 np0005465987 NetworkManager[44910]: <info>  [1759408237.1184] device (tap671f489a-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:30:37 np0005465987 systemd[1]: Started Virtual Machine qemu-50-instance-00000074.
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.124 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[77d5bfcc-3eef-4636-b6b2-ad1a63981858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.147 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[046fdf86-5857-46c3-9dc8-c2b6a8d87892]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.173 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[2b120ebb-feff-40ad-af53-d20d1af52271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 NetworkManager[44910]: <info>  [1759408237.1793] manager: (tapbf0eb31a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/204)
Oct  2 08:30:37 np0005465987 systemd-udevd[273322]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.180 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b75a221d-b388-4b25-8b2b-cbd92870cc83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.209 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf1b7b2-9e11-4c4f-bb65-8a9d52471f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.212 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[38ff4e49-17a7-4ce6-855a-31384c41084a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:37.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:37 np0005465987 NetworkManager[44910]: <info>  [1759408237.2294] device (tapbf0eb31a-00): carrier: link connected
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.234 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[aab76821-67e9-490e-a96b-2bfdb27b8eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.249 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[afecac11-f889-41d3-add4-600513954da5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf0eb31a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:b4:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630580, 'reachable_time': 35112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273351, 'error': None, 'target': 'ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.262 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fc09e66a-bc88-4e2c-b171-8d2434196f1e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:b49c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630580, 'tstamp': 630580}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273352, 'error': None, 'target': 'ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.277 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0744bac5-1f32-4075-b288-2d8ab6517646]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf0eb31a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:b4:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630580, 'reachable_time': 35112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273353, 'error': None, 'target': 'ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.312 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e95f94fd-f0b3-4357-8303-86dffabd30da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.370 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[841ffcc6-f601-44dd-88ab-9a0f5bfe6b3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.372 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf0eb31a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.372 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.372 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf0eb31a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:37 np0005465987 kernel: tapbf0eb31a-00: entered promiscuous mode
Oct  2 08:30:37 np0005465987 NetworkManager[44910]: <info>  [1759408237.3762] manager: (tapbf0eb31a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.378 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf0eb31a-00, col_values=(('external_ids', {'iface-id': '21957061-cd8b-42c9-aecb-4863baaf50a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:37 np0005465987 nova_compute[230713]: 2025-10-02 12:30:37.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:30:37Z|00416|binding|INFO|Releasing lport 21957061-cd8b-42c9-aecb-4863baaf50a6 from this chassis (sb_readonly=0)
Oct  2 08:30:37 np0005465987 nova_compute[230713]: 2025-10-02 12:30:37.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.395 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf0eb31a-0898-42f3-9273-cdabe175e99a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf0eb31a-0898-42f3-9273-cdabe175e99a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.397 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3372f28e-1fce-40d9-acd9-74609319f890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.397 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-bf0eb31a-0898-42f3-9273-cdabe175e99a
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/bf0eb31a-0898-42f3-9273-cdabe175e99a.pid.haproxy
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID bf0eb31a-0898-42f3-9273-cdabe175e99a
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:30:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:37.398 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a', 'env', 'PROCESS_TAG=haproxy-bf0eb31a-0898-42f3-9273-cdabe175e99a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf0eb31a-0898-42f3-9273-cdabe175e99a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:30:37 np0005465987 podman[273427]: 2025-10-02 12:30:37.769037041 +0000 UTC m=+0.049384771 container create bd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 08:30:37 np0005465987 systemd[1]: Started libpod-conmon-bd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72.scope.
Oct  2 08:30:37 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:30:37 np0005465987 podman[273427]: 2025-10-02 12:30:37.744582297 +0000 UTC m=+0.024930047 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:30:37 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3bc1d4305264f13486fee29d6609f71fde0850957138f07227b87a72e73da44/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:30:37 np0005465987 podman[273427]: 2025-10-02 12:30:37.855173663 +0000 UTC m=+0.135521403 container init bd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 08:30:37 np0005465987 podman[273427]: 2025-10-02 12:30:37.860806068 +0000 UTC m=+0.141153798 container start bd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:30:37 np0005465987 neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a[273443]: [NOTICE]   (273447) : New worker (273449) forked
Oct  2 08:30:37 np0005465987 neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a[273443]: [NOTICE]   (273447) : Loading success.
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.004 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408238.0038214, d165ea69-9e02-466c-b154-0ce6f22e0de0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.005 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] VM Started (Lifecycle Event)#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.042 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.047 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408238.0065367, d165ea69-9e02-466c-b154-0ce6f22e0de0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.048 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.091 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.094 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.129 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:30:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:38.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.889 2 DEBUG nova.compute.manager [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Received event network-vif-plugged-671f489a-86a6-4bf4-8876-8d1c36d54095 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.890 2 DEBUG oslo_concurrency.lockutils [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.890 2 DEBUG oslo_concurrency.lockutils [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.890 2 DEBUG oslo_concurrency.lockutils [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.890 2 DEBUG nova.compute.manager [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Processing event network-vif-plugged-671f489a-86a6-4bf4-8876-8d1c36d54095 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.890 2 DEBUG nova.compute.manager [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Received event network-vif-plugged-671f489a-86a6-4bf4-8876-8d1c36d54095 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.891 2 DEBUG oslo_concurrency.lockutils [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.891 2 DEBUG oslo_concurrency.lockutils [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.891 2 DEBUG oslo_concurrency.lockutils [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.891 2 DEBUG nova.compute.manager [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] No waiting events found dispatching network-vif-plugged-671f489a-86a6-4bf4-8876-8d1c36d54095 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.891 2 WARNING nova.compute.manager [req-595e9d56-f668-42eb-af9c-d8117804115f req-c9ec5f02-d99e-4676-acce-34eefb3c09c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Received unexpected event network-vif-plugged-671f489a-86a6-4bf4-8876-8d1c36d54095 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.892 2 DEBUG nova.compute.manager [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.895 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408238.8951616, d165ea69-9e02-466c-b154-0ce6f22e0de0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.895 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.897 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.899 2 INFO nova.virt.libvirt.driver [-] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Instance spawned successfully.#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.900 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.964 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.967 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.974 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.974 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.975 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.975 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.976 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:38 np0005465987 nova_compute[230713]: 2025-10-02 12:30:38.976 2 DEBUG nova.virt.libvirt.driver [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:39 np0005465987 nova_compute[230713]: 2025-10-02 12:30:39.023 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:30:39 np0005465987 nova_compute[230713]: 2025-10-02 12:30:39.118 2 INFO nova.compute.manager [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Took 12.98 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:30:39 np0005465987 nova_compute[230713]: 2025-10-02 12:30:39.119 2 DEBUG nova.compute.manager [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:39 np0005465987 nova_compute[230713]: 2025-10-02 12:30:39.201 2 INFO nova.compute.manager [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Took 14.35 seconds to build instance.#033[00m
Oct  2 08:30:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:30:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:39.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:30:39 np0005465987 nova_compute[230713]: 2025-10-02 12:30:39.221 2 DEBUG oslo_concurrency.lockutils [None req-e8e7f344-ca27-44a5-b2bc-3ebc09a24182 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:39 np0005465987 nova_compute[230713]: 2025-10-02 12:30:39.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:39 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1211407120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:39 np0005465987 nova_compute[230713]: 2025-10-02 12:30:39.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:40.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:41.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:41 np0005465987 podman[273459]: 2025-10-02 12:30:41.838514312 +0000 UTC m=+0.057492414 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:30:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:42.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:43.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.389 2 DEBUG oslo_concurrency.lockutils [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Acquiring lock "d165ea69-9e02-466c-b154-0ce6f22e0de0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.389 2 DEBUG oslo_concurrency.lockutils [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.390 2 DEBUG oslo_concurrency.lockutils [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Acquiring lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.390 2 DEBUG oslo_concurrency.lockutils [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.390 2 DEBUG oslo_concurrency.lockutils [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.391 2 INFO nova.compute.manager [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Terminating instance#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.392 2 DEBUG nova.compute.manager [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:30:43 np0005465987 kernel: tap671f489a-86 (unregistering): left promiscuous mode
Oct  2 08:30:43 np0005465987 NetworkManager[44910]: <info>  [1759408243.4306] device (tap671f489a-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:30:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:30:43Z|00417|binding|INFO|Releasing lport 671f489a-86a6-4bf4-8876-8d1c36d54095 from this chassis (sb_readonly=0)
Oct  2 08:30:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:30:43Z|00418|binding|INFO|Setting lport 671f489a-86a6-4bf4-8876-8d1c36d54095 down in Southbound
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:30:43Z|00419|binding|INFO|Removing iface tap671f489a-86 ovn-installed in OVS
Oct  2 08:30:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:43.453 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:d9:af 10.100.0.6'], port_security=['fa:16:3e:b1:d9:af 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd165ea69-9e02-466c-b154-0ce6f22e0de0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf0eb31a-0898-42f3-9273-cdabe175e99a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba4a52d51214482ab1572bac703ce258', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19dd29af-28c2-4be2-86ac-87aeb7db6b46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=98db9ff4-6650-4ac2-aa53-c6a046f703f2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=671f489a-86a6-4bf4-8876-8d1c36d54095) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:30:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:43.456 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 671f489a-86a6-4bf4-8876-8d1c36d54095 in datapath bf0eb31a-0898-42f3-9273-cdabe175e99a unbound from our chassis#033[00m
Oct  2 08:30:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:43.458 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf0eb31a-0898-42f3-9273-cdabe175e99a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:43.461 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[34ef7aca-5fed-4ce9-a165-9eb2861e7a7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:43.462 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a namespace which is not needed anymore#033[00m
Oct  2 08:30:43 np0005465987 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000074.scope: Deactivated successfully.
Oct  2 08:30:43 np0005465987 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000074.scope: Consumed 5.333s CPU time.
Oct  2 08:30:43 np0005465987 systemd-machined[188335]: Machine qemu-50-instance-00000074 terminated.
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.634 2 INFO nova.virt.libvirt.driver [-] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Instance destroyed successfully.#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.634 2 DEBUG nova.objects.instance [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lazy-loading 'resources' on Instance uuid d165ea69-9e02-466c-b154-0ce6f22e0de0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.654 2 DEBUG nova.virt.libvirt.vif [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1172209500',display_name='tempest-ServerAddressesTestJSON-server-1172209500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1172209500',id=116,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:39Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba4a52d51214482ab1572bac703ce258',ramdisk_id='',reservation_id='r-69ogaut1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-201681457',owner_user_name='tempest-ServerAddressesTestJSON-201681457-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:39Z,user_data=None,user_id='88b7b954ac234f7299c3b2da84d2ca46',uuid=d165ea69-9e02-466c-b154-0ce6f22e0de0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "671f489a-86a6-4bf4-8876-8d1c36d54095", "address": "fa:16:3e:b1:d9:af", "network": {"id": "bf0eb31a-0898-42f3-9273-cdabe175e99a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1597809081-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba4a52d51214482ab1572bac703ce258", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap671f489a-86", "ovs_interfaceid": "671f489a-86a6-4bf4-8876-8d1c36d54095", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.656 2 DEBUG nova.network.os_vif_util [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Converting VIF {"id": "671f489a-86a6-4bf4-8876-8d1c36d54095", "address": "fa:16:3e:b1:d9:af", "network": {"id": "bf0eb31a-0898-42f3-9273-cdabe175e99a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1597809081-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba4a52d51214482ab1572bac703ce258", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap671f489a-86", "ovs_interfaceid": "671f489a-86a6-4bf4-8876-8d1c36d54095", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.657 2 DEBUG nova.network.os_vif_util [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:d9:af,bridge_name='br-int',has_traffic_filtering=True,id=671f489a-86a6-4bf4-8876-8d1c36d54095,network=Network(bf0eb31a-0898-42f3-9273-cdabe175e99a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap671f489a-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.657 2 DEBUG os_vif [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:d9:af,bridge_name='br-int',has_traffic_filtering=True,id=671f489a-86a6-4bf4-8876-8d1c36d54095,network=Network(bf0eb31a-0898-42f3-9273-cdabe175e99a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap671f489a-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.660 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671f489a-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:30:43 np0005465987 neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a[273443]: [NOTICE]   (273447) : haproxy version is 2.8.14-c23fe91
Oct  2 08:30:43 np0005465987 neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a[273443]: [NOTICE]   (273447) : path to executable is /usr/sbin/haproxy
Oct  2 08:30:43 np0005465987 neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a[273443]: [WARNING]  (273447) : Exiting Master process...
Oct  2 08:30:43 np0005465987 nova_compute[230713]: 2025-10-02 12:30:43.667 2 INFO os_vif [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:d9:af,bridge_name='br-int',has_traffic_filtering=True,id=671f489a-86a6-4bf4-8876-8d1c36d54095,network=Network(bf0eb31a-0898-42f3-9273-cdabe175e99a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap671f489a-86')#033[00m
Oct  2 08:30:43 np0005465987 neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a[273443]: [ALERT]    (273447) : Current worker (273449) exited with code 143 (Terminated)
Oct  2 08:30:43 np0005465987 neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a[273443]: [WARNING]  (273447) : All workers exited. Exiting... (0)
Oct  2 08:30:43 np0005465987 systemd[1]: libpod-bd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72.scope: Deactivated successfully.
Oct  2 08:30:43 np0005465987 podman[273502]: 2025-10-02 12:30:43.679472041 +0000 UTC m=+0.129793115 container died bd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:30:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e311 e311: 3 total, 3 up, 3 in
Oct  2 08:30:43 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72-userdata-shm.mount: Deactivated successfully.
Oct  2 08:30:43 np0005465987 systemd[1]: var-lib-containers-storage-overlay-b3bc1d4305264f13486fee29d6609f71fde0850957138f07227b87a72e73da44-merged.mount: Deactivated successfully.
Oct  2 08:30:43 np0005465987 podman[273502]: 2025-10-02 12:30:43.933860575 +0000 UTC m=+0.384181649 container cleanup bd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:30:44 np0005465987 podman[273559]: 2025-10-02 12:30:44.117244915 +0000 UTC m=+0.161844908 container remove bd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:30:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:44.123 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3d45f695-7557-4d39-b68a-9dace97c94de]: (4, ('Thu Oct  2 12:30:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a (bd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72)\nbd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72\nThu Oct  2 12:30:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a (bd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72)\nbd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:44.125 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8ecab7d0-51aa-4906-a222-5a3e1851d5e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:44.126 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf0eb31a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:44 np0005465987 nova_compute[230713]: 2025-10-02 12:30:44.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:44 np0005465987 kernel: tapbf0eb31a-00: left promiscuous mode
Oct  2 08:30:44 np0005465987 systemd[1]: libpod-conmon-bd762189a7c81b6f1aa65fdc3d1703ef1e43df352cd86c5190d3d63e93105a72.scope: Deactivated successfully.
Oct  2 08:30:44 np0005465987 nova_compute[230713]: 2025-10-02 12:30:44.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:44.145 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c77a6b62-0eb9-46ec-b7aa-b14d38863906]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:44.168 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6630ed2b-785d-4829-8f2e-23df5404423b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:44.169 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf5d6cd-47c7-4d10-802f-ecb3e49fdc79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:44.183 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b9a1f8-8c58-48b9-a72d-4f17e62a4c56]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630574, 'reachable_time': 33056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273575, 'error': None, 'target': 'ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:44.185 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf0eb31a-0898-42f3-9273-cdabe175e99a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:30:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:30:44.185 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd9ae79-23f9-4a56-a020-1d6eac830cde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:44 np0005465987 systemd[1]: run-netns-ovnmeta\x2dbf0eb31a\x2d0898\x2d42f3\x2d9273\x2dcdabe175e99a.mount: Deactivated successfully.
Oct  2 08:30:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:44 np0005465987 nova_compute[230713]: 2025-10-02 12:30:44.524 2 INFO nova.virt.libvirt.driver [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Deleting instance files /var/lib/nova/instances/d165ea69-9e02-466c-b154-0ce6f22e0de0_del#033[00m
Oct  2 08:30:44 np0005465987 nova_compute[230713]: 2025-10-02 12:30:44.525 2 INFO nova.virt.libvirt.driver [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Deletion of /var/lib/nova/instances/d165ea69-9e02-466c-b154-0ce6f22e0de0_del complete#033[00m
Oct  2 08:30:44 np0005465987 nova_compute[230713]: 2025-10-02 12:30:44.608 2 INFO nova.compute.manager [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Took 1.22 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:30:44 np0005465987 nova_compute[230713]: 2025-10-02 12:30:44.608 2 DEBUG oslo.service.loopingcall [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:30:44 np0005465987 nova_compute[230713]: 2025-10-02 12:30:44.609 2 DEBUG nova.compute.manager [-] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:30:44 np0005465987 nova_compute[230713]: 2025-10-02 12:30:44.609 2 DEBUG nova.network.neutron [-] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:30:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:44.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:44 np0005465987 nova_compute[230713]: 2025-10-02 12:30:44.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:45.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:45 np0005465987 nova_compute[230713]: 2025-10-02 12:30:45.320 2 DEBUG nova.compute.manager [req-34fb6104-24df-4290-ac23-8adffa99b77e req-9ca710e2-0501-4ce4-8fb4-9aea0cdc5975 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Received event network-vif-unplugged-671f489a-86a6-4bf4-8876-8d1c36d54095 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:45 np0005465987 nova_compute[230713]: 2025-10-02 12:30:45.321 2 DEBUG oslo_concurrency.lockutils [req-34fb6104-24df-4290-ac23-8adffa99b77e req-9ca710e2-0501-4ce4-8fb4-9aea0cdc5975 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:45 np0005465987 nova_compute[230713]: 2025-10-02 12:30:45.322 2 DEBUG oslo_concurrency.lockutils [req-34fb6104-24df-4290-ac23-8adffa99b77e req-9ca710e2-0501-4ce4-8fb4-9aea0cdc5975 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:45 np0005465987 nova_compute[230713]: 2025-10-02 12:30:45.322 2 DEBUG oslo_concurrency.lockutils [req-34fb6104-24df-4290-ac23-8adffa99b77e req-9ca710e2-0501-4ce4-8fb4-9aea0cdc5975 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:45 np0005465987 nova_compute[230713]: 2025-10-02 12:30:45.323 2 DEBUG nova.compute.manager [req-34fb6104-24df-4290-ac23-8adffa99b77e req-9ca710e2-0501-4ce4-8fb4-9aea0cdc5975 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] No waiting events found dispatching network-vif-unplugged-671f489a-86a6-4bf4-8876-8d1c36d54095 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:45 np0005465987 nova_compute[230713]: 2025-10-02 12:30:45.323 2 DEBUG nova.compute.manager [req-34fb6104-24df-4290-ac23-8adffa99b77e req-9ca710e2-0501-4ce4-8fb4-9aea0cdc5975 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Received event network-vif-unplugged-671f489a-86a6-4bf4-8876-8d1c36d54095 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:30:46 np0005465987 nova_compute[230713]: 2025-10-02 12:30:46.106 2 DEBUG nova.network.neutron [-] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:46 np0005465987 nova_compute[230713]: 2025-10-02 12:30:46.129 2 INFO nova.compute.manager [-] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Took 1.52 seconds to deallocate network for instance.#033[00m
Oct  2 08:30:46 np0005465987 nova_compute[230713]: 2025-10-02 12:30:46.208 2 DEBUG nova.compute.manager [req-0215831a-8fae-4ec1-86b8-6f69f5376630 req-bbc3ab3a-cd50-42c6-8c6c-bb92f99ab651 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Received event network-vif-deleted-671f489a-86a6-4bf4-8876-8d1c36d54095 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:46 np0005465987 nova_compute[230713]: 2025-10-02 12:30:46.209 2 DEBUG oslo_concurrency.lockutils [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:46 np0005465987 nova_compute[230713]: 2025-10-02 12:30:46.210 2 DEBUG oslo_concurrency.lockutils [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:46 np0005465987 nova_compute[230713]: 2025-10-02 12:30:46.338 2 DEBUG oslo_concurrency.processutils [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:46.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/390636352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:46 np0005465987 nova_compute[230713]: 2025-10-02 12:30:46.756 2 DEBUG oslo_concurrency.processutils [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:46 np0005465987 nova_compute[230713]: 2025-10-02 12:30:46.765 2 DEBUG nova.compute.provider_tree [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:46 np0005465987 nova_compute[230713]: 2025-10-02 12:30:46.801 2 DEBUG nova.scheduler.client.report [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:46 np0005465987 nova_compute[230713]: 2025-10-02 12:30:46.830 2 DEBUG oslo_concurrency.lockutils [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:46 np0005465987 nova_compute[230713]: 2025-10-02 12:30:46.876 2 INFO nova.scheduler.client.report [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Deleted allocations for instance d165ea69-9e02-466c-b154-0ce6f22e0de0#033[00m
Oct  2 08:30:46 np0005465987 nova_compute[230713]: 2025-10-02 12:30:46.996 2 DEBUG oslo_concurrency.lockutils [None req-145a9cc9-cb95-42db-beed-ea69c5191626 88b7b954ac234f7299c3b2da84d2ca46 ba4a52d51214482ab1572bac703ce258 - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:47.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:47 np0005465987 nova_compute[230713]: 2025-10-02 12:30:47.512 2 DEBUG nova.compute.manager [req-df1ebcb5-6082-4457-86e4-6f8a58551113 req-e85e6497-3fe4-4b0d-b28c-07967c4c4ec6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Received event network-vif-plugged-671f489a-86a6-4bf4-8876-8d1c36d54095 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:47 np0005465987 nova_compute[230713]: 2025-10-02 12:30:47.513 2 DEBUG oslo_concurrency.lockutils [req-df1ebcb5-6082-4457-86e4-6f8a58551113 req-e85e6497-3fe4-4b0d-b28c-07967c4c4ec6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:47 np0005465987 nova_compute[230713]: 2025-10-02 12:30:47.513 2 DEBUG oslo_concurrency.lockutils [req-df1ebcb5-6082-4457-86e4-6f8a58551113 req-e85e6497-3fe4-4b0d-b28c-07967c4c4ec6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:47 np0005465987 nova_compute[230713]: 2025-10-02 12:30:47.513 2 DEBUG oslo_concurrency.lockutils [req-df1ebcb5-6082-4457-86e4-6f8a58551113 req-e85e6497-3fe4-4b0d-b28c-07967c4c4ec6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d165ea69-9e02-466c-b154-0ce6f22e0de0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:47 np0005465987 nova_compute[230713]: 2025-10-02 12:30:47.513 2 DEBUG nova.compute.manager [req-df1ebcb5-6082-4457-86e4-6f8a58551113 req-e85e6497-3fe4-4b0d-b28c-07967c4c4ec6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] No waiting events found dispatching network-vif-plugged-671f489a-86a6-4bf4-8876-8d1c36d54095 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:47 np0005465987 nova_compute[230713]: 2025-10-02 12:30:47.514 2 WARNING nova.compute.manager [req-df1ebcb5-6082-4457-86e4-6f8a58551113 req-e85e6497-3fe4-4b0d-b28c-07967c4c4ec6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Received unexpected event network-vif-plugged-671f489a-86a6-4bf4-8876-8d1c36d54095 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:30:47 np0005465987 podman[273601]: 2025-10-02 12:30:47.841446308 +0000 UTC m=+0.053424142 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:30:47 np0005465987 podman[273600]: 2025-10-02 12:30:47.845282244 +0000 UTC m=+0.060970120 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:30:47 np0005465987 podman[273599]: 2025-10-02 12:30:47.857457619 +0000 UTC m=+0.076511007 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:30:48 np0005465987 nova_compute[230713]: 2025-10-02 12:30:48.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:48.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:48 np0005465987 nova_compute[230713]: 2025-10-02 12:30:48.741 2 DEBUG nova.compute.manager [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Oct  2 08:30:48 np0005465987 nova_compute[230713]: 2025-10-02 12:30:48.833 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:48 np0005465987 nova_compute[230713]: 2025-10-02 12:30:48.833 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:48 np0005465987 nova_compute[230713]: 2025-10-02 12:30:48.862 2 DEBUG nova.objects.instance [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 6d9cbb75-985a-47af-9a91-f4a885d1b59a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:48 np0005465987 nova_compute[230713]: 2025-10-02 12:30:48.877 2 DEBUG nova.virt.hardware [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:30:48 np0005465987 nova_compute[230713]: 2025-10-02 12:30:48.878 2 INFO nova.compute.claims [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:30:48 np0005465987 nova_compute[230713]: 2025-10-02 12:30:48.878 2 DEBUG nova.objects.instance [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'resources' on Instance uuid 6d9cbb75-985a-47af-9a91-f4a885d1b59a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:48 np0005465987 nova_compute[230713]: 2025-10-02 12:30:48.896 2 DEBUG nova.objects.instance [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d9cbb75-985a-47af-9a91-f4a885d1b59a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:48 np0005465987 nova_compute[230713]: 2025-10-02 12:30:48.996 2 INFO nova.compute.resource_tracker [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updating resource usage from migration 649cb3df-1832-4eb9-bab2-7623fb261c1e#033[00m
Oct  2 08:30:48 np0005465987 nova_compute[230713]: 2025-10-02 12:30:48.996 2 DEBUG nova.compute.resource_tracker [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Starting to track incoming migration 649cb3df-1832-4eb9-bab2-7623fb261c1e with flavor eb3a53f1-304b-4cb0-acc3-abffce0fb181 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct  2 08:30:49 np0005465987 nova_compute[230713]: 2025-10-02 12:30:49.094 2 DEBUG oslo_concurrency.processutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:49.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1681692066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:49 np0005465987 nova_compute[230713]: 2025-10-02 12:30:49.582 2 DEBUG oslo_concurrency.processutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:49 np0005465987 nova_compute[230713]: 2025-10-02 12:30:49.588 2 DEBUG nova.compute.provider_tree [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:49 np0005465987 nova_compute[230713]: 2025-10-02 12:30:49.622 2 DEBUG nova.scheduler.client.report [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:49 np0005465987 nova_compute[230713]: 2025-10-02 12:30:49.654 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:49 np0005465987 nova_compute[230713]: 2025-10-02 12:30:49.654 2 INFO nova.compute.manager [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Migrating#033[00m
Oct  2 08:30:49 np0005465987 nova_compute[230713]: 2025-10-02 12:30:49.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.070691) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408250070746, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 755, "num_deletes": 253, "total_data_size": 1259619, "memory_usage": 1275920, "flush_reason": "Manual Compaction"}
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408250077933, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 818839, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49084, "largest_seqno": 49834, "table_properties": {"data_size": 815163, "index_size": 1456, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8914, "raw_average_key_size": 20, "raw_value_size": 807593, "raw_average_value_size": 1823, "num_data_blocks": 63, "num_entries": 443, "num_filter_entries": 443, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408213, "oldest_key_time": 1759408213, "file_creation_time": 1759408250, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 7288 microseconds, and 3728 cpu microseconds.
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.077973) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 818839 bytes OK
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.077994) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.079568) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.079582) EVENT_LOG_v1 {"time_micros": 1759408250079577, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.079599) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 1255581, prev total WAL file size 1255581, number of live WAL files 2.
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.080109) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(799KB)], [93(13MB)]
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408250080133, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 14598242, "oldest_snapshot_seqno": -1}
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 7406 keys, 12708837 bytes, temperature: kUnknown
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408250147802, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 12708837, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12656655, "index_size": 32550, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 191446, "raw_average_key_size": 25, "raw_value_size": 12521678, "raw_average_value_size": 1690, "num_data_blocks": 1286, "num_entries": 7406, "num_filter_entries": 7406, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759408250, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.148046) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 12708837 bytes
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.149391) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.5 rd, 187.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 13.1 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(33.3) write-amplify(15.5) OK, records in: 7930, records dropped: 524 output_compression: NoCompression
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.149409) EVENT_LOG_v1 {"time_micros": 1759408250149401, "job": 58, "event": "compaction_finished", "compaction_time_micros": 67747, "compaction_time_cpu_micros": 26140, "output_level": 6, "num_output_files": 1, "total_output_size": 12708837, "num_input_records": 7930, "num_output_records": 7406, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408250149686, "job": 58, "event": "table_file_deletion", "file_number": 95}
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408250151951, "job": 58, "event": "table_file_deletion", "file_number": 93}
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.080026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.151986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.151990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.151992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.151993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:50 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:30:50.151995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:50.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:51 np0005465987 nova_compute[230713]: 2025-10-02 12:30:51.109 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:30:51 np0005465987 nova_compute[230713]: 2025-10-02 12:30:51.110 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:30:51 np0005465987 nova_compute[230713]: 2025-10-02 12:30:51.144 2 DEBUG nova.compute.manager [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:30:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:51.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:51 np0005465987 nova_compute[230713]: 2025-10-02 12:30:51.234 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:30:51 np0005465987 nova_compute[230713]: 2025-10-02 12:30:51.234 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:30:51 np0005465987 nova_compute[230713]: 2025-10-02 12:30:51.241 2 DEBUG nova.virt.hardware [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:30:51 np0005465987 nova_compute[230713]: 2025-10-02 12:30:51.241 2 INFO nova.compute.claims [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:30:51 np0005465987 nova_compute[230713]: 2025-10-02 12:30:51.486 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:30:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3301702475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:51 np0005465987 nova_compute[230713]: 2025-10-02 12:30:51.926 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:30:51 np0005465987 nova_compute[230713]: 2025-10-02 12:30:51.932 2 DEBUG nova.compute.provider_tree [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:30:51 np0005465987 nova_compute[230713]: 2025-10-02 12:30:51.969 2 DEBUG nova.scheduler.client.report [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:30:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:30:51Z|00420|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.102 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.103 2 DEBUG nova.compute.manager [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.169 2 DEBUG nova.compute.manager [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.170 2 DEBUG nova.network.neutron [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.200 2 INFO nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.224 2 DEBUG nova.compute.manager [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:30:52 np0005465987 systemd-logind[794]: New session 59 of user nova.
Oct  2 08:30:52 np0005465987 systemd[1]: Created slice User Slice of UID 42436.
Oct  2 08:30:52 np0005465987 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct  2 08:30:52 np0005465987 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct  2 08:30:52 np0005465987 systemd[1]: Starting User Manager for UID 42436...
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.356 2 DEBUG nova.compute.manager [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.359 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.360 2 INFO nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Creating image(s)
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.383 2 DEBUG nova.storage.rbd_utils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 62d1cea8-2591-4b86-b69f-aee1eeafa901_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.409 2 DEBUG nova.storage.rbd_utils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 62d1cea8-2591-4b86-b69f-aee1eeafa901_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.437 2 DEBUG nova.storage.rbd_utils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 62d1cea8-2591-4b86-b69f-aee1eeafa901_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.441 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:30:52 np0005465987 systemd[273712]: Queued start job for default target Main User Target.
Oct  2 08:30:52 np0005465987 systemd[273712]: Created slice User Application Slice.
Oct  2 08:30:52 np0005465987 systemd[273712]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:30:52 np0005465987 systemd[273712]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 08:30:52 np0005465987 systemd[273712]: Reached target Paths.
Oct  2 08:30:52 np0005465987 systemd[273712]: Reached target Timers.
Oct  2 08:30:52 np0005465987 systemd[273712]: Starting D-Bus User Message Bus Socket...
Oct  2 08:30:52 np0005465987 systemd[273712]: Starting Create User's Volatile Files and Directories...
Oct  2 08:30:52 np0005465987 systemd[273712]: Listening on D-Bus User Message Bus Socket.
Oct  2 08:30:52 np0005465987 systemd[273712]: Reached target Sockets.
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.511 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.512 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.513 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.514 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:30:52 np0005465987 systemd[273712]: Finished Create User's Volatile Files and Directories.
Oct  2 08:30:52 np0005465987 systemd[273712]: Reached target Basic System.
Oct  2 08:30:52 np0005465987 systemd[273712]: Reached target Main User Target.
Oct  2 08:30:52 np0005465987 systemd[273712]: Startup finished in 143ms.
Oct  2 08:30:52 np0005465987 systemd[1]: Started User Manager for UID 42436.
Oct  2 08:30:52 np0005465987 systemd[1]: Started Session 59 of User nova.
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.548 2 DEBUG nova.storage.rbd_utils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 62d1cea8-2591-4b86-b69f-aee1eeafa901_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.553 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 62d1cea8-2591-4b86-b69f-aee1eeafa901_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:30:52 np0005465987 nova_compute[230713]: 2025-10-02 12:30:52.586 2 DEBUG nova.policy [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2bd16d1f5f9d4eb396c474eedee67165', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b8ca48cb5f64ef3b0736b8be82378b8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:30:52 np0005465987 systemd[1]: session-59.scope: Deactivated successfully.
Oct  2 08:30:52 np0005465987 systemd-logind[794]: Session 59 logged out. Waiting for processes to exit.
Oct  2 08:30:52 np0005465987 systemd-logind[794]: Removed session 59.
Oct  2 08:30:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:30:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:52.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:30:52 np0005465987 systemd-logind[794]: New session 61 of user nova.
Oct  2 08:30:52 np0005465987 systemd[1]: Started Session 61 of User nova.
Oct  2 08:30:52 np0005465987 systemd[1]: session-61.scope: Deactivated successfully.
Oct  2 08:30:52 np0005465987 systemd-logind[794]: Session 61 logged out. Waiting for processes to exit.
Oct  2 08:30:52 np0005465987 systemd-logind[794]: Removed session 61.
Oct  2 08:30:53 np0005465987 nova_compute[230713]: 2025-10-02 12:30:53.100 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 62d1cea8-2591-4b86-b69f-aee1eeafa901_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:30:53 np0005465987 nova_compute[230713]: 2025-10-02 12:30:53.203 2 DEBUG nova.storage.rbd_utils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] resizing rbd image 62d1cea8-2591-4b86-b69f-aee1eeafa901_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:30:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:53.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:53 np0005465987 nova_compute[230713]: 2025-10-02 12:30:53.308 2 DEBUG nova.objects.instance [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'migration_context' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:30:53 np0005465987 nova_compute[230713]: 2025-10-02 12:30:53.331 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:30:53 np0005465987 nova_compute[230713]: 2025-10-02 12:30:53.331 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Ensure instance console log exists: /var/lib/nova/instances/62d1cea8-2591-4b86-b69f-aee1eeafa901/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:30:53 np0005465987 nova_compute[230713]: 2025-10-02 12:30:53.332 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:30:53 np0005465987 nova_compute[230713]: 2025-10-02 12:30:53.332 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:30:53 np0005465987 nova_compute[230713]: 2025-10-02 12:30:53.332 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:30:53 np0005465987 nova_compute[230713]: 2025-10-02 12:30:53.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:30:53 np0005465987 nova_compute[230713]: 2025-10-02 12:30:53.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:30:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:54 np0005465987 nova_compute[230713]: 2025-10-02 12:30:54.388 2 DEBUG nova.network.neutron [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Successfully created port: ef13c059-8f9b-4954-a8fd-1da56abe5729 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:30:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:54.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:54 np0005465987 nova_compute[230713]: 2025-10-02 12:30:54.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:30:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3526883118' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:30:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:30:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3526883118' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:30:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:55.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:55 np0005465987 nova_compute[230713]: 2025-10-02 12:30:55.498 2 DEBUG nova.network.neutron [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Successfully updated port: ef13c059-8f9b-4954-a8fd-1da56abe5729 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:30:55 np0005465987 nova_compute[230713]: 2025-10-02 12:30:55.526 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:55 np0005465987 nova_compute[230713]: 2025-10-02 12:30:55.526 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquired lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:55 np0005465987 nova_compute[230713]: 2025-10-02 12:30:55.526 2 DEBUG nova.network.neutron [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:30:55 np0005465987 nova_compute[230713]: 2025-10-02 12:30:55.651 2 DEBUG nova.compute.manager [req-fc405d02-a4c7-4ab1-9084-06a2874976e2 req-cb6004da-6b64-4f4d-9bdb-226b9a15d895 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-changed-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:55 np0005465987 nova_compute[230713]: 2025-10-02 12:30:55.651 2 DEBUG nova.compute.manager [req-fc405d02-a4c7-4ab1-9084-06a2874976e2 req-cb6004da-6b64-4f4d-9bdb-226b9a15d895 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Refreshing instance network info cache due to event network-changed-ef13c059-8f9b-4954-a8fd-1da56abe5729. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:30:55 np0005465987 nova_compute[230713]: 2025-10-02 12:30:55.652 2 DEBUG oslo_concurrency.lockutils [req-fc405d02-a4c7-4ab1-9084-06a2874976e2 req-cb6004da-6b64-4f4d-9bdb-226b9a15d895 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:55 np0005465987 nova_compute[230713]: 2025-10-02 12:30:55.785 2 DEBUG nova.network.neutron [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:30:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:56.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:56 np0005465987 nova_compute[230713]: 2025-10-02 12:30:56.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:56 np0005465987 nova_compute[230713]: 2025-10-02 12:30:56.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:30:56 np0005465987 nova_compute[230713]: 2025-10-02 12:30:56.998 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:30:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:57.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e312 e312: 3 total, 3 up, 3 in
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.703 2 DEBUG nova.network.neutron [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Updating instance_info_cache with network_info: [{"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.724 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Releasing lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.725 2 DEBUG nova.compute.manager [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Instance network_info: |[{"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.725 2 DEBUG oslo_concurrency.lockutils [req-fc405d02-a4c7-4ab1-9084-06a2874976e2 req-cb6004da-6b64-4f4d-9bdb-226b9a15d895 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.725 2 DEBUG nova.network.neutron [req-fc405d02-a4c7-4ab1-9084-06a2874976e2 req-cb6004da-6b64-4f4d-9bdb-226b9a15d895 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Refreshing network info cache for port ef13c059-8f9b-4954-a8fd-1da56abe5729 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.729 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Start _get_guest_xml network_info=[{"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.733 2 WARNING nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.739 2 DEBUG nova.virt.libvirt.host [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.740 2 DEBUG nova.virt.libvirt.host [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.744 2 DEBUG nova.virt.libvirt.host [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.744 2 DEBUG nova.virt.libvirt.host [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.745 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.746 2 DEBUG nova.virt.hardware [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.746 2 DEBUG nova.virt.hardware [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.746 2 DEBUG nova.virt.hardware [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.747 2 DEBUG nova.virt.hardware [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.747 2 DEBUG nova.virt.hardware [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.747 2 DEBUG nova.virt.hardware [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.747 2 DEBUG nova.virt.hardware [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.747 2 DEBUG nova.virt.hardware [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.748 2 DEBUG nova.virt.hardware [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.748 2 DEBUG nova.virt.hardware [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.748 2 DEBUG nova.virt.hardware [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:30:57 np0005465987 nova_compute[230713]: 2025-10-02 12:30:57.750 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/190023607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.219 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.244 2 DEBUG nova.storage.rbd_utils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 62d1cea8-2591-4b86-b69f-aee1eeafa901_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.248 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.631 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408243.6302063, d165ea69-9e02-466c-b154-0ce6f22e0de0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.632 2 INFO nova.compute.manager [-] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.650 2 DEBUG nova.compute.manager [None req-c862336f-8896-4d7e-8a9e-312ad288a0cf - - - - - -] [instance: d165ea69-9e02-466c-b154-0ce6f22e0de0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2771577985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:58.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.693 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.695 2 DEBUG nova.virt.libvirt.vif [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-443770275',display_name='tempest-ServerActionsTestJSON-server-443770275',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-443770275',id=118,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGPR3K1CRfy8NuVFf5q7pocTRVWdkUGpXAwygQtld+YJHetWc29OTANCyHNFo7YQv+XOnhZt50IrR5VORh36EWFuQsqglHMqAdAjWfcmv1078iyeLOwVFkKMjcTINNdhYA==',key_name='tempest-keypair-579122016',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b8ca48cb5f64ef3b0736b8be82378b8',ramdisk_id='',reservation_id='r-1z1vi3sk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-842270816',owner_user_name='tempest-ServerActionsTestJSON-842270816-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2bd16d1f5f9d4eb396c474eedee67165',uuid=62d1cea8-2591-4b86-b69f-aee1eeafa901,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.695 2 DEBUG nova.network.os_vif_util [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converting VIF {"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.696 2 DEBUG nova.network.os_vif_util [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.697 2 DEBUG nova.objects.instance [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.713 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  <uuid>62d1cea8-2591-4b86-b69f-aee1eeafa901</uuid>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  <name>instance-00000076</name>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerActionsTestJSON-server-443770275</nova:name>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:30:57</nova:creationTime>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <nova:user uuid="2bd16d1f5f9d4eb396c474eedee67165">tempest-ServerActionsTestJSON-842270816-project-member</nova:user>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <nova:project uuid="4b8ca48cb5f64ef3b0736b8be82378b8">tempest-ServerActionsTestJSON-842270816</nova:project>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <nova:port uuid="ef13c059-8f9b-4954-a8fd-1da56abe5729">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <entry name="serial">62d1cea8-2591-4b86-b69f-aee1eeafa901</entry>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <entry name="uuid">62d1cea8-2591-4b86-b69f-aee1eeafa901</entry>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/62d1cea8-2591-4b86-b69f-aee1eeafa901_disk">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/62d1cea8-2591-4b86-b69f-aee1eeafa901_disk.config">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:fb:57:ce"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <target dev="tapef13c059-8f"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/62d1cea8-2591-4b86-b69f-aee1eeafa901/console.log" append="off"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:30:58 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:30:58 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:30:58 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:30:58 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.714 2 DEBUG nova.compute.manager [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Preparing to wait for external event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.714 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.715 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.715 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.715 2 DEBUG nova.virt.libvirt.vif [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-443770275',display_name='tempest-ServerActionsTestJSON-server-443770275',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-443770275',id=118,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGPR3K1CRfy8NuVFf5q7pocTRVWdkUGpXAwygQtld+YJHetWc29OTANCyHNFo7YQv+XOnhZt50IrR5VORh36EWFuQsqglHMqAdAjWfcmv1078iyeLOwVFkKMjcTINNdhYA==',key_name='tempest-keypair-579122016',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b8ca48cb5f64ef3b0736b8be82378b8',ramdisk_id='',reservation_id='r-1z1vi3sk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-842270816',owner_user_name='tempest-ServerActionsTestJSON-842270816-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2bd16d1f5f9d4eb396c474eedee67165',uuid=62d1cea8-2591-4b86-b69f-aee1eeafa901,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.716 2 DEBUG nova.network.os_vif_util [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converting VIF {"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.716 2 DEBUG nova.network.os_vif_util [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.717 2 DEBUG os_vif [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.717 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.718 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.722 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef13c059-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.722 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef13c059-8f, col_values=(('external_ids', {'iface-id': 'ef13c059-8f9b-4954-a8fd-1da56abe5729', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:57:ce', 'vm-uuid': '62d1cea8-2591-4b86-b69f-aee1eeafa901'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:58 np0005465987 NetworkManager[44910]: <info>  [1759408258.7249] manager: (tapef13c059-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.732 2 INFO os_vif [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f')#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.854 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.856 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.856 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] No VIF found with MAC fa:16:3e:fb:57:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.857 2 INFO nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Using config drive#033[00m
Oct  2 08:30:58 np0005465987 nova_compute[230713]: 2025-10-02 12:30:58.987 2 DEBUG nova.storage.rbd_utils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 62d1cea8-2591-4b86-b69f-aee1eeafa901_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:59 np0005465987 nova_compute[230713]: 2025-10-02 12:30:59.008 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:30:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:30:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:59.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:30:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:59 np0005465987 nova_compute[230713]: 2025-10-02 12:30:59.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:00 np0005465987 nova_compute[230713]: 2025-10-02 12:31:00.336 2 INFO nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Creating config drive at /var/lib/nova/instances/62d1cea8-2591-4b86-b69f-aee1eeafa901/disk.config#033[00m
Oct  2 08:31:00 np0005465987 nova_compute[230713]: 2025-10-02 12:31:00.343 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62d1cea8-2591-4b86-b69f-aee1eeafa901/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpprrzxaa1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:00 np0005465987 nova_compute[230713]: 2025-10-02 12:31:00.480 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62d1cea8-2591-4b86-b69f-aee1eeafa901/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpprrzxaa1" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:00 np0005465987 nova_compute[230713]: 2025-10-02 12:31:00.524 2 DEBUG nova.storage.rbd_utils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] rbd image 62d1cea8-2591-4b86-b69f-aee1eeafa901_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:31:00 np0005465987 nova_compute[230713]: 2025-10-02 12:31:00.529 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62d1cea8-2591-4b86-b69f-aee1eeafa901/disk.config 62d1cea8-2591-4b86-b69f-aee1eeafa901_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:31:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:00.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:31:00 np0005465987 nova_compute[230713]: 2025-10-02 12:31:00.778 2 DEBUG oslo_concurrency.processutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62d1cea8-2591-4b86-b69f-aee1eeafa901/disk.config 62d1cea8-2591-4b86-b69f-aee1eeafa901_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:00 np0005465987 nova_compute[230713]: 2025-10-02 12:31:00.778 2 INFO nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Deleting local config drive /var/lib/nova/instances/62d1cea8-2591-4b86-b69f-aee1eeafa901/disk.config because it was imported into RBD.#033[00m
Oct  2 08:31:00 np0005465987 kernel: tapef13c059-8f: entered promiscuous mode
Oct  2 08:31:00 np0005465987 NetworkManager[44910]: <info>  [1759408260.8423] manager: (tapef13c059-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Oct  2 08:31:00 np0005465987 nova_compute[230713]: 2025-10-02 12:31:00.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:00Z|00421|binding|INFO|Claiming lport ef13c059-8f9b-4954-a8fd-1da56abe5729 for this chassis.
Oct  2 08:31:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:00Z|00422|binding|INFO|ef13c059-8f9b-4954-a8fd-1da56abe5729: Claiming fa:16:3e:fb:57:ce 10.100.0.6
Oct  2 08:31:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:00.855 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:57:ce 10.100.0.6'], port_security=['fa:16:3e:fb:57:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '62d1cea8-2591-4b86-b69f-aee1eeafa901', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8ca48cb5f64ef3b0736b8be82378b8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16cf92c4-b852-4373-864a-75cf05995c6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23ff60d5-33e2-45e9-a563-ce0081b7cc04, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ef13c059-8f9b-4954-a8fd-1da56abe5729) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:31:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:00.857 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ef13c059-8f9b-4954-a8fd-1da56abe5729 in datapath b2c62a66-f9bc-4a45-a843-aef2e12a7fff bound to our chassis#033[00m
Oct  2 08:31:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:00.858 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b2c62a66-f9bc-4a45-a843-aef2e12a7fff#033[00m
Oct  2 08:31:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:00Z|00423|binding|INFO|Setting lport ef13c059-8f9b-4954-a8fd-1da56abe5729 ovn-installed in OVS
Oct  2 08:31:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:00Z|00424|binding|INFO|Setting lport ef13c059-8f9b-4954-a8fd-1da56abe5729 up in Southbound
Oct  2 08:31:00 np0005465987 nova_compute[230713]: 2025-10-02 12:31:00.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:00.872 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0997ee2f-144a-4176-a3be-8b6b75ce4ed5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:00.874 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb2c62a66-f1 in ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:31:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:00.876 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb2c62a66-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:31:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:00.876 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a2d46b-7670-46e0-b18a-5acb22ee9e2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:00.880 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bb539485-1e6f-4d01-bde1-52e00d188ed7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:00 np0005465987 systemd-udevd[274037]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:31:00 np0005465987 NetworkManager[44910]: <info>  [1759408260.9001] device (tapef13c059-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:31:00 np0005465987 NetworkManager[44910]: <info>  [1759408260.9011] device (tapef13c059-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:31:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:00.897 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[cc57b250-1dc5-4d4d-bcf4-e0912afdae5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:00 np0005465987 systemd-machined[188335]: New machine qemu-51-instance-00000076.
Oct  2 08:31:00 np0005465987 systemd[1]: Started Virtual Machine qemu-51-instance-00000076.
Oct  2 08:31:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:00.918 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[85600361-b11a-43f8-84c3-ab50aa33175f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:00.952 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[79bcd79a-0b96-4947-a850-143bedc4dc42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:00 np0005465987 NetworkManager[44910]: <info>  [1759408260.9618] manager: (tapb2c62a66-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/208)
Oct  2 08:31:00 np0005465987 systemd-udevd[274041]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:31:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:00.961 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[78ed321f-6dc9-4208-9ef2-4637a22079c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:00.999 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6cddac-7370-4dc2-af68-7360bd239f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.006 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[819a6959-bcf4-49fd-b7af-ed2db520601b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:01 np0005465987 NetworkManager[44910]: <info>  [1759408261.0337] device (tapb2c62a66-f0): carrier: link connected
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.043 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[d27c4da5-687b-4a57-bcb7-53ba24749473]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.060 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[82202215-20c5-4616-8508-a1f758497657]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2c62a66-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:07:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632960, 'reachable_time': 17434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274070, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.080 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8288048a-249e-4aca-a7ac-1143dfceaa69]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:7a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632960, 'tstamp': 632960}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274071, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.099 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[22ad51d0-d883-44b5-b2e6-62d8e71dac4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2c62a66-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:07:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632960, 'reachable_time': 17434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274072, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.130 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[eedf3d96-cbf0-4a27-8047-c8883122a24f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.200 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9827713d-95de-407c-aa8a-eb0d0f5e725e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.201 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2c62a66-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.202 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.202 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2c62a66-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:01 np0005465987 NetworkManager[44910]: <info>  [1759408261.2050] manager: (tapb2c62a66-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Oct  2 08:31:01 np0005465987 nova_compute[230713]: 2025-10-02 12:31:01.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:01 np0005465987 kernel: tapb2c62a66-f0: entered promiscuous mode
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.212 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb2c62a66-f0, col_values=(('external_ids', {'iface-id': '8c1234f6-f595-4979-94c0-98da2211f0e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:01 np0005465987 nova_compute[230713]: 2025-10-02 12:31:01.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:01 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:01Z|00425|binding|INFO|Releasing lport 8c1234f6-f595-4979-94c0-98da2211f0e1 from this chassis (sb_readonly=0)
Oct  2 08:31:01 np0005465987 nova_compute[230713]: 2025-10-02 12:31:01.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.230 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.231 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[28a43162-8517-4449-90b1-e28b91aaf08a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.232 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-b2c62a66-f9bc-4a45-a843-aef2e12a7fff
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID b2c62a66-f9bc-4a45-a843-aef2e12a7fff
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:31:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:01.232 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'env', 'PROCESS_TAG=haproxy-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:31:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:01.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:01 np0005465987 nova_compute[230713]: 2025-10-02 12:31:01.438 2 DEBUG nova.network.neutron [req-fc405d02-a4c7-4ab1-9084-06a2874976e2 req-cb6004da-6b64-4f4d-9bdb-226b9a15d895 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Updated VIF entry in instance network info cache for port ef13c059-8f9b-4954-a8fd-1da56abe5729. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:31:01 np0005465987 nova_compute[230713]: 2025-10-02 12:31:01.439 2 DEBUG nova.network.neutron [req-fc405d02-a4c7-4ab1-9084-06a2874976e2 req-cb6004da-6b64-4f4d-9bdb-226b9a15d895 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Updating instance_info_cache with network_info: [{"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:31:01 np0005465987 nova_compute[230713]: 2025-10-02 12:31:01.475 2 DEBUG oslo_concurrency.lockutils [req-fc405d02-a4c7-4ab1-9084-06a2874976e2 req-cb6004da-6b64-4f4d-9bdb-226b9a15d895 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:31:01 np0005465987 podman[274142]: 2025-10-02 12:31:01.618262174 +0000 UTC m=+0.022826659 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:31:01 np0005465987 nova_compute[230713]: 2025-10-02 12:31:01.960 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408261.9598749, 62d1cea8-2591-4b86-b69f-aee1eeafa901 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:01 np0005465987 nova_compute[230713]: 2025-10-02 12:31:01.961 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] VM Started (Lifecycle Event)#033[00m
Oct  2 08:31:01 np0005465987 nova_compute[230713]: 2025-10-02 12:31:01.998 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.003 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408261.960121, 62d1cea8-2591-4b86-b69f-aee1eeafa901 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.003 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.043 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.047 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.069 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:31:02 np0005465987 podman[274142]: 2025-10-02 12:31:02.252040085 +0000 UTC m=+0.656604550 container create d9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 08:31:02 np0005465987 systemd[1]: Started libpod-conmon-d9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b.scope.
Oct  2 08:31:02 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:31:02 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba0ba41f35621c5e20f5d558c73285c024adfa4b8e52941c8e9bc1a92e0502f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:31:02 np0005465987 podman[274142]: 2025-10-02 12:31:02.570310278 +0000 UTC m=+0.974874763 container init d9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:31:02 np0005465987 podman[274142]: 2025-10-02 12:31:02.575961804 +0000 UTC m=+0.980526269 container start d9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:31:02 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[274156]: [NOTICE]   (274160) : New worker (274162) forked
Oct  2 08:31:02 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[274156]: [NOTICE]   (274160) : Loading success.
Oct  2 08:31:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:31:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:02.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.711 2 DEBUG nova.compute.manager [req-846ac920-3acf-4958-844f-5ab188c70655 req-57009920-be49-4930-99e4-49d46a9766fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.712 2 DEBUG oslo_concurrency.lockutils [req-846ac920-3acf-4958-844f-5ab188c70655 req-57009920-be49-4930-99e4-49d46a9766fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.712 2 DEBUG oslo_concurrency.lockutils [req-846ac920-3acf-4958-844f-5ab188c70655 req-57009920-be49-4930-99e4-49d46a9766fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.712 2 DEBUG oslo_concurrency.lockutils [req-846ac920-3acf-4958-844f-5ab188c70655 req-57009920-be49-4930-99e4-49d46a9766fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.712 2 DEBUG nova.compute.manager [req-846ac920-3acf-4958-844f-5ab188c70655 req-57009920-be49-4930-99e4-49d46a9766fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Processing event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.712 2 DEBUG nova.compute.manager [req-846ac920-3acf-4958-844f-5ab188c70655 req-57009920-be49-4930-99e4-49d46a9766fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.712 2 DEBUG oslo_concurrency.lockutils [req-846ac920-3acf-4958-844f-5ab188c70655 req-57009920-be49-4930-99e4-49d46a9766fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.713 2 DEBUG oslo_concurrency.lockutils [req-846ac920-3acf-4958-844f-5ab188c70655 req-57009920-be49-4930-99e4-49d46a9766fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.713 2 DEBUG oslo_concurrency.lockutils [req-846ac920-3acf-4958-844f-5ab188c70655 req-57009920-be49-4930-99e4-49d46a9766fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.713 2 DEBUG nova.compute.manager [req-846ac920-3acf-4958-844f-5ab188c70655 req-57009920-be49-4930-99e4-49d46a9766fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] No waiting events found dispatching network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.713 2 WARNING nova.compute.manager [req-846ac920-3acf-4958-844f-5ab188c70655 req-57009920-be49-4930-99e4-49d46a9766fe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received unexpected event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.713 2 DEBUG nova.compute.manager [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.718 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.719 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408262.717957, 62d1cea8-2591-4b86-b69f-aee1eeafa901 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.719 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.722 2 INFO nova.virt.libvirt.driver [-] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Instance spawned successfully.#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.722 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.744 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.750 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.754 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.754 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.755 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.755 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.755 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.756 2 DEBUG nova.virt.libvirt.driver [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.792 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.898 2 INFO nova.compute.manager [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Took 10.54 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.899 2 DEBUG nova.compute.manager [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:02 np0005465987 nova_compute[230713]: 2025-10-02 12:31:02.992 2 INFO nova.compute.manager [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Took 11.79 seconds to build instance.#033[00m
Oct  2 08:31:03 np0005465987 systemd[1]: Stopping User Manager for UID 42436...
Oct  2 08:31:03 np0005465987 systemd[273712]: Activating special unit Exit the Session...
Oct  2 08:31:03 np0005465987 systemd[273712]: Stopped target Main User Target.
Oct  2 08:31:03 np0005465987 systemd[273712]: Stopped target Basic System.
Oct  2 08:31:03 np0005465987 systemd[273712]: Stopped target Paths.
Oct  2 08:31:03 np0005465987 systemd[273712]: Stopped target Sockets.
Oct  2 08:31:03 np0005465987 systemd[273712]: Stopped target Timers.
Oct  2 08:31:03 np0005465987 systemd[273712]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:31:03 np0005465987 systemd[273712]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 08:31:03 np0005465987 systemd[273712]: Closed D-Bus User Message Bus Socket.
Oct  2 08:31:03 np0005465987 systemd[273712]: Stopped Create User's Volatile Files and Directories.
Oct  2 08:31:03 np0005465987 systemd[273712]: Removed slice User Application Slice.
Oct  2 08:31:03 np0005465987 systemd[273712]: Reached target Shutdown.
Oct  2 08:31:03 np0005465987 systemd[273712]: Finished Exit the Session.
Oct  2 08:31:03 np0005465987 systemd[273712]: Reached target Exit the Session.
Oct  2 08:31:03 np0005465987 nova_compute[230713]: 2025-10-02 12:31:03.016 2 DEBUG oslo_concurrency.lockutils [None req-0e93a10a-585d-45b1-8022-b5122081e3b4 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:03 np0005465987 systemd[1]: user@42436.service: Deactivated successfully.
Oct  2 08:31:03 np0005465987 systemd[1]: Stopped User Manager for UID 42436.
Oct  2 08:31:03 np0005465987 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct  2 08:31:03 np0005465987 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct  2 08:31:03 np0005465987 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct  2 08:31:03 np0005465987 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct  2 08:31:03 np0005465987 systemd[1]: Removed slice User Slice of UID 42436.
Oct  2 08:31:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:03.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:03 np0005465987 nova_compute[230713]: 2025-10-02 12:31:03.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:03 np0005465987 nova_compute[230713]: 2025-10-02 12:31:03.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:03 np0005465987 nova_compute[230713]: 2025-10-02 12:31:03.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:03 np0005465987 nova_compute[230713]: 2025-10-02 12:31:03.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.002 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.003 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.004 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:31:04 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/638648941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.424 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.515 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.516 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.520 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.520 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.524 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.524 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:31:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:04.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.695 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.696 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3898MB free_disk=20.739486694335938GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.696 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.696 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.803 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Migration for instance 6d9cbb75-985a-47af-9a91-f4a885d1b59a refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.859 2 INFO nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updating resource usage from migration 649cb3df-1832-4eb9-bab2-7623fb261c1e#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.859 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Starting to track incoming migration 649cb3df-1832-4eb9-bab2-7623fb261c1e with flavor eb3a53f1-304b-4cb0-acc3-abffce0fb181 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.964 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:31:04 np0005465987 nova_compute[230713]: 2025-10-02 12:31:04.964 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance e744f3b9-ccc1-43b1-b134-e6050b9487eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:31:05 np0005465987 nova_compute[230713]: 2025-10-02 12:31:05.022 2 WARNING nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 6d9cbb75-985a-47af-9a91-f4a885d1b59a has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.#033[00m
Oct  2 08:31:05 np0005465987 nova_compute[230713]: 2025-10-02 12:31:05.023 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 62d1cea8-2591-4b86-b69f-aee1eeafa901 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:31:05 np0005465987 nova_compute[230713]: 2025-10-02 12:31:05.023 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:31:05 np0005465987 nova_compute[230713]: 2025-10-02 12:31:05.023 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=1088MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:31:05 np0005465987 nova_compute[230713]: 2025-10-02 12:31:05.159 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:05.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:31:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/52124419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:31:05 np0005465987 nova_compute[230713]: 2025-10-02 12:31:05.571 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:05 np0005465987 nova_compute[230713]: 2025-10-02 12:31:05.577 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:31:05 np0005465987 nova_compute[230713]: 2025-10-02 12:31:05.640 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:31:05 np0005465987 nova_compute[230713]: 2025-10-02 12:31:05.812 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:31:05 np0005465987 nova_compute[230713]: 2025-10-02 12:31:05.812 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:06 np0005465987 nova_compute[230713]: 2025-10-02 12:31:06.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:06.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:06 np0005465987 nova_compute[230713]: 2025-10-02 12:31:06.692 2 DEBUG nova.compute.manager [req-6ed46438-345f-42bd-aa5b-52011e59596f req-1574900d-5bf6-4888-bc3c-a9f657f3ecb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-unplugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:06 np0005465987 nova_compute[230713]: 2025-10-02 12:31:06.693 2 DEBUG oslo_concurrency.lockutils [req-6ed46438-345f-42bd-aa5b-52011e59596f req-1574900d-5bf6-4888-bc3c-a9f657f3ecb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:06 np0005465987 nova_compute[230713]: 2025-10-02 12:31:06.693 2 DEBUG oslo_concurrency.lockutils [req-6ed46438-345f-42bd-aa5b-52011e59596f req-1574900d-5bf6-4888-bc3c-a9f657f3ecb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:06 np0005465987 nova_compute[230713]: 2025-10-02 12:31:06.693 2 DEBUG oslo_concurrency.lockutils [req-6ed46438-345f-42bd-aa5b-52011e59596f req-1574900d-5bf6-4888-bc3c-a9f657f3ecb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:06 np0005465987 nova_compute[230713]: 2025-10-02 12:31:06.693 2 DEBUG nova.compute.manager [req-6ed46438-345f-42bd-aa5b-52011e59596f req-1574900d-5bf6-4888-bc3c-a9f657f3ecb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] No waiting events found dispatching network-vif-unplugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:06 np0005465987 nova_compute[230713]: 2025-10-02 12:31:06.693 2 WARNING nova.compute.manager [req-6ed46438-345f-42bd-aa5b-52011e59596f req-1574900d-5bf6-4888-bc3c-a9f657f3ecb8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received unexpected event network-vif-unplugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:31:06 np0005465987 nova_compute[230713]: 2025-10-02 12:31:06.813 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:06 np0005465987 nova_compute[230713]: 2025-10-02 12:31:06.813 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:06 np0005465987 nova_compute[230713]: 2025-10-02 12:31:06.865 2 INFO nova.network.neutron [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updating port b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 08:31:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:07.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:07 np0005465987 nova_compute[230713]: 2025-10-02 12:31:07.391 2 DEBUG nova.compute.manager [req-aaa6f8c6-2189-4da9-9a06-12e148398654 req-1a998d32-6bf5-4b6f-8abe-e8816165b083 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-changed-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:07 np0005465987 nova_compute[230713]: 2025-10-02 12:31:07.391 2 DEBUG nova.compute.manager [req-aaa6f8c6-2189-4da9-9a06-12e148398654 req-1a998d32-6bf5-4b6f-8abe-e8816165b083 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Refreshing instance network info cache due to event network-changed-ef13c059-8f9b-4954-a8fd-1da56abe5729. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:31:07 np0005465987 nova_compute[230713]: 2025-10-02 12:31:07.391 2 DEBUG oslo_concurrency.lockutils [req-aaa6f8c6-2189-4da9-9a06-12e148398654 req-1a998d32-6bf5-4b6f-8abe-e8816165b083 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:31:07 np0005465987 nova_compute[230713]: 2025-10-02 12:31:07.392 2 DEBUG oslo_concurrency.lockutils [req-aaa6f8c6-2189-4da9-9a06-12e148398654 req-1a998d32-6bf5-4b6f-8abe-e8816165b083 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:31:07 np0005465987 nova_compute[230713]: 2025-10-02 12:31:07.392 2 DEBUG nova.network.neutron [req-aaa6f8c6-2189-4da9-9a06-12e148398654 req-1a998d32-6bf5-4b6f-8abe-e8816165b083 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Refreshing network info cache for port ef13c059-8f9b-4954-a8fd-1da56abe5729 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:31:07 np0005465987 nova_compute[230713]: 2025-10-02 12:31:07.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:07 np0005465987 nova_compute[230713]: 2025-10-02 12:31:07.964 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:31:08 np0005465987 nova_compute[230713]: 2025-10-02 12:31:08.090 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:31:08 np0005465987 nova_compute[230713]: 2025-10-02 12:31:08.090 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquired lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:31:08 np0005465987 nova_compute[230713]: 2025-10-02 12:31:08.090 2 DEBUG nova.network.neutron [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:31:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e313 e313: 3 total, 3 up, 3 in
Oct  2 08:31:08 np0005465987 nova_compute[230713]: 2025-10-02 12:31:08.478 2 DEBUG nova.compute.manager [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-changed-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:08 np0005465987 nova_compute[230713]: 2025-10-02 12:31:08.479 2 DEBUG nova.compute.manager [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Refreshing instance network info cache due to event network-changed-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:31:08 np0005465987 nova_compute[230713]: 2025-10-02 12:31:08.479 2 DEBUG oslo_concurrency.lockutils [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:31:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:31:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:08.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:31:08 np0005465987 nova_compute[230713]: 2025-10-02 12:31:08.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:09.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:09 np0005465987 nova_compute[230713]: 2025-10-02 12:31:09.287 2 DEBUG nova.compute.manager [req-fb1487cc-7045-4f03-88f0-df1963d48592 req-5629c79c-4bb1-4592-93ad-d4b08edc83fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:09 np0005465987 nova_compute[230713]: 2025-10-02 12:31:09.287 2 DEBUG oslo_concurrency.lockutils [req-fb1487cc-7045-4f03-88f0-df1963d48592 req-5629c79c-4bb1-4592-93ad-d4b08edc83fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:09 np0005465987 nova_compute[230713]: 2025-10-02 12:31:09.287 2 DEBUG oslo_concurrency.lockutils [req-fb1487cc-7045-4f03-88f0-df1963d48592 req-5629c79c-4bb1-4592-93ad-d4b08edc83fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:09 np0005465987 nova_compute[230713]: 2025-10-02 12:31:09.288 2 DEBUG oslo_concurrency.lockutils [req-fb1487cc-7045-4f03-88f0-df1963d48592 req-5629c79c-4bb1-4592-93ad-d4b08edc83fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:09 np0005465987 nova_compute[230713]: 2025-10-02 12:31:09.288 2 DEBUG nova.compute.manager [req-fb1487cc-7045-4f03-88f0-df1963d48592 req-5629c79c-4bb1-4592-93ad-d4b08edc83fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] No waiting events found dispatching network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:09 np0005465987 nova_compute[230713]: 2025-10-02 12:31:09.288 2 WARNING nova.compute.manager [req-fb1487cc-7045-4f03-88f0-df1963d48592 req-5629c79c-4bb1-4592-93ad-d4b08edc83fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received unexpected event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:31:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:09 np0005465987 nova_compute[230713]: 2025-10-02 12:31:09.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:10 np0005465987 nova_compute[230713]: 2025-10-02 12:31:10.198 2 DEBUG nova.network.neutron [req-aaa6f8c6-2189-4da9-9a06-12e148398654 req-1a998d32-6bf5-4b6f-8abe-e8816165b083 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Updated VIF entry in instance network info cache for port ef13c059-8f9b-4954-a8fd-1da56abe5729. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:31:10 np0005465987 nova_compute[230713]: 2025-10-02 12:31:10.198 2 DEBUG nova.network.neutron [req-aaa6f8c6-2189-4da9-9a06-12e148398654 req-1a998d32-6bf5-4b6f-8abe-e8816165b083 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Updating instance_info_cache with network_info: [{"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:31:10 np0005465987 nova_compute[230713]: 2025-10-02 12:31:10.422 2 DEBUG oslo_concurrency.lockutils [req-aaa6f8c6-2189-4da9-9a06-12e148398654 req-1a998d32-6bf5-4b6f-8abe-e8816165b083 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:31:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:10.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:10 np0005465987 nova_compute[230713]: 2025-10-02 12:31:10.882 2 DEBUG nova.network.neutron [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updating instance_info_cache with network_info: [{"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:31:11 np0005465987 nova_compute[230713]: 2025-10-02 12:31:11.072 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Releasing lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:31:11 np0005465987 nova_compute[230713]: 2025-10-02 12:31:11.076 2 DEBUG oslo_concurrency.lockutils [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:31:11 np0005465987 nova_compute[230713]: 2025-10-02 12:31:11.077 2 DEBUG nova.network.neutron [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Refreshing network info cache for port b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:31:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:11.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:11 np0005465987 nova_compute[230713]: 2025-10-02 12:31:11.309 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct  2 08:31:11 np0005465987 nova_compute[230713]: 2025-10-02 12:31:11.311 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:31:11 np0005465987 nova_compute[230713]: 2025-10-02 12:31:11.312 2 INFO nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Creating image(s)#033[00m
Oct  2 08:31:11 np0005465987 nova_compute[230713]: 2025-10-02 12:31:11.369 2 DEBUG nova.storage.rbd_utils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] creating snapshot(nova-resize) on rbd image(6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:31:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e314 e314: 3 total, 3 up, 3 in
Oct  2 08:31:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:12.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.769 2 DEBUG nova.objects.instance [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6d9cbb75-985a-47af-9a91-f4a885d1b59a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:12 np0005465987 podman[274253]: 2025-10-02 12:31:12.860565034 +0000 UTC m=+0.068131567 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.945 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.946 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Ensure instance console log exists: /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.946 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.947 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.947 2 DEBUG oslo_concurrency.lockutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.949 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Start _get_guest_xml network_info=[{"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:99:6e:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.953 2 WARNING nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.958 2 DEBUG nova.virt.libvirt.host [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.958 2 DEBUG nova.virt.libvirt.host [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.965 2 DEBUG nova.virt.libvirt.host [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.966 2 DEBUG nova.virt.libvirt.host [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.967 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.967 2 DEBUG nova.virt.hardware [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb3a53f1-304b-4cb0-acc3-abffce0fb181',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.968 2 DEBUG nova.virt.hardware [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.968 2 DEBUG nova.virt.hardware [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.968 2 DEBUG nova.virt.hardware [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.968 2 DEBUG nova.virt.hardware [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.968 2 DEBUG nova.virt.hardware [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.969 2 DEBUG nova.virt.hardware [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.969 2 DEBUG nova.virt.hardware [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.969 2 DEBUG nova.virt.hardware [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.969 2 DEBUG nova.virt.hardware [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.969 2 DEBUG nova.virt.hardware [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:31:12 np0005465987 nova_compute[230713]: 2025-10-02 12:31:12.970 2 DEBUG nova.objects.instance [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6d9cbb75-985a-47af-9a91-f4a885d1b59a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:13 np0005465987 nova_compute[230713]: 2025-10-02 12:31:13.025 2 DEBUG oslo_concurrency.processutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:13.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:31:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2343410255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:31:13 np0005465987 nova_compute[230713]: 2025-10-02 12:31:13.446 2 DEBUG oslo_concurrency.processutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:13 np0005465987 nova_compute[230713]: 2025-10-02 12:31:13.480 2 DEBUG oslo_concurrency.processutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:13 np0005465987 nova_compute[230713]: 2025-10-02 12:31:13.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:31:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3237724281' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.092 2 DEBUG oslo_concurrency.processutils [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.095 2 DEBUG nova.virt.libvirt.vif [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-245965768',display_name='tempest-ServerDiskConfigTestJSON-server-245965768',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-245965768',id=117,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-bp44ih1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_
hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:06Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=6d9cbb75-985a-47af-9a91-f4a885d1b59a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:99:6e:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.096 2 DEBUG nova.network.os_vif_util [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:99:6e:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.098 2 DEBUG nova.network.os_vif_util [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.103 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  <uuid>6d9cbb75-985a-47af-9a91-f4a885d1b59a</uuid>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  <name>instance-00000075</name>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  <memory>196608</memory>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-245965768</nova:name>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:31:12</nova:creationTime>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.micro">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <nova:memory>192</nova:memory>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <nova:user uuid="4a89b71e2513413e922ee6d5d06362b1">tempest-ServerDiskConfigTestJSON-1123059068-project-member</nova:user>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <nova:project uuid="27a1729bf10548219b90df46839849f5">tempest-ServerDiskConfigTestJSON-1123059068</nova:project>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <nova:port uuid="b5ad4ac4-0d9f-487b-8448-19fdf04c9f36">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <entry name="serial">6d9cbb75-985a-47af-9a91-f4a885d1b59a</entry>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <entry name="uuid">6d9cbb75-985a-47af-9a91-f4a885d1b59a</entry>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6d9cbb75-985a-47af-9a91-f4a885d1b59a_disk.config">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:99:6e:1d"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <target dev="tapb5ad4ac4-0d"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a/console.log" append="off"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:31:14 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:31:14 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:31:14 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:31:14 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.105 2 DEBUG nova.virt.libvirt.vif [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-245965768',display_name='tempest-ServerDiskConfigTestJSON-server-245965768',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-245965768',id=117,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-bp44ih1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_
hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:06Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=6d9cbb75-985a-47af-9a91-f4a885d1b59a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:99:6e:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.106 2 DEBUG nova.network.os_vif_util [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "vif_mac": "fa:16:3e:99:6e:1d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.106 2 DEBUG nova.network.os_vif_util [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.107 2 DEBUG os_vif [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.107 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.108 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.115 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5ad4ac4-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.115 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5ad4ac4-0d, col_values=(('external_ids', {'iface-id': 'b5ad4ac4-0d9f-487b-8448-19fdf04c9f36', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:6e:1d', 'vm-uuid': '6d9cbb75-985a-47af-9a91-f4a885d1b59a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:14 np0005465987 NetworkManager[44910]: <info>  [1759408274.1174] manager: (tapb5ad4ac4-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.125 2 INFO os_vif [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d')#033[00m
Oct  2 08:31:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.470 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.471 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.472 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] No VIF found with MAC fa:16:3e:99:6e:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.474 2 INFO nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Using config drive#033[00m
Oct  2 08:31:14 np0005465987 kernel: tapb5ad4ac4-0d: entered promiscuous mode
Oct  2 08:31:14 np0005465987 NetworkManager[44910]: <info>  [1759408274.5681] manager: (tapb5ad4ac4-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/211)
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:14Z|00426|binding|INFO|Claiming lport b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for this chassis.
Oct  2 08:31:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:14Z|00427|binding|INFO|b5ad4ac4-0d9f-487b-8448-19fdf04c9f36: Claiming fa:16:3e:99:6e:1d 10.100.0.13
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.584 2 DEBUG nova.network.neutron [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updated VIF entry in instance network info cache for port b5ad4ac4-0d9f-487b-8448-19fdf04c9f36. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.585 2 DEBUG nova.network.neutron [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updating instance_info_cache with network_info: [{"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:31:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:14Z|00428|binding|INFO|Setting lport b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 ovn-installed in OVS
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:14Z|00429|binding|INFO|Setting lport b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 up in Southbound
Oct  2 08:31:14 np0005465987 systemd-udevd[274400]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.606 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:6e:1d 10.100.0.13'], port_security=['fa:16:3e:99:6e:1d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6d9cbb75-985a-47af-9a91-f4a885d1b59a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:31:14 np0005465987 systemd-machined[188335]: New machine qemu-52-instance-00000075.
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.607 138325 INFO neutron.agent.ovn.metadata.agent [-] Port b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 bound to our chassis#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.609 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 247d774d-0cc8-4ef2-a9b8-c756adae0874#033[00m
Oct  2 08:31:14 np0005465987 NetworkManager[44910]: <info>  [1759408274.6165] device (tapb5ad4ac4-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:31:14 np0005465987 NetworkManager[44910]: <info>  [1759408274.6174] device (tapb5ad4ac4-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.618 2 DEBUG oslo_concurrency.lockutils [req-bb6e9d43-ba09-4257-905d-5f4f2f91b858 req-94325670-4aec-411c-957c-05de689f8206 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6d9cbb75-985a-47af-9a91-f4a885d1b59a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:31:14 np0005465987 systemd[1]: Started Virtual Machine qemu-52-instance-00000075.
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.625 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dc1a9087-db9f-4b0e-8107-3ecee82c43e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.626 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap247d774d-01 in ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.628 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap247d774d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.628 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2749b6d1-c574-4d85-ba3f-c2e199118822]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.629 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ff84a858-119c-4f0f-a076-fa8a677c8811]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.645 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb80c81-5f0b-4047-8f9f-ebab3af00873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.660 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[781b176f-2e90-43ec-8cb9-1fb5c0e4f966]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.692 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a4203a24-ad6b-42bd-867e-3bb090248a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 NetworkManager[44910]: <info>  [1759408274.7002] manager: (tap247d774d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/212)
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.698 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7268ed3e-3aac-43fa-b098-a1b81e598628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:14.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.746 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4324a2-4c82-4cae-9359-8e5a654d041d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.755 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[975815d6-8ca5-49f2-9642-31e964498c24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 NetworkManager[44910]: <info>  [1759408274.7772] device (tap247d774d-00): carrier: link connected
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.782 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[cc4357ed-f71b-4318-902c-d1aa52337318]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.799 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e9529f21-e0c9-46fa-b190-d1d577e30051]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634335, 'reachable_time': 24738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274434, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.818 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[65c40845-789f-4504-bc86-0a328dcd9448]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:ab18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634335, 'tstamp': 634335}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274435, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.835 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[34b1d386-07c3-473b-9bb0-a3c498f280f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap247d774d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:ab:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634335, 'reachable_time': 24738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274436, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.872 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[82c14e31-8613-41c4-9681-9c0bb246c6e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.933 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[75826167-d7c9-4e9a-aced-1ede09b81ac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.935 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.935 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.935 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap247d774d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:14 np0005465987 NetworkManager[44910]: <info>  [1759408274.9376] manager: (tap247d774d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Oct  2 08:31:14 np0005465987 kernel: tap247d774d-00: entered promiscuous mode
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.940 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap247d774d-00, col_values=(('external_ids', {'iface-id': '04584168-a51c-41f9-9206-d39db8a81566'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:14Z|00430|binding|INFO|Releasing lport 04584168-a51c-41f9-9206-d39db8a81566 from this chassis (sb_readonly=0)
Oct  2 08:31:14 np0005465987 nova_compute[230713]: 2025-10-02 12:31:14.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.955 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.956 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8320e5ae-3d50-4952-9d99-cd3494c28840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.957 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/247d774d-0cc8-4ef2-a9b8-c756adae0874.pid.haproxy
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 247d774d-0cc8-4ef2-a9b8-c756adae0874
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:31:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:14.957 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'env', 'PROCESS_TAG=haproxy-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/247d774d-0cc8-4ef2-a9b8-c756adae0874.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:31:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:15.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:15 np0005465987 podman[274504]: 2025-10-02 12:31:15.267275671 +0000 UTC m=+0.020573587 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:31:15 np0005465987 nova_compute[230713]: 2025-10-02 12:31:15.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:16 np0005465987 nova_compute[230713]: 2025-10-02 12:31:16.253 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408276.2528014, 6d9cbb75-985a-47af-9a91-f4a885d1b59a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:16 np0005465987 nova_compute[230713]: 2025-10-02 12:31:16.253 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:31:16 np0005465987 nova_compute[230713]: 2025-10-02 12:31:16.262 2 DEBUG nova.compute.manager [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:31:16 np0005465987 nova_compute[230713]: 2025-10-02 12:31:16.265 2 INFO nova.virt.libvirt.driver [-] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Instance running successfully.#033[00m
Oct  2 08:31:16 np0005465987 virtqemud[230442]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:31:16 np0005465987 nova_compute[230713]: 2025-10-02 12:31:16.269 2 DEBUG nova.virt.libvirt.guest [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 08:31:16 np0005465987 nova_compute[230713]: 2025-10-02 12:31:16.269 2 DEBUG nova.virt.libvirt.driver [None req-b641ee24-dae4-4718-82ce-7a74c3693aac 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Oct  2 08:31:16 np0005465987 podman[274504]: 2025-10-02 12:31:16.487300874 +0000 UTC m=+1.240598760 container create 8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 08:31:16 np0005465987 systemd[1]: Started libpod-conmon-8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3.scope.
Oct  2 08:31:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:31:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:16.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:31:16 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:31:16 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d579baced66efd80f46fea20707b78c11ffcfa2a273f005c7be715f5a4cfd80/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:31:17 np0005465987 podman[274504]: 2025-10-02 12:31:17.069593267 +0000 UTC m=+1.822891263 container init 8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:31:17 np0005465987 podman[274504]: 2025-10-02 12:31:17.07730695 +0000 UTC m=+1.830604886 container start 8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:31:17 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[274526]: [NOTICE]   (274530) : New worker (274532) forked
Oct  2 08:31:17 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[274526]: [NOTICE]   (274530) : Loading success.
Oct  2 08:31:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:17.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e315 e315: 3 total, 3 up, 3 in
Oct  2 08:31:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:31:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:18.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:31:19 np0005465987 nova_compute[230713]: 2025-10-02 12:31:19.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:31:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:19.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:31:19 np0005465987 podman[274542]: 2025-10-02 12:31:19.336170886 +0000 UTC m=+0.549424319 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:31:19 np0005465987 podman[274544]: 2025-10-02 12:31:19.351624651 +0000 UTC m=+0.547649710 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:31:19 np0005465987 podman[274541]: 2025-10-02 12:31:19.359868768 +0000 UTC m=+0.577663896 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct  2 08:31:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:19Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:57:ce 10.100.0.6
Oct  2 08:31:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:19Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:57:ce 10.100.0.6
Oct  2 08:31:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:19 np0005465987 nova_compute[230713]: 2025-10-02 12:31:19.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:20 np0005465987 nova_compute[230713]: 2025-10-02 12:31:20.198 2 DEBUG nova.compute.manager [req-ce4b2c8a-fa51-41b4-8406-de7e823e1ef9 req-ed8a1fd0-9326-40b9-a3e3-d22791b852f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:20 np0005465987 nova_compute[230713]: 2025-10-02 12:31:20.199 2 DEBUG oslo_concurrency.lockutils [req-ce4b2c8a-fa51-41b4-8406-de7e823e1ef9 req-ed8a1fd0-9326-40b9-a3e3-d22791b852f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:20 np0005465987 nova_compute[230713]: 2025-10-02 12:31:20.199 2 DEBUG oslo_concurrency.lockutils [req-ce4b2c8a-fa51-41b4-8406-de7e823e1ef9 req-ed8a1fd0-9326-40b9-a3e3-d22791b852f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:20 np0005465987 nova_compute[230713]: 2025-10-02 12:31:20.200 2 DEBUG oslo_concurrency.lockutils [req-ce4b2c8a-fa51-41b4-8406-de7e823e1ef9 req-ed8a1fd0-9326-40b9-a3e3-d22791b852f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:20 np0005465987 nova_compute[230713]: 2025-10-02 12:31:20.200 2 DEBUG nova.compute.manager [req-ce4b2c8a-fa51-41b4-8406-de7e823e1ef9 req-ed8a1fd0-9326-40b9-a3e3-d22791b852f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] No waiting events found dispatching network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:20 np0005465987 nova_compute[230713]: 2025-10-02 12:31:20.200 2 WARNING nova.compute.manager [req-ce4b2c8a-fa51-41b4-8406-de7e823e1ef9 req-ed8a1fd0-9326-40b9-a3e3-d22791b852f4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received unexpected event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for instance with vm_state active and task_state resize_finish.#033[00m
Oct  2 08:31:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:20.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:21.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:22 np0005465987 nova_compute[230713]: 2025-10-02 12:31:22.284 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:22 np0005465987 nova_compute[230713]: 2025-10-02 12:31:22.288 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:31:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:22.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:22 np0005465987 nova_compute[230713]: 2025-10-02 12:31:22.741 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 08:31:22 np0005465987 nova_compute[230713]: 2025-10-02 12:31:22.742 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408276.2611487, 6d9cbb75-985a-47af-9a91-f4a885d1b59a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:22 np0005465987 nova_compute[230713]: 2025-10-02 12:31:22.742 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] VM Started (Lifecycle Event)#033[00m
Oct  2 08:31:23 np0005465987 nova_compute[230713]: 2025-10-02 12:31:23.156 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:23 np0005465987 nova_compute[230713]: 2025-10-02 12:31:23.161 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:31:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:23.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:23 np0005465987 nova_compute[230713]: 2025-10-02 12:31:23.969 2 DEBUG nova.compute.manager [req-058f90f4-9305-42d3-a502-f7261dc9285b req-4589c3ed-c7dc-4ca6-a08a-678eff5c06cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:23 np0005465987 nova_compute[230713]: 2025-10-02 12:31:23.969 2 DEBUG oslo_concurrency.lockutils [req-058f90f4-9305-42d3-a502-f7261dc9285b req-4589c3ed-c7dc-4ca6-a08a-678eff5c06cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:23 np0005465987 nova_compute[230713]: 2025-10-02 12:31:23.969 2 DEBUG oslo_concurrency.lockutils [req-058f90f4-9305-42d3-a502-f7261dc9285b req-4589c3ed-c7dc-4ca6-a08a-678eff5c06cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:23 np0005465987 nova_compute[230713]: 2025-10-02 12:31:23.969 2 DEBUG oslo_concurrency.lockutils [req-058f90f4-9305-42d3-a502-f7261dc9285b req-4589c3ed-c7dc-4ca6-a08a-678eff5c06cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:23 np0005465987 nova_compute[230713]: 2025-10-02 12:31:23.970 2 DEBUG nova.compute.manager [req-058f90f4-9305-42d3-a502-f7261dc9285b req-4589c3ed-c7dc-4ca6-a08a-678eff5c06cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] No waiting events found dispatching network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:23 np0005465987 nova_compute[230713]: 2025-10-02 12:31:23.970 2 WARNING nova.compute.manager [req-058f90f4-9305-42d3-a502-f7261dc9285b req-4589c3ed-c7dc-4ca6-a08a-678eff5c06cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received unexpected event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for instance with vm_state resized and task_state None.#033[00m
Oct  2 08:31:24 np0005465987 nova_compute[230713]: 2025-10-02 12:31:24.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:24.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:24 np0005465987 nova_compute[230713]: 2025-10-02 12:31:24.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:25.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:31:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:26.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:31:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:27.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:31:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:31:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:31:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:27.957 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:27.958 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:27.959 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:28Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:6e:1d 10.100.0.13
Oct  2 08:31:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:28.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:29 np0005465987 nova_compute[230713]: 2025-10-02 12:31:29.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:29.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:29 np0005465987 nova_compute[230713]: 2025-10-02 12:31:29.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000054s ======
Oct  2 08:31:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:30.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct  2 08:31:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:31.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e316 e316: 3 total, 3 up, 3 in
Oct  2 08:31:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:31:32 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2583777070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:31:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:32.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:33.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.605417) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408293605498, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 762, "num_deletes": 253, "total_data_size": 1214044, "memory_usage": 1235560, "flush_reason": "Manual Compaction"}
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408293613733, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 569462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49839, "largest_seqno": 50596, "table_properties": {"data_size": 566249, "index_size": 1057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 9029, "raw_average_key_size": 21, "raw_value_size": 559267, "raw_average_value_size": 1303, "num_data_blocks": 47, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408251, "oldest_key_time": 1759408251, "file_creation_time": 1759408293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 8344 microseconds, and 2479 cpu microseconds.
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.613773) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 569462 bytes OK
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.613792) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.619235) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.619271) EVENT_LOG_v1 {"time_micros": 1759408293619263, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.619292) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1209981, prev total WAL file size 1209981, number of live WAL files 2.
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.619981) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353037' seq:72057594037927935, type:22 .. '6D6772737461740031373630' seq:0, type:0; will stop at (end)
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(556KB)], [96(12MB)]
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408293620062, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 13278299, "oldest_snapshot_seqno": -1}
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 7329 keys, 9648978 bytes, temperature: kUnknown
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408293696430, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 9648978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9601551, "index_size": 27987, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 190173, "raw_average_key_size": 25, "raw_value_size": 9472261, "raw_average_value_size": 1292, "num_data_blocks": 1096, "num_entries": 7329, "num_filter_entries": 7329, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759408293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.696690) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9648978 bytes
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.698228) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.7 rd, 126.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 12.1 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(40.3) write-amplify(16.9) OK, records in: 7835, records dropped: 506 output_compression: NoCompression
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.698249) EVENT_LOG_v1 {"time_micros": 1759408293698240, "job": 60, "event": "compaction_finished", "compaction_time_micros": 76442, "compaction_time_cpu_micros": 28724, "output_level": 6, "num_output_files": 1, "total_output_size": 9648978, "num_input_records": 7835, "num_output_records": 7329, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408293698480, "job": 60, "event": "table_file_deletion", "file_number": 98}
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408293701526, "job": 60, "event": "table_file_deletion", "file_number": 96}
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.619842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.701582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.701590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.701593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.701596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:31:33 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:31:33.701599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:31:34 np0005465987 nova_compute[230713]: 2025-10-02 12:31:34.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:31:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:34.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:31:34 np0005465987 nova_compute[230713]: 2025-10-02 12:31:34.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:34 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:31:34 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:31:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:35.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:36.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:37.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:37 np0005465987 nova_compute[230713]: 2025-10-02 12:31:37.879 2 DEBUG oslo_concurrency.lockutils [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:37 np0005465987 nova_compute[230713]: 2025-10-02 12:31:37.880 2 DEBUG oslo_concurrency.lockutils [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:37 np0005465987 nova_compute[230713]: 2025-10-02 12:31:37.880 2 DEBUG oslo_concurrency.lockutils [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:37 np0005465987 nova_compute[230713]: 2025-10-02 12:31:37.880 2 DEBUG oslo_concurrency.lockutils [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:37 np0005465987 nova_compute[230713]: 2025-10-02 12:31:37.881 2 DEBUG oslo_concurrency.lockutils [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:37 np0005465987 nova_compute[230713]: 2025-10-02 12:31:37.882 2 INFO nova.compute.manager [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Terminating instance#033[00m
Oct  2 08:31:37 np0005465987 nova_compute[230713]: 2025-10-02 12:31:37.882 2 DEBUG nova.compute.manager [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:31:37 np0005465987 kernel: tapb5ad4ac4-0d (unregistering): left promiscuous mode
Oct  2 08:31:37 np0005465987 NetworkManager[44910]: <info>  [1759408297.9736] device (tapb5ad4ac4-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:31:37 np0005465987 nova_compute[230713]: 2025-10-02 12:31:37.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:37Z|00431|binding|INFO|Releasing lport b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 from this chassis (sb_readonly=0)
Oct  2 08:31:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:37Z|00432|binding|INFO|Setting lport b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 down in Southbound
Oct  2 08:31:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:37Z|00433|binding|INFO|Removing iface tapb5ad4ac4-0d ovn-installed in OVS
Oct  2 08:31:37 np0005465987 nova_compute[230713]: 2025-10-02 12:31:37.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:37.993 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:6e:1d 10.100.0.13'], port_security=['fa:16:3e:99:6e:1d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6d9cbb75-985a-47af-9a91-f4a885d1b59a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27a1729bf10548219b90df46839849f5', 'neutron:revision_number': '8', 'neutron:security_group_ids': '19f6d4f0-1655-4062-a124-10140844bfae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7e0b23-d51b-4498-9dd8-e3096f69c99c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:31:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:37.994 138325 INFO neutron.agent.ovn.metadata.agent [-] Port b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 in datapath 247d774d-0cc8-4ef2-a9b8-c756adae0874 unbound from our chassis#033[00m
Oct  2 08:31:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:37.996 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 247d774d-0cc8-4ef2-a9b8-c756adae0874, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:31:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:37.997 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8f0fbdd6-104f-4387-ace8-7d2704d475b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:37.997 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 namespace which is not needed anymore#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:37.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:38 np0005465987 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000075.scope: Deactivated successfully.
Oct  2 08:31:38 np0005465987 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000075.scope: Consumed 13.988s CPU time.
Oct  2 08:31:38 np0005465987 systemd-machined[188335]: Machine qemu-52-instance-00000075 terminated.
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.121 2 INFO nova.virt.libvirt.driver [-] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Instance destroyed successfully.#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.122 2 DEBUG nova.objects.instance [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lazy-loading 'resources' on Instance uuid 6d9cbb75-985a-47af-9a91-f4a885d1b59a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.141 2 DEBUG nova.virt.libvirt.vif [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-245965768',display_name='tempest-ServerDiskConfigTestJSON-server-245965768',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-245965768',id=117,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27a1729bf10548219b90df46839849f5',ramdisk_id='',reservation_id='r-bp44ih1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1123059068',owner_user_name='tempest-ServerDiskConfigTestJSON-1123059068-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:33Z,user_data=None,user_id='4a89b71e2513413e922ee6d5d06362b1',uuid=6d9cbb75-985a-47af-9a91-f4a885d1b59a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:31:38 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[274526]: [NOTICE]   (274530) : haproxy version is 2.8.14-c23fe91
Oct  2 08:31:38 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[274526]: [NOTICE]   (274530) : path to executable is /usr/sbin/haproxy
Oct  2 08:31:38 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[274526]: [WARNING]  (274530) : Exiting Master process...
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.144 2 DEBUG nova.network.os_vif_util [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converting VIF {"id": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "address": "fa:16:3e:99:6e:1d", "network": {"id": "247d774d-0cc8-4ef2-a9b8-c756adae0874", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1434934627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27a1729bf10548219b90df46839849f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5ad4ac4-0d", "ovs_interfaceid": "b5ad4ac4-0d9f-487b-8448-19fdf04c9f36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:31:38 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[274526]: [ALERT]    (274530) : Current worker (274532) exited with code 143 (Terminated)
Oct  2 08:31:38 np0005465987 neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874[274526]: [WARNING]  (274530) : All workers exited. Exiting... (0)
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.145 2 DEBUG nova.network.os_vif_util [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.147 2 DEBUG os_vif [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:38 np0005465987 systemd[1]: libpod-8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3.scope: Deactivated successfully.
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.150 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5ad4ac4-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:38 np0005465987 conmon[274526]: conmon 8a5dc8b7f0266f9bd96a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3.scope/container/memory.events
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:38 np0005465987 podman[274816]: 2025-10-02 12:31:38.153353976 +0000 UTC m=+0.069297410 container died 8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.155 2 INFO os_vif [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:6e:1d,bridge_name='br-int',has_traffic_filtering=True,id=b5ad4ac4-0d9f-487b-8448-19fdf04c9f36,network=Network(247d774d-0cc8-4ef2-a9b8-c756adae0874),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5ad4ac4-0d')#033[00m
Oct  2 08:31:38 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3-userdata-shm.mount: Deactivated successfully.
Oct  2 08:31:38 np0005465987 systemd[1]: var-lib-containers-storage-overlay-5d579baced66efd80f46fea20707b78c11ffcfa2a273f005c7be715f5a4cfd80-merged.mount: Deactivated successfully.
Oct  2 08:31:38 np0005465987 podman[274816]: 2025-10-02 12:31:38.27881734 +0000 UTC m=+0.194760774 container cleanup 8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:31:38 np0005465987 systemd[1]: libpod-conmon-8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3.scope: Deactivated successfully.
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.318 2 DEBUG nova.compute.manager [req-24a2384e-4ce8-4587-a12a-225510bf7820 req-6c59917d-8431-480d-b9c1-215aa91c47e2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-unplugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.318 2 DEBUG oslo_concurrency.lockutils [req-24a2384e-4ce8-4587-a12a-225510bf7820 req-6c59917d-8431-480d-b9c1-215aa91c47e2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.319 2 DEBUG oslo_concurrency.lockutils [req-24a2384e-4ce8-4587-a12a-225510bf7820 req-6c59917d-8431-480d-b9c1-215aa91c47e2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.319 2 DEBUG oslo_concurrency.lockutils [req-24a2384e-4ce8-4587-a12a-225510bf7820 req-6c59917d-8431-480d-b9c1-215aa91c47e2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.320 2 DEBUG nova.compute.manager [req-24a2384e-4ce8-4587-a12a-225510bf7820 req-6c59917d-8431-480d-b9c1-215aa91c47e2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] No waiting events found dispatching network-vif-unplugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.320 2 DEBUG nova.compute.manager [req-24a2384e-4ce8-4587-a12a-225510bf7820 req-6c59917d-8431-480d-b9c1-215aa91c47e2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-unplugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:31:38 np0005465987 podman[274874]: 2025-10-02 12:31:38.347925142 +0000 UTC m=+0.047612881 container remove 8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:31:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:38.354 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc999e8-85c8-44f2-a66f-7be044d3c66d]: (4, ('Thu Oct  2 12:31:38 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3)\n8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3\nThu Oct  2 12:31:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 (8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3)\n8a5dc8b7f0266f9bd96a74050d862bfe466dbdec0eb6b9f0241b7477890df2b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:38.355 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3213f7d4-5d12-431f-abf7-88f6fc709457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:38.356 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247d774d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:38 np0005465987 kernel: tap247d774d-00: left promiscuous mode
Oct  2 08:31:38 np0005465987 nova_compute[230713]: 2025-10-02 12:31:38.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:38.377 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[93deb5bf-a506-4c89-8d13-24d3a92f7817]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:38.395 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e082a6c4-ff3e-410d-9c1d-7c7a61d04289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:38.397 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b8fc788e-36c9-419d-8a50-9544a258c8bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:38.412 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[87cf89cc-6d7b-4014-bd82-cbf84949b281]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634326, 'reachable_time': 18798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274889, 'error': None, 'target': 'ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:38 np0005465987 systemd[1]: run-netns-ovnmeta\x2d247d774d\x2d0cc8\x2d4ef2\x2da9b8\x2dc756adae0874.mount: Deactivated successfully.
Oct  2 08:31:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:38.417 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-247d774d-0cc8-4ef2-a9b8-c756adae0874 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:31:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:38.417 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4d1447-5681-41ef-8bdb-94c9ae23e191]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:38.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e317 e317: 3 total, 3 up, 3 in
Oct  2 08:31:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:39.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:39 np0005465987 nova_compute[230713]: 2025-10-02 12:31:39.448 2 DEBUG oslo_concurrency.lockutils [None req-be0ba53f-57de-4ee8-9635-f9703344ff7d 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:39 np0005465987 nova_compute[230713]: 2025-10-02 12:31:39.449 2 DEBUG oslo_concurrency.lockutils [None req-be0ba53f-57de-4ee8-9635-f9703344ff7d 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:39 np0005465987 nova_compute[230713]: 2025-10-02 12:31:39.449 2 DEBUG nova.compute.manager [None req-be0ba53f-57de-4ee8-9635-f9703344ff7d 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:39 np0005465987 nova_compute[230713]: 2025-10-02 12:31:39.454 2 DEBUG nova.compute.manager [None req-be0ba53f-57de-4ee8-9635-f9703344ff7d 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 08:31:39 np0005465987 nova_compute[230713]: 2025-10-02 12:31:39.455 2 DEBUG nova.objects.instance [None req-be0ba53f-57de-4ee8-9635-f9703344ff7d 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'flavor' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:39 np0005465987 nova_compute[230713]: 2025-10-02 12:31:39.497 2 DEBUG nova.virt.libvirt.driver [None req-be0ba53f-57de-4ee8-9635-f9703344ff7d 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:31:39 np0005465987 nova_compute[230713]: 2025-10-02 12:31:39.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:40 np0005465987 nova_compute[230713]: 2025-10-02 12:31:40.633 2 DEBUG nova.compute.manager [req-113b0ab8-f35c-4c85-88c1-aef06afb5be1 req-d885ece2-176a-48c9-a8f9-1680e01c2580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:40 np0005465987 nova_compute[230713]: 2025-10-02 12:31:40.634 2 DEBUG oslo_concurrency.lockutils [req-113b0ab8-f35c-4c85-88c1-aef06afb5be1 req-d885ece2-176a-48c9-a8f9-1680e01c2580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:40 np0005465987 nova_compute[230713]: 2025-10-02 12:31:40.634 2 DEBUG oslo_concurrency.lockutils [req-113b0ab8-f35c-4c85-88c1-aef06afb5be1 req-d885ece2-176a-48c9-a8f9-1680e01c2580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:40 np0005465987 nova_compute[230713]: 2025-10-02 12:31:40.634 2 DEBUG oslo_concurrency.lockutils [req-113b0ab8-f35c-4c85-88c1-aef06afb5be1 req-d885ece2-176a-48c9-a8f9-1680e01c2580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:40 np0005465987 nova_compute[230713]: 2025-10-02 12:31:40.635 2 DEBUG nova.compute.manager [req-113b0ab8-f35c-4c85-88c1-aef06afb5be1 req-d885ece2-176a-48c9-a8f9-1680e01c2580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] No waiting events found dispatching network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:40 np0005465987 nova_compute[230713]: 2025-10-02 12:31:40.635 2 WARNING nova.compute.manager [req-113b0ab8-f35c-4c85-88c1-aef06afb5be1 req-d885ece2-176a-48c9-a8f9-1680e01c2580 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received unexpected event network-vif-plugged-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:31:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:31:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:40.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:31:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:41.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:42 np0005465987 nova_compute[230713]: 2025-10-02 12:31:42.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.033 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.036 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:31:42 np0005465987 kernel: tapef13c059-8f (unregistering): left promiscuous mode
Oct  2 08:31:42 np0005465987 NetworkManager[44910]: <info>  [1759408302.3542] device (tapef13c059-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:31:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:42Z|00434|binding|INFO|Releasing lport ef13c059-8f9b-4954-a8fd-1da56abe5729 from this chassis (sb_readonly=0)
Oct  2 08:31:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:42Z|00435|binding|INFO|Setting lport ef13c059-8f9b-4954-a8fd-1da56abe5729 down in Southbound
Oct  2 08:31:42 np0005465987 nova_compute[230713]: 2025-10-02 12:31:42.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:42Z|00436|binding|INFO|Removing iface tapef13c059-8f ovn-installed in OVS
Oct  2 08:31:42 np0005465987 nova_compute[230713]: 2025-10-02 12:31:42.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.372 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:57:ce 10.100.0.6'], port_security=['fa:16:3e:fb:57:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '62d1cea8-2591-4b86-b69f-aee1eeafa901', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8ca48cb5f64ef3b0736b8be82378b8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '16cf92c4-b852-4373-864a-75cf05995c6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23ff60d5-33e2-45e9-a563-ce0081b7cc04, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ef13c059-8f9b-4954-a8fd-1da56abe5729) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.373 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ef13c059-8f9b-4954-a8fd-1da56abe5729 in datapath b2c62a66-f9bc-4a45-a843-aef2e12a7fff unbound from our chassis#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.374 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2c62a66-f9bc-4a45-a843-aef2e12a7fff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.376 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b5aa174c-680c-4a31-92d0-568359c8fd85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.376 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff namespace which is not needed anymore#033[00m
Oct  2 08:31:42 np0005465987 nova_compute[230713]: 2025-10-02 12:31:42.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:42 np0005465987 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct  2 08:31:42 np0005465987 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000076.scope: Consumed 15.630s CPU time.
Oct  2 08:31:42 np0005465987 systemd-machined[188335]: Machine qemu-51-instance-00000076 terminated.
Oct  2 08:31:42 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[274156]: [NOTICE]   (274160) : haproxy version is 2.8.14-c23fe91
Oct  2 08:31:42 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[274156]: [NOTICE]   (274160) : path to executable is /usr/sbin/haproxy
Oct  2 08:31:42 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[274156]: [WARNING]  (274160) : Exiting Master process...
Oct  2 08:31:42 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[274156]: [ALERT]    (274160) : Current worker (274162) exited with code 143 (Terminated)
Oct  2 08:31:42 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[274156]: [WARNING]  (274160) : All workers exited. Exiting... (0)
Oct  2 08:31:42 np0005465987 systemd[1]: libpod-d9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b.scope: Deactivated successfully.
Oct  2 08:31:42 np0005465987 conmon[274156]: conmon d9ee0ce29d81ff928f7d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b.scope/container/memory.events
Oct  2 08:31:42 np0005465987 podman[274916]: 2025-10-02 12:31:42.519998649 +0000 UTC m=+0.046314537 container died d9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:31:42 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b-userdata-shm.mount: Deactivated successfully.
Oct  2 08:31:42 np0005465987 systemd[1]: var-lib-containers-storage-overlay-fba0ba41f35621c5e20f5d558c73285c024adfa4b8e52941c8e9bc1a92e0502f-merged.mount: Deactivated successfully.
Oct  2 08:31:42 np0005465987 podman[274916]: 2025-10-02 12:31:42.55455226 +0000 UTC m=+0.080868118 container cleanup d9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:31:42 np0005465987 systemd[1]: libpod-conmon-d9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b.scope: Deactivated successfully.
Oct  2 08:31:42 np0005465987 nova_compute[230713]: 2025-10-02 12:31:42.600 2 INFO nova.virt.libvirt.driver [None req-be0ba53f-57de-4ee8-9635-f9703344ff7d 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:31:42 np0005465987 nova_compute[230713]: 2025-10-02 12:31:42.606 2 INFO nova.virt.libvirt.driver [-] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Instance destroyed successfully.#033[00m
Oct  2 08:31:42 np0005465987 nova_compute[230713]: 2025-10-02 12:31:42.607 2 DEBUG nova.objects.instance [None req-be0ba53f-57de-4ee8-9635-f9703344ff7d 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'numa_topology' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:42 np0005465987 podman[274944]: 2025-10-02 12:31:42.62791485 +0000 UTC m=+0.051463838 container remove d9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.634 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5f737a77-e80a-4309-940d-bea56cfa0fc7]: (4, ('Thu Oct  2 12:31:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff (d9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b)\nd9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b\nThu Oct  2 12:31:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff (d9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b)\nd9ee0ce29d81ff928f7d92b176c415e4e9d853aba20c5fc8c7449e28d516de1b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.636 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[001fc274-dd54-4da0-a0de-fe138475d73c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.637 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2c62a66-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:42 np0005465987 nova_compute[230713]: 2025-10-02 12:31:42.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:42 np0005465987 kernel: tapb2c62a66-f0: left promiscuous mode
Oct  2 08:31:42 np0005465987 nova_compute[230713]: 2025-10-02 12:31:42.643 2 DEBUG nova.compute.manager [None req-be0ba53f-57de-4ee8-9635-f9703344ff7d 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:42 np0005465987 nova_compute[230713]: 2025-10-02 12:31:42.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.659 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[321b1f84-4534-4a6f-a193-d3acec2f56bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.690 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fbeab327-eca7-4969-b5e4-f9fdafc3b525]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.692 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b6d866-3b10-4176-a0d0-f6d03d1b2078]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.706 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[afabb018-3dd6-46fb-9ed8-419c789806d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632952, 'reachable_time': 29498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274973, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:42 np0005465987 systemd[1]: run-netns-ovnmeta\x2db2c62a66\x2df9bc\x2d4a45\x2da843\x2daef2e12a7fff.mount: Deactivated successfully.
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.710 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:31:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:42.711 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[4355eb50-2037-464c-9035-5a9d2e69ac12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:42 np0005465987 nova_compute[230713]: 2025-10-02 12:31:42.728 2 DEBUG oslo_concurrency.lockutils [None req-be0ba53f-57de-4ee8-9635-f9703344ff7d 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:42.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:43 np0005465987 nova_compute[230713]: 2025-10-02 12:31:43.037 2 INFO nova.virt.libvirt.driver [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Deleting instance files /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a_del#033[00m
Oct  2 08:31:43 np0005465987 nova_compute[230713]: 2025-10-02 12:31:43.038 2 INFO nova.virt.libvirt.driver [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Deletion of /var/lib/nova/instances/6d9cbb75-985a-47af-9a91-f4a885d1b59a_del complete#033[00m
Oct  2 08:31:43 np0005465987 nova_compute[230713]: 2025-10-02 12:31:43.098 2 INFO nova.compute.manager [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Took 5.22 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:31:43 np0005465987 nova_compute[230713]: 2025-10-02 12:31:43.099 2 DEBUG oslo.service.loopingcall [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:31:43 np0005465987 nova_compute[230713]: 2025-10-02 12:31:43.099 2 DEBUG nova.compute.manager [-] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:31:43 np0005465987 nova_compute[230713]: 2025-10-02 12:31:43.099 2 DEBUG nova.network.neutron [-] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:31:43 np0005465987 nova_compute[230713]: 2025-10-02 12:31:43.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:43.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:43 np0005465987 podman[274974]: 2025-10-02 12:31:43.871557303 +0000 UTC m=+0.087796318 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.306 2 DEBUG nova.compute.manager [req-41b1ac39-b014-4b34-8387-4db02a695146 req-9ff4401a-c950-4e66-8cda-75a457af7f76 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-unplugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.307 2 DEBUG oslo_concurrency.lockutils [req-41b1ac39-b014-4b34-8387-4db02a695146 req-9ff4401a-c950-4e66-8cda-75a457af7f76 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.307 2 DEBUG oslo_concurrency.lockutils [req-41b1ac39-b014-4b34-8387-4db02a695146 req-9ff4401a-c950-4e66-8cda-75a457af7f76 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.307 2 DEBUG oslo_concurrency.lockutils [req-41b1ac39-b014-4b34-8387-4db02a695146 req-9ff4401a-c950-4e66-8cda-75a457af7f76 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.307 2 DEBUG nova.compute.manager [req-41b1ac39-b014-4b34-8387-4db02a695146 req-9ff4401a-c950-4e66-8cda-75a457af7f76 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] No waiting events found dispatching network-vif-unplugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.307 2 WARNING nova.compute.manager [req-41b1ac39-b014-4b34-8387-4db02a695146 req-9ff4401a-c950-4e66-8cda-75a457af7f76 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received unexpected event network-vif-unplugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.308 2 DEBUG nova.compute.manager [req-41b1ac39-b014-4b34-8387-4db02a695146 req-9ff4401a-c950-4e66-8cda-75a457af7f76 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.308 2 DEBUG oslo_concurrency.lockutils [req-41b1ac39-b014-4b34-8387-4db02a695146 req-9ff4401a-c950-4e66-8cda-75a457af7f76 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.308 2 DEBUG oslo_concurrency.lockutils [req-41b1ac39-b014-4b34-8387-4db02a695146 req-9ff4401a-c950-4e66-8cda-75a457af7f76 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.308 2 DEBUG oslo_concurrency.lockutils [req-41b1ac39-b014-4b34-8387-4db02a695146 req-9ff4401a-c950-4e66-8cda-75a457af7f76 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.308 2 DEBUG nova.compute.manager [req-41b1ac39-b014-4b34-8387-4db02a695146 req-9ff4401a-c950-4e66-8cda-75a457af7f76 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] No waiting events found dispatching network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.308 2 WARNING nova.compute.manager [req-41b1ac39-b014-4b34-8387-4db02a695146 req-9ff4401a-c950-4e66-8cda-75a457af7f76 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received unexpected event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.471 2 DEBUG nova.objects.instance [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'flavor' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.503 2 DEBUG oslo_concurrency.lockutils [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.503 2 DEBUG oslo_concurrency.lockutils [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquired lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.504 2 DEBUG nova.network.neutron [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.504 2 DEBUG nova.objects.instance [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'info_cache' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.560 2 DEBUG nova.network.neutron [-] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.613 2 INFO nova.compute.manager [-] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Took 1.51 seconds to deallocate network for instance.#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.678 2 DEBUG oslo_concurrency.lockutils [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.679 2 DEBUG oslo_concurrency.lockutils [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.689 2 DEBUG oslo_concurrency.lockutils [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.739 2 INFO nova.scheduler.client.report [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Deleted allocations for instance 6d9cbb75-985a-47af-9a91-f4a885d1b59a#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.747 2 DEBUG nova.compute.manager [req-e9af8a46-e8ed-4d7d-85ef-59586b6391ea req-945bec13-7d93-4946-9b14-2cfc73b2317c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Received event network-vif-deleted-b5ad4ac4-0d9f-487b-8448-19fdf04c9f36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:31:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:44.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:44 np0005465987 nova_compute[230713]: 2025-10-02 12:31:44.850 2 DEBUG oslo_concurrency.lockutils [None req-fbbd8f94-d5ec-4ccb-b5c6-daa195a20df5 4a89b71e2513413e922ee6d5d06362b1 27a1729bf10548219b90df46839849f5 - - default default] Lock "6d9cbb75-985a-47af-9a91-f4a885d1b59a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:45.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e318 e318: 3 total, 3 up, 3 in
Oct  2 08:31:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:31:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:46.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:31:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:47.039 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.095 2 DEBUG nova.network.neutron [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Updating instance_info_cache with network_info: [{"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.114 2 DEBUG oslo_concurrency.lockutils [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Releasing lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.145 2 INFO nova.virt.libvirt.driver [-] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Instance destroyed successfully.#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.146 2 DEBUG nova.objects.instance [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'numa_topology' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.165 2 DEBUG nova.objects.instance [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'resources' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.181 2 DEBUG nova.virt.libvirt.vif [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-443770275',display_name='tempest-ServerActionsTestJSON-server-443770275',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-443770275',id=118,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGPR3K1CRfy8NuVFf5q7pocTRVWdkUGpXAwygQtld+YJHetWc29OTANCyHNFo7YQv+XOnhZt50IrR5VORh36EWFuQsqglHMqAdAjWfcmv1078iyeLOwVFkKMjcTINNdhYA==',key_name='tempest-keypair-579122016',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4b8ca48cb5f64ef3b0736b8be82378b8',ramdisk_id='',reservation_id='r-1z1vi3sk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-842270816',owner_user_name='tempest-ServerActionsTestJSON-842270816-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2bd16d1f5f9d4eb396c474eedee67165',uuid=62d1cea8-2591-4b86-b69f-aee1eeafa901,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.181 2 DEBUG nova.network.os_vif_util [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converting VIF {"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.182 2 DEBUG nova.network.os_vif_util [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.183 2 DEBUG os_vif [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef13c059-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.191 2 INFO os_vif [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f')#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.200 2 DEBUG nova.virt.libvirt.driver [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Start _get_guest_xml network_info=[{"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.205 2 WARNING nova.virt.libvirt.driver [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.222 2 DEBUG nova.virt.libvirt.host [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.223 2 DEBUG nova.virt.libvirt.host [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.227 2 DEBUG nova.virt.libvirt.host [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.228 2 DEBUG nova.virt.libvirt.host [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.229 2 DEBUG nova.virt.libvirt.driver [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.229 2 DEBUG nova.virt.hardware [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.230 2 DEBUG nova.virt.hardware [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.230 2 DEBUG nova.virt.hardware [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.230 2 DEBUG nova.virt.hardware [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.231 2 DEBUG nova.virt.hardware [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.231 2 DEBUG nova.virt.hardware [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.231 2 DEBUG nova.virt.hardware [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.231 2 DEBUG nova.virt.hardware [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.232 2 DEBUG nova.virt.hardware [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.232 2 DEBUG nova.virt.hardware [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.232 2 DEBUG nova.virt.hardware [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.232 2 DEBUG nova.objects.instance [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.247 2 DEBUG oslo_concurrency.processutils [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:47.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:31:47 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/800553834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.723 2 DEBUG oslo_concurrency.processutils [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:47 np0005465987 nova_compute[230713]: 2025-10-02 12:31:47.766 2 DEBUG oslo_concurrency.processutils [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:31:48 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/259927622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.233 2 DEBUG oslo_concurrency.processutils [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.236 2 DEBUG nova.virt.libvirt.vif [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-443770275',display_name='tempest-ServerActionsTestJSON-server-443770275',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-443770275',id=118,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGPR3K1CRfy8NuVFf5q7pocTRVWdkUGpXAwygQtld+YJHetWc29OTANCyHNFo7YQv+XOnhZt50IrR5VORh36EWFuQsqglHMqAdAjWfcmv1078iyeLOwVFkKMjcTINNdhYA==',key_name='tempest-keypair-579122016',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4b8ca48cb5f64ef3b0736b8be82378b8',ramdisk_id='',reservation_id='r-1z1vi3sk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-842270816',owner_user_name='tempest-ServerActionsTestJSON-842270816-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2bd16d1f5f9d4eb396c474eedee67165',uuid=62d1cea8-2591-4b86-b69f-aee1eeafa901,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.237 2 DEBUG nova.network.os_vif_util [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converting VIF {"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.238 2 DEBUG nova.network.os_vif_util [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.239 2 DEBUG nova.objects.instance [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.260 2 DEBUG nova.virt.libvirt.driver [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  <uuid>62d1cea8-2591-4b86-b69f-aee1eeafa901</uuid>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  <name>instance-00000076</name>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerActionsTestJSON-server-443770275</nova:name>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:31:47</nova:creationTime>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <nova:user uuid="2bd16d1f5f9d4eb396c474eedee67165">tempest-ServerActionsTestJSON-842270816-project-member</nova:user>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <nova:project uuid="4b8ca48cb5f64ef3b0736b8be82378b8">tempest-ServerActionsTestJSON-842270816</nova:project>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <nova:port uuid="ef13c059-8f9b-4954-a8fd-1da56abe5729">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <entry name="serial">62d1cea8-2591-4b86-b69f-aee1eeafa901</entry>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <entry name="uuid">62d1cea8-2591-4b86-b69f-aee1eeafa901</entry>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/62d1cea8-2591-4b86-b69f-aee1eeafa901_disk">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/62d1cea8-2591-4b86-b69f-aee1eeafa901_disk.config">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:fb:57:ce"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <target dev="tapef13c059-8f"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/62d1cea8-2591-4b86-b69f-aee1eeafa901/console.log" append="off"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <input type="keyboard" bus="usb"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:31:48 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:31:48 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:31:48 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:31:48 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.261 2 DEBUG nova.virt.libvirt.driver [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.262 2 DEBUG nova.virt.libvirt.driver [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.263 2 DEBUG nova.virt.libvirt.vif [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-443770275',display_name='tempest-ServerActionsTestJSON-server-443770275',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-443770275',id=118,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGPR3K1CRfy8NuVFf5q7pocTRVWdkUGpXAwygQtld+YJHetWc29OTANCyHNFo7YQv+XOnhZt50IrR5VORh36EWFuQsqglHMqAdAjWfcmv1078iyeLOwVFkKMjcTINNdhYA==',key_name='tempest-keypair-579122016',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='4b8ca48cb5f64ef3b0736b8be82378b8',ramdisk_id='',reservation_id='r-1z1vi3sk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-842270816',owner_user_name='tempest-ServerActionsTestJSON-842270816-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2bd16d1f5f9d4eb396c474eedee67165',uuid=62d1cea8-2591-4b86-b69f-aee1eeafa901,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.263 2 DEBUG nova.network.os_vif_util [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converting VIF {"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.264 2 DEBUG nova.network.os_vif_util [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.264 2 DEBUG os_vif [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef13c059-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.274 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef13c059-8f, col_values=(('external_ids', {'iface-id': 'ef13c059-8f9b-4954-a8fd-1da56abe5729', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:57:ce', 'vm-uuid': '62d1cea8-2591-4b86-b69f-aee1eeafa901'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:48 np0005465987 NetworkManager[44910]: <info>  [1759408308.2767] manager: (tapef13c059-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.283 2 INFO os_vif [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f')#033[00m
Oct  2 08:31:48 np0005465987 kernel: tapef13c059-8f: entered promiscuous mode
Oct  2 08:31:48 np0005465987 NetworkManager[44910]: <info>  [1759408308.3790] manager: (tapef13c059-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Oct  2 08:31:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:48Z|00437|binding|INFO|Claiming lport ef13c059-8f9b-4954-a8fd-1da56abe5729 for this chassis.
Oct  2 08:31:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:48Z|00438|binding|INFO|ef13c059-8f9b-4954-a8fd-1da56abe5729: Claiming fa:16:3e:fb:57:ce 10.100.0.6
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.398 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:57:ce 10.100.0.6'], port_security=['fa:16:3e:fb:57:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '62d1cea8-2591-4b86-b69f-aee1eeafa901', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8ca48cb5f64ef3b0736b8be82378b8', 'neutron:revision_number': '5', 'neutron:security_group_ids': '16cf92c4-b852-4373-864a-75cf05995c6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23ff60d5-33e2-45e9-a563-ce0081b7cc04, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ef13c059-8f9b-4954-a8fd-1da56abe5729) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.399 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ef13c059-8f9b-4954-a8fd-1da56abe5729 in datapath b2c62a66-f9bc-4a45-a843-aef2e12a7fff bound to our chassis#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.401 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b2c62a66-f9bc-4a45-a843-aef2e12a7fff#033[00m
Oct  2 08:31:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:48Z|00439|binding|INFO|Setting lport ef13c059-8f9b-4954-a8fd-1da56abe5729 ovn-installed in OVS
Oct  2 08:31:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:48Z|00440|binding|INFO|Setting lport ef13c059-8f9b-4954-a8fd-1da56abe5729 up in Southbound
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:48 np0005465987 systemd-machined[188335]: New machine qemu-53-instance-00000076.
Oct  2 08:31:48 np0005465987 systemd-udevd[275071]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.421 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ea8e4d96-2711-425e-8721-27c97657b51f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.422 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb2c62a66-f1 in ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.424 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb2c62a66-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.424 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4598e045-adb5-4de6-ab47-6c9ed84f315d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.425 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[190d2d80-a0c9-426d-9950-e382f8119e00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 NetworkManager[44910]: <info>  [1759408308.4322] device (tapef13c059-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:31:48 np0005465987 NetworkManager[44910]: <info>  [1759408308.4329] device (tapef13c059-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:31:48 np0005465987 systemd[1]: Started Virtual Machine qemu-53-instance-00000076.
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.440 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[66e9a474-3b79-47b9-9200-ce3536132633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.475 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4223dc-fe52-4a73-8235-b6716d53eeca]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.506 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[7a602c8f-07cb-4d9b-b6be-217a318e4bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 NetworkManager[44910]: <info>  [1759408308.5140] manager: (tapb2c62a66-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/216)
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.513 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[68cbaff4-03a0-4920-b438-ec19d3155a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.544 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[133ace8e-2904-4919-884e-54ef1a4416b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.548 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[260819ba-c508-4d53-a3f8-3b7e37a6af1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 NetworkManager[44910]: <info>  [1759408308.5733] device (tapb2c62a66-f0): carrier: link connected
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.579 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[3f58ebdc-4254-4f3c-b2c7-9d589663ffb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.596 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccbaa99-5320-4975-b372-fbf0f4995a74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2c62a66-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:07:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637714, 'reachable_time': 36091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275103, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.615 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3adb55a5-fab8-4ea0-af9d-ef12a698e8ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:7a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637714, 'tstamp': 637714}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275104, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.630 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[afef914d-35fe-4017-94f8-e3fed87c16cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2c62a66-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:07:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637714, 'reachable_time': 36091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275105, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.657 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[413b9d61-3268-4638-b2cb-6fb79adb3937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.711 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c105c997-72de-455c-a3e3-f4d7b6c3ea98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.713 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2c62a66-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.713 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.714 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2c62a66-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:48 np0005465987 NetworkManager[44910]: <info>  [1759408308.7162] manager: (tapb2c62a66-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Oct  2 08:31:48 np0005465987 kernel: tapb2c62a66-f0: entered promiscuous mode
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.719 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb2c62a66-f0, col_values=(('external_ids', {'iface-id': '8c1234f6-f595-4979-94c0-98da2211f0e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:48Z|00441|binding|INFO|Releasing lport 8c1234f6-f595-4979-94c0-98da2211f0e1 from this chassis (sb_readonly=0)
Oct  2 08:31:48 np0005465987 nova_compute[230713]: 2025-10-02 12:31:48.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.737 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.738 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e737fc90-9ee7-4390-8573-083092dd36f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.739 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-b2c62a66-f9bc-4a45-a843-aef2e12a7fff
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID b2c62a66-f9bc-4a45-a843-aef2e12a7fff
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:31:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:48.739 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'env', 'PROCESS_TAG=haproxy-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:31:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:48.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:49 np0005465987 podman[275155]: 2025-10-02 12:31:49.068789995 +0000 UTC m=+0.030056027 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:31:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:31:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:49.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:31:49 np0005465987 podman[275155]: 2025-10-02 12:31:49.401050904 +0000 UTC m=+0.362316936 container create fcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:31:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:49 np0005465987 systemd[1]: Started libpod-conmon-fcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f.scope.
Oct  2 08:31:49 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:31:49 np0005465987 podman[275187]: 2025-10-02 12:31:49.589493653 +0000 UTC m=+0.145327793 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:31:49 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d1f60bb08894fb3730ea805aee5da9b2b34ed13f2a5fa2f256218fec7e50cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:31:49 np0005465987 podman[275188]: 2025-10-02 12:31:49.593835313 +0000 UTC m=+0.144204762 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:31:49 np0005465987 podman[275155]: 2025-10-02 12:31:49.668767676 +0000 UTC m=+0.630033678 container init fcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:31:49 np0005465987 podman[275155]: 2025-10-02 12:31:49.675853331 +0000 UTC m=+0.637119333 container start fcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:31:49 np0005465987 nova_compute[230713]: 2025-10-02 12:31:49.676 2 DEBUG nova.compute.manager [req-dbd5918c-0dfe-4932-aa02-a627d09c644e req-6114898e-2563-4731-8047-fed77d5c7c69 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:49 np0005465987 nova_compute[230713]: 2025-10-02 12:31:49.677 2 DEBUG oslo_concurrency.lockutils [req-dbd5918c-0dfe-4932-aa02-a627d09c644e req-6114898e-2563-4731-8047-fed77d5c7c69 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:49 np0005465987 nova_compute[230713]: 2025-10-02 12:31:49.677 2 DEBUG oslo_concurrency.lockutils [req-dbd5918c-0dfe-4932-aa02-a627d09c644e req-6114898e-2563-4731-8047-fed77d5c7c69 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:49 np0005465987 nova_compute[230713]: 2025-10-02 12:31:49.677 2 DEBUG oslo_concurrency.lockutils [req-dbd5918c-0dfe-4932-aa02-a627d09c644e req-6114898e-2563-4731-8047-fed77d5c7c69 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:49 np0005465987 nova_compute[230713]: 2025-10-02 12:31:49.677 2 DEBUG nova.compute.manager [req-dbd5918c-0dfe-4932-aa02-a627d09c644e req-6114898e-2563-4731-8047-fed77d5c7c69 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] No waiting events found dispatching network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:49 np0005465987 nova_compute[230713]: 2025-10-02 12:31:49.678 2 WARNING nova.compute.manager [req-dbd5918c-0dfe-4932-aa02-a627d09c644e req-6114898e-2563-4731-8047-fed77d5c7c69 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received unexpected event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct  2 08:31:49 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275240]: [NOTICE]   (275260) : New worker (275262) forked
Oct  2 08:31:49 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275240]: [NOTICE]   (275260) : Loading success.
Oct  2 08:31:49 np0005465987 podman[275186]: 2025-10-02 12:31:49.754482456 +0000 UTC m=+0.314561782 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:31:49 np0005465987 nova_compute[230713]: 2025-10-02 12:31:49.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:50 np0005465987 nova_compute[230713]: 2025-10-02 12:31:50.052 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for 62d1cea8-2591-4b86-b69f-aee1eeafa901 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:31:50 np0005465987 nova_compute[230713]: 2025-10-02 12:31:50.052 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408310.0509374, 62d1cea8-2591-4b86-b69f-aee1eeafa901 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:50 np0005465987 nova_compute[230713]: 2025-10-02 12:31:50.053 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:31:50 np0005465987 nova_compute[230713]: 2025-10-02 12:31:50.055 2 DEBUG nova.compute.manager [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:31:50 np0005465987 nova_compute[230713]: 2025-10-02 12:31:50.058 2 INFO nova.virt.libvirt.driver [-] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Instance rebooted successfully.#033[00m
Oct  2 08:31:50 np0005465987 nova_compute[230713]: 2025-10-02 12:31:50.059 2 DEBUG nova.compute.manager [None req-8a813f56-13ac-4a47-af09-00acdfb0bde1 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:50 np0005465987 nova_compute[230713]: 2025-10-02 12:31:50.092 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:50 np0005465987 nova_compute[230713]: 2025-10-02 12:31:50.095 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:31:50 np0005465987 nova_compute[230713]: 2025-10-02 12:31:50.148 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Oct  2 08:31:50 np0005465987 nova_compute[230713]: 2025-10-02 12:31:50.149 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408310.054563, 62d1cea8-2591-4b86-b69f-aee1eeafa901 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:50 np0005465987 nova_compute[230713]: 2025-10-02 12:31:50.149 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] VM Started (Lifecycle Event)#033[00m
Oct  2 08:31:50 np0005465987 nova_compute[230713]: 2025-10-02 12:31:50.199 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:50 np0005465987 nova_compute[230713]: 2025-10-02 12:31:50.202 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:31:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:50.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:51.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:52 np0005465987 nova_compute[230713]: 2025-10-02 12:31:52.282 2 DEBUG nova.compute.manager [req-6d376af3-8b8f-4fd3-9d09-07ba918dda88 req-b19b72c4-f4bf-4617-ad70-b2c084bca3a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:52 np0005465987 nova_compute[230713]: 2025-10-02 12:31:52.282 2 DEBUG oslo_concurrency.lockutils [req-6d376af3-8b8f-4fd3-9d09-07ba918dda88 req-b19b72c4-f4bf-4617-ad70-b2c084bca3a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:52 np0005465987 nova_compute[230713]: 2025-10-02 12:31:52.283 2 DEBUG oslo_concurrency.lockutils [req-6d376af3-8b8f-4fd3-9d09-07ba918dda88 req-b19b72c4-f4bf-4617-ad70-b2c084bca3a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:52 np0005465987 nova_compute[230713]: 2025-10-02 12:31:52.283 2 DEBUG oslo_concurrency.lockutils [req-6d376af3-8b8f-4fd3-9d09-07ba918dda88 req-b19b72c4-f4bf-4617-ad70-b2c084bca3a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:52 np0005465987 nova_compute[230713]: 2025-10-02 12:31:52.283 2 DEBUG nova.compute.manager [req-6d376af3-8b8f-4fd3-9d09-07ba918dda88 req-b19b72c4-f4bf-4617-ad70-b2c084bca3a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] No waiting events found dispatching network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:52 np0005465987 nova_compute[230713]: 2025-10-02 12:31:52.283 2 WARNING nova.compute.manager [req-6d376af3-8b8f-4fd3-9d09-07ba918dda88 req-b19b72c4-f4bf-4617-ad70-b2c084bca3a6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received unexpected event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:31:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:52.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:53 np0005465987 nova_compute[230713]: 2025-10-02 12:31:53.121 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408298.1205542, 6d9cbb75-985a-47af-9a91-f4a885d1b59a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:53 np0005465987 nova_compute[230713]: 2025-10-02 12:31:53.121 2 INFO nova.compute.manager [-] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:31:53 np0005465987 nova_compute[230713]: 2025-10-02 12:31:53.207 2 DEBUG nova.compute.manager [None req-80a76284-d2c0-4daf-bbf9-0b4bd5eabecb - - - - - -] [instance: 6d9cbb75-985a-47af-9a91-f4a885d1b59a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:53 np0005465987 nova_compute[230713]: 2025-10-02 12:31:53.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:31:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:53.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:31:53 np0005465987 nova_compute[230713]: 2025-10-02 12:31:53.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:31:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:54.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:31:54 np0005465987 nova_compute[230713]: 2025-10-02 12:31:54.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:55 np0005465987 nova_compute[230713]: 2025-10-02 12:31:55.086 2 DEBUG nova.objects.instance [None req-40994cf5-6e0c-4d71-a818-f60e37d66638 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:55 np0005465987 nova_compute[230713]: 2025-10-02 12:31:55.143 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408315.1436584, 62d1cea8-2591-4b86-b69f-aee1eeafa901 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:55 np0005465987 nova_compute[230713]: 2025-10-02 12:31:55.144 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:31:55 np0005465987 nova_compute[230713]: 2025-10-02 12:31:55.184 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:55 np0005465987 nova_compute[230713]: 2025-10-02 12:31:55.189 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:31:55 np0005465987 nova_compute[230713]: 2025-10-02 12:31:55.238 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  2 08:31:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:55.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:55 np0005465987 kernel: tapef13c059-8f (unregistering): left promiscuous mode
Oct  2 08:31:55 np0005465987 NetworkManager[44910]: <info>  [1759408315.6458] device (tapef13c059-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:31:55 np0005465987 nova_compute[230713]: 2025-10-02 12:31:55.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:55 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:55Z|00442|binding|INFO|Releasing lport ef13c059-8f9b-4954-a8fd-1da56abe5729 from this chassis (sb_readonly=0)
Oct  2 08:31:55 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:55Z|00443|binding|INFO|Setting lport ef13c059-8f9b-4954-a8fd-1da56abe5729 down in Southbound
Oct  2 08:31:55 np0005465987 ovn_controller[129172]: 2025-10-02T12:31:55Z|00444|binding|INFO|Removing iface tapef13c059-8f ovn-installed in OVS
Oct  2 08:31:55 np0005465987 nova_compute[230713]: 2025-10-02 12:31:55.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:55 np0005465987 nova_compute[230713]: 2025-10-02 12:31:55.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:55.683 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:57:ce 10.100.0.6'], port_security=['fa:16:3e:fb:57:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '62d1cea8-2591-4b86-b69f-aee1eeafa901', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8ca48cb5f64ef3b0736b8be82378b8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '16cf92c4-b852-4373-864a-75cf05995c6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.182', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23ff60d5-33e2-45e9-a563-ce0081b7cc04, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ef13c059-8f9b-4954-a8fd-1da56abe5729) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:31:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:55.684 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ef13c059-8f9b-4954-a8fd-1da56abe5729 in datapath b2c62a66-f9bc-4a45-a843-aef2e12a7fff unbound from our chassis#033[00m
Oct  2 08:31:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:55.686 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2c62a66-f9bc-4a45-a843-aef2e12a7fff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:31:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:55.687 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[df2faf70-5c18-45e9-bdff-bb5675d24e4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:55.687 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff namespace which is not needed anymore#033[00m
Oct  2 08:31:55 np0005465987 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct  2 08:31:55 np0005465987 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000076.scope: Consumed 6.487s CPU time.
Oct  2 08:31:55 np0005465987 systemd-machined[188335]: Machine qemu-53-instance-00000076 terminated.
Oct  2 08:31:55 np0005465987 nova_compute[230713]: 2025-10-02 12:31:55.817 2 DEBUG nova.compute.manager [None req-40994cf5-6e0c-4d71-a818-f60e37d66638 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:55 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275240]: [NOTICE]   (275260) : haproxy version is 2.8.14-c23fe91
Oct  2 08:31:55 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275240]: [NOTICE]   (275260) : path to executable is /usr/sbin/haproxy
Oct  2 08:31:55 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275240]: [WARNING]  (275260) : Exiting Master process...
Oct  2 08:31:55 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275240]: [ALERT]    (275260) : Current worker (275262) exited with code 143 (Terminated)
Oct  2 08:31:55 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275240]: [WARNING]  (275260) : All workers exited. Exiting... (0)
Oct  2 08:31:55 np0005465987 systemd[1]: libpod-fcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f.scope: Deactivated successfully.
Oct  2 08:31:55 np0005465987 podman[275298]: 2025-10-02 12:31:55.849698284 +0000 UTC m=+0.060832056 container died fcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:31:55 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f-userdata-shm.mount: Deactivated successfully.
Oct  2 08:31:55 np0005465987 systemd[1]: var-lib-containers-storage-overlay-43d1f60bb08894fb3730ea805aee5da9b2b34ed13f2a5fa2f256218fec7e50cf-merged.mount: Deactivated successfully.
Oct  2 08:31:55 np0005465987 podman[275298]: 2025-10-02 12:31:55.94180743 +0000 UTC m=+0.152941182 container cleanup fcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:31:55 np0005465987 systemd[1]: libpod-conmon-fcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f.scope: Deactivated successfully.
Oct  2 08:31:56 np0005465987 podman[275340]: 2025-10-02 12:31:56.027890681 +0000 UTC m=+0.059709876 container remove fcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:31:56 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:56.035 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1efdb0da-1c36-47d5-a84e-78b84cb3870e]: (4, ('Thu Oct  2 12:31:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff (fcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f)\nfcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f\nThu Oct  2 12:31:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff (fcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f)\nfcc277c43a235bd52c74a6d5a393e47ce0e310ae196dd1517b8ba45ecd383b4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:56 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:56.039 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[40d602bc-27cd-4405-ac01-b9fb51060f37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:56 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:56.041 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2c62a66-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:56 np0005465987 kernel: tapb2c62a66-f0: left promiscuous mode
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:56 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:56.067 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[770de3bb-819e-4287-b094-ad02c4944acf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:56 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:56.105 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bf44422a-aa00-4427-aad5-c84cfda1214b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:56 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:56.107 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c83b84fc-4b93-4af1-b25a-50cf033d3e4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:56 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:56.123 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b03674-6689-4db8-b625-14f3c977bbfa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637707, 'reachable_time': 21385, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275358, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:56 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:56.127 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:31:56 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:31:56.127 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[c5480592-2efc-4f3b-8d88-832cf98b69b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:56 np0005465987 systemd[1]: run-netns-ovnmeta\x2db2c62a66\x2df9bc\x2d4a45\x2da843\x2daef2e12a7fff.mount: Deactivated successfully.
Oct  2 08:31:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e319 e319: 3 total, 3 up, 3 in
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.581 2 DEBUG nova.compute.manager [req-a4c2cce4-5b7e-4bfb-bf14-44cd2b976a43 req-7a68fb95-1fd6-4e5d-a442-21c27fdd2fd9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-unplugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.582 2 DEBUG oslo_concurrency.lockutils [req-a4c2cce4-5b7e-4bfb-bf14-44cd2b976a43 req-7a68fb95-1fd6-4e5d-a442-21c27fdd2fd9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.583 2 DEBUG oslo_concurrency.lockutils [req-a4c2cce4-5b7e-4bfb-bf14-44cd2b976a43 req-7a68fb95-1fd6-4e5d-a442-21c27fdd2fd9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.583 2 DEBUG oslo_concurrency.lockutils [req-a4c2cce4-5b7e-4bfb-bf14-44cd2b976a43 req-7a68fb95-1fd6-4e5d-a442-21c27fdd2fd9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.583 2 DEBUG nova.compute.manager [req-a4c2cce4-5b7e-4bfb-bf14-44cd2b976a43 req-7a68fb95-1fd6-4e5d-a442-21c27fdd2fd9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] No waiting events found dispatching network-vif-unplugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.584 2 WARNING nova.compute.manager [req-a4c2cce4-5b7e-4bfb-bf14-44cd2b976a43 req-7a68fb95-1fd6-4e5d-a442-21c27fdd2fd9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received unexpected event network-vif-unplugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.584 2 DEBUG nova.compute.manager [req-a4c2cce4-5b7e-4bfb-bf14-44cd2b976a43 req-7a68fb95-1fd6-4e5d-a442-21c27fdd2fd9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.585 2 DEBUG oslo_concurrency.lockutils [req-a4c2cce4-5b7e-4bfb-bf14-44cd2b976a43 req-7a68fb95-1fd6-4e5d-a442-21c27fdd2fd9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.585 2 DEBUG oslo_concurrency.lockutils [req-a4c2cce4-5b7e-4bfb-bf14-44cd2b976a43 req-7a68fb95-1fd6-4e5d-a442-21c27fdd2fd9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.585 2 DEBUG oslo_concurrency.lockutils [req-a4c2cce4-5b7e-4bfb-bf14-44cd2b976a43 req-7a68fb95-1fd6-4e5d-a442-21c27fdd2fd9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.586 2 DEBUG nova.compute.manager [req-a4c2cce4-5b7e-4bfb-bf14-44cd2b976a43 req-7a68fb95-1fd6-4e5d-a442-21c27fdd2fd9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] No waiting events found dispatching network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.586 2 WARNING nova.compute.manager [req-a4c2cce4-5b7e-4bfb-bf14-44cd2b976a43 req-7a68fb95-1fd6-4e5d-a442-21c27fdd2fd9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received unexpected event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 08:31:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:31:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:56.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:31:56 np0005465987 nova_compute[230713]: 2025-10-02 12:31:56.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:31:57 np0005465987 nova_compute[230713]: 2025-10-02 12:31:57.189 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:31:57 np0005465987 nova_compute[230713]: 2025-10-02 12:31:57.189 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:31:57 np0005465987 nova_compute[230713]: 2025-10-02 12:31:57.190 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:31:57 np0005465987 nova_compute[230713]: 2025-10-02 12:31:57.190 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:57.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:57 np0005465987 nova_compute[230713]: 2025-10-02 12:31:57.728 2 INFO nova.compute.manager [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Resuming#033[00m
Oct  2 08:31:57 np0005465987 nova_compute[230713]: 2025-10-02 12:31:57.729 2 DEBUG nova.objects.instance [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'flavor' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:58 np0005465987 nova_compute[230713]: 2025-10-02 12:31:58.200 2 DEBUG oslo_concurrency.lockutils [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:31:58 np0005465987 nova_compute[230713]: 2025-10-02 12:31:58.201 2 DEBUG oslo_concurrency.lockutils [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquired lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:31:58 np0005465987 nova_compute[230713]: 2025-10-02 12:31:58.201 2 DEBUG nova.network.neutron [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:31:58 np0005465987 nova_compute[230713]: 2025-10-02 12:31:58.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:58.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:31:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:31:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:59.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:31:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:59 np0005465987 nova_compute[230713]: 2025-10-02 12:31:59.535 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Updating instance_info_cache with network_info: [{"id": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "address": "fa:16:3e:f2:67:0c", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2131d2f6-11", "ovs_interfaceid": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:31:59 np0005465987 nova_compute[230713]: 2025-10-02 12:31:59.602 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:31:59 np0005465987 nova_compute[230713]: 2025-10-02 12:31:59.602 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:31:59 np0005465987 nova_compute[230713]: 2025-10-02 12:31:59.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:00Z|00445|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.391 2 DEBUG nova.network.neutron [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Updating instance_info_cache with network_info: [{"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.426 2 DEBUG oslo_concurrency.lockutils [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Releasing lock "refresh_cache-62d1cea8-2591-4b86-b69f-aee1eeafa901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.435 2 DEBUG nova.virt.libvirt.vif [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-443770275',display_name='tempest-ServerActionsTestJSON-server-443770275',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-443770275',id=118,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGPR3K1CRfy8NuVFf5q7pocTRVWdkUGpXAwygQtld+YJHetWc29OTANCyHNFo7YQv+XOnhZt50IrR5VORh36EWFuQsqglHMqAdAjWfcmv1078iyeLOwVFkKMjcTINNdhYA==',key_name='tempest-keypair-579122016',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4b8ca48cb5f64ef3b0736b8be82378b8',ramdisk_id='',reservation_id='r-1z1vi3sk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-842270816',owner_user_name='tempest-ServerActionsTestJSON-842270816-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:31:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2bd16d1f5f9d4eb396c474eedee67165',uuid=62d1cea8-2591-4b86-b69f-aee1eeafa901,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.435 2 DEBUG nova.network.os_vif_util [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converting VIF {"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.436 2 DEBUG nova.network.os_vif_util [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.437 2 DEBUG os_vif [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef13c059-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef13c059-8f, col_values=(('external_ids', {'iface-id': 'ef13c059-8f9b-4954-a8fd-1da56abe5729', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:57:ce', 'vm-uuid': '62d1cea8-2591-4b86-b69f-aee1eeafa901'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.445 2 INFO os_vif [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f')#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.479 2 DEBUG nova.objects.instance [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'numa_topology' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:00 np0005465987 kernel: tapef13c059-8f: entered promiscuous mode
Oct  2 08:32:00 np0005465987 NetworkManager[44910]: <info>  [1759408320.5700] manager: (tapef13c059-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/218)
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:00Z|00446|binding|INFO|Claiming lport ef13c059-8f9b-4954-a8fd-1da56abe5729 for this chassis.
Oct  2 08:32:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:00Z|00447|binding|INFO|ef13c059-8f9b-4954-a8fd-1da56abe5729: Claiming fa:16:3e:fb:57:ce 10.100.0.6
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.581 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:57:ce 10.100.0.6'], port_security=['fa:16:3e:fb:57:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '62d1cea8-2591-4b86-b69f-aee1eeafa901', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8ca48cb5f64ef3b0736b8be82378b8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '16cf92c4-b852-4373-864a-75cf05995c6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23ff60d5-33e2-45e9-a563-ce0081b7cc04, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ef13c059-8f9b-4954-a8fd-1da56abe5729) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.582 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ef13c059-8f9b-4954-a8fd-1da56abe5729 in datapath b2c62a66-f9bc-4a45-a843-aef2e12a7fff bound to our chassis#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.584 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b2c62a66-f9bc-4a45-a843-aef2e12a7fff#033[00m
Oct  2 08:32:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:00Z|00448|binding|INFO|Setting lport ef13c059-8f9b-4954-a8fd-1da56abe5729 ovn-installed in OVS
Oct  2 08:32:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:00Z|00449|binding|INFO|Setting lport ef13c059-8f9b-4954-a8fd-1da56abe5729 up in Southbound
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.602 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[36e255e0-07a0-4fa2-8cf4-f41d48085a66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.605 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb2c62a66-f1 in ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.607 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb2c62a66-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.607 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[39679680-be9d-4f00-98ee-47bcfcbaa2bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.608 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[19eba3c4-8d3e-4437-a901-a5a523c301ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 systemd-udevd[275375]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:32:00 np0005465987 systemd-machined[188335]: New machine qemu-54-instance-00000076.
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.627 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7cbdb8-febc-4112-b862-d48754feb462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 NetworkManager[44910]: <info>  [1759408320.6308] device (tapef13c059-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:32:00 np0005465987 NetworkManager[44910]: <info>  [1759408320.6319] device (tapef13c059-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:32:00 np0005465987 systemd[1]: Started Virtual Machine qemu-54-instance-00000076.
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.653 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[69550a16-5aa9-4290-9547-6ddeda76eadb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.691 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[845d0234-3962-44de-83fa-b705c5ed597a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.698 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[957956e8-ab54-45f2-bb6d-71cf889e339f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 systemd-udevd[275377]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:32:00 np0005465987 NetworkManager[44910]: <info>  [1759408320.6996] manager: (tapb2c62a66-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.738 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ec3da6-5358-4ba4-95ac-8d38928f207e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.742 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5980a9cc-6d09-437d-87ce-e4325c57f575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 NetworkManager[44910]: <info>  [1759408320.7781] device (tapb2c62a66-f0): carrier: link connected
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.792 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2e6fea-8b35-49dc-962a-e000d1a5ab5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:32:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:00.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.818 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a3171508-bc55-4e1e-9146-876c6853563f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2c62a66-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:07:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 135], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638935, 'reachable_time': 40615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275406, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.835 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[128ff5d1-3a0b-403a-ae17-5f11c9da6285]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:7a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638935, 'tstamp': 638935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275408, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.853 2 DEBUG nova.compute.manager [req-6b8b4582-b9bf-49d8-85af-5f9f4a7cb167 req-c17938f5-c949-4664-98ed-cce3ec80921f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.854 2 DEBUG oslo_concurrency.lockutils [req-6b8b4582-b9bf-49d8-85af-5f9f4a7cb167 req-c17938f5-c949-4664-98ed-cce3ec80921f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.854 2 DEBUG oslo_concurrency.lockutils [req-6b8b4582-b9bf-49d8-85af-5f9f4a7cb167 req-c17938f5-c949-4664-98ed-cce3ec80921f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.855 2 DEBUG oslo_concurrency.lockutils [req-6b8b4582-b9bf-49d8-85af-5f9f4a7cb167 req-c17938f5-c949-4664-98ed-cce3ec80921f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.855 2 DEBUG nova.compute.manager [req-6b8b4582-b9bf-49d8-85af-5f9f4a7cb167 req-c17938f5-c949-4664-98ed-cce3ec80921f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] No waiting events found dispatching network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.855 2 WARNING nova.compute.manager [req-6b8b4582-b9bf-49d8-85af-5f9f4a7cb167 req-c17938f5-c949-4664-98ed-cce3ec80921f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received unexpected event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 for instance with vm_state suspended and task_state resuming.#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.861 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f5d66c-2f1e-406f-9ff2-d58979b57b38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2c62a66-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:07:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 135], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638935, 'reachable_time': 40615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275422, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.906 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e4787926-a3b0-4384-a95b-f8a2fa32d6de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.994 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3d943c6a-8425-4c85-963e-5fc6c1350069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.995 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2c62a66-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.995 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:32:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:00.996 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2c62a66-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:00 np0005465987 nova_compute[230713]: 2025-10-02 12:32:00.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:00 np0005465987 NetworkManager[44910]: <info>  [1759408320.9992] manager: (tapb2c62a66-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Oct  2 08:32:00 np0005465987 kernel: tapb2c62a66-f0: entered promiscuous mode
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:01.007 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb2c62a66-f0, col_values=(('external_ids', {'iface-id': '8c1234f6-f595-4979-94c0-98da2211f0e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:01 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:01Z|00450|binding|INFO|Releasing lport 8c1234f6-f595-4979-94c0-98da2211f0e1 from this chassis (sb_readonly=0)
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:01.011 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:01.013 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[651d40de-091d-4a52-8e09-ff7cbe85310a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:01.014 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-b2c62a66-f9bc-4a45-a843-aef2e12a7fff
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.pid.haproxy
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID b2c62a66-f9bc-4a45-a843-aef2e12a7fff
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:32:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:01.014 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'env', 'PROCESS_TAG=haproxy-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b2c62a66-f9bc-4a45-a843-aef2e12a7fff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:01.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:01 np0005465987 podman[275482]: 2025-10-02 12:32:01.41934666 +0000 UTC m=+0.058663736 container create 54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 08:32:01 np0005465987 systemd[1]: Started libpod-conmon-54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444.scope.
Oct  2 08:32:01 np0005465987 podman[275482]: 2025-10-02 12:32:01.392027148 +0000 UTC m=+0.031344254 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:32:01 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:32:01 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a92c2a097d0dba3f5b28f6ae66e2de1eaf45b721aed2b327dc3386442ccc180/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:32:01 np0005465987 podman[275482]: 2025-10-02 12:32:01.519342684 +0000 UTC m=+0.158659820 container init 54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:32:01 np0005465987 podman[275482]: 2025-10-02 12:32:01.52719315 +0000 UTC m=+0.166510237 container start 54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:32:01 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275497]: [NOTICE]   (275501) : New worker (275503) forked
Oct  2 08:32:01 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275497]: [NOTICE]   (275501) : Loading success.
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.597 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for 62d1cea8-2591-4b86-b69f-aee1eeafa901 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.598 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408321.596813, 62d1cea8-2591-4b86-b69f-aee1eeafa901 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.598 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] VM Started (Lifecycle Event)#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.616 2 DEBUG nova.compute.manager [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.617 2 DEBUG nova.objects.instance [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.625 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.632 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.663 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.664 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408321.6030216, 62d1cea8-2591-4b86-b69f-aee1eeafa901 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.664 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.669 2 INFO nova.virt.libvirt.driver [-] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Instance running successfully.#033[00m
Oct  2 08:32:01 np0005465987 virtqemud[230442]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.673 2 DEBUG nova.virt.libvirt.guest [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.673 2 DEBUG nova.compute.manager [None req-59715c77-3274-4da8-a869-15281d8c787b 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.683 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.686 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:32:01 np0005465987 nova_compute[230713]: 2025-10-02 12:32:01.717 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  2 08:32:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:32:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:02.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:32:03 np0005465987 nova_compute[230713]: 2025-10-02 12:32:03.112 2 DEBUG nova.compute.manager [req-07b7a620-f6aa-400a-9663-4786c0b42c48 req-e8a9e984-1a55-48be-b434-30e34b0059b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:03 np0005465987 nova_compute[230713]: 2025-10-02 12:32:03.113 2 DEBUG oslo_concurrency.lockutils [req-07b7a620-f6aa-400a-9663-4786c0b42c48 req-e8a9e984-1a55-48be-b434-30e34b0059b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:03 np0005465987 nova_compute[230713]: 2025-10-02 12:32:03.113 2 DEBUG oslo_concurrency.lockutils [req-07b7a620-f6aa-400a-9663-4786c0b42c48 req-e8a9e984-1a55-48be-b434-30e34b0059b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:03 np0005465987 nova_compute[230713]: 2025-10-02 12:32:03.114 2 DEBUG oslo_concurrency.lockutils [req-07b7a620-f6aa-400a-9663-4786c0b42c48 req-e8a9e984-1a55-48be-b434-30e34b0059b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:03 np0005465987 nova_compute[230713]: 2025-10-02 12:32:03.114 2 DEBUG nova.compute.manager [req-07b7a620-f6aa-400a-9663-4786c0b42c48 req-e8a9e984-1a55-48be-b434-30e34b0059b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] No waiting events found dispatching network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:03 np0005465987 nova_compute[230713]: 2025-10-02 12:32:03.115 2 WARNING nova.compute.manager [req-07b7a620-f6aa-400a-9663-4786c0b42c48 req-e8a9e984-1a55-48be-b434-30e34b0059b4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received unexpected event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:32:03 np0005465987 nova_compute[230713]: 2025-10-02 12:32:03.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:03.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:03 np0005465987 nova_compute[230713]: 2025-10-02 12:32:03.595 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:03 np0005465987 nova_compute[230713]: 2025-10-02 12:32:03.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.006 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.006 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.006 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.007 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.007 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:32:04 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3497539909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.557 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.647 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.647 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.651 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.652 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.655 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.655 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:04.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.835 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.836 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3867MB free_disk=20.751590728759766GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.837 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.837 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.985 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.985 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance e744f3b9-ccc1-43b1-b134-e6050b9487eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.985 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 62d1cea8-2591-4b86-b69f-aee1eeafa901 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.986 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:32:04 np0005465987 nova_compute[230713]: 2025-10-02 12:32:04.986 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.083 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.174 2 DEBUG oslo_concurrency.lockutils [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.176 2 DEBUG oslo_concurrency.lockutils [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.177 2 DEBUG oslo_concurrency.lockutils [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.177 2 DEBUG oslo_concurrency.lockutils [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.178 2 DEBUG oslo_concurrency.lockutils [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.181 2 INFO nova.compute.manager [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Terminating instance#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.184 2 DEBUG nova.compute.manager [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:32:05 np0005465987 kernel: tapef13c059-8f (unregistering): left promiscuous mode
Oct  2 08:32:05 np0005465987 NetworkManager[44910]: <info>  [1759408325.3310] device (tapef13c059-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:32:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:05.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:05Z|00451|binding|INFO|Releasing lport ef13c059-8f9b-4954-a8fd-1da56abe5729 from this chassis (sb_readonly=0)
Oct  2 08:32:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:05Z|00452|binding|INFO|Setting lport ef13c059-8f9b-4954-a8fd-1da56abe5729 down in Southbound
Oct  2 08:32:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:05Z|00453|binding|INFO|Removing iface tapef13c059-8f ovn-installed in OVS
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.348 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:57:ce 10.100.0.6'], port_security=['fa:16:3e:fb:57:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '62d1cea8-2591-4b86-b69f-aee1eeafa901', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b8ca48cb5f64ef3b0736b8be82378b8', 'neutron:revision_number': '8', 'neutron:security_group_ids': '16cf92c4-b852-4373-864a-75cf05995c6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.182', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23ff60d5-33e2-45e9-a563-ce0081b7cc04, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ef13c059-8f9b-4954-a8fd-1da56abe5729) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.351 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ef13c059-8f9b-4954-a8fd-1da56abe5729 in datapath b2c62a66-f9bc-4a45-a843-aef2e12a7fff unbound from our chassis#033[00m
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.355 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2c62a66-f9bc-4a45-a843-aef2e12a7fff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.357 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c20c56ac-7b5d-459f-a061-5ab4d3368b17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.358 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff namespace which is not needed anymore#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:05 np0005465987 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct  2 08:32:05 np0005465987 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000076.scope: Consumed 4.448s CPU time.
Oct  2 08:32:05 np0005465987 systemd-machined[188335]: Machine qemu-54-instance-00000076 terminated.
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.424 2 INFO nova.virt.libvirt.driver [-] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Instance destroyed successfully.#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.425 2 DEBUG nova.objects.instance [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lazy-loading 'resources' on Instance uuid 62d1cea8-2591-4b86-b69f-aee1eeafa901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.444 2 DEBUG nova.virt.libvirt.vif [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-443770275',display_name='tempest-ServerActionsTestJSON-server-443770275',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-443770275',id=118,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGPR3K1CRfy8NuVFf5q7pocTRVWdkUGpXAwygQtld+YJHetWc29OTANCyHNFo7YQv+XOnhZt50IrR5VORh36EWFuQsqglHMqAdAjWfcmv1078iyeLOwVFkKMjcTINNdhYA==',key_name='tempest-keypair-579122016',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b8ca48cb5f64ef3b0736b8be82378b8',ramdisk_id='',reservation_id='r-1z1vi3sk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-842270816',owner_user_name='tempest-ServerActionsTestJSON-842270816-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2bd16d1f5f9d4eb396c474eedee67165',uuid=62d1cea8-2591-4b86-b69f-aee1eeafa901,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.444 2 DEBUG nova.network.os_vif_util [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converting VIF {"id": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "address": "fa:16:3e:fb:57:ce", "network": {"id": "b2c62a66-f9bc-4a45-a843-aef2e12a7fff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1352928597-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b8ca48cb5f64ef3b0736b8be82378b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef13c059-8f", "ovs_interfaceid": "ef13c059-8f9b-4954-a8fd-1da56abe5729", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.445 2 DEBUG nova.network.os_vif_util [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.445 2 DEBUG os_vif [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.447 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef13c059-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.453 2 INFO os_vif [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:57:ce,bridge_name='br-int',has_traffic_filtering=True,id=ef13c059-8f9b-4954-a8fd-1da56abe5729,network=Network(b2c62a66-f9bc-4a45-a843-aef2e12a7fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef13c059-8f')#033[00m
Oct  2 08:32:05 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275497]: [NOTICE]   (275501) : haproxy version is 2.8.14-c23fe91
Oct  2 08:32:05 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275497]: [NOTICE]   (275501) : path to executable is /usr/sbin/haproxy
Oct  2 08:32:05 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275497]: [WARNING]  (275501) : Exiting Master process...
Oct  2 08:32:05 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275497]: [WARNING]  (275501) : Exiting Master process...
Oct  2 08:32:05 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275497]: [ALERT]    (275501) : Current worker (275503) exited with code 143 (Terminated)
Oct  2 08:32:05 np0005465987 neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff[275497]: [WARNING]  (275501) : All workers exited. Exiting... (0)
Oct  2 08:32:05 np0005465987 systemd[1]: libpod-54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444.scope: Deactivated successfully.
Oct  2 08:32:05 np0005465987 podman[275587]: 2025-10-02 12:32:05.508118553 +0000 UTC m=+0.051190272 container died 54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:32:05 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444-userdata-shm.mount: Deactivated successfully.
Oct  2 08:32:05 np0005465987 systemd[1]: var-lib-containers-storage-overlay-7a92c2a097d0dba3f5b28f6ae66e2de1eaf45b721aed2b327dc3386442ccc180-merged.mount: Deactivated successfully.
Oct  2 08:32:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:32:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/989010263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.564 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.570 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:32:05 np0005465987 podman[275587]: 2025-10-02 12:32:05.573245706 +0000 UTC m=+0.116317425 container cleanup 54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 08:32:05 np0005465987 systemd[1]: libpod-conmon-54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444.scope: Deactivated successfully.
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.591 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.618 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.618 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:05 np0005465987 podman[275636]: 2025-10-02 12:32:05.662329319 +0000 UTC m=+0.066232165 container remove 54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.670 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ce04e452-d6ad-46ab-a040-19bf91de1c31]: (4, ('Thu Oct  2 12:32:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff (54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444)\n54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444\nThu Oct  2 12:32:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff (54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444)\n54d987d51f11d714b745d3937e918fee2a1796c4e7c99e3efa80a65181554444\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.673 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f6be54c6-cc72-42da-8e7d-f282a227c4ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.675 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2c62a66-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:05 np0005465987 kernel: tapb2c62a66-f0: left promiscuous mode
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.695 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba3e5ae-d1a6-407d-90c2-855ef0ddf86b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.722 2 DEBUG nova.compute.manager [req-5738ae59-e37b-4254-ae74-f80d91b0211d req-cc842ff4-ce2b-4828-9c30-161a2d2bc096 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-unplugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.723 2 DEBUG oslo_concurrency.lockutils [req-5738ae59-e37b-4254-ae74-f80d91b0211d req-cc842ff4-ce2b-4828-9c30-161a2d2bc096 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.724 2 DEBUG oslo_concurrency.lockutils [req-5738ae59-e37b-4254-ae74-f80d91b0211d req-cc842ff4-ce2b-4828-9c30-161a2d2bc096 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.724 2 DEBUG oslo_concurrency.lockutils [req-5738ae59-e37b-4254-ae74-f80d91b0211d req-cc842ff4-ce2b-4828-9c30-161a2d2bc096 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.723 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4939702a-758a-470a-838f-b1b6bf710a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.724 2 DEBUG nova.compute.manager [req-5738ae59-e37b-4254-ae74-f80d91b0211d req-cc842ff4-ce2b-4828-9c30-161a2d2bc096 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] No waiting events found dispatching network-vif-unplugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:05 np0005465987 nova_compute[230713]: 2025-10-02 12:32:05.725 2 DEBUG nova.compute.manager [req-5738ae59-e37b-4254-ae74-f80d91b0211d req-cc842ff4-ce2b-4828-9c30-161a2d2bc096 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-unplugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.725 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e3825b0a-757f-4232-97a0-47ef7a2b500c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.751 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[515ab5b9-1727-4f83-bc5c-857872cd5412]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638926, 'reachable_time': 35021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275652, 'error': None, 'target': 'ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:05 np0005465987 systemd[1]: run-netns-ovnmeta\x2db2c62a66\x2df9bc\x2d4a45\x2da843\x2daef2e12a7fff.mount: Deactivated successfully.
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.755 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2c62a66-f9bc-4a45-a843-aef2e12a7fff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:32:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:05.756 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[e10f8e80-1fd1-45c7-b369-a1d3c4b7abc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:06 np0005465987 nova_compute[230713]: 2025-10-02 12:32:06.619 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:06 np0005465987 nova_compute[230713]: 2025-10-02 12:32:06.619 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:06 np0005465987 nova_compute[230713]: 2025-10-02 12:32:06.620 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:06.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:06 np0005465987 nova_compute[230713]: 2025-10-02 12:32:06.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:07.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:07 np0005465987 nova_compute[230713]: 2025-10-02 12:32:07.861 2 DEBUG nova.compute.manager [req-cf2dfe78-5203-43f5-8062-b8bf1c6d73d5 req-d0aa2a21-7ed1-40ae-aac3-1e866acd1e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:07 np0005465987 nova_compute[230713]: 2025-10-02 12:32:07.861 2 DEBUG oslo_concurrency.lockutils [req-cf2dfe78-5203-43f5-8062-b8bf1c6d73d5 req-d0aa2a21-7ed1-40ae-aac3-1e866acd1e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:07 np0005465987 nova_compute[230713]: 2025-10-02 12:32:07.862 2 DEBUG oslo_concurrency.lockutils [req-cf2dfe78-5203-43f5-8062-b8bf1c6d73d5 req-d0aa2a21-7ed1-40ae-aac3-1e866acd1e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:07 np0005465987 nova_compute[230713]: 2025-10-02 12:32:07.862 2 DEBUG oslo_concurrency.lockutils [req-cf2dfe78-5203-43f5-8062-b8bf1c6d73d5 req-d0aa2a21-7ed1-40ae-aac3-1e866acd1e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:07 np0005465987 nova_compute[230713]: 2025-10-02 12:32:07.862 2 DEBUG nova.compute.manager [req-cf2dfe78-5203-43f5-8062-b8bf1c6d73d5 req-d0aa2a21-7ed1-40ae-aac3-1e866acd1e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] No waiting events found dispatching network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:07 np0005465987 nova_compute[230713]: 2025-10-02 12:32:07.863 2 WARNING nova.compute.manager [req-cf2dfe78-5203-43f5-8062-b8bf1c6d73d5 req-d0aa2a21-7ed1-40ae-aac3-1e866acd1e00 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received unexpected event network-vif-plugged-ef13c059-8f9b-4954-a8fd-1da56abe5729 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:32:08 np0005465987 nova_compute[230713]: 2025-10-02 12:32:08.087 2 INFO nova.virt.libvirt.driver [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Deleting instance files /var/lib/nova/instances/62d1cea8-2591-4b86-b69f-aee1eeafa901_del#033[00m
Oct  2 08:32:08 np0005465987 nova_compute[230713]: 2025-10-02 12:32:08.087 2 INFO nova.virt.libvirt.driver [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Deletion of /var/lib/nova/instances/62d1cea8-2591-4b86-b69f-aee1eeafa901_del complete#033[00m
Oct  2 08:32:08 np0005465987 nova_compute[230713]: 2025-10-02 12:32:08.174 2 INFO nova.compute.manager [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Took 2.99 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:32:08 np0005465987 nova_compute[230713]: 2025-10-02 12:32:08.175 2 DEBUG oslo.service.loopingcall [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:32:08 np0005465987 nova_compute[230713]: 2025-10-02 12:32:08.176 2 DEBUG nova.compute.manager [-] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:32:08 np0005465987 nova_compute[230713]: 2025-10-02 12:32:08.176 2 DEBUG nova.network.neutron [-] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:32:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:08.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e320 e320: 3 total, 3 up, 3 in
Oct  2 08:32:08 np0005465987 nova_compute[230713]: 2025-10-02 12:32:08.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:08 np0005465987 nova_compute[230713]: 2025-10-02 12:32:08.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:32:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:09.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:09 np0005465987 nova_compute[230713]: 2025-10-02 12:32:09.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:09 np0005465987 nova_compute[230713]: 2025-10-02 12:32:09.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:10 np0005465987 nova_compute[230713]: 2025-10-02 12:32:10.264 2 DEBUG nova.network.neutron [-] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:32:10 np0005465987 nova_compute[230713]: 2025-10-02 12:32:10.294 2 INFO nova.compute.manager [-] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Took 2.12 seconds to deallocate network for instance.#033[00m
Oct  2 08:32:10 np0005465987 nova_compute[230713]: 2025-10-02 12:32:10.335 2 DEBUG nova.compute.manager [req-752b9fe3-cb68-4f85-810b-5eed40d41058 req-e3ae544e-b513-4041-8a78-8f1db6962dc8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Received event network-vif-deleted-ef13c059-8f9b-4954-a8fd-1da56abe5729 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:10 np0005465987 nova_compute[230713]: 2025-10-02 12:32:10.351 2 DEBUG oslo_concurrency.lockutils [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:10 np0005465987 nova_compute[230713]: 2025-10-02 12:32:10.351 2 DEBUG oslo_concurrency.lockutils [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:10 np0005465987 nova_compute[230713]: 2025-10-02 12:32:10.443 2 DEBUG oslo_concurrency.processutils [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:10 np0005465987 nova_compute[230713]: 2025-10-02 12:32:10.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:10 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:10Z|00454|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct  2 08:32:10 np0005465987 nova_compute[230713]: 2025-10-02 12:32:10.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:10.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:32:11 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/71294947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:32:11 np0005465987 nova_compute[230713]: 2025-10-02 12:32:11.176 2 DEBUG oslo_concurrency.processutils [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.732s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:11 np0005465987 nova_compute[230713]: 2025-10-02 12:32:11.183 2 DEBUG nova.compute.provider_tree [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:32:11 np0005465987 nova_compute[230713]: 2025-10-02 12:32:11.203 2 DEBUG nova.scheduler.client.report [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:32:11 np0005465987 nova_compute[230713]: 2025-10-02 12:32:11.236 2 DEBUG oslo_concurrency.lockutils [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:11 np0005465987 nova_compute[230713]: 2025-10-02 12:32:11.269 2 INFO nova.scheduler.client.report [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Deleted allocations for instance 62d1cea8-2591-4b86-b69f-aee1eeafa901#033[00m
Oct  2 08:32:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:11.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:11 np0005465987 nova_compute[230713]: 2025-10-02 12:32:11.388 2 DEBUG oslo_concurrency.lockutils [None req-557f8567-417c-4735-8cc1-31d815963886 2bd16d1f5f9d4eb396c474eedee67165 4b8ca48cb5f64ef3b0736b8be82378b8 - - default default] Lock "62d1cea8-2591-4b86-b69f-aee1eeafa901" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:12.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:13.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:14 np0005465987 nova_compute[230713]: 2025-10-02 12:32:14.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:14.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:14 np0005465987 podman[275676]: 2025-10-02 12:32:14.859873817 +0000 UTC m=+0.076094826 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct  2 08:32:14 np0005465987 nova_compute[230713]: 2025-10-02 12:32:14.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:15.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:15 np0005465987 nova_compute[230713]: 2025-10-02 12:32:15.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:16.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:17.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:18 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:18Z|00455|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct  2 08:32:18 np0005465987 nova_compute[230713]: 2025-10-02 12:32:18.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:18.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:18 np0005465987 nova_compute[230713]: 2025-10-02 12:32:18.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:19.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:19 np0005465987 podman[275697]: 2025-10-02 12:32:19.838771387 +0000 UTC m=+0.053348680 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:32:19 np0005465987 podman[275696]: 2025-10-02 12:32:19.874383038 +0000 UTC m=+0.087415158 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:32:19 np0005465987 podman[275698]: 2025-10-02 12:32:19.894052059 +0000 UTC m=+0.101562347 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:32:19 np0005465987 nova_compute[230713]: 2025-10-02 12:32:19.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:20 np0005465987 nova_compute[230713]: 2025-10-02 12:32:20.420 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408325.4199116, 62d1cea8-2591-4b86-b69f-aee1eeafa901 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:20 np0005465987 nova_compute[230713]: 2025-10-02 12:32:20.421 2 INFO nova.compute.manager [-] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:32:20 np0005465987 nova_compute[230713]: 2025-10-02 12:32:20.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:20 np0005465987 nova_compute[230713]: 2025-10-02 12:32:20.535 2 DEBUG nova.compute.manager [None req-107bc181-8fe3-43b3-88a1-387d2a6cde25 - - - - - -] [instance: 62d1cea8-2591-4b86-b69f-aee1eeafa901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:20.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:20 np0005465987 nova_compute[230713]: 2025-10-02 12:32:20.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:21.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:22.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:23.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:24 np0005465987 nova_compute[230713]: 2025-10-02 12:32:24.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:24.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:24 np0005465987 nova_compute[230713]: 2025-10-02 12:32:24.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:25.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:25 np0005465987 nova_compute[230713]: 2025-10-02 12:32:25.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:26 np0005465987 nova_compute[230713]: 2025-10-02 12:32:26.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:26.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:27.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:27.958 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:27.958 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:27.958 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.141 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Acquiring lock "d62b9dd7-e821-4bc4-97bb-ae0696886578" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.142 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.163 2 DEBUG nova.compute.manager [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.249 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.250 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.257 2 DEBUG nova.virt.hardware [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.257 2 INFO nova.compute.claims [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.384 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:32:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1663262144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.798 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.805 2 DEBUG nova.compute.provider_tree [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.821 2 DEBUG nova.scheduler.client.report [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:32:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:28.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.847 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.847 2 DEBUG nova.compute.manager [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.945 2 DEBUG nova.compute.manager [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.945 2 DEBUG nova.network.neutron [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:32:28 np0005465987 nova_compute[230713]: 2025-10-02 12:32:28.979 2 INFO nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.071 2 DEBUG nova.compute.manager [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.205 2 DEBUG nova.compute.manager [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.206 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.206 2 INFO nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Creating image(s)#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.275 2 DEBUG nova.storage.rbd_utils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] rbd image d62b9dd7-e821-4bc4-97bb-ae0696886578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.304 2 DEBUG nova.storage.rbd_utils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] rbd image d62b9dd7-e821-4bc4-97bb-ae0696886578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.344 2 DEBUG nova.storage.rbd_utils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] rbd image d62b9dd7-e821-4bc4-97bb-ae0696886578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.350 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:29.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.382 2 DEBUG nova.policy [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0106eca2dafa49be98cd4d80559ecd6e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9ae403dc06bb4ae2931781d6fdbaba80', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.417 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.418 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.418 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.419 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.446 2 DEBUG nova.storage.rbd_utils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] rbd image d62b9dd7-e821-4bc4-97bb-ae0696886578_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.450 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 d62b9dd7-e821-4bc4-97bb-ae0696886578_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:29 np0005465987 nova_compute[230713]: 2025-10-02 12:32:29.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:30 np0005465987 nova_compute[230713]: 2025-10-02 12:32:30.281 2 DEBUG nova.network.neutron [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Successfully created port: 172bedcc-f5b9-4c9a-924f-273748a74385 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:32:30 np0005465987 nova_compute[230713]: 2025-10-02 12:32:30.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:30 np0005465987 nova_compute[230713]: 2025-10-02 12:32:30.655 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 d62b9dd7-e821-4bc4-97bb-ae0696886578_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:30 np0005465987 nova_compute[230713]: 2025-10-02 12:32:30.739 2 DEBUG nova.storage.rbd_utils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] resizing rbd image d62b9dd7-e821-4bc4-97bb-ae0696886578_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:32:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:30.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:30 np0005465987 nova_compute[230713]: 2025-10-02 12:32:30.881 2 DEBUG nova.objects.instance [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lazy-loading 'migration_context' on Instance uuid d62b9dd7-e821-4bc4-97bb-ae0696886578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:30 np0005465987 nova_compute[230713]: 2025-10-02 12:32:30.936 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:32:30 np0005465987 nova_compute[230713]: 2025-10-02 12:32:30.937 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Ensure instance console log exists: /var/lib/nova/instances/d62b9dd7-e821-4bc4-97bb-ae0696886578/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:32:30 np0005465987 nova_compute[230713]: 2025-10-02 12:32:30.938 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:30 np0005465987 nova_compute[230713]: 2025-10-02 12:32:30.938 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:30 np0005465987 nova_compute[230713]: 2025-10-02 12:32:30.938 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:31.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:31 np0005465987 nova_compute[230713]: 2025-10-02 12:32:31.371 2 DEBUG nova.network.neutron [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Successfully updated port: 172bedcc-f5b9-4c9a-924f-273748a74385 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:32:31 np0005465987 nova_compute[230713]: 2025-10-02 12:32:31.390 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Acquiring lock "refresh_cache-d62b9dd7-e821-4bc4-97bb-ae0696886578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:32:31 np0005465987 nova_compute[230713]: 2025-10-02 12:32:31.391 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Acquired lock "refresh_cache-d62b9dd7-e821-4bc4-97bb-ae0696886578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:32:31 np0005465987 nova_compute[230713]: 2025-10-02 12:32:31.391 2 DEBUG nova.network.neutron [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:32:31 np0005465987 nova_compute[230713]: 2025-10-02 12:32:31.487 2 DEBUG nova.compute.manager [req-b8cb9431-5d98-46f3-8b0f-319d8497ffc2 req-e14a856d-4f0c-4f2a-95bd-21fe5e06e1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received event network-changed-172bedcc-f5b9-4c9a-924f-273748a74385 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:31 np0005465987 nova_compute[230713]: 2025-10-02 12:32:31.488 2 DEBUG nova.compute.manager [req-b8cb9431-5d98-46f3-8b0f-319d8497ffc2 req-e14a856d-4f0c-4f2a-95bd-21fe5e06e1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Refreshing instance network info cache due to event network-changed-172bedcc-f5b9-4c9a-924f-273748a74385. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:32:31 np0005465987 nova_compute[230713]: 2025-10-02 12:32:31.488 2 DEBUG oslo_concurrency.lockutils [req-b8cb9431-5d98-46f3-8b0f-319d8497ffc2 req-e14a856d-4f0c-4f2a-95bd-21fe5e06e1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d62b9dd7-e821-4bc4-97bb-ae0696886578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:32:31 np0005465987 nova_compute[230713]: 2025-10-02 12:32:31.568 2 DEBUG nova.network.neutron [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.516 2 DEBUG nova.network.neutron [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Updating instance_info_cache with network_info: [{"id": "172bedcc-f5b9-4c9a-924f-273748a74385", "address": "fa:16:3e:39:8e:f6", "network": {"id": "fa74c5c0-ec41-4340-955b-aa4e2463e355", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1182159042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ae403dc06bb4ae2931781d6fdbaba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap172bedcc-f5", "ovs_interfaceid": "172bedcc-f5b9-4c9a-924f-273748a74385", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.577 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Releasing lock "refresh_cache-d62b9dd7-e821-4bc4-97bb-ae0696886578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.578 2 DEBUG nova.compute.manager [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Instance network_info: |[{"id": "172bedcc-f5b9-4c9a-924f-273748a74385", "address": "fa:16:3e:39:8e:f6", "network": {"id": "fa74c5c0-ec41-4340-955b-aa4e2463e355", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1182159042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ae403dc06bb4ae2931781d6fdbaba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap172bedcc-f5", "ovs_interfaceid": "172bedcc-f5b9-4c9a-924f-273748a74385", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.579 2 DEBUG oslo_concurrency.lockutils [req-b8cb9431-5d98-46f3-8b0f-319d8497ffc2 req-e14a856d-4f0c-4f2a-95bd-21fe5e06e1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d62b9dd7-e821-4bc4-97bb-ae0696886578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.580 2 DEBUG nova.network.neutron [req-b8cb9431-5d98-46f3-8b0f-319d8497ffc2 req-e14a856d-4f0c-4f2a-95bd-21fe5e06e1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Refreshing network info cache for port 172bedcc-f5b9-4c9a-924f-273748a74385 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.585 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Start _get_guest_xml network_info=[{"id": "172bedcc-f5b9-4c9a-924f-273748a74385", "address": "fa:16:3e:39:8e:f6", "network": {"id": "fa74c5c0-ec41-4340-955b-aa4e2463e355", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1182159042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ae403dc06bb4ae2931781d6fdbaba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap172bedcc-f5", "ovs_interfaceid": "172bedcc-f5b9-4c9a-924f-273748a74385", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.592 2 WARNING nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.600 2 DEBUG nova.virt.libvirt.host [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.602 2 DEBUG nova.virt.libvirt.host [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.608 2 DEBUG nova.virt.libvirt.host [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.609 2 DEBUG nova.virt.libvirt.host [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.610 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.610 2 DEBUG nova.virt.hardware [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.611 2 DEBUG nova.virt.hardware [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.611 2 DEBUG nova.virt.hardware [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.611 2 DEBUG nova.virt.hardware [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.611 2 DEBUG nova.virt.hardware [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.611 2 DEBUG nova.virt.hardware [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.612 2 DEBUG nova.virt.hardware [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.612 2 DEBUG nova.virt.hardware [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.612 2 DEBUG nova.virt.hardware [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.612 2 DEBUG nova.virt.hardware [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.613 2 DEBUG nova.virt.hardware [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:32:32 np0005465987 nova_compute[230713]: 2025-10-02 12:32:32.615 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:32.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:32:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2085883857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.096 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.128 2 DEBUG nova.storage.rbd_utils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] rbd image d62b9dd7-e821-4bc4-97bb-ae0696886578_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.133 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:33.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:32:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2437038837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.587 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.589 2 DEBUG nova.virt.libvirt.vif [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-307465732',display_name='tempest-InstanceActionsNegativeTestJSON-server-307465732',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-307465732',id=122,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9ae403dc06bb4ae2931781d6fdbaba80',ramdisk_id='',reservation_id='r-h0wkjl04',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-588864861',owner_user_n
ame='tempest-InstanceActionsNegativeTestJSON-588864861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:29Z,user_data=None,user_id='0106eca2dafa49be98cd4d80559ecd6e',uuid=d62b9dd7-e821-4bc4-97bb-ae0696886578,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "172bedcc-f5b9-4c9a-924f-273748a74385", "address": "fa:16:3e:39:8e:f6", "network": {"id": "fa74c5c0-ec41-4340-955b-aa4e2463e355", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1182159042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ae403dc06bb4ae2931781d6fdbaba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap172bedcc-f5", "ovs_interfaceid": "172bedcc-f5b9-4c9a-924f-273748a74385", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.590 2 DEBUG nova.network.os_vif_util [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Converting VIF {"id": "172bedcc-f5b9-4c9a-924f-273748a74385", "address": "fa:16:3e:39:8e:f6", "network": {"id": "fa74c5c0-ec41-4340-955b-aa4e2463e355", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1182159042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ae403dc06bb4ae2931781d6fdbaba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap172bedcc-f5", "ovs_interfaceid": "172bedcc-f5b9-4c9a-924f-273748a74385", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.591 2 DEBUG nova.network.os_vif_util [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:8e:f6,bridge_name='br-int',has_traffic_filtering=True,id=172bedcc-f5b9-4c9a-924f-273748a74385,network=Network(fa74c5c0-ec41-4340-955b-aa4e2463e355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap172bedcc-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.592 2 DEBUG nova.objects.instance [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lazy-loading 'pci_devices' on Instance uuid d62b9dd7-e821-4bc4-97bb-ae0696886578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.615 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  <uuid>d62b9dd7-e821-4bc4-97bb-ae0696886578</uuid>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  <name>instance-0000007a</name>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <nova:name>tempest-InstanceActionsNegativeTestJSON-server-307465732</nova:name>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:32:32</nova:creationTime>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <nova:user uuid="0106eca2dafa49be98cd4d80559ecd6e">tempest-InstanceActionsNegativeTestJSON-588864861-project-member</nova:user>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <nova:project uuid="9ae403dc06bb4ae2931781d6fdbaba80">tempest-InstanceActionsNegativeTestJSON-588864861</nova:project>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <nova:port uuid="172bedcc-f5b9-4c9a-924f-273748a74385">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <entry name="serial">d62b9dd7-e821-4bc4-97bb-ae0696886578</entry>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <entry name="uuid">d62b9dd7-e821-4bc4-97bb-ae0696886578</entry>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/d62b9dd7-e821-4bc4-97bb-ae0696886578_disk">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/d62b9dd7-e821-4bc4-97bb-ae0696886578_disk.config">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:39:8e:f6"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <target dev="tap172bedcc-f5"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/d62b9dd7-e821-4bc4-97bb-ae0696886578/console.log" append="off"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:32:33 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:32:33 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:32:33 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:32:33 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.616 2 DEBUG nova.compute.manager [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Preparing to wait for external event network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.617 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Acquiring lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.617 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.617 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.618 2 DEBUG nova.virt.libvirt.vif [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-307465732',display_name='tempest-InstanceActionsNegativeTestJSON-server-307465732',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-307465732',id=122,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9ae403dc06bb4ae2931781d6fdbaba80',ramdisk_id='',reservation_id='r-h0wkjl04',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-588864861',owner_user_name='tempest-InstanceActionsNegativeTestJSON-588864861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:29Z,user_data=None,user_id='0106eca2dafa49be98cd4d80559ecd6e',uuid=d62b9dd7-e821-4bc4-97bb-ae0696886578,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "172bedcc-f5b9-4c9a-924f-273748a74385", "address": "fa:16:3e:39:8e:f6", "network": {"id": "fa74c5c0-ec41-4340-955b-aa4e2463e355", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1182159042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ae403dc06bb4ae2931781d6fdbaba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap172bedcc-f5", "ovs_interfaceid": "172bedcc-f5b9-4c9a-924f-273748a74385", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.618 2 DEBUG nova.network.os_vif_util [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Converting VIF {"id": "172bedcc-f5b9-4c9a-924f-273748a74385", "address": "fa:16:3e:39:8e:f6", "network": {"id": "fa74c5c0-ec41-4340-955b-aa4e2463e355", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1182159042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ae403dc06bb4ae2931781d6fdbaba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap172bedcc-f5", "ovs_interfaceid": "172bedcc-f5b9-4c9a-924f-273748a74385", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.619 2 DEBUG nova.network.os_vif_util [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:8e:f6,bridge_name='br-int',has_traffic_filtering=True,id=172bedcc-f5b9-4c9a-924f-273748a74385,network=Network(fa74c5c0-ec41-4340-955b-aa4e2463e355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap172bedcc-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.619 2 DEBUG os_vif [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:8e:f6,bridge_name='br-int',has_traffic_filtering=True,id=172bedcc-f5b9-4c9a-924f-273748a74385,network=Network(fa74c5c0-ec41-4340-955b-aa4e2463e355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap172bedcc-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.624 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap172bedcc-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.625 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap172bedcc-f5, col_values=(('external_ids', {'iface-id': '172bedcc-f5b9-4c9a-924f-273748a74385', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:8e:f6', 'vm-uuid': 'd62b9dd7-e821-4bc4-97bb-ae0696886578'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:33 np0005465987 NetworkManager[44910]: <info>  [1759408353.6278] manager: (tap172bedcc-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.636 2 INFO os_vif [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:8e:f6,bridge_name='br-int',has_traffic_filtering=True,id=172bedcc-f5b9-4c9a-924f-273748a74385,network=Network(fa74c5c0-ec41-4340-955b-aa4e2463e355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap172bedcc-f5')#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.709 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.710 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.710 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] No VIF found with MAC fa:16:3e:39:8e:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.710 2 INFO nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Using config drive#033[00m
Oct  2 08:32:33 np0005465987 nova_compute[230713]: 2025-10-02 12:32:33.741 2 DEBUG nova.storage.rbd_utils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] rbd image d62b9dd7-e821-4bc4-97bb-ae0696886578_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.166 2 DEBUG nova.network.neutron [req-b8cb9431-5d98-46f3-8b0f-319d8497ffc2 req-e14a856d-4f0c-4f2a-95bd-21fe5e06e1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Updated VIF entry in instance network info cache for port 172bedcc-f5b9-4c9a-924f-273748a74385. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.167 2 DEBUG nova.network.neutron [req-b8cb9431-5d98-46f3-8b0f-319d8497ffc2 req-e14a856d-4f0c-4f2a-95bd-21fe5e06e1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Updating instance_info_cache with network_info: [{"id": "172bedcc-f5b9-4c9a-924f-273748a74385", "address": "fa:16:3e:39:8e:f6", "network": {"id": "fa74c5c0-ec41-4340-955b-aa4e2463e355", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1182159042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ae403dc06bb4ae2931781d6fdbaba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap172bedcc-f5", "ovs_interfaceid": "172bedcc-f5b9-4c9a-924f-273748a74385", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.186 2 DEBUG oslo_concurrency.lockutils [req-b8cb9431-5d98-46f3-8b0f-319d8497ffc2 req-e14a856d-4f0c-4f2a-95bd-21fe5e06e1db d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d62b9dd7-e821-4bc4-97bb-ae0696886578" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.266 2 INFO nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Creating config drive at /var/lib/nova/instances/d62b9dd7-e821-4bc4-97bb-ae0696886578/disk.config#033[00m
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.276 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d62b9dd7-e821-4bc4-97bb-ae0696886578/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvpwdu5l8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.413 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d62b9dd7-e821-4bc4-97bb-ae0696886578/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvpwdu5l8" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.459 2 DEBUG nova.storage.rbd_utils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] rbd image d62b9dd7-e821-4bc4-97bb-ae0696886578_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.462 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d62b9dd7-e821-4bc4-97bb-ae0696886578/disk.config d62b9dd7-e821-4bc4-97bb-ae0696886578_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.783 2 DEBUG oslo_concurrency.processutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d62b9dd7-e821-4bc4-97bb-ae0696886578/disk.config d62b9dd7-e821-4bc4-97bb-ae0696886578_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.784 2 INFO nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Deleting local config drive /var/lib/nova/instances/d62b9dd7-e821-4bc4-97bb-ae0696886578/disk.config because it was imported into RBD.#033[00m
Oct  2 08:32:34 np0005465987 kernel: tap172bedcc-f5: entered promiscuous mode
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:34Z|00456|binding|INFO|Claiming lport 172bedcc-f5b9-4c9a-924f-273748a74385 for this chassis.
Oct  2 08:32:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:34Z|00457|binding|INFO|172bedcc-f5b9-4c9a-924f-273748a74385: Claiming fa:16:3e:39:8e:f6 10.100.0.7
Oct  2 08:32:34 np0005465987 NetworkManager[44910]: <info>  [1759408354.8493] manager: (tap172bedcc-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Oct  2 08:32:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:34.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.863 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:8e:f6 10.100.0.7'], port_security=['fa:16:3e:39:8e:f6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd62b9dd7-e821-4bc4-97bb-ae0696886578', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa74c5c0-ec41-4340-955b-aa4e2463e355', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9ae403dc06bb4ae2931781d6fdbaba80', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6063b367-1374-4ea2-ba13-e2a0d1fdb611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae955a05-7b5a-417e-acef-62391fbb7ff5, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=172bedcc-f5b9-4c9a-924f-273748a74385) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.864 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 172bedcc-f5b9-4c9a-924f-273748a74385 in datapath fa74c5c0-ec41-4340-955b-aa4e2463e355 bound to our chassis#033[00m
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.866 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa74c5c0-ec41-4340-955b-aa4e2463e355#033[00m
Oct  2 08:32:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:34Z|00458|binding|INFO|Setting lport 172bedcc-f5b9-4c9a-924f-273748a74385 up in Southbound
Oct  2 08:32:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:34Z|00459|binding|INFO|Setting lport 172bedcc-f5b9-4c9a-924f-273748a74385 ovn-installed in OVS
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:34 np0005465987 systemd-udevd[276196]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.880 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6eeeb9e4-5c59-4bba-8d6a-c2db25503d10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.884 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfa74c5c0-e1 in ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.886 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfa74c5c0-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.886 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[97972f97-f789-4279-98ff-5193d811e99b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.887 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3f825838-bb06-4b9e-b5ab-2b0b3d7bf319]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.898 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[60300d24-9993-4271-a086-67e0be8099b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:34 np0005465987 NetworkManager[44910]: <info>  [1759408354.9027] device (tap172bedcc-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:32:34 np0005465987 NetworkManager[44910]: <info>  [1759408354.9034] device (tap172bedcc-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:32:34 np0005465987 systemd-machined[188335]: New machine qemu-55-instance-0000007a.
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.920 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[35b739b4-0d7d-4ca7-a80c-3d1e0f0ef779]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:34 np0005465987 systemd[1]: Started Virtual Machine qemu-55-instance-0000007a.
Oct  2 08:32:34 np0005465987 nova_compute[230713]: 2025-10-02 12:32:34.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.952 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[b515c9f9-c65f-427f-9e97-3a91fc75f881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.959 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[30ebabb6-973e-4f2d-b177-fac6855ef648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:34 np0005465987 NetworkManager[44910]: <info>  [1759408354.9611] manager: (tapfa74c5c0-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Oct  2 08:32:34 np0005465987 systemd-udevd[276201]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.993 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[71f9975d-5231-4670-ab5c-b090fffa7a2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:34.996 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[3954a7a0-67e6-4c93-a006-224106182cb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:35 np0005465987 NetworkManager[44910]: <info>  [1759408355.0237] device (tapfa74c5c0-e0): carrier: link connected
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.030 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b58445-6541-454b-bc75-f658b247522e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.052 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[88cdabd3-609a-4893-a098-1c840c062959]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa74c5c0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:f2:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642359, 'reachable_time': 39618, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276237, 'error': None, 'target': 'ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.074 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5a5bbcf5-cd2b-4d1b-8031-5dc84a137986]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe28:f2a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642359, 'tstamp': 642359}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276242, 'error': None, 'target': 'ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.097 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6b358cc1-4f8c-4213-baa6-737179e197f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa74c5c0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:f2:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642359, 'reachable_time': 39618, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276251, 'error': None, 'target': 'ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.136 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7d8ec5-3df8-4055-a0f9-1e22dbda3056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.207 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8092426d-c116-479e-86ee-c66919a34859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.208 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa74c5c0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.208 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.209 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa74c5c0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:35 np0005465987 kernel: tapfa74c5c0-e0: entered promiscuous mode
Oct  2 08:32:35 np0005465987 NetworkManager[44910]: <info>  [1759408355.2117] manager: (tapfa74c5c0-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.213 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa74c5c0-e0, col_values=(('external_ids', {'iface-id': 'b5e20a88-79c1-497c-8684-6fcad98bdc3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:35Z|00460|binding|INFO|Releasing lport b5e20a88-79c1-497c-8684-6fcad98bdc3f from this chassis (sb_readonly=0)
Oct  2 08:32:35 np0005465987 nova_compute[230713]: 2025-10-02 12:32:35.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:35 np0005465987 nova_compute[230713]: 2025-10-02 12:32:35.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.231 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fa74c5c0-ec41-4340-955b-aa4e2463e355.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fa74c5c0-ec41-4340-955b-aa4e2463e355.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.233 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ac50595f-f6cb-4f64-b7a4-fe0b6b0a0da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.233 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-fa74c5c0-ec41-4340-955b-aa4e2463e355
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/fa74c5c0-ec41-4340-955b-aa4e2463e355.pid.haproxy
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID fa74c5c0-ec41-4340-955b-aa4e2463e355
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:32:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:35.236 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355', 'env', 'PROCESS_TAG=haproxy-fa74c5c0-ec41-4340-955b-aa4e2463e355', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fa74c5c0-ec41-4340-955b-aa4e2463e355.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:32:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:35.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:32:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:32:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:32:35 np0005465987 podman[276326]: 2025-10-02 12:32:35.642131844 +0000 UTC m=+0.052774995 container create 48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:32:35 np0005465987 systemd[1]: Started libpod-conmon-48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45.scope.
Oct  2 08:32:35 np0005465987 podman[276326]: 2025-10-02 12:32:35.613467944 +0000 UTC m=+0.024111125 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:32:35 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:32:35 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e65a4fd41e6538de8cf32246813c10900283354c4f772b540bcd4e9d9c851e2d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:32:35 np0005465987 podman[276326]: 2025-10-02 12:32:35.755373931 +0000 UTC m=+0.166017172 container init 48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:32:35 np0005465987 podman[276326]: 2025-10-02 12:32:35.763901476 +0000 UTC m=+0.174544657 container start 48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:32:35 np0005465987 neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355[276341]: [NOTICE]   (276345) : New worker (276347) forked
Oct  2 08:32:35 np0005465987 neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355[276341]: [NOTICE]   (276345) : Loading success.
Oct  2 08:32:35 np0005465987 nova_compute[230713]: 2025-10-02 12:32:35.917 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408355.916991, d62b9dd7-e821-4bc4-97bb-ae0696886578 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:35 np0005465987 nova_compute[230713]: 2025-10-02 12:32:35.918 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] VM Started (Lifecycle Event)#033[00m
Oct  2 08:32:35 np0005465987 nova_compute[230713]: 2025-10-02 12:32:35.949 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:35 np0005465987 nova_compute[230713]: 2025-10-02 12:32:35.954 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408355.917945, d62b9dd7-e821-4bc4-97bb-ae0696886578 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:35 np0005465987 nova_compute[230713]: 2025-10-02 12:32:35.955 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:32:35 np0005465987 nova_compute[230713]: 2025-10-02 12:32:35.978 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:35 np0005465987 nova_compute[230713]: 2025-10-02 12:32:35.982 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:32:36 np0005465987 nova_compute[230713]: 2025-10-02 12:32:36.008 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:32:36 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:36Z|00461|binding|INFO|Releasing lport b5e20a88-79c1-497c-8684-6fcad98bdc3f from this chassis (sb_readonly=0)
Oct  2 08:32:36 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:36Z|00462|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct  2 08:32:36 np0005465987 nova_compute[230713]: 2025-10-02 12:32:36.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:36.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:37.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.319 2 DEBUG nova.compute.manager [req-95a34e4f-d977-49b8-b7e9-08b51448e28f req-6d228330-f347-40e3-8e84-1f9d9c1f7522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received event network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.320 2 DEBUG oslo_concurrency.lockutils [req-95a34e4f-d977-49b8-b7e9-08b51448e28f req-6d228330-f347-40e3-8e84-1f9d9c1f7522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.320 2 DEBUG oslo_concurrency.lockutils [req-95a34e4f-d977-49b8-b7e9-08b51448e28f req-6d228330-f347-40e3-8e84-1f9d9c1f7522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.320 2 DEBUG oslo_concurrency.lockutils [req-95a34e4f-d977-49b8-b7e9-08b51448e28f req-6d228330-f347-40e3-8e84-1f9d9c1f7522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.320 2 DEBUG nova.compute.manager [req-95a34e4f-d977-49b8-b7e9-08b51448e28f req-6d228330-f347-40e3-8e84-1f9d9c1f7522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Processing event network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.321 2 DEBUG nova.compute.manager [req-95a34e4f-d977-49b8-b7e9-08b51448e28f req-6d228330-f347-40e3-8e84-1f9d9c1f7522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received event network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.321 2 DEBUG oslo_concurrency.lockutils [req-95a34e4f-d977-49b8-b7e9-08b51448e28f req-6d228330-f347-40e3-8e84-1f9d9c1f7522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.321 2 DEBUG oslo_concurrency.lockutils [req-95a34e4f-d977-49b8-b7e9-08b51448e28f req-6d228330-f347-40e3-8e84-1f9d9c1f7522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.321 2 DEBUG oslo_concurrency.lockutils [req-95a34e4f-d977-49b8-b7e9-08b51448e28f req-6d228330-f347-40e3-8e84-1f9d9c1f7522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.321 2 DEBUG nova.compute.manager [req-95a34e4f-d977-49b8-b7e9-08b51448e28f req-6d228330-f347-40e3-8e84-1f9d9c1f7522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] No waiting events found dispatching network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.321 2 WARNING nova.compute.manager [req-95a34e4f-d977-49b8-b7e9-08b51448e28f req-6d228330-f347-40e3-8e84-1f9d9c1f7522 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received unexpected event network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.322 2 DEBUG nova.compute.manager [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.325 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408358.3245668, d62b9dd7-e821-4bc4-97bb-ae0696886578 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.325 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.327 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.330 2 INFO nova.virt.libvirt.driver [-] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Instance spawned successfully.#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.331 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.349 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.355 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.359 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.360 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.360 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.360 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.361 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.361 2 DEBUG nova.virt.libvirt.driver [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.386 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.426 2 INFO nova.compute.manager [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Took 9.22 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.426 2 DEBUG nova.compute.manager [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.502 2 INFO nova.compute.manager [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Took 10.28 seconds to build instance.#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.539 2 DEBUG oslo_concurrency.lockutils [None req-7d64037d-6b2a-45c5-b310-5666ba79bc4e 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:38 np0005465987 nova_compute[230713]: 2025-10-02 12:32:38.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:38.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:39.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e321 e321: 3 total, 3 up, 3 in
Oct  2 08:32:39 np0005465987 nova_compute[230713]: 2025-10-02 12:32:39.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:40.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e322 e322: 3 total, 3 up, 3 in
Oct  2 08:32:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:41.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.046 2 DEBUG oslo_concurrency.lockutils [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Acquiring lock "d62b9dd7-e821-4bc4-97bb-ae0696886578" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.048 2 DEBUG oslo_concurrency.lockutils [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.048 2 DEBUG oslo_concurrency.lockutils [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Acquiring lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.049 2 DEBUG oslo_concurrency.lockutils [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.049 2 DEBUG oslo_concurrency.lockutils [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.050 2 INFO nova.compute.manager [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Terminating instance#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.051 2 DEBUG nova.compute.manager [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:32:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:32:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:32:42 np0005465987 kernel: tap172bedcc-f5 (unregistering): left promiscuous mode
Oct  2 08:32:42 np0005465987 NetworkManager[44910]: <info>  [1759408362.1576] device (tap172bedcc-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:32:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:42Z|00463|binding|INFO|Releasing lport 172bedcc-f5b9-4c9a-924f-273748a74385 from this chassis (sb_readonly=0)
Oct  2 08:32:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:42Z|00464|binding|INFO|Setting lport 172bedcc-f5b9-4c9a-924f-273748a74385 down in Southbound
Oct  2 08:32:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:42Z|00465|binding|INFO|Removing iface tap172bedcc-f5 ovn-installed in OVS
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.175 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:8e:f6 10.100.0.7'], port_security=['fa:16:3e:39:8e:f6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd62b9dd7-e821-4bc4-97bb-ae0696886578', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa74c5c0-ec41-4340-955b-aa4e2463e355', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9ae403dc06bb4ae2931781d6fdbaba80', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6063b367-1374-4ea2-ba13-e2a0d1fdb611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae955a05-7b5a-417e-acef-62391fbb7ff5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=172bedcc-f5b9-4c9a-924f-273748a74385) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.177 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 172bedcc-f5b9-4c9a-924f-273748a74385 in datapath fa74c5c0-ec41-4340-955b-aa4e2463e355 unbound from our chassis#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.179 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fa74c5c0-ec41-4340-955b-aa4e2463e355, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.180 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5794c102-57b2-423a-98e9-0e704967af4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.181 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355 namespace which is not needed anymore#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:42 np0005465987 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Oct  2 08:32:42 np0005465987 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000007a.scope: Consumed 4.690s CPU time.
Oct  2 08:32:42 np0005465987 systemd-machined[188335]: Machine qemu-55-instance-0000007a terminated.
Oct  2 08:32:42 np0005465987 kernel: tap172bedcc-f5: entered promiscuous mode
Oct  2 08:32:42 np0005465987 kernel: tap172bedcc-f5 (unregistering): left promiscuous mode
Oct  2 08:32:42 np0005465987 systemd-udevd[276410]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:32:42 np0005465987 NetworkManager[44910]: <info>  [1759408362.2719] manager: (tap172bedcc-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/225)
Oct  2 08:32:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:42Z|00466|binding|INFO|Claiming lport 172bedcc-f5b9-4c9a-924f-273748a74385 for this chassis.
Oct  2 08:32:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:42Z|00467|binding|INFO|172bedcc-f5b9-4c9a-924f-273748a74385: Claiming fa:16:3e:39:8e:f6 10.100.0.7
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.294 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:8e:f6 10.100.0.7'], port_security=['fa:16:3e:39:8e:f6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd62b9dd7-e821-4bc4-97bb-ae0696886578', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa74c5c0-ec41-4340-955b-aa4e2463e355', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9ae403dc06bb4ae2931781d6fdbaba80', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6063b367-1374-4ea2-ba13-e2a0d1fdb611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae955a05-7b5a-417e-acef-62391fbb7ff5, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=172bedcc-f5b9-4c9a-924f-273748a74385) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.302 2 INFO nova.virt.libvirt.driver [-] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Instance destroyed successfully.#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.303 2 DEBUG nova.objects.instance [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lazy-loading 'resources' on Instance uuid d62b9dd7-e821-4bc4-97bb-ae0696886578 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:42Z|00468|binding|INFO|Setting lport 172bedcc-f5b9-4c9a-924f-273748a74385 ovn-installed in OVS
Oct  2 08:32:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:42Z|00469|binding|INFO|Setting lport 172bedcc-f5b9-4c9a-924f-273748a74385 up in Southbound
Oct  2 08:32:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:42Z|00470|binding|INFO|Releasing lport 172bedcc-f5b9-4c9a-924f-273748a74385 from this chassis (sb_readonly=1)
Oct  2 08:32:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:42Z|00471|if_status|INFO|Dropped 2 log messages in last 173 seconds (most recently, 173 seconds ago) due to excessive rate
Oct  2 08:32:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:42Z|00472|if_status|INFO|Not setting lport 172bedcc-f5b9-4c9a-924f-273748a74385 down as sb is readonly
Oct  2 08:32:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:42Z|00473|binding|INFO|Removing iface tap172bedcc-f5 ovn-installed in OVS
Oct  2 08:32:42 np0005465987 neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355[276341]: [NOTICE]   (276345) : haproxy version is 2.8.14-c23fe91
Oct  2 08:32:42 np0005465987 neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355[276341]: [NOTICE]   (276345) : path to executable is /usr/sbin/haproxy
Oct  2 08:32:42 np0005465987 neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355[276341]: [WARNING]  (276345) : Exiting Master process...
Oct  2 08:32:42 np0005465987 neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355[276341]: [ALERT]    (276345) : Current worker (276347) exited with code 143 (Terminated)
Oct  2 08:32:42 np0005465987 neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355[276341]: [WARNING]  (276345) : All workers exited. Exiting... (0)
Oct  2 08:32:42 np0005465987 systemd[1]: libpod-48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45.scope: Deactivated successfully.
Oct  2 08:32:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:42Z|00474|binding|INFO|Releasing lport 172bedcc-f5b9-4c9a-924f-273748a74385 from this chassis (sb_readonly=0)
Oct  2 08:32:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:42Z|00475|binding|INFO|Setting lport 172bedcc-f5b9-4c9a-924f-273748a74385 down in Southbound
Oct  2 08:32:42 np0005465987 podman[276431]: 2025-10-02 12:32:42.321177187 +0000 UTC m=+0.053344480 container died 48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.328 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:8e:f6 10.100.0.7'], port_security=['fa:16:3e:39:8e:f6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd62b9dd7-e821-4bc4-97bb-ae0696886578', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa74c5c0-ec41-4340-955b-aa4e2463e355', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9ae403dc06bb4ae2931781d6fdbaba80', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6063b367-1374-4ea2-ba13-e2a0d1fdb611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae955a05-7b5a-417e-acef-62391fbb7ff5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=172bedcc-f5b9-4c9a-924f-273748a74385) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.339 2 DEBUG nova.virt.libvirt.vif [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-307465732',display_name='tempest-InstanceActionsNegativeTestJSON-server-307465732',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-307465732',id=122,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:32:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9ae403dc06bb4ae2931781d6fdbaba80',ramdisk_id='',reservation_id='r-h0wkjl04',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-588864861',owner_user_name='tempest-InstanceActionsNegativeTestJSON-588864861-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:38Z,user_data=None,user_id='0106eca2dafa49be98cd4d80559ecd6e',uuid=d62b9dd7-e821-4bc4-97bb-ae0696886578,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "172bedcc-f5b9-4c9a-924f-273748a74385", "address": "fa:16:3e:39:8e:f6", "network": {"id": "fa74c5c0-ec41-4340-955b-aa4e2463e355", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1182159042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ae403dc06bb4ae2931781d6fdbaba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap172bedcc-f5", "ovs_interfaceid": "172bedcc-f5b9-4c9a-924f-273748a74385", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.339 2 DEBUG nova.network.os_vif_util [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Converting VIF {"id": "172bedcc-f5b9-4c9a-924f-273748a74385", "address": "fa:16:3e:39:8e:f6", "network": {"id": "fa74c5c0-ec41-4340-955b-aa4e2463e355", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1182159042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ae403dc06bb4ae2931781d6fdbaba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap172bedcc-f5", "ovs_interfaceid": "172bedcc-f5b9-4c9a-924f-273748a74385", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.340 2 DEBUG nova.network.os_vif_util [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:8e:f6,bridge_name='br-int',has_traffic_filtering=True,id=172bedcc-f5b9-4c9a-924f-273748a74385,network=Network(fa74c5c0-ec41-4340-955b-aa4e2463e355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap172bedcc-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.341 2 DEBUG os_vif [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:8e:f6,bridge_name='br-int',has_traffic_filtering=True,id=172bedcc-f5b9-4c9a-924f-273748a74385,network=Network(fa74c5c0-ec41-4340-955b-aa4e2463e355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap172bedcc-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap172bedcc-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.351 2 INFO os_vif [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:8e:f6,bridge_name='br-int',has_traffic_filtering=True,id=172bedcc-f5b9-4c9a-924f-273748a74385,network=Network(fa74c5c0-ec41-4340-955b-aa4e2463e355),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap172bedcc-f5')#033[00m
Oct  2 08:32:42 np0005465987 systemd[1]: var-lib-containers-storage-overlay-e65a4fd41e6538de8cf32246813c10900283354c4f772b540bcd4e9d9c851e2d-merged.mount: Deactivated successfully.
Oct  2 08:32:42 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45-userdata-shm.mount: Deactivated successfully.
Oct  2 08:32:42 np0005465987 podman[276431]: 2025-10-02 12:32:42.363103111 +0000 UTC m=+0.095270414 container cleanup 48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:32:42 np0005465987 systemd[1]: libpod-conmon-48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45.scope: Deactivated successfully.
Oct  2 08:32:42 np0005465987 podman[276479]: 2025-10-02 12:32:42.418764654 +0000 UTC m=+0.036443024 container remove 48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.420 2 DEBUG nova.compute.manager [req-ff057564-d113-4683-8305-af2daa1d6553 req-48b3cc37-f8bb-4af4-b036-e03b09cebb08 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received event network-vif-unplugged-172bedcc-f5b9-4c9a-924f-273748a74385 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.421 2 DEBUG oslo_concurrency.lockutils [req-ff057564-d113-4683-8305-af2daa1d6553 req-48b3cc37-f8bb-4af4-b036-e03b09cebb08 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.421 2 DEBUG oslo_concurrency.lockutils [req-ff057564-d113-4683-8305-af2daa1d6553 req-48b3cc37-f8bb-4af4-b036-e03b09cebb08 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.421 2 DEBUG oslo_concurrency.lockutils [req-ff057564-d113-4683-8305-af2daa1d6553 req-48b3cc37-f8bb-4af4-b036-e03b09cebb08 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.421 2 DEBUG nova.compute.manager [req-ff057564-d113-4683-8305-af2daa1d6553 req-48b3cc37-f8bb-4af4-b036-e03b09cebb08 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] No waiting events found dispatching network-vif-unplugged-172bedcc-f5b9-4c9a-924f-273748a74385 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.422 2 DEBUG nova.compute.manager [req-ff057564-d113-4683-8305-af2daa1d6553 req-48b3cc37-f8bb-4af4-b036-e03b09cebb08 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received event network-vif-unplugged-172bedcc-f5b9-4c9a-924f-273748a74385 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.426 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f82e962e-2337-4a9a-8b6e-8f788213c3a1]: (4, ('Thu Oct  2 12:32:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355 (48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45)\n48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45\nThu Oct  2 12:32:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355 (48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45)\n48b3a0ef0cd48774af45ae609507a36d70552503e00d19dc7315571650ed2d45\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.428 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8eee16c2-3e99-4906-ab10-7024e7f3ceed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.430 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa74c5c0-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:42 np0005465987 kernel: tapfa74c5c0-e0: left promiscuous mode
Oct  2 08:32:42 np0005465987 nova_compute[230713]: 2025-10-02 12:32:42.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.448 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a8068bc7-e7cc-48aa-9ccf-8dded5475734]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.482 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2433b8-8031-4d43-9a6f-bb4aa0ba4211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.483 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3872c68a-d709-420a-9045-be7481d44636]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.505 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e447b359-837a-4bb3-a9de-c0f9837c8f3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642352, 'reachable_time': 19430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276497, 'error': None, 'target': 'ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.508 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fa74c5c0-ec41-4340-955b-aa4e2463e355 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.509 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[801d69ee-eddf-477a-bd56-3ae11f8a9ea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.510 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 172bedcc-f5b9-4c9a-924f-273748a74385 in datapath fa74c5c0-ec41-4340-955b-aa4e2463e355 unbound from our chassis#033[00m
Oct  2 08:32:42 np0005465987 systemd[1]: run-netns-ovnmeta\x2dfa74c5c0\x2dec41\x2d4340\x2d955b\x2daa4e2463e355.mount: Deactivated successfully.
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.511 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fa74c5c0-ec41-4340-955b-aa4e2463e355, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.512 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[491c89c0-81db-4d39-9aa3-8a06bcff866a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.512 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 172bedcc-f5b9-4c9a-924f-273748a74385 in datapath fa74c5c0-ec41-4340-955b-aa4e2463e355 unbound from our chassis#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.513 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fa74c5c0-ec41-4340-955b-aa4e2463e355, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:32:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:42.514 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d252fdb6-26bb-4f3b-bec2-3b91e01fafad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:42.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:43.124 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:32:43 np0005465987 nova_compute[230713]: 2025-10-02 12:32:43.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:43.125 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:32:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:32:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:43.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.442 2 INFO nova.virt.libvirt.driver [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Deleting instance files /var/lib/nova/instances/d62b9dd7-e821-4bc4-97bb-ae0696886578_del#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.444 2 INFO nova.virt.libvirt.driver [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Deletion of /var/lib/nova/instances/d62b9dd7-e821-4bc4-97bb-ae0696886578_del complete#033[00m
Oct  2 08:32:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.573 2 INFO nova.compute.manager [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Took 2.52 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.573 2 DEBUG oslo.service.loopingcall [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.576 2 DEBUG nova.compute.manager [-] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.576 2 DEBUG nova.network.neutron [-] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.580 2 DEBUG nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received event network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.580 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.580 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.581 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.581 2 DEBUG nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] No waiting events found dispatching network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.581 2 WARNING nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received unexpected event network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.581 2 DEBUG nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received event network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.582 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.582 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.582 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.582 2 DEBUG nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] No waiting events found dispatching network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.583 2 WARNING nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received unexpected event network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.583 2 DEBUG nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received event network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.583 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.583 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.584 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.584 2 DEBUG nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] No waiting events found dispatching network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.584 2 WARNING nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received unexpected event network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.584 2 DEBUG nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received event network-vif-unplugged-172bedcc-f5b9-4c9a-924f-273748a74385 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.585 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.585 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.585 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.585 2 DEBUG nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] No waiting events found dispatching network-vif-unplugged-172bedcc-f5b9-4c9a-924f-273748a74385 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.586 2 DEBUG nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received event network-vif-unplugged-172bedcc-f5b9-4c9a-924f-273748a74385 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.586 2 DEBUG nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received event network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.586 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.586 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.587 2 DEBUG oslo_concurrency.lockutils [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.587 2 DEBUG nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] No waiting events found dispatching network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.587 2 WARNING nova.compute.manager [req-bc885608-289a-41cb-8e2b-577502f97d74 req-220b0c3c-6ac8-49d1-9971-f8ee3a6f6111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received unexpected event network-vif-plugged-172bedcc-f5b9-4c9a-924f-273748a74385 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:32:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:44.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:44 np0005465987 nova_compute[230713]: 2025-10-02 12:32:44.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:45 np0005465987 nova_compute[230713]: 2025-10-02 12:32:45.367 2 DEBUG nova.network.neutron [-] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:32:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:45.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:45 np0005465987 nova_compute[230713]: 2025-10-02 12:32:45.406 2 INFO nova.compute.manager [-] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Took 0.83 seconds to deallocate network for instance.#033[00m
Oct  2 08:32:45 np0005465987 nova_compute[230713]: 2025-10-02 12:32:45.452 2 DEBUG oslo_concurrency.lockutils [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:45 np0005465987 nova_compute[230713]: 2025-10-02 12:32:45.452 2 DEBUG oslo_concurrency.lockutils [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:45 np0005465987 nova_compute[230713]: 2025-10-02 12:32:45.567 2 DEBUG oslo_concurrency.processutils [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:45 np0005465987 podman[276519]: 2025-10-02 12:32:45.863852192 +0000 UTC m=+0.065734581 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 08:32:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:32:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/680480766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:32:46 np0005465987 nova_compute[230713]: 2025-10-02 12:32:46.027 2 DEBUG oslo_concurrency.processutils [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:46 np0005465987 nova_compute[230713]: 2025-10-02 12:32:46.033 2 DEBUG nova.compute.provider_tree [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:32:46 np0005465987 nova_compute[230713]: 2025-10-02 12:32:46.054 2 DEBUG nova.scheduler.client.report [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:32:46 np0005465987 nova_compute[230713]: 2025-10-02 12:32:46.118 2 DEBUG oslo_concurrency.lockutils [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:46 np0005465987 nova_compute[230713]: 2025-10-02 12:32:46.221 2 INFO nova.scheduler.client.report [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Deleted allocations for instance d62b9dd7-e821-4bc4-97bb-ae0696886578#033[00m
Oct  2 08:32:46 np0005465987 nova_compute[230713]: 2025-10-02 12:32:46.330 2 DEBUG oslo_concurrency.lockutils [None req-d4d76536-5b73-4555-bec7-177ffc05c261 0106eca2dafa49be98cd4d80559ecd6e 9ae403dc06bb4ae2931781d6fdbaba80 - - default default] Lock "d62b9dd7-e821-4bc4-97bb-ae0696886578" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:46 np0005465987 nova_compute[230713]: 2025-10-02 12:32:46.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:46 np0005465987 nova_compute[230713]: 2025-10-02 12:32:46.656 2 DEBUG nova.compute.manager [req-d3b4cd13-5a31-4ea9-8c69-042d7298e2be req-91366de6-dac2-4769-b5a5-364af9ca5d8d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Received event network-vif-deleted-172bedcc-f5b9-4c9a-924f-273748a74385 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:46.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:47 np0005465987 nova_compute[230713]: 2025-10-02 12:32:47.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:47.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e323 e323: 3 total, 3 up, 3 in
Oct  2 08:32:48 np0005465987 nova_compute[230713]: 2025-10-02 12:32:48.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:48.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:32:49.127 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:49.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:49 np0005465987 nova_compute[230713]: 2025-10-02 12:32:49.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:50 np0005465987 ovn_controller[129172]: 2025-10-02T12:32:50Z|00476|binding|INFO|Releasing lport 02b7597d-2fc1-4c56-8603-4dcb0c716c82 from this chassis (sb_readonly=0)
Oct  2 08:32:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:50.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:50 np0005465987 podman[276542]: 2025-10-02 12:32:50.909531272 +0000 UTC m=+0.106238586 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct  2 08:32:50 np0005465987 nova_compute[230713]: 2025-10-02 12:32:50.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:50 np0005465987 podman[276540]: 2025-10-02 12:32:50.930472258 +0000 UTC m=+0.135401709 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:32:50 np0005465987 podman[276541]: 2025-10-02 12:32:50.943752454 +0000 UTC m=+0.137503837 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:32:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e324 e324: 3 total, 3 up, 3 in
Oct  2 08:32:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:51.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:52 np0005465987 nova_compute[230713]: 2025-10-02 12:32:52.317 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:52 np0005465987 nova_compute[230713]: 2025-10-02 12:32:52.318 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:52 np0005465987 nova_compute[230713]: 2025-10-02 12:32:52.338 2 DEBUG nova.compute.manager [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:32:52 np0005465987 nova_compute[230713]: 2025-10-02 12:32:52.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:52 np0005465987 nova_compute[230713]: 2025-10-02 12:32:52.412 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:52 np0005465987 nova_compute[230713]: 2025-10-02 12:32:52.413 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:52 np0005465987 nova_compute[230713]: 2025-10-02 12:32:52.419 2 DEBUG nova.virt.hardware [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:32:52 np0005465987 nova_compute[230713]: 2025-10-02 12:32:52.419 2 INFO nova.compute.claims [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:32:52 np0005465987 nova_compute[230713]: 2025-10-02 12:32:52.608 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:52.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.101 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.111 2 DEBUG nova.compute.provider_tree [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.131 2 DEBUG nova.scheduler.client.report [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.157 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.158 2 DEBUG nova.compute.manager [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.211 2 DEBUG nova.compute.manager [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.212 2 DEBUG nova.network.neutron [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.239 2 INFO nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.299 2 DEBUG nova.compute.manager [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:32:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:53.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.428 2 DEBUG nova.compute.manager [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.429 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.429 2 INFO nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Creating image(s)#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.460 2 DEBUG nova.storage.rbd_utils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.486 2 DEBUG nova.storage.rbd_utils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.513 2 DEBUG nova.storage.rbd_utils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.517 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.545 2 DEBUG nova.policy [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6c932f0d0e594f00855572fbe06ee3aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.597 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.598 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.598 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.599 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.623 2 DEBUG nova.storage.rbd_utils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:53 np0005465987 nova_compute[230713]: 2025-10-02 12:32:53.627 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:54 np0005465987 nova_compute[230713]: 2025-10-02 12:32:54.316 2 DEBUG nova.network.neutron [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Successfully created port: 738a4094-d8e6-4b4c-9cfc-440659391fa0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:32:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 35K writes, 138K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.88 writes per sync, written: 0.13 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6543 writes, 23K keys, 6543 commit groups, 1.0 writes per commit group, ingest: 22.44 MB, 0.04 MB/s#012Interval WAL: 6544 writes, 2656 syncs, 2.46 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:32:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:54.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:54 np0005465987 nova_compute[230713]: 2025-10-02 12:32:54.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:54 np0005465987 nova_compute[230713]: 2025-10-02 12:32:54.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:55.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:55 np0005465987 nova_compute[230713]: 2025-10-02 12:32:55.408 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.781s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:55 np0005465987 nova_compute[230713]: 2025-10-02 12:32:55.521 2 DEBUG nova.storage.rbd_utils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] resizing rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:32:55 np0005465987 nova_compute[230713]: 2025-10-02 12:32:55.948 2 DEBUG nova.objects.instance [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'migration_context' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:55 np0005465987 nova_compute[230713]: 2025-10-02 12:32:55.965 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:32:55 np0005465987 nova_compute[230713]: 2025-10-02 12:32:55.965 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Ensure instance console log exists: /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:32:55 np0005465987 nova_compute[230713]: 2025-10-02 12:32:55.966 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:55 np0005465987 nova_compute[230713]: 2025-10-02 12:32:55.966 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:55 np0005465987 nova_compute[230713]: 2025-10-02 12:32:55.966 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:56 np0005465987 nova_compute[230713]: 2025-10-02 12:32:56.625 2 DEBUG nova.network.neutron [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Successfully updated port: 738a4094-d8e6-4b4c-9cfc-440659391fa0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:32:56 np0005465987 nova_compute[230713]: 2025-10-02 12:32:56.644 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:32:56 np0005465987 nova_compute[230713]: 2025-10-02 12:32:56.644 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquired lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:32:56 np0005465987 nova_compute[230713]: 2025-10-02 12:32:56.645 2 DEBUG nova.network.neutron [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:32:56 np0005465987 nova_compute[230713]: 2025-10-02 12:32:56.777 2 DEBUG nova.compute.manager [req-29df88c4-c967-4ec2-821c-7b832f25caba req-758e24a7-34c1-47e0-b88e-2e8158d7940a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-changed-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:56 np0005465987 nova_compute[230713]: 2025-10-02 12:32:56.777 2 DEBUG nova.compute.manager [req-29df88c4-c967-4ec2-821c-7b832f25caba req-758e24a7-34c1-47e0-b88e-2e8158d7940a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Refreshing instance network info cache due to event network-changed-738a4094-d8e6-4b4c-9cfc-440659391fa0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:32:56 np0005465987 nova_compute[230713]: 2025-10-02 12:32:56.778 2 DEBUG oslo_concurrency.lockutils [req-29df88c4-c967-4ec2-821c-7b832f25caba req-758e24a7-34c1-47e0-b88e-2e8158d7940a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:32:56 np0005465987 nova_compute[230713]: 2025-10-02 12:32:56.888 2 DEBUG nova.network.neutron [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:32:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:56.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:56 np0005465987 nova_compute[230713]: 2025-10-02 12:32:56.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:56 np0005465987 nova_compute[230713]: 2025-10-02 12:32:56.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:32:57 np0005465987 nova_compute[230713]: 2025-10-02 12:32:57.302 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408362.3009858, d62b9dd7-e821-4bc4-97bb-ae0696886578 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:57 np0005465987 nova_compute[230713]: 2025-10-02 12:32:57.302 2 INFO nova.compute.manager [-] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:32:57 np0005465987 nova_compute[230713]: 2025-10-02 12:32:57.343 2 DEBUG nova.compute.manager [None req-ed00fb90-4fc4-4cdb-9fd4-468f597c7fca - - - - - -] [instance: d62b9dd7-e821-4bc4-97bb-ae0696886578] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:57 np0005465987 nova_compute[230713]: 2025-10-02 12:32:57.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:57.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:57 np0005465987 nova_compute[230713]: 2025-10-02 12:32:57.512 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:32:57 np0005465987 nova_compute[230713]: 2025-10-02 12:32:57.512 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:32:57 np0005465987 nova_compute[230713]: 2025-10-02 12:32:57.513 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:32:57 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:32:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 e325: 3 total, 3 up, 3 in
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.755 2 DEBUG nova.network.neutron [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Updating instance_info_cache with network_info: [{"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.777 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Releasing lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.778 2 DEBUG nova.compute.manager [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Instance network_info: |[{"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.778 2 DEBUG oslo_concurrency.lockutils [req-29df88c4-c967-4ec2-821c-7b832f25caba req-758e24a7-34c1-47e0-b88e-2e8158d7940a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.778 2 DEBUG nova.network.neutron [req-29df88c4-c967-4ec2-821c-7b832f25caba req-758e24a7-34c1-47e0-b88e-2e8158d7940a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Refreshing network info cache for port 738a4094-d8e6-4b4c-9cfc-440659391fa0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.782 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Start _get_guest_xml network_info=[{"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.788 2 WARNING nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.798 2 DEBUG nova.virt.libvirt.host [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.799 2 DEBUG nova.virt.libvirt.host [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.808 2 DEBUG nova.virt.libvirt.host [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.809 2 DEBUG nova.virt.libvirt.host [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.811 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.811 2 DEBUG nova.virt.hardware [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.811 2 DEBUG nova.virt.hardware [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.812 2 DEBUG nova.virt.hardware [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.812 2 DEBUG nova.virt.hardware [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.812 2 DEBUG nova.virt.hardware [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.813 2 DEBUG nova.virt.hardware [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.813 2 DEBUG nova.virt.hardware [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.813 2 DEBUG nova.virt.hardware [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.813 2 DEBUG nova.virt.hardware [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.814 2 DEBUG nova.virt.hardware [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.814 2 DEBUG nova.virt.hardware [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:32:58 np0005465987 nova_compute[230713]: 2025-10-02 12:32:58.817 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:32:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:58.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:32:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:32:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3260648778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.287 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.318 2 DEBUG nova.storage.rbd_utils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.323 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:32:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:59.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:32:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4028793524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.756 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.760 2 DEBUG nova.virt.libvirt.vif [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:32:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1934445766',display_name='tempest-ServerRescueNegativeTestJSON-server-1934445766',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1934445766',id=125,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cc4d8f857b2d42bf9ae477fc5f514216',ramdisk_id='',reservation_id='r-a8p10ujd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-959216005',owner_user_name='tempest-ServerRescueNegativeTestJSON-959216005-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:53Z,user_data=None,user_id='6c932f0d0e594f00855572fbe06ee3aa',uuid=6ec65b83-acd5-4790-8df0-0dfa903d2f84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.760 2 DEBUG nova.network.os_vif_util [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converting VIF {"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.761 2 DEBUG nova.network.os_vif_util [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:e7:27,bridge_name='br-int',has_traffic_filtering=True,id=738a4094-d8e6-4b4c-9cfc-440659391fa0,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a4094-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.762 2 DEBUG nova.objects.instance [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.781 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  <uuid>6ec65b83-acd5-4790-8df0-0dfa903d2f84</uuid>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  <name>instance-0000007d</name>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1934445766</nova:name>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:32:58</nova:creationTime>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <nova:user uuid="6c932f0d0e594f00855572fbe06ee3aa">tempest-ServerRescueNegativeTestJSON-959216005-project-member</nova:user>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <nova:project uuid="cc4d8f857b2d42bf9ae477fc5f514216">tempest-ServerRescueNegativeTestJSON-959216005</nova:project>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <nova:port uuid="738a4094-d8e6-4b4c-9cfc-440659391fa0">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <entry name="serial">6ec65b83-acd5-4790-8df0-0dfa903d2f84</entry>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <entry name="uuid">6ec65b83-acd5-4790-8df0-0dfa903d2f84</entry>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.config">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:a7:e7:27"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <target dev="tap738a4094-d8"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/console.log" append="off"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:32:59 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:32:59 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:32:59 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:32:59 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.783 2 DEBUG nova.compute.manager [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Preparing to wait for external event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.783 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.783 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.784 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.784 2 DEBUG nova.virt.libvirt.vif [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:32:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1934445766',display_name='tempest-ServerRescueNegativeTestJSON-server-1934445766',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1934445766',id=125,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cc4d8f857b2d42bf9ae477fc5f514216',ramdisk_id='',reservation_id='r-a8p10ujd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-959216005',owner_user_name='tempest-ServerRescueNegativeTestJSON-959216005-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:32:53Z,user_data=None,user_id='6c932f0d0e594f00855572fbe06ee3aa',uuid=6ec65b83-acd5-4790-8df0-0dfa903d2f84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.785 2 DEBUG nova.network.os_vif_util [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converting VIF {"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.785 2 DEBUG nova.network.os_vif_util [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:e7:27,bridge_name='br-int',has_traffic_filtering=True,id=738a4094-d8e6-4b4c-9cfc-440659391fa0,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a4094-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.786 2 DEBUG os_vif [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:e7:27,bridge_name='br-int',has_traffic_filtering=True,id=738a4094-d8e6-4b4c-9cfc-440659391fa0,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a4094-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.792 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738a4094-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.792 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap738a4094-d8, col_values=(('external_ids', {'iface-id': '738a4094-d8e6-4b4c-9cfc-440659391fa0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:e7:27', 'vm-uuid': '6ec65b83-acd5-4790-8df0-0dfa903d2f84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:59 np0005465987 NetworkManager[44910]: <info>  [1759408379.7945] manager: (tap738a4094-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.799 2 INFO os_vif [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:e7:27,bridge_name='br-int',has_traffic_filtering=True,id=738a4094-d8e6-4b4c-9cfc-440659391fa0,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a4094-d8')#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.866 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.867 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.867 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No VIF found with MAC fa:16:3e:a7:e7:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.867 2 INFO nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Using config drive#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.891 2 DEBUG nova.storage.rbd_utils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:32:59 np0005465987 nova_compute[230713]: 2025-10-02 12:32:59.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:00 np0005465987 nova_compute[230713]: 2025-10-02 12:33:00.310 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Updating instance_info_cache with network_info: [{"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:00 np0005465987 nova_compute[230713]: 2025-10-02 12:33:00.350 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-e744f3b9-ccc1-43b1-b134-e6050b9487eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:00 np0005465987 nova_compute[230713]: 2025-10-02 12:33:00.351 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:33:00 np0005465987 nova_compute[230713]: 2025-10-02 12:33:00.387 2 INFO nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Creating config drive at /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/disk.config#033[00m
Oct  2 08:33:00 np0005465987 nova_compute[230713]: 2025-10-02 12:33:00.391 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_zn88ago execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:00 np0005465987 nova_compute[230713]: 2025-10-02 12:33:00.540 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_zn88ago" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:00 np0005465987 nova_compute[230713]: 2025-10-02 12:33:00.617 2 DEBUG nova.storage.rbd_utils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:00 np0005465987 nova_compute[230713]: 2025-10-02 12:33:00.623 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/disk.config 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:00.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.069 2 DEBUG oslo_concurrency.processutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/disk.config 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.070 2 INFO nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Deleting local config drive /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/disk.config because it was imported into RBD.#033[00m
Oct  2 08:33:01 np0005465987 kernel: tap738a4094-d8: entered promiscuous mode
Oct  2 08:33:01 np0005465987 NetworkManager[44910]: <info>  [1759408381.1198] manager: (tap738a4094-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:01 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:01Z|00477|binding|INFO|Claiming lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 for this chassis.
Oct  2 08:33:01 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:01Z|00478|binding|INFO|738a4094-d8e6-4b4c-9cfc-440659391fa0: Claiming fa:16:3e:a7:e7:27 10.100.0.9
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.133 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:e7:27 10.100.0.9'], port_security=['fa:16:3e:a7:e7:27 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6ec65b83-acd5-4790-8df0-0dfa903d2f84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3110ab08-53b5-412f-abb1-fdd400b42e71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=738a4094-d8e6-4b4c-9cfc-440659391fa0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.135 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 738a4094-d8e6-4b4c-9cfc-440659391fa0 in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 bound to our chassis#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.137 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e58f4ba2-c72c-42b8-acea-ca6241431726#033[00m
Oct  2 08:33:01 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:01Z|00479|binding|INFO|Setting lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 ovn-installed in OVS
Oct  2 08:33:01 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:01Z|00480|binding|INFO|Setting lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 up in Southbound
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.150 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d67f12cc-42c0-4c39-931d-1ea355f252f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.151 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape58f4ba2-c1 in ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.153 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape58f4ba2-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.153 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7da67fd9-6951-439b-b207-efe7063ea38e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.154 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7e78a6-1a46-428a-8390-6f2d0292c6b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 systemd-machined[188335]: New machine qemu-56-instance-0000007d.
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.165 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[37ee8116-1f19-4743-8996-d93eea524d77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 systemd[1]: Started Virtual Machine qemu-56-instance-0000007d.
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.179 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9d518e-16f3-45bf-a919-eeaf6b0340f5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 systemd-udevd[276932]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:33:01 np0005465987 NetworkManager[44910]: <info>  [1759408381.2032] device (tap738a4094-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:33:01 np0005465987 NetworkManager[44910]: <info>  [1759408381.2042] device (tap738a4094-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.211 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[bed1fd30-0923-4108-b355-3ddfdfdd35be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.216 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[45b80f17-01a3-4717-bedd-0b5c51c96c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 systemd-udevd[276939]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:33:01 np0005465987 NetworkManager[44910]: <info>  [1759408381.2180] manager: (tape58f4ba2-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/228)
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.250 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb34fc3-c125-4724-a7e0-a02388644527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.253 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a2e8a3-1c6f-45cc-8f3d-f346312a113f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 NetworkManager[44910]: <info>  [1759408381.2746] device (tape58f4ba2-c0): carrier: link connected
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.279 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc9e33f-8544-41a8-89da-eb8af27256e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.295 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c00d5306-5041-4627-afc8-a02073e3bd5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644984, 'reachable_time': 18570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276962, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.310 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf1eb24-3c64-4798-b521-25404fb1e538]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:ad0c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 644984, 'tstamp': 644984}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276963, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.324 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fcbf02d0-f90e-4c30-a7d0-be83360a60d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644984, 'reachable_time': 18570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276964, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.357 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[777ebcd4-cad9-4e7f-b59b-91e1d9b94d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.408 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5830402a-1228-4c0f-ab65-b6dec63e3d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.410 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.410 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.410 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape58f4ba2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:01 np0005465987 NetworkManager[44910]: <info>  [1759408381.4128] manager: (tape58f4ba2-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Oct  2 08:33:01 np0005465987 kernel: tape58f4ba2-c0: entered promiscuous mode
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:33:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:01.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.415 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape58f4ba2-c0, col_values=(('external_ids', {'iface-id': '81a5a13b-b81c-444a-8751-b35a35cdf3dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:01 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:01Z|00481|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.433 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.434 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[66d7b206-5006-499e-9833-b09785c1a628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.435 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-e58f4ba2-c72c-42b8-acea-ca6241431726
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID e58f4ba2-c72c-42b8-acea-ca6241431726
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:33:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:01.436 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'env', 'PROCESS_TAG=haproxy-e58f4ba2-c72c-42b8-acea-ca6241431726', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e58f4ba2-c72c-42b8-acea-ca6241431726.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.504 2 DEBUG nova.compute.manager [req-469daa98-2a12-4fa2-9d80-4435a0689ba0 req-bfe54249-7425-46d3-9334-53f98d83b787 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.504 2 DEBUG oslo_concurrency.lockutils [req-469daa98-2a12-4fa2-9d80-4435a0689ba0 req-bfe54249-7425-46d3-9334-53f98d83b787 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.505 2 DEBUG oslo_concurrency.lockutils [req-469daa98-2a12-4fa2-9d80-4435a0689ba0 req-bfe54249-7425-46d3-9334-53f98d83b787 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.505 2 DEBUG oslo_concurrency.lockutils [req-469daa98-2a12-4fa2-9d80-4435a0689ba0 req-bfe54249-7425-46d3-9334-53f98d83b787 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.505 2 DEBUG nova.compute.manager [req-469daa98-2a12-4fa2-9d80-4435a0689ba0 req-bfe54249-7425-46d3-9334-53f98d83b787 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Processing event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:33:01 np0005465987 podman[277038]: 2025-10-02 12:33:01.859936763 +0000 UTC m=+0.059921410 container create 86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:33:01 np0005465987 systemd[1]: Started libpod-conmon-86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b.scope.
Oct  2 08:33:01 np0005465987 podman[277038]: 2025-10-02 12:33:01.829384873 +0000 UTC m=+0.029369560 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.925 2 DEBUG nova.network.neutron [req-29df88c4-c967-4ec2-821c-7b832f25caba req-758e24a7-34c1-47e0-b88e-2e8158d7940a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Updated VIF entry in instance network info cache for port 738a4094-d8e6-4b4c-9cfc-440659391fa0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.925 2 DEBUG nova.network.neutron [req-29df88c4-c967-4ec2-821c-7b832f25caba req-758e24a7-34c1-47e0-b88e-2e8158d7940a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Updating instance_info_cache with network_info: [{"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:01 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:33:01 np0005465987 nova_compute[230713]: 2025-10-02 12:33:01.944 2 DEBUG oslo_concurrency.lockutils [req-29df88c4-c967-4ec2-821c-7b832f25caba req-758e24a7-34c1-47e0-b88e-2e8158d7940a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:01 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734108aa76c56a00dbec9d34bf465c6f7a72bbd6d84f7bfa471892848e960d4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:33:01 np0005465987 podman[277038]: 2025-10-02 12:33:01.960858482 +0000 UTC m=+0.160843159 container init 86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:33:01 np0005465987 podman[277038]: 2025-10-02 12:33:01.96764494 +0000 UTC m=+0.167629587 container start 86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:33:01 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[277053]: [NOTICE]   (277057) : New worker (277059) forked
Oct  2 08:33:01 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[277053]: [NOTICE]   (277057) : Loading success.
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.162 2 DEBUG nova.compute.manager [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.163 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408382.1617708, 6ec65b83-acd5-4790-8df0-0dfa903d2f84 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.163 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] VM Started (Lifecycle Event)#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.168 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.172 2 INFO nova.virt.libvirt.driver [-] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Instance spawned successfully.#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.173 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.197 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.205 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.211 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.211 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.211 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.212 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.212 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.213 2 DEBUG nova.virt.libvirt.driver [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.232 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.232 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408382.1621456, 6ec65b83-acd5-4790-8df0-0dfa903d2f84 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.232 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.267 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.272 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408382.1678, 6ec65b83-acd5-4790-8df0-0dfa903d2f84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.272 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.302 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.306 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.329 2 INFO nova.compute.manager [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Took 8.90 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.329 2 DEBUG nova.compute.manager [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.338 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.421 2 INFO nova.compute.manager [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Took 10.03 seconds to build instance.#033[00m
Oct  2 08:33:02 np0005465987 nova_compute[230713]: 2025-10-02 12:33:02.439 2 DEBUG oslo_concurrency.lockutils [None req-51495f90-8c03-4429-b16a-f4708528dd9f 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:02.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:33:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:03.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:33:03 np0005465987 nova_compute[230713]: 2025-10-02 12:33:03.665 2 DEBUG nova.compute.manager [req-4954d65f-2792-4628-8cf8-ae99e8ec23d1 req-bccd29c0-c881-4547-a672-e43b98666e2b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:03 np0005465987 nova_compute[230713]: 2025-10-02 12:33:03.666 2 DEBUG oslo_concurrency.lockutils [req-4954d65f-2792-4628-8cf8-ae99e8ec23d1 req-bccd29c0-c881-4547-a672-e43b98666e2b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:03 np0005465987 nova_compute[230713]: 2025-10-02 12:33:03.666 2 DEBUG oslo_concurrency.lockutils [req-4954d65f-2792-4628-8cf8-ae99e8ec23d1 req-bccd29c0-c881-4547-a672-e43b98666e2b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:03 np0005465987 nova_compute[230713]: 2025-10-02 12:33:03.666 2 DEBUG oslo_concurrency.lockutils [req-4954d65f-2792-4628-8cf8-ae99e8ec23d1 req-bccd29c0-c881-4547-a672-e43b98666e2b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:03 np0005465987 nova_compute[230713]: 2025-10-02 12:33:03.667 2 DEBUG nova.compute.manager [req-4954d65f-2792-4628-8cf8-ae99e8ec23d1 req-bccd29c0-c881-4547-a672-e43b98666e2b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] No waiting events found dispatching network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:03 np0005465987 nova_compute[230713]: 2025-10-02 12:33:03.667 2 WARNING nova.compute.manager [req-4954d65f-2792-4628-8cf8-ae99e8ec23d1 req-bccd29c0-c881-4547-a672-e43b98666e2b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received unexpected event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:33:04 np0005465987 nova_compute[230713]: 2025-10-02 12:33:04.344 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:04 np0005465987 nova_compute[230713]: 2025-10-02 12:33:04.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:04.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:04 np0005465987 nova_compute[230713]: 2025-10-02 12:33:04.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:05.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:05 np0005465987 nova_compute[230713]: 2025-10-02 12:33:05.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:05 np0005465987 nova_compute[230713]: 2025-10-02 12:33:05.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:05 np0005465987 nova_compute[230713]: 2025-10-02 12:33:05.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:05 np0005465987 nova_compute[230713]: 2025-10-02 12:33:05.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:06 np0005465987 nova_compute[230713]: 2025-10-02 12:33:06.158 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:06 np0005465987 nova_compute[230713]: 2025-10-02 12:33:06.159 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:06 np0005465987 nova_compute[230713]: 2025-10-02 12:33:06.159 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:06 np0005465987 nova_compute[230713]: 2025-10-02 12:33:06.159 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:33:06 np0005465987 nova_compute[230713]: 2025-10-02 12:33:06.159 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:06 np0005465987 nova_compute[230713]: 2025-10-02 12:33:06.666 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:06.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.013 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.013 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.016 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.017 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.020 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.020 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.223 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.224 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3912MB free_disk=20.764171600341797GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.224 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.224 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:07.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.468 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.468 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance e744f3b9-ccc1-43b1-b134-e6050b9487eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.469 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 6ec65b83-acd5-4790-8df0-0dfa903d2f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.469 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.469 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.541 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:33:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4158264808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.954 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:07 np0005465987 nova_compute[230713]: 2025-10-02 12:33:07.959 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:33:08 np0005465987 nova_compute[230713]: 2025-10-02 12:33:08.194 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:33:08 np0005465987 nova_compute[230713]: 2025-10-02 12:33:08.464 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:33:08 np0005465987 nova_compute[230713]: 2025-10-02 12:33:08.465 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:08.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:09.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:09 np0005465987 nova_compute[230713]: 2025-10-02 12:33:09.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:09 np0005465987 nova_compute[230713]: 2025-10-02 12:33:09.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:33:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203929170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:33:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:10.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:33:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:11.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:33:11 np0005465987 nova_compute[230713]: 2025-10-02 12:33:11.466 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:11 np0005465987 nova_compute[230713]: 2025-10-02 12:33:11.466 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:11 np0005465987 nova_compute[230713]: 2025-10-02 12:33:11.467 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:33:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:12.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:13.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:14 np0005465987 nova_compute[230713]: 2025-10-02 12:33:14.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:14.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:14 np0005465987 nova_compute[230713]: 2025-10-02 12:33:14.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:15.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:16 np0005465987 podman[277113]: 2025-10-02 12:33:16.853801911 +0000 UTC m=+0.057814333 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:33:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:16.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:17Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:e7:27 10.100.0.9
Oct  2 08:33:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:17Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:e7:27 10.100.0.9
Oct  2 08:33:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:17.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:18.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:19.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:19 np0005465987 nova_compute[230713]: 2025-10-02 12:33:19.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005465987 nova_compute[230713]: 2025-10-02 12:33:19.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:20.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:21.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:21 np0005465987 podman[277135]: 2025-10-02 12:33:21.852656911 +0000 UTC m=+0.057573176 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:33:21 np0005465987 podman[277134]: 2025-10-02 12:33:21.856529448 +0000 UTC m=+0.062987056 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:33:21 np0005465987 podman[277133]: 2025-10-02 12:33:21.886338058 +0000 UTC m=+0.097777043 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:33:21 np0005465987 nova_compute[230713]: 2025-10-02 12:33:21.932 2 DEBUG oslo_concurrency.lockutils [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:21 np0005465987 nova_compute[230713]: 2025-10-02 12:33:21.933 2 DEBUG oslo_concurrency.lockutils [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:21 np0005465987 nova_compute[230713]: 2025-10-02 12:33:21.934 2 DEBUG oslo_concurrency.lockutils [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:21 np0005465987 nova_compute[230713]: 2025-10-02 12:33:21.935 2 DEBUG oslo_concurrency.lockutils [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:21 np0005465987 nova_compute[230713]: 2025-10-02 12:33:21.935 2 DEBUG oslo_concurrency.lockutils [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:21 np0005465987 nova_compute[230713]: 2025-10-02 12:33:21.937 2 INFO nova.compute.manager [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Terminating instance#033[00m
Oct  2 08:33:21 np0005465987 nova_compute[230713]: 2025-10-02 12:33:21.939 2 DEBUG nova.compute.manager [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:33:22 np0005465987 kernel: tap204ddeb1-bc (unregistering): left promiscuous mode
Oct  2 08:33:22 np0005465987 NetworkManager[44910]: <info>  [1759408402.1848] device (tap204ddeb1-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:33:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:22Z|00482|binding|INFO|Releasing lport 204ddeb1-bc16-445e-a774-09893c526348 from this chassis (sb_readonly=0)
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:22Z|00483|binding|INFO|Setting lport 204ddeb1-bc16-445e-a774-09893c526348 down in Southbound
Oct  2 08:33:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:22Z|00484|binding|INFO|Removing iface tap204ddeb1-bc ovn-installed in OVS
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.203 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:14:c2 10.100.0.10'], port_security=['fa:16:3e:9d:14:c2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e744f3b9-ccc1-43b1-b134-e6050b9487eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-585473f8-52e4-4e55-96df-8a236d361126', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5533aaac08cd4856af72ef4992bb5e76', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a7e36b3-799e-47d8-a152-7f7146431afe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec297f04-3bda-490f-87d3-1f684caf96fd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=204ddeb1-bc16-445e-a774-09893c526348) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.204 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 204ddeb1-bc16-445e-a774-09893c526348 in datapath 585473f8-52e4-4e55-96df-8a236d361126 unbound from our chassis#033[00m
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.205 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 585473f8-52e4-4e55-96df-8a236d361126#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.223 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba5d713-d0fe-4e64-b475-c4caaacf5ac7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.251 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b0de54-799b-4060-9d0e-1f5a91ba3e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.254 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7c7513-2b6c-4502-80a7-bf74ffcd8af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:22 np0005465987 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct  2 08:33:22 np0005465987 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000006e.scope: Consumed 26.914s CPU time.
Oct  2 08:33:22 np0005465987 systemd-machined[188335]: Machine qemu-47-instance-0000006e terminated.
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.287 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a70f4658-b71c-4914-8ee5-bf73643542b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.310 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[885bf9d2-3b0f-431b-8027-088f3fb3860b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap585473f8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:8e:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611351, 'reachable_time': 15828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277206, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.324 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1f34e181-989f-4d2d-9f57-566eddb09433]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap585473f8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611362, 'tstamp': 611362}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277207, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap585473f8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611365, 'tstamp': 611365}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277207, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.326 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap585473f8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.333 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap585473f8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.333 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.334 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap585473f8-50, col_values=(('external_ids', {'iface-id': '02b7597d-2fc1-4c56-8603-4dcb0c716c82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:22.334 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.373 2 INFO nova.virt.libvirt.driver [-] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Instance destroyed successfully.#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.373 2 DEBUG nova.objects.instance [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'resources' on Instance uuid e744f3b9-ccc1-43b1-b134-e6050b9487eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.391 2 DEBUG nova.virt.libvirt.vif [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=110,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBEmvr2XQXDnaV0WQDbbXt57cEK6okdC4PHEYdjpQBx2HU9OQgvgRTm3sGWmsa/AInUTPV9ABsCq2lJ9PCqfb1WP51XCZeB9QBIxafEy8h788huF0550ajkopZIwmSLpiA==',key_name='tempest-keypair-425033456',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:28:14Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-kuclzhj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image
_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:28:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=e744f3b9-ccc1-43b1-b134-e6050b9487eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.391 2 DEBUG nova.network.os_vif_util [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "204ddeb1-bc16-445e-a774-09893c526348", "address": "fa:16:3e:9d:14:c2", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap204ddeb1-bc", "ovs_interfaceid": "204ddeb1-bc16-445e-a774-09893c526348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.392 2 DEBUG nova.network.os_vif_util [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:14:c2,bridge_name='br-int',has_traffic_filtering=True,id=204ddeb1-bc16-445e-a774-09893c526348,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap204ddeb1-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.392 2 DEBUG os_vif [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:14:c2,bridge_name='br-int',has_traffic_filtering=True,id=204ddeb1-bc16-445e-a774-09893c526348,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap204ddeb1-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.394 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap204ddeb1-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.399 2 INFO os_vif [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:14:c2,bridge_name='br-int',has_traffic_filtering=True,id=204ddeb1-bc16-445e-a774-09893c526348,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap204ddeb1-bc')#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.436 2 DEBUG nova.compute.manager [req-781beafb-edff-452a-bf13-1098c61bc2c1 req-0eb75fae-3dfe-491f-8f5c-9a45b1ebbb3a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Received event network-vif-unplugged-204ddeb1-bc16-445e-a774-09893c526348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.436 2 DEBUG oslo_concurrency.lockutils [req-781beafb-edff-452a-bf13-1098c61bc2c1 req-0eb75fae-3dfe-491f-8f5c-9a45b1ebbb3a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.437 2 DEBUG oslo_concurrency.lockutils [req-781beafb-edff-452a-bf13-1098c61bc2c1 req-0eb75fae-3dfe-491f-8f5c-9a45b1ebbb3a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.437 2 DEBUG oslo_concurrency.lockutils [req-781beafb-edff-452a-bf13-1098c61bc2c1 req-0eb75fae-3dfe-491f-8f5c-9a45b1ebbb3a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.437 2 DEBUG nova.compute.manager [req-781beafb-edff-452a-bf13-1098c61bc2c1 req-0eb75fae-3dfe-491f-8f5c-9a45b1ebbb3a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] No waiting events found dispatching network-vif-unplugged-204ddeb1-bc16-445e-a774-09893c526348 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.438 2 DEBUG nova.compute.manager [req-781beafb-edff-452a-bf13-1098c61bc2c1 req-0eb75fae-3dfe-491f-8f5c-9a45b1ebbb3a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Received event network-vif-unplugged-204ddeb1-bc16-445e-a774-09893c526348 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.967 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.967 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.968 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.968 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:22.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.968 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:22 np0005465987 nova_compute[230713]: 2025-10-02 12:33:22.968 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.015 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.016 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Image id c2d0c2bc-fe21-4689-86ae-d6728c15874c yields fingerprint 50c3d0e01c5fd68886c717f1fdd053015a0fe968 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.016 2 INFO nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] image c2d0c2bc-fe21-4689-86ae-d6728c15874c at (/var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968): checking#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.016 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] image c2d0c2bc-fe21-4689-86ae-d6728c15874c at (/var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.019 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.020 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.020 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] e744f3b9-ccc1-43b1-b134-e6050b9487eb is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.020 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] 6ec65b83-acd5-4790-8df0-0dfa903d2f84 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.021 2 WARNING nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.021 2 INFO nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Active base files: /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.022 2 INFO nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Removable base files: /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.022 2 INFO nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.023 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.023 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct  2 08:33:23 np0005465987 nova_compute[230713]: 2025-10-02 12:33:23.023 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct  2 08:33:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:23.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:24 np0005465987 nova_compute[230713]: 2025-10-02 12:33:24.582 2 DEBUG nova.compute.manager [req-e4936e96-7030-4fa7-9aaa-5c140e582495 req-0b7c1a1a-0dd8-46df-a7c2-c17d31351b4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Received event network-vif-plugged-204ddeb1-bc16-445e-a774-09893c526348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:24 np0005465987 nova_compute[230713]: 2025-10-02 12:33:24.582 2 DEBUG oslo_concurrency.lockutils [req-e4936e96-7030-4fa7-9aaa-5c140e582495 req-0b7c1a1a-0dd8-46df-a7c2-c17d31351b4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:24 np0005465987 nova_compute[230713]: 2025-10-02 12:33:24.582 2 DEBUG oslo_concurrency.lockutils [req-e4936e96-7030-4fa7-9aaa-5c140e582495 req-0b7c1a1a-0dd8-46df-a7c2-c17d31351b4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:24 np0005465987 nova_compute[230713]: 2025-10-02 12:33:24.583 2 DEBUG oslo_concurrency.lockutils [req-e4936e96-7030-4fa7-9aaa-5c140e582495 req-0b7c1a1a-0dd8-46df-a7c2-c17d31351b4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:24 np0005465987 nova_compute[230713]: 2025-10-02 12:33:24.583 2 DEBUG nova.compute.manager [req-e4936e96-7030-4fa7-9aaa-5c140e582495 req-0b7c1a1a-0dd8-46df-a7c2-c17d31351b4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] No waiting events found dispatching network-vif-plugged-204ddeb1-bc16-445e-a774-09893c526348 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:24 np0005465987 nova_compute[230713]: 2025-10-02 12:33:24.583 2 WARNING nova.compute.manager [req-e4936e96-7030-4fa7-9aaa-5c140e582495 req-0b7c1a1a-0dd8-46df-a7c2-c17d31351b4d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Received unexpected event network-vif-plugged-204ddeb1-bc16-445e-a774-09893c526348 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:33:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:24.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:25 np0005465987 nova_compute[230713]: 2025-10-02 12:33:25.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:25 np0005465987 nova_compute[230713]: 2025-10-02 12:33:25.203 2 INFO nova.virt.libvirt.driver [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Deleting instance files /var/lib/nova/instances/e744f3b9-ccc1-43b1-b134-e6050b9487eb_del#033[00m
Oct  2 08:33:25 np0005465987 nova_compute[230713]: 2025-10-02 12:33:25.204 2 INFO nova.virt.libvirt.driver [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Deletion of /var/lib/nova/instances/e744f3b9-ccc1-43b1-b134-e6050b9487eb_del complete#033[00m
Oct  2 08:33:25 np0005465987 nova_compute[230713]: 2025-10-02 12:33:25.262 2 INFO nova.compute.manager [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Took 3.32 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:33:25 np0005465987 nova_compute[230713]: 2025-10-02 12:33:25.262 2 DEBUG oslo.service.loopingcall [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:33:25 np0005465987 nova_compute[230713]: 2025-10-02 12:33:25.263 2 DEBUG nova.compute.manager [-] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:33:25 np0005465987 nova_compute[230713]: 2025-10-02 12:33:25.263 2 DEBUG nova.network.neutron [-] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:33:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:25.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:26 np0005465987 nova_compute[230713]: 2025-10-02 12:33:26.322 2 DEBUG nova.network.neutron [-] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:26 np0005465987 nova_compute[230713]: 2025-10-02 12:33:26.362 2 INFO nova.compute.manager [-] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Took 1.10 seconds to deallocate network for instance.#033[00m
Oct  2 08:33:26 np0005465987 nova_compute[230713]: 2025-10-02 12:33:26.419 2 DEBUG oslo_concurrency.lockutils [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:26 np0005465987 nova_compute[230713]: 2025-10-02 12:33:26.419 2 DEBUG oslo_concurrency.lockutils [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:26 np0005465987 nova_compute[230713]: 2025-10-02 12:33:26.423 2 DEBUG nova.compute.manager [req-54c81f8a-47a2-4051-8a12-0f143c78e5f0 req-7291eebe-856b-40f4-aff3-fdd86e9fb5ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Received event network-vif-deleted-204ddeb1-bc16-445e-a774-09893c526348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:26 np0005465987 nova_compute[230713]: 2025-10-02 12:33:26.517 2 DEBUG oslo_concurrency.processutils [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:33:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/542825886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:33:26 np0005465987 nova_compute[230713]: 2025-10-02 12:33:26.936 2 DEBUG oslo_concurrency.processutils [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:26 np0005465987 nova_compute[230713]: 2025-10-02 12:33:26.945 2 DEBUG nova.compute.provider_tree [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:33:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:26.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:26 np0005465987 nova_compute[230713]: 2025-10-02 12:33:26.976 2 DEBUG nova.scheduler.client.report [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:33:27 np0005465987 nova_compute[230713]: 2025-10-02 12:33:27.016 2 DEBUG oslo_concurrency.lockutils [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:27 np0005465987 nova_compute[230713]: 2025-10-02 12:33:27.044 2 INFO nova.scheduler.client.report [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Deleted allocations for instance e744f3b9-ccc1-43b1-b134-e6050b9487eb#033[00m
Oct  2 08:33:27 np0005465987 nova_compute[230713]: 2025-10-02 12:33:27.166 2 DEBUG oslo_concurrency.lockutils [None req-48e0823d-7da6-4a36-a995-eebe1e53ea6d 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "e744f3b9-ccc1-43b1-b134-e6050b9487eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:27 np0005465987 nova_compute[230713]: 2025-10-02 12:33:27.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:27.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:27.959 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:27.959 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:27.961 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:33:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:28.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:33:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:29.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:30 np0005465987 nova_compute[230713]: 2025-10-02 12:33:30.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:30.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 10K writes, 51K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1719 writes, 8385 keys, 1719 commit groups, 1.0 writes per commit group, ingest: 16.35 MB, 0.03 MB/s#012Interval WAL: 1719 writes, 1719 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     53.5      1.14              0.19        30    0.038       0      0       0.0       0.0#012  L6      1/0    9.20 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4    105.9     88.7      3.03              0.80        29    0.104    172K    16K       0.0       0.0#012 Sum      1/0    9.20 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4     76.9     79.1      4.17              0.99        59    0.071    172K    16K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.3    138.3    137.7      0.53              0.23        12    0.044     46K   3142       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    105.9     88.7      3.03              0.80        29    0.104    172K    16K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     53.6      1.14              0.19        29    0.039       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.060, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.32 GB write, 0.09 MB/s write, 0.31 GB read, 0.09 MB/s read, 4.2 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9644871f0#2 capacity: 304.00 MB usage: 35.63 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.00052 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2083,34.32 MB,11.2878%) FilterBlock(59,490.80 KB,0.157662%) IndexBlock(59,852.14 KB,0.27374%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 08:33:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:31.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:32 np0005465987 nova_compute[230713]: 2025-10-02 12:33:32.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:32 np0005465987 nova_compute[230713]: 2025-10-02 12:33:32.537 2 DEBUG oslo_concurrency.lockutils [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:32 np0005465987 nova_compute[230713]: 2025-10-02 12:33:32.538 2 DEBUG oslo_concurrency.lockutils [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:32 np0005465987 nova_compute[230713]: 2025-10-02 12:33:32.539 2 DEBUG oslo_concurrency.lockutils [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:32 np0005465987 nova_compute[230713]: 2025-10-02 12:33:32.539 2 DEBUG oslo_concurrency.lockutils [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:32 np0005465987 nova_compute[230713]: 2025-10-02 12:33:32.540 2 DEBUG oslo_concurrency.lockutils [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:32 np0005465987 nova_compute[230713]: 2025-10-02 12:33:32.542 2 INFO nova.compute.manager [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Terminating instance#033[00m
Oct  2 08:33:32 np0005465987 nova_compute[230713]: 2025-10-02 12:33:32.544 2 DEBUG nova.compute.manager [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:33:32 np0005465987 kernel: tap2131d2f6-11 (unregistering): left promiscuous mode
Oct  2 08:33:32 np0005465987 NetworkManager[44910]: <info>  [1759408412.7774] device (tap2131d2f6-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:33:32 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:32Z|00485|binding|INFO|Releasing lport 2131d2f6-11eb-40e4-98d9-fded3f4db535 from this chassis (sb_readonly=0)
Oct  2 08:33:32 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:32Z|00486|binding|INFO|Setting lport 2131d2f6-11eb-40e4-98d9-fded3f4db535 down in Southbound
Oct  2 08:33:32 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:32Z|00487|binding|INFO|Removing iface tap2131d2f6-11 ovn-installed in OVS
Oct  2 08:33:32 np0005465987 nova_compute[230713]: 2025-10-02 12:33:32.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:32.840 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:67:0c 10.100.0.8'], port_security=['fa:16:3e:f2:67:0c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'db5a79b4-af1c-4c62-a8d8-bfc6aab11a19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-585473f8-52e4-4e55-96df-8a236d361126', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5533aaac08cd4856af72ef4992bb5e76', 'neutron:revision_number': '4', 'neutron:security_group_ids': '29622381-99e2-43f7-9de1-ba82c9e0ff23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec297f04-3bda-490f-87d3-1f684caf96fd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=2131d2f6-11eb-40e4-98d9-fded3f4db535) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:32.843 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 2131d2f6-11eb-40e4-98d9-fded3f4db535 in datapath 585473f8-52e4-4e55-96df-8a236d361126 unbound from our chassis#033[00m
Oct  2 08:33:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:32.845 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 585473f8-52e4-4e55-96df-8a236d361126, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:33:32 np0005465987 nova_compute[230713]: 2025-10-02 12:33:32.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:32.846 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[161047fc-21d6-458c-9e42-e2c11cdbf9f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:32.847 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-585473f8-52e4-4e55-96df-8a236d361126 namespace which is not needed anymore#033[00m
Oct  2 08:33:32 np0005465987 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct  2 08:33:32 np0005465987 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000006b.scope: Consumed 28.643s CPU time.
Oct  2 08:33:32 np0005465987 systemd-machined[188335]: Machine qemu-45-instance-0000006b terminated.
Oct  2 08:33:32 np0005465987 nova_compute[230713]: 2025-10-02 12:33:32.991 2 INFO nova.virt.libvirt.driver [-] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Instance destroyed successfully.#033[00m
Oct  2 08:33:32 np0005465987 nova_compute[230713]: 2025-10-02 12:33:32.992 2 DEBUG nova.objects.instance [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lazy-loading 'resources' on Instance uuid db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:32.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.015 2 DEBUG nova.virt.libvirt.vif [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:27:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-49052235',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-49052235',id=107,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:27:26Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5533aaac08cd4856af72ef4992bb5e76',ramdisk_id='',reservation_id='r-t2d6d8nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-1564585024',owner_user_name='tempest-AttachVolumeMultiAttachTest-1564585024-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:27:26Z,user_data=None,user_id='22d56fcd2a4b4851bfd126ae4548ee9b',uuid=db5a79b4-af1c-4c62-a8d8-bfc6aab11a19,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "address": "fa:16:3e:f2:67:0c", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2131d2f6-11", "ovs_interfaceid": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.016 2 DEBUG nova.network.os_vif_util [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converting VIF {"id": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "address": "fa:16:3e:f2:67:0c", "network": {"id": "585473f8-52e4-4e55-96df-8a236d361126", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1197534465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5533aaac08cd4856af72ef4992bb5e76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2131d2f6-11", "ovs_interfaceid": "2131d2f6-11eb-40e4-98d9-fded3f4db535", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.017 2 DEBUG nova.network.os_vif_util [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:67:0c,bridge_name='br-int',has_traffic_filtering=True,id=2131d2f6-11eb-40e4-98d9-fded3f4db535,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2131d2f6-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.018 2 DEBUG os_vif [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:67:0c,bridge_name='br-int',has_traffic_filtering=True,id=2131d2f6-11eb-40e4-98d9-fded3f4db535,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2131d2f6-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.021 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2131d2f6-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.030 2 INFO os_vif [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:67:0c,bridge_name='br-int',has_traffic_filtering=True,id=2131d2f6-11eb-40e4-98d9-fded3f4db535,network=Network(585473f8-52e4-4e55-96df-8a236d361126),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2131d2f6-11')#033[00m
Oct  2 08:33:33 np0005465987 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[269365]: [NOTICE]   (269369) : haproxy version is 2.8.14-c23fe91
Oct  2 08:33:33 np0005465987 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[269365]: [NOTICE]   (269369) : path to executable is /usr/sbin/haproxy
Oct  2 08:33:33 np0005465987 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[269365]: [WARNING]  (269369) : Exiting Master process...
Oct  2 08:33:33 np0005465987 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[269365]: [WARNING]  (269369) : Exiting Master process...
Oct  2 08:33:33 np0005465987 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[269365]: [ALERT]    (269369) : Current worker (269371) exited with code 143 (Terminated)
Oct  2 08:33:33 np0005465987 neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126[269365]: [WARNING]  (269369) : All workers exited. Exiting... (0)
Oct  2 08:33:33 np0005465987 systemd[1]: libpod-dc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656.scope: Deactivated successfully.
Oct  2 08:33:33 np0005465987 podman[277285]: 2025-10-02 12:33:33.060511432 +0000 UTC m=+0.085111885 container died dc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:33:33 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656-userdata-shm.mount: Deactivated successfully.
Oct  2 08:33:33 np0005465987 systemd[1]: var-lib-containers-storage-overlay-85c60905e44ca3a0966343daad94566577b03b90605f574f3c6817a2e785d8d7-merged.mount: Deactivated successfully.
Oct  2 08:33:33 np0005465987 podman[277285]: 2025-10-02 12:33:33.114392705 +0000 UTC m=+0.138993158 container cleanup dc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:33:33 np0005465987 systemd[1]: libpod-conmon-dc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656.scope: Deactivated successfully.
Oct  2 08:33:33 np0005465987 podman[277325]: 2025-10-02 12:33:33.186734107 +0000 UTC m=+0.038569703 container remove dc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:33:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:33.196 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f94478-1a8e-4ba3-a5de-a4a17596e535]: (4, ('Thu Oct  2 12:33:32 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126 (dc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656)\ndc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656\nThu Oct  2 12:33:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-585473f8-52e4-4e55-96df-8a236d361126 (dc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656)\ndc245f491668cac0ce58f3a43fcaa63759bbbb960ed5711c71ecef2d05a92656\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:33.199 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[313050c3-9e91-4d65-9ea9-109373b5f38f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:33.201 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap585473f8-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:33 np0005465987 kernel: tap585473f8-50: left promiscuous mode
Oct  2 08:33:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:33.208 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[58763ddb-39db-46f9-8a22-e0ad682db099]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:33.237 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8a184fae-bb19-459d-83ca-2886ce3d13df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:33.238 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1246795b-7a16-4120-b6e3-837249bbf3d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:33.258 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b37a9411-68d5-4338-961f-37aaffc5910d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611343, 'reachable_time': 22378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277356, 'error': None, 'target': 'ovnmeta-585473f8-52e4-4e55-96df-8a236d361126', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:33.261 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-585473f8-52e4-4e55-96df-8a236d361126 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:33:33 np0005465987 systemd[1]: run-netns-ovnmeta\x2d585473f8\x2d52e4\x2d4e55\x2d96df\x2d8a236d361126.mount: Deactivated successfully.
Oct  2 08:33:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:33.261 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fc6180-0220-40db-aa4e-027b46d7b00d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.329 2 DEBUG nova.compute.manager [req-8268eab1-87dd-4793-bddb-72d550e7ae26 req-2f6de2f3-fa4a-446f-ae46-6055c462ef5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Received event network-vif-unplugged-2131d2f6-11eb-40e4-98d9-fded3f4db535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.329 2 DEBUG oslo_concurrency.lockutils [req-8268eab1-87dd-4793-bddb-72d550e7ae26 req-2f6de2f3-fa4a-446f-ae46-6055c462ef5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.330 2 DEBUG oslo_concurrency.lockutils [req-8268eab1-87dd-4793-bddb-72d550e7ae26 req-2f6de2f3-fa4a-446f-ae46-6055c462ef5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.330 2 DEBUG oslo_concurrency.lockutils [req-8268eab1-87dd-4793-bddb-72d550e7ae26 req-2f6de2f3-fa4a-446f-ae46-6055c462ef5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.330 2 DEBUG nova.compute.manager [req-8268eab1-87dd-4793-bddb-72d550e7ae26 req-2f6de2f3-fa4a-446f-ae46-6055c462ef5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] No waiting events found dispatching network-vif-unplugged-2131d2f6-11eb-40e4-98d9-fded3f4db535 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:33 np0005465987 nova_compute[230713]: 2025-10-02 12:33:33.331 2 DEBUG nova.compute.manager [req-8268eab1-87dd-4793-bddb-72d550e7ae26 req-2f6de2f3-fa4a-446f-ae46-6055c462ef5d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Received event network-vif-unplugged-2131d2f6-11eb-40e4-98d9-fded3f4db535 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:33:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:33.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:34 np0005465987 nova_compute[230713]: 2025-10-02 12:33:34.379 2 INFO nova.virt.libvirt.driver [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Deleting instance files /var/lib/nova/instances/db5a79b4-af1c-4c62-a8d8-bfc6aab11a19_del#033[00m
Oct  2 08:33:34 np0005465987 nova_compute[230713]: 2025-10-02 12:33:34.381 2 INFO nova.virt.libvirt.driver [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Deletion of /var/lib/nova/instances/db5a79b4-af1c-4c62-a8d8-bfc6aab11a19_del complete#033[00m
Oct  2 08:33:34 np0005465987 nova_compute[230713]: 2025-10-02 12:33:34.432 2 INFO nova.compute.manager [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Took 1.89 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:33:34 np0005465987 nova_compute[230713]: 2025-10-02 12:33:34.433 2 DEBUG oslo.service.loopingcall [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:33:34 np0005465987 nova_compute[230713]: 2025-10-02 12:33:34.433 2 DEBUG nova.compute.manager [-] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:33:34 np0005465987 nova_compute[230713]: 2025-10-02 12:33:34.433 2 DEBUG nova.network.neutron [-] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:33:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:34 np0005465987 nova_compute[230713]: 2025-10-02 12:33:34.701 2 INFO nova.compute.manager [None req-446393b6-db99-4aac-b5bc-5d19e35c951a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Pausing#033[00m
Oct  2 08:33:34 np0005465987 nova_compute[230713]: 2025-10-02 12:33:34.702 2 DEBUG nova.objects.instance [None req-446393b6-db99-4aac-b5bc-5d19e35c951a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'flavor' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:34 np0005465987 nova_compute[230713]: 2025-10-02 12:33:34.736 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408414.7365012, 6ec65b83-acd5-4790-8df0-0dfa903d2f84 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:34 np0005465987 nova_compute[230713]: 2025-10-02 12:33:34.737 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:33:34 np0005465987 nova_compute[230713]: 2025-10-02 12:33:34.740 2 DEBUG nova.compute.manager [None req-446393b6-db99-4aac-b5bc-5d19e35c951a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:34.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:34 np0005465987 nova_compute[230713]: 2025-10-02 12:33:34.998 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:35 np0005465987 nova_compute[230713]: 2025-10-02 12:33:35.002 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:33:35 np0005465987 nova_compute[230713]: 2025-10-02 12:33:35.032 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct  2 08:33:35 np0005465987 nova_compute[230713]: 2025-10-02 12:33:35.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:35.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:35 np0005465987 nova_compute[230713]: 2025-10-02 12:33:35.494 2 DEBUG nova.compute.manager [req-a393fd4e-3894-42ab-919f-af2a412521e8 req-639882cd-5d4c-43ce-aa03-ebb5acf64030 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Received event network-vif-plugged-2131d2f6-11eb-40e4-98d9-fded3f4db535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:35 np0005465987 nova_compute[230713]: 2025-10-02 12:33:35.495 2 DEBUG oslo_concurrency.lockutils [req-a393fd4e-3894-42ab-919f-af2a412521e8 req-639882cd-5d4c-43ce-aa03-ebb5acf64030 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:35 np0005465987 nova_compute[230713]: 2025-10-02 12:33:35.496 2 DEBUG oslo_concurrency.lockutils [req-a393fd4e-3894-42ab-919f-af2a412521e8 req-639882cd-5d4c-43ce-aa03-ebb5acf64030 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:35 np0005465987 nova_compute[230713]: 2025-10-02 12:33:35.496 2 DEBUG oslo_concurrency.lockutils [req-a393fd4e-3894-42ab-919f-af2a412521e8 req-639882cd-5d4c-43ce-aa03-ebb5acf64030 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:35 np0005465987 nova_compute[230713]: 2025-10-02 12:33:35.497 2 DEBUG nova.compute.manager [req-a393fd4e-3894-42ab-919f-af2a412521e8 req-639882cd-5d4c-43ce-aa03-ebb5acf64030 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] No waiting events found dispatching network-vif-plugged-2131d2f6-11eb-40e4-98d9-fded3f4db535 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:35 np0005465987 nova_compute[230713]: 2025-10-02 12:33:35.497 2 WARNING nova.compute.manager [req-a393fd4e-3894-42ab-919f-af2a412521e8 req-639882cd-5d4c-43ce-aa03-ebb5acf64030 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Received unexpected event network-vif-plugged-2131d2f6-11eb-40e4-98d9-fded3f4db535 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:33:35 np0005465987 nova_compute[230713]: 2025-10-02 12:33:35.633 2 DEBUG nova.network.neutron [-] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:35 np0005465987 nova_compute[230713]: 2025-10-02 12:33:35.662 2 INFO nova.compute.manager [-] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Took 1.23 seconds to deallocate network for instance.#033[00m
Oct  2 08:33:35 np0005465987 nova_compute[230713]: 2025-10-02 12:33:35.775 2 DEBUG nova.compute.manager [req-8047a37d-d14e-4f5b-a515-ca5f764926e7 req-c2b371a4-9c0c-4712-aa15-8960a526d63f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Received event network-vif-deleted-2131d2f6-11eb-40e4-98d9-fded3f4db535 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:35 np0005465987 nova_compute[230713]: 2025-10-02 12:33:35.934 2 INFO nova.compute.manager [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Took 0.27 seconds to detach 1 volumes for instance.#033[00m
Oct  2 08:33:36 np0005465987 nova_compute[230713]: 2025-10-02 12:33:36.006 2 DEBUG oslo_concurrency.lockutils [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:36 np0005465987 nova_compute[230713]: 2025-10-02 12:33:36.007 2 DEBUG oslo_concurrency.lockutils [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:36 np0005465987 nova_compute[230713]: 2025-10-02 12:33:36.074 2 DEBUG oslo_concurrency.processutils [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:33:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2238847500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:33:36 np0005465987 nova_compute[230713]: 2025-10-02 12:33:36.593 2 DEBUG oslo_concurrency.processutils [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:36 np0005465987 nova_compute[230713]: 2025-10-02 12:33:36.603 2 DEBUG nova.compute.provider_tree [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:33:36 np0005465987 nova_compute[230713]: 2025-10-02 12:33:36.625 2 DEBUG nova.scheduler.client.report [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:33:36 np0005465987 nova_compute[230713]: 2025-10-02 12:33:36.660 2 DEBUG oslo_concurrency.lockutils [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:36 np0005465987 nova_compute[230713]: 2025-10-02 12:33:36.696 2 INFO nova.scheduler.client.report [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Deleted allocations for instance db5a79b4-af1c-4c62-a8d8-bfc6aab11a19#033[00m
Oct  2 08:33:36 np0005465987 nova_compute[230713]: 2025-10-02 12:33:36.805 2 DEBUG oslo_concurrency.lockutils [None req-a688e9eb-4f50-4c2a-ace2-1c0b2d48cc76 22d56fcd2a4b4851bfd126ae4548ee9b 5533aaac08cd4856af72ef4992bb5e76 - - default default] Lock "db5a79b4-af1c-4c62-a8d8-bfc6aab11a19" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:36.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:37 np0005465987 nova_compute[230713]: 2025-10-02 12:33:37.370 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408402.3698623, e744f3b9-ccc1-43b1-b134-e6050b9487eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:37 np0005465987 nova_compute[230713]: 2025-10-02 12:33:37.371 2 INFO nova.compute.manager [-] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:33:37 np0005465987 nova_compute[230713]: 2025-10-02 12:33:37.392 2 INFO nova.compute.manager [None req-426fa6c0-d8b7-49df-925b-f94ce30af6ce 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Unpausing#033[00m
Oct  2 08:33:37 np0005465987 nova_compute[230713]: 2025-10-02 12:33:37.393 2 DEBUG nova.objects.instance [None req-426fa6c0-d8b7-49df-925b-f94ce30af6ce 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'flavor' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:37 np0005465987 nova_compute[230713]: 2025-10-02 12:33:37.395 2 DEBUG nova.compute.manager [None req-9c69ec31-b080-4b2c-bacb-7c64d1132a6a - - - - - -] [instance: e744f3b9-ccc1-43b1-b134-e6050b9487eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:37 np0005465987 nova_compute[230713]: 2025-10-02 12:33:37.420 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408417.4201157, 6ec65b83-acd5-4790-8df0-0dfa903d2f84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:37 np0005465987 nova_compute[230713]: 2025-10-02 12:33:37.420 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:33:37 np0005465987 virtqemud[230442]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:33:37 np0005465987 nova_compute[230713]: 2025-10-02 12:33:37.424 2 DEBUG nova.virt.libvirt.guest [None req-426fa6c0-d8b7-49df-925b-f94ce30af6ce 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 08:33:37 np0005465987 nova_compute[230713]: 2025-10-02 12:33:37.425 2 DEBUG nova.compute.manager [None req-426fa6c0-d8b7-49df-925b-f94ce30af6ce 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:37 np0005465987 nova_compute[230713]: 2025-10-02 12:33:37.448 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:37 np0005465987 nova_compute[230713]: 2025-10-02 12:33:37.451 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:33:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:37.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:37 np0005465987 nova_compute[230713]: 2025-10-02 12:33:37.625 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Oct  2 08:33:38 np0005465987 nova_compute[230713]: 2025-10-02 12:33:38.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:39.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:39.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:40 np0005465987 nova_compute[230713]: 2025-10-02 12:33:40.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:41.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:41.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:41 np0005465987 nova_compute[230713]: 2025-10-02 12:33:41.668 2 INFO nova.compute.manager [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Rescuing#033[00m
Oct  2 08:33:41 np0005465987 nova_compute[230713]: 2025-10-02 12:33:41.669 2 DEBUG oslo_concurrency.lockutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:41 np0005465987 nova_compute[230713]: 2025-10-02 12:33:41.669 2 DEBUG oslo_concurrency.lockutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquired lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:41 np0005465987 nova_compute[230713]: 2025-10-02 12:33:41.669 2 DEBUG nova.network.neutron [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:33:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:43.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:43 np0005465987 nova_compute[230713]: 2025-10-02 12:33:43.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:43.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:43Z|00488|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct  2 08:33:43 np0005465987 nova_compute[230713]: 2025-10-02 12:33:43.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:43 np0005465987 nova_compute[230713]: 2025-10-02 12:33:43.768 2 DEBUG nova.network.neutron [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Updating instance_info_cache with network_info: [{"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:43Z|00489|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct  2 08:33:43 np0005465987 nova_compute[230713]: 2025-10-02 12:33:43.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:44 np0005465987 nova_compute[230713]: 2025-10-02 12:33:44.046 2 DEBUG oslo_concurrency.lockutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Releasing lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:44 np0005465987 nova_compute[230713]: 2025-10-02 12:33:44.459 2 DEBUG nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:33:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:33:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:33:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:33:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:33:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:33:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:33:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:44.879 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:44.880 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:33:44 np0005465987 nova_compute[230713]: 2025-10-02 12:33:44.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:45.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:45 np0005465987 nova_compute[230713]: 2025-10-02 12:33:45.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:45.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:33:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:33:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:33:46 np0005465987 kernel: tap738a4094-d8 (unregistering): left promiscuous mode
Oct  2 08:33:46 np0005465987 NetworkManager[44910]: <info>  [1759408426.7663] device (tap738a4094-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:33:46 np0005465987 nova_compute[230713]: 2025-10-02 12:33:46.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:46Z|00490|binding|INFO|Releasing lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 from this chassis (sb_readonly=0)
Oct  2 08:33:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:46Z|00491|binding|INFO|Setting lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 down in Southbound
Oct  2 08:33:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:46Z|00492|binding|INFO|Removing iface tap738a4094-d8 ovn-installed in OVS
Oct  2 08:33:46 np0005465987 nova_compute[230713]: 2025-10-02 12:33:46.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:46 np0005465987 nova_compute[230713]: 2025-10-02 12:33:46.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:46 np0005465987 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Oct  2 08:33:46 np0005465987 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000007d.scope: Consumed 15.801s CPU time.
Oct  2 08:33:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:46.838 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:e7:27 10.100.0.9'], port_security=['fa:16:3e:a7:e7:27 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6ec65b83-acd5-4790-8df0-0dfa903d2f84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3110ab08-53b5-412f-abb1-fdd400b42e71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=738a4094-d8e6-4b4c-9cfc-440659391fa0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:46.839 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 738a4094-d8e6-4b4c-9cfc-440659391fa0 in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 unbound from our chassis#033[00m
Oct  2 08:33:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:46.841 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e58f4ba2-c72c-42b8-acea-ca6241431726, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:33:46 np0005465987 systemd-machined[188335]: Machine qemu-56-instance-0000007d terminated.
Oct  2 08:33:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:46.842 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[58ef8e48-81a0-437c-a9de-a9f043580c3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:46.843 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 namespace which is not needed anymore#033[00m
Oct  2 08:33:47 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[277053]: [NOTICE]   (277057) : haproxy version is 2.8.14-c23fe91
Oct  2 08:33:47 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[277053]: [NOTICE]   (277057) : path to executable is /usr/sbin/haproxy
Oct  2 08:33:47 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[277053]: [ALERT]    (277057) : Current worker (277059) exited with code 143 (Terminated)
Oct  2 08:33:47 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[277053]: [WARNING]  (277057) : All workers exited. Exiting... (0)
Oct  2 08:33:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:47 np0005465987 systemd[1]: libpod-86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b.scope: Deactivated successfully.
Oct  2 08:33:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:47.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:47 np0005465987 podman[277660]: 2025-10-02 12:33:47.027768881 +0000 UTC m=+0.063833688 container died 86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:33:47 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b-userdata-shm.mount: Deactivated successfully.
Oct  2 08:33:47 np0005465987 systemd[1]: var-lib-containers-storage-overlay-734108aa76c56a00dbec9d34bf465c6f7a72bbd6d84f7bfa471892848e960d4b-merged.mount: Deactivated successfully.
Oct  2 08:33:47 np0005465987 podman[277660]: 2025-10-02 12:33:47.075608439 +0000 UTC m=+0.111673246 container cleanup 86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:33:47 np0005465987 systemd[1]: libpod-conmon-86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b.scope: Deactivated successfully.
Oct  2 08:33:47 np0005465987 podman[277685]: 2025-10-02 12:33:47.11015814 +0000 UTC m=+0.055824399 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 08:33:47 np0005465987 podman[277711]: 2025-10-02 12:33:47.176348382 +0000 UTC m=+0.081069753 container remove 86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 08:33:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:47.181 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf3159f-99c3-49c6-9ebc-83b49fb5fdb9]: (4, ('Thu Oct  2 12:33:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 (86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b)\n86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b\nThu Oct  2 12:33:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 (86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b)\n86c59aa9a29d4d6745d039361f1d04e9603fb1fa637fca82ada273db65848f1b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:47.183 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5db5c18e-cd7f-4340-8a38-715d222655bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:47.183 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:47 np0005465987 kernel: tape58f4ba2-c0: left promiscuous mode
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:47.213 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[731dd904-097e-4c98-a0cb-df129d5cdcac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:47.241 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ca860927-a44b-41e9-8f29-1c75cf08e7d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:47.242 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7a28b0e0-4e1a-4e3d-8649-7970aaace78b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:47.263 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[159721d4-fa13-4fdb-a923-dc47dd1b9e60]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644978, 'reachable_time': 29118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277737, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:47 np0005465987 systemd[1]: run-netns-ovnmeta\x2de58f4ba2\x2dc72c\x2d42b8\x2dacea\x2dca6241431726.mount: Deactivated successfully.
Oct  2 08:33:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:47.266 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:33:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:47.267 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[cbff949c-0659-4ecc-8b3b-6e10984ef3b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.475 2 INFO nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.479 2 INFO nova.virt.libvirt.driver [-] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Instance destroyed successfully.#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.479 2 DEBUG nova.objects.instance [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:47.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.579 2 INFO nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Attempting rescue#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.580 2 DEBUG nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.583 2 DEBUG nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.584 2 INFO nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Creating image(s)#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.614 2 DEBUG nova.storage.rbd_utils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.619 2 DEBUG nova.objects.instance [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.808 2 DEBUG nova.storage.rbd_utils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.844 2 DEBUG nova.storage.rbd_utils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.849 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.933 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.935 2 DEBUG oslo_concurrency.lockutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.936 2 DEBUG oslo_concurrency.lockutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.936 2 DEBUG oslo_concurrency.lockutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.970 2 DEBUG nova.storage.rbd_utils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:47 np0005465987 nova_compute[230713]: 2025-10-02 12:33:47.974 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.005 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408412.988338, db5a79b4-af1c-4c62-a8d8-bfc6aab11a19 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.006 2 INFO nova.compute.manager [-] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.172 2 DEBUG nova.compute.manager [None req-92fb98f7-996e-462e-8b99-f5d8fcfcbd9f - - - - - -] [instance: db5a79b4-af1c-4c62-a8d8-bfc6aab11a19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.513 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.513 2 DEBUG nova.objects.instance [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'migration_context' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.572 2 DEBUG nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.573 2 DEBUG nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Start _get_guest_xml network_info=[{"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "vif_mac": "fa:16:3e:a7:e7:27"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.574 2 DEBUG nova.objects.instance [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'resources' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.686 2 WARNING nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.691 2 DEBUG nova.virt.libvirt.host [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.692 2 DEBUG nova.virt.libvirt.host [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.694 2 DEBUG nova.virt.libvirt.host [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.694 2 DEBUG nova.virt.libvirt.host [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.695 2 DEBUG nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.695 2 DEBUG nova.virt.hardware [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.696 2 DEBUG nova.virt.hardware [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.696 2 DEBUG nova.virt.hardware [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.696 2 DEBUG nova.virt.hardware [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.696 2 DEBUG nova.virt.hardware [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.697 2 DEBUG nova.virt.hardware [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.697 2 DEBUG nova.virt.hardware [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.697 2 DEBUG nova.virt.hardware [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.697 2 DEBUG nova.virt.hardware [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.697 2 DEBUG nova.virt.hardware [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.698 2 DEBUG nova.virt.hardware [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.698 2 DEBUG nova.objects.instance [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:48 np0005465987 nova_compute[230713]: 2025-10-02 12:33:48.771 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:48.881 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:49.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:33:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2887617718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:33:49 np0005465987 nova_compute[230713]: 2025-10-02 12:33:49.211 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:49 np0005465987 nova_compute[230713]: 2025-10-02 12:33:49.212 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:33:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:49.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:33:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:33:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1028812945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:33:49 np0005465987 nova_compute[230713]: 2025-10-02 12:33:49.691 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:49 np0005465987 nova_compute[230713]: 2025-10-02 12:33:49.692 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:33:50 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/129542748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.132 2 DEBUG nova.compute.manager [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-unplugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.133 2 DEBUG oslo_concurrency.lockutils [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.133 2 DEBUG oslo_concurrency.lockutils [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.133 2 DEBUG oslo_concurrency.lockutils [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.134 2 DEBUG nova.compute.manager [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] No waiting events found dispatching network-vif-unplugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.134 2 WARNING nova.compute.manager [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received unexpected event network-vif-unplugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.134 2 DEBUG nova.compute.manager [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.134 2 DEBUG oslo_concurrency.lockutils [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.135 2 DEBUG oslo_concurrency.lockutils [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.135 2 DEBUG oslo_concurrency.lockutils [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.135 2 DEBUG nova.compute.manager [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] No waiting events found dispatching network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.135 2 WARNING nova.compute.manager [req-7e962f83-08d8-4b7d-922c-4531a81b0569 req-7b6b222f-c44b-4dfe-add6-db8f55e0fb14 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received unexpected event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.136 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.137 2 DEBUG nova.virt.libvirt.vif [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1934445766',display_name='tempest-ServerRescueNegativeTestJSON-server-1934445766',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1934445766',id=125,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cc4d8f857b2d42bf9ae477fc5f514216',ramdisk_id='',reservation_id='r-a8p10ujd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-959216005',owner_user_name='tempest-ServerRescueNegativeTestJSON-959216005-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:33:37Z,user_data=None,user_id='6c932f0d0e594f00855572fbe06ee3aa',uuid=6ec65b83-acd5-4790-8df0-0dfa903d2f84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "vif_mac": "fa:16:3e:a7:e7:27"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.138 2 DEBUG nova.network.os_vif_util [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converting VIF {"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "vif_mac": "fa:16:3e:a7:e7:27"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.139 2 DEBUG nova.network.os_vif_util [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:e7:27,bridge_name='br-int',has_traffic_filtering=True,id=738a4094-d8e6-4b4c-9cfc-440659391fa0,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a4094-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.140 2 DEBUG nova.objects.instance [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.258 2 DEBUG nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  <uuid>6ec65b83-acd5-4790-8df0-0dfa903d2f84</uuid>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  <name>instance-0000007d</name>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1934445766</nova:name>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:33:48</nova:creationTime>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <nova:user uuid="6c932f0d0e594f00855572fbe06ee3aa">tempest-ServerRescueNegativeTestJSON-959216005-project-member</nova:user>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <nova:project uuid="cc4d8f857b2d42bf9ae477fc5f514216">tempest-ServerRescueNegativeTestJSON-959216005</nova:project>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <nova:port uuid="738a4094-d8e6-4b4c-9cfc-440659391fa0">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <entry name="serial">6ec65b83-acd5-4790-8df0-0dfa903d2f84</entry>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <entry name="uuid">6ec65b83-acd5-4790-8df0-0dfa903d2f84</entry>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.rescue">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.config.rescue">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:a7:e7:27"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <target dev="tap738a4094-d8"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/console.log" append="off"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:33:50 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:33:50 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:33:50 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:33:50 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.265 2 INFO nova.virt.libvirt.driver [-] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Instance destroyed successfully.#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.424 2 DEBUG nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.425 2 DEBUG nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.425 2 DEBUG nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.425 2 DEBUG nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] No VIF found with MAC fa:16:3e:a7:e7:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.426 2 INFO nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Using config drive#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.448 2 DEBUG nova.storage.rbd_utils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.514 2 DEBUG nova.objects.instance [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:50 np0005465987 nova_compute[230713]: 2025-10-02 12:33:50.546 2 DEBUG nova.objects.instance [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'keypairs' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:51.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:51.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:51 np0005465987 nova_compute[230713]: 2025-10-02 12:33:51.759 2 INFO nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Creating config drive at /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/disk.config.rescue#033[00m
Oct  2 08:33:51 np0005465987 nova_compute[230713]: 2025-10-02 12:33:51.769 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0zma89s9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:51 np0005465987 nova_compute[230713]: 2025-10-02 12:33:51.905 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0zma89s9" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:51 np0005465987 nova_compute[230713]: 2025-10-02 12:33:51.937 2 DEBUG nova.storage.rbd_utils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] rbd image 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:33:51 np0005465987 nova_compute[230713]: 2025-10-02 12:33:51.940 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/disk.config.rescue 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:52 np0005465987 nova_compute[230713]: 2025-10-02 12:33:52.487 2 DEBUG oslo_concurrency.processutils [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/disk.config.rescue 6ec65b83-acd5-4790-8df0-0dfa903d2f84_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:52 np0005465987 nova_compute[230713]: 2025-10-02 12:33:52.488 2 INFO nova.virt.libvirt.driver [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Deleting local config drive /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84/disk.config.rescue because it was imported into RBD.#033[00m
Oct  2 08:33:52 np0005465987 kernel: tap738a4094-d8: entered promiscuous mode
Oct  2 08:33:52 np0005465987 NetworkManager[44910]: <info>  [1759408432.5663] manager: (tap738a4094-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Oct  2 08:33:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:52Z|00493|binding|INFO|Claiming lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 for this chassis.
Oct  2 08:33:52 np0005465987 nova_compute[230713]: 2025-10-02 12:33:52.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:52Z|00494|binding|INFO|738a4094-d8e6-4b4c-9cfc-440659391fa0: Claiming fa:16:3e:a7:e7:27 10.100.0.9
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.585 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:e7:27 10.100.0.9'], port_security=['fa:16:3e:a7:e7:27 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6ec65b83-acd5-4790-8df0-0dfa903d2f84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3110ab08-53b5-412f-abb1-fdd400b42e71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=738a4094-d8e6-4b4c-9cfc-440659391fa0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.587 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 738a4094-d8e6-4b4c-9cfc-440659391fa0 in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 bound to our chassis#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.589 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e58f4ba2-c72c-42b8-acea-ca6241431726#033[00m
Oct  2 08:33:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:52Z|00495|binding|INFO|Setting lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 ovn-installed in OVS
Oct  2 08:33:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:52Z|00496|binding|INFO|Setting lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 up in Southbound
Oct  2 08:33:52 np0005465987 nova_compute[230713]: 2025-10-02 12:33:52.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.610 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2c295e-4682-421c-8fcf-0495b86b3cbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.611 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape58f4ba2-c1 in ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.612 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape58f4ba2-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.613 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[34fbe26a-50d2-4514-95c5-48137c70b9c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.614 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f34d91c1-65d3-4998-86a0-05fa014d8bf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.629 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[37c27c8f-76ca-48f1-8a7c-c2183a10f008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.643 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3b466d-50af-4c07-9121-d8203e272bf3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 systemd-machined[188335]: New machine qemu-57-instance-0000007d.
Oct  2 08:33:52 np0005465987 systemd[1]: Started Virtual Machine qemu-57-instance-0000007d.
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.687 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[75be3857-dde9-42c9-a56b-3a0300b24ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 systemd-udevd[278026]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:33:52 np0005465987 podman[277969]: 2025-10-02 12:33:52.694198153 +0000 UTC m=+0.081106714 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:33:52 np0005465987 NetworkManager[44910]: <info>  [1759408432.6946] manager: (tape58f4ba2-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/231)
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.693 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0b436137-1f1a-4253-8fe9-549d3abdc88e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 systemd-udevd[278028]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:33:52 np0005465987 NetworkManager[44910]: <info>  [1759408432.7116] device (tap738a4094-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:33:52 np0005465987 NetworkManager[44910]: <info>  [1759408432.7124] device (tap738a4094-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:33:52 np0005465987 podman[277975]: 2025-10-02 12:33:52.723654974 +0000 UTC m=+0.105633070 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct  2 08:33:52 np0005465987 podman[277967]: 2025-10-02 12:33:52.733832014 +0000 UTC m=+0.129383044 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.739 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[20e4edf0-fa79-4860-ac7d-be26b2b1ef06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.745 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[3648355b-75af-4031-860e-ff8b1ef2ea58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 NetworkManager[44910]: <info>  [1759408432.7729] device (tape58f4ba2-c0): carrier: link connected
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.780 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[78494289-5c9e-4e56-bd62-fd7a138c2d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.799 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba64554-b8a3-42b6-99ba-1564ead1536e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650134, 'reachable_time': 33757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278061, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.815 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfdf92b-7a01-427f-b149-0bfe7ae90f45]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:ad0c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650134, 'tstamp': 650134}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278062, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.832 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1df6dabc-f84b-4191-b127-52235cb562f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650134, 'reachable_time': 33757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278063, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.863 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c508dbba-8311-4057-89b8-a3ad004d012e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.916 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4518cc4a-5e87-49a3-861b-0c8c5f9fce50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.918 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.918 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.918 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape58f4ba2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:52 np0005465987 kernel: tape58f4ba2-c0: entered promiscuous mode
Oct  2 08:33:52 np0005465987 nova_compute[230713]: 2025-10-02 12:33:52.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:52 np0005465987 NetworkManager[44910]: <info>  [1759408432.9213] manager: (tape58f4ba2-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Oct  2 08:33:52 np0005465987 nova_compute[230713]: 2025-10-02 12:33:52.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.923 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape58f4ba2-c0, col_values=(('external_ids', {'iface-id': '81a5a13b-b81c-444a-8751-b35a35cdf3dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:52 np0005465987 nova_compute[230713]: 2025-10-02 12:33:52.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:52Z|00497|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct  2 08:33:52 np0005465987 nova_compute[230713]: 2025-10-02 12:33:52.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.938 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.939 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7ca22b-b982-4908-8fae-ce23bab3d64c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.940 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-e58f4ba2-c72c-42b8-acea-ca6241431726
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID e58f4ba2-c72c-42b8-acea-ca6241431726
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  2 08:33:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:52.940 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'env', 'PROCESS_TAG=haproxy-e58f4ba2-c72c-42b8-acea-ca6241431726', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e58f4ba2-c72c-42b8-acea-ca6241431726.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  2 08:33:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:53.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:53 np0005465987 podman[278205]: 2025-10-02 12:33:53.325233568 +0000 UTC m=+0.045464582 container create 04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:33:53 np0005465987 systemd[1]: Started libpod-conmon-04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a.scope.
Oct  2 08:33:53 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:33:53 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f528fc36683fe1616d692cdd56b4c06b64385de015fdfe722d513dbc5d78e8cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:33:53 np0005465987 podman[278205]: 2025-10-02 12:33:53.30096692 +0000 UTC m=+0.021197954 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:33:53 np0005465987 podman[278205]: 2025-10-02 12:33:53.402838214 +0000 UTC m=+0.123069258 container init 04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 08:33:53 np0005465987 podman[278205]: 2025-10-02 12:33:53.409065497 +0000 UTC m=+0.129296511 container start 04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:33:53 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278220]: [NOTICE]   (278224) : New worker (278226) forked
Oct  2 08:33:53 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278220]: [NOTICE]   (278224) : Loading success.
Oct  2 08:33:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:53.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.527 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for 6ec65b83-acd5-4790-8df0-0dfa903d2f84 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.528 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408433.5272746, 6ec65b83-acd5-4790-8df0-0dfa903d2f84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.529 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] VM Resumed (Lifecycle Event)
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.532 2 DEBUG nova.compute.manager [None req-047d2234-4a6d-4c9d-9836-24467c6e2d54 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.581 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.584 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.624 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.624 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408433.5284827, 6ec65b83-acd5-4790-8df0-0dfa903d2f84 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.625 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] VM Started (Lifecycle Event)
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.649 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.652 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.872 2 DEBUG nova.compute.manager [req-9adff644-a5d8-458c-9469-7789b672bad2 req-06925b5b-e7a5-4c70-aeb1-92fd0cd49859 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.873 2 DEBUG oslo_concurrency.lockutils [req-9adff644-a5d8-458c-9469-7789b672bad2 req-06925b5b-e7a5-4c70-aeb1-92fd0cd49859 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.873 2 DEBUG oslo_concurrency.lockutils [req-9adff644-a5d8-458c-9469-7789b672bad2 req-06925b5b-e7a5-4c70-aeb1-92fd0cd49859 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.873 2 DEBUG oslo_concurrency.lockutils [req-9adff644-a5d8-458c-9469-7789b672bad2 req-06925b5b-e7a5-4c70-aeb1-92fd0cd49859 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.874 2 DEBUG nova.compute.manager [req-9adff644-a5d8-458c-9469-7789b672bad2 req-06925b5b-e7a5-4c70-aeb1-92fd0cd49859 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] No waiting events found dispatching network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:33:53 np0005465987 nova_compute[230713]: 2025-10-02 12:33:53.874 2 WARNING nova.compute.manager [req-9adff644-a5d8-458c-9469-7789b672bad2 req-06925b5b-e7a5-4c70-aeb1-92fd0cd49859 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received unexpected event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 for instance with vm_state rescued and task_state None.
Oct  2 08:33:53 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:33:53 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:33:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:55.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:55 np0005465987 nova_compute[230713]: 2025-10-02 12:33:55.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:33:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:55.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:33:55 np0005465987 nova_compute[230713]: 2025-10-02 12:33:55.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:33:55 np0005465987 nova_compute[230713]: 2025-10-02 12:33:55.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:33:55 np0005465987 nova_compute[230713]: 2025-10-02 12:33:55.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  2 08:33:56 np0005465987 nova_compute[230713]: 2025-10-02 12:33:56.127 2 DEBUG nova.compute.manager [req-cebe8885-70be-4230-adbe-bd13830a83b9 req-c64cb1f1-090a-4e57-b748-03ceceeb4776 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:33:56 np0005465987 nova_compute[230713]: 2025-10-02 12:33:56.128 2 DEBUG oslo_concurrency.lockutils [req-cebe8885-70be-4230-adbe-bd13830a83b9 req-c64cb1f1-090a-4e57-b748-03ceceeb4776 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:33:56 np0005465987 nova_compute[230713]: 2025-10-02 12:33:56.128 2 DEBUG oslo_concurrency.lockutils [req-cebe8885-70be-4230-adbe-bd13830a83b9 req-c64cb1f1-090a-4e57-b748-03ceceeb4776 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:33:56 np0005465987 nova_compute[230713]: 2025-10-02 12:33:56.129 2 DEBUG oslo_concurrency.lockutils [req-cebe8885-70be-4230-adbe-bd13830a83b9 req-c64cb1f1-090a-4e57-b748-03ceceeb4776 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:33:56 np0005465987 nova_compute[230713]: 2025-10-02 12:33:56.129 2 DEBUG nova.compute.manager [req-cebe8885-70be-4230-adbe-bd13830a83b9 req-c64cb1f1-090a-4e57-b748-03ceceeb4776 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] No waiting events found dispatching network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:33:56 np0005465987 nova_compute[230713]: 2025-10-02 12:33:56.129 2 WARNING nova.compute.manager [req-cebe8885-70be-4230-adbe-bd13830a83b9 req-c64cb1f1-090a-4e57-b748-03ceceeb4776 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received unexpected event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 for instance with vm_state rescued and task_state None.
Oct  2 08:33:56 np0005465987 nova_compute[230713]: 2025-10-02 12:33:56.980 2 INFO nova.compute.manager [None req-49b5098b-ca42-44d2-8519-bdd7f97b9206 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Unrescuing
Oct  2 08:33:56 np0005465987 nova_compute[230713]: 2025-10-02 12:33:56.980 2 DEBUG oslo_concurrency.lockutils [None req-49b5098b-ca42-44d2-8519-bdd7f97b9206 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:33:56 np0005465987 nova_compute[230713]: 2025-10-02 12:33:56.981 2 DEBUG oslo_concurrency.lockutils [None req-49b5098b-ca42-44d2-8519-bdd7f97b9206 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquired lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:33:56 np0005465987 nova_compute[230713]: 2025-10-02 12:33:56.981 2 DEBUG nova.network.neutron [None req-49b5098b-ca42-44d2-8519-bdd7f97b9206 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:33:56 np0005465987 nova_compute[230713]: 2025-10-02 12:33:56.992 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:33:56 np0005465987 nova_compute[230713]: 2025-10-02 12:33:56.992 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:33:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:57.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:57 np0005465987 nova_compute[230713]: 2025-10-02 12:33:57.156 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  2 08:33:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:57.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.853516) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408437853555, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1981, "num_deletes": 255, "total_data_size": 4330418, "memory_usage": 4415952, "flush_reason": "Manual Compaction"}
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408437868141, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 2820366, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50601, "largest_seqno": 52577, "table_properties": {"data_size": 2812169, "index_size": 4883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18403, "raw_average_key_size": 20, "raw_value_size": 2795326, "raw_average_value_size": 3180, "num_data_blocks": 211, "num_entries": 879, "num_filter_entries": 879, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408293, "oldest_key_time": 1759408293, "file_creation_time": 1759408437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 14657 microseconds, and 5844 cpu microseconds.
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.868175) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 2820366 bytes OK
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.868192) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.869628) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.869640) EVENT_LOG_v1 {"time_micros": 1759408437869636, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.869654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 4321408, prev total WAL file size 4321408, number of live WAL files 2.
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.870744) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(2754KB)], [99(9422KB)]
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408437870803, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 12469344, "oldest_snapshot_seqno": -1}
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 7677 keys, 10591478 bytes, temperature: kUnknown
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408437942787, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 10591478, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10540949, "index_size": 30257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 198414, "raw_average_key_size": 25, "raw_value_size": 10404756, "raw_average_value_size": 1355, "num_data_blocks": 1185, "num_entries": 7677, "num_filter_entries": 7677, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759408437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.943015) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10591478 bytes
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.944370) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.1 rd, 147.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.2 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 8208, records dropped: 531 output_compression: NoCompression
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.944390) EVENT_LOG_v1 {"time_micros": 1759408437944381, "job": 62, "event": "compaction_finished", "compaction_time_micros": 72045, "compaction_time_cpu_micros": 46795, "output_level": 6, "num_output_files": 1, "total_output_size": 10591478, "num_input_records": 8208, "num_output_records": 7677, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408437945046, "job": 62, "event": "table_file_deletion", "file_number": 101}
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408437946738, "job": 62, "event": "table_file_deletion", "file_number": 99}
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.870477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.946820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.946828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.946831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.946834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:33:57 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:33:57.946837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:33:58 np0005465987 nova_compute[230713]: 2025-10-02 12:33:58.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:59.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:33:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:59.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:59 np0005465987 nova_compute[230713]: 2025-10-02 12:33:59.548 2 DEBUG nova.network.neutron [None req-49b5098b-ca42-44d2-8519-bdd7f97b9206 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Updating instance_info_cache with network_info: [{"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:59 np0005465987 nova_compute[230713]: 2025-10-02 12:33:59.577 2 DEBUG oslo_concurrency.lockutils [None req-49b5098b-ca42-44d2-8519-bdd7f97b9206 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Releasing lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:59 np0005465987 nova_compute[230713]: 2025-10-02 12:33:59.579 2 DEBUG nova.objects.instance [None req-49b5098b-ca42-44d2-8519-bdd7f97b9206 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'flavor' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:59 np0005465987 kernel: tap738a4094-d8 (unregistering): left promiscuous mode
Oct  2 08:33:59 np0005465987 NetworkManager[44910]: <info>  [1759408439.9008] device (tap738a4094-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:33:59 np0005465987 nova_compute[230713]: 2025-10-02 12:33:59.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:59 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:59Z|00498|binding|INFO|Releasing lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 from this chassis (sb_readonly=0)
Oct  2 08:33:59 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:59Z|00499|binding|INFO|Setting lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 down in Southbound
Oct  2 08:33:59 np0005465987 ovn_controller[129172]: 2025-10-02T12:33:59Z|00500|binding|INFO|Removing iface tap738a4094-d8 ovn-installed in OVS
Oct  2 08:33:59 np0005465987 nova_compute[230713]: 2025-10-02 12:33:59.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:59.919 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:e7:27 10.100.0.9'], port_security=['fa:16:3e:a7:e7:27 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6ec65b83-acd5-4790-8df0-0dfa903d2f84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3110ab08-53b5-412f-abb1-fdd400b42e71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=738a4094-d8e6-4b4c-9cfc-440659391fa0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:59.921 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 738a4094-d8e6-4b4c-9cfc-440659391fa0 in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 unbound from our chassis#033[00m
Oct  2 08:33:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:59.923 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e58f4ba2-c72c-42b8-acea-ca6241431726, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:33:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:59.926 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[720420e9-bc6e-4216-b32c-11b1b8027def]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:33:59.927 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 namespace which is not needed anymore#033[00m
Oct  2 08:33:59 np0005465987 nova_compute[230713]: 2025-10-02 12:33:59.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:59 np0005465987 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Oct  2 08:33:59 np0005465987 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d0000007d.scope: Consumed 6.887s CPU time.
Oct  2 08:33:59 np0005465987 systemd-machined[188335]: Machine qemu-57-instance-0000007d terminated.
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.037 2 INFO nova.virt.libvirt.driver [-] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Instance destroyed successfully.#033[00m
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.037 2 DEBUG nova.objects.instance [None req-49b5098b-ca42-44d2-8519-bdd7f97b9206 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:00 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278220]: [NOTICE]   (278224) : haproxy version is 2.8.14-c23fe91
Oct  2 08:34:00 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278220]: [NOTICE]   (278224) : path to executable is /usr/sbin/haproxy
Oct  2 08:34:00 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278220]: [WARNING]  (278224) : Exiting Master process...
Oct  2 08:34:00 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278220]: [ALERT]    (278224) : Current worker (278226) exited with code 143 (Terminated)
Oct  2 08:34:00 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278220]: [WARNING]  (278224) : All workers exited. Exiting... (0)
Oct  2 08:34:00 np0005465987 systemd[1]: libpod-04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a.scope: Deactivated successfully.
Oct  2 08:34:00 np0005465987 podman[278259]: 2025-10-02 12:34:00.076555832 +0000 UTC m=+0.046791130 container died 04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:34:00 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a-userdata-shm.mount: Deactivated successfully.
Oct  2 08:34:00 np0005465987 systemd[1]: var-lib-containers-storage-overlay-f528fc36683fe1616d692cdd56b4c06b64385de015fdfe722d513dbc5d78e8cf-merged.mount: Deactivated successfully.
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:00 np0005465987 podman[278259]: 2025-10-02 12:34:00.107098533 +0000 UTC m=+0.077333801 container cleanup 04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:34:00 np0005465987 systemd[1]: libpod-conmon-04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a.scope: Deactivated successfully.
Oct  2 08:34:00 np0005465987 kernel: tap738a4094-d8: entered promiscuous mode
Oct  2 08:34:00 np0005465987 systemd-udevd[278239]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:34:00 np0005465987 NetworkManager[44910]: <info>  [1759408440.1234] manager: (tap738a4094-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Oct  2 08:34:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:00Z|00501|binding|INFO|Claiming lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 for this chassis.
Oct  2 08:34:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:00Z|00502|binding|INFO|738a4094-d8e6-4b4c-9cfc-440659391fa0: Claiming fa:16:3e:a7:e7:27 10.100.0.9
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:00 np0005465987 NetworkManager[44910]: <info>  [1759408440.1339] device (tap738a4094-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:34:00 np0005465987 NetworkManager[44910]: <info>  [1759408440.1351] device (tap738a4094-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.137 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:e7:27 10.100.0.9'], port_security=['fa:16:3e:a7:e7:27 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6ec65b83-acd5-4790-8df0-0dfa903d2f84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3110ab08-53b5-412f-abb1-fdd400b42e71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=738a4094-d8e6-4b4c-9cfc-440659391fa0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:00Z|00503|binding|INFO|Setting lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 ovn-installed in OVS
Oct  2 08:34:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:00Z|00504|binding|INFO|Setting lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 up in Southbound
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:00 np0005465987 systemd-machined[188335]: New machine qemu-58-instance-0000007d.
Oct  2 08:34:00 np0005465987 systemd[1]: Started Virtual Machine qemu-58-instance-0000007d.
Oct  2 08:34:00 np0005465987 podman[278303]: 2025-10-02 12:34:00.178084857 +0000 UTC m=+0.049293769 container remove 04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.183 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[58a45efd-3cb4-48a7-a7c0-d5174532a18f]: (4, ('Thu Oct  2 12:34:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 (04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a)\n04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a\nThu Oct  2 12:34:00 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 (04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a)\n04d261909e3caae711183dd34d61908de1e2e57053b147c3cc50fa5173c1116a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.185 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[61160c85-4513-4b4f-871c-7636cef6849f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.186 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:00 np0005465987 kernel: tape58f4ba2-c0: left promiscuous mode
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.205 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cb90a7dc-658c-40b4-a56b-288999f52400]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.227 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3b652572-c8d7-43b5-92f5-9662fe40b514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.228 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4e012ce6-e8e6-4b6f-be86-3405eb316225]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.242 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a30550fb-f506-4ff7-b25c-a1acb739d329]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650125, 'reachable_time': 15709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278328, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.244 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.244 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[32c842b6-75a5-4c48-a53c-452b5f6967db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.245 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 738a4094-d8e6-4b4c-9cfc-440659391fa0 in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 unbound from our chassis#033[00m
Oct  2 08:34:00 np0005465987 systemd[1]: run-netns-ovnmeta\x2de58f4ba2\x2dc72c\x2d42b8\x2dacea\x2dca6241431726.mount: Deactivated successfully.
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.246 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e58f4ba2-c72c-42b8-acea-ca6241431726#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.256 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a77aff2d-93fb-463f-a441-3016e76c6088]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.256 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape58f4ba2-c1 in ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.258 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape58f4ba2-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.258 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[336ee675-46d7-4f77-80e5-01299a7cd874]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.259 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7d6c143f-dc08-4956-a81b-a7e631f4592f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.270 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd5c71f-a111-41f3-abf6-c92d59d1ba82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.292 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[39826d5c-9698-4b9d-a280-4cd0e1fe2833]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.329 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d355ac-09cd-47c3-9ce0-a84d671fa286]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.334 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[985158f6-83e6-41c2-8f57-d93333537e7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 NetworkManager[44910]: <info>  [1759408440.3381] manager: (tape58f4ba2-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/234)
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.367 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8a0461-89cd-4cd8-9fca-ab353722ab61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.370 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbe002e-391d-4870-a444-7bbc45871dc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 NetworkManager[44910]: <info>  [1759408440.3918] device (tape58f4ba2-c0): carrier: link connected
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.398 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[8871a370-54c2-4747-b6e9-f8942c1d6d52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.416 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a40370-a4e7-471c-9479-7cc78bffcec8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650896, 'reachable_time': 33547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278353, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.434 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cd28e4b9-227c-4569-a294-5d8625087212]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:ad0c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650896, 'tstamp': 650896}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278354, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.450 2 DEBUG nova.compute.manager [req-ecab09bc-9a84-4485-9fac-9c85d8ac9a1f req-5304c3ef-021e-4460-82a5-26ad51c0d7c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-unplugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.450 2 DEBUG oslo_concurrency.lockutils [req-ecab09bc-9a84-4485-9fac-9c85d8ac9a1f req-5304c3ef-021e-4460-82a5-26ad51c0d7c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.451 2 DEBUG oslo_concurrency.lockutils [req-ecab09bc-9a84-4485-9fac-9c85d8ac9a1f req-5304c3ef-021e-4460-82a5-26ad51c0d7c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.451 2 DEBUG oslo_concurrency.lockutils [req-ecab09bc-9a84-4485-9fac-9c85d8ac9a1f req-5304c3ef-021e-4460-82a5-26ad51c0d7c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.451 2 DEBUG nova.compute.manager [req-ecab09bc-9a84-4485-9fac-9c85d8ac9a1f req-5304c3ef-021e-4460-82a5-26ad51c0d7c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] No waiting events found dispatching network-vif-unplugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.452 2 WARNING nova.compute.manager [req-ecab09bc-9a84-4485-9fac-9c85d8ac9a1f req-5304c3ef-021e-4460-82a5-26ad51c0d7c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received unexpected event network-vif-unplugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.459 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[23616789-ee78-4e27-9d1c-e0a12e27cf8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape58f4ba2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650896, 'reachable_time': 33547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278355, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.489 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4839ac36-7b2f-4039-a7f2-d7d74a5e89ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.558 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b4da5cbc-5c46-4105-8f03-476e2ddebec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.560 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.560 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.561 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape58f4ba2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:00 np0005465987 kernel: tape58f4ba2-c0: entered promiscuous mode
Oct  2 08:34:00 np0005465987 NetworkManager[44910]: <info>  [1759408440.5643] manager: (tape58f4ba2-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.571 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape58f4ba2-c0, col_values=(('external_ids', {'iface-id': '81a5a13b-b81c-444a-8751-b35a35cdf3dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:00Z|00505|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.573 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.574 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8467cc2c-4c80-475a-91a1-0b69888954fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.575 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-e58f4ba2-c72c-42b8-acea-ca6241431726
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/e58f4ba2-c72c-42b8-acea-ca6241431726.pid.haproxy
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID e58f4ba2-c72c-42b8-acea-ca6241431726
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:34:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:00.575 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'env', 'PROCESS_TAG=haproxy-e58f4ba2-c72c-42b8-acea-ca6241431726', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e58f4ba2-c72c-42b8-acea-ca6241431726.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:34:00 np0005465987 nova_compute[230713]: 2025-10-02 12:34:00.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:00 np0005465987 podman[278429]: 2025-10-02 12:34:00.899964154 +0000 UTC m=+0.044684442 container create bc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:34:00 np0005465987 systemd[1]: Started libpod-conmon-bc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365.scope.
Oct  2 08:34:00 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:34:00 np0005465987 podman[278429]: 2025-10-02 12:34:00.875101439 +0000 UTC m=+0.019821747 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:34:00 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d43223992921cd704841b4fcb74d44e85107e6df5b6e5d8ed0a513a5eb2e96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:34:00 np0005465987 podman[278429]: 2025-10-02 12:34:00.985896009 +0000 UTC m=+0.130616327 container init bc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:34:00 np0005465987 podman[278429]: 2025-10-02 12:34:00.991130654 +0000 UTC m=+0.135850942 container start bc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:34:01 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278444]: [NOTICE]   (278448) : New worker (278450) forked
Oct  2 08:34:01 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278444]: [NOTICE]   (278448) : Loading success.
Oct  2 08:34:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:01.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:01 np0005465987 nova_compute[230713]: 2025-10-02 12:34:01.089 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for 6ec65b83-acd5-4790-8df0-0dfa903d2f84 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:34:01 np0005465987 nova_compute[230713]: 2025-10-02 12:34:01.089 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408441.0888724, 6ec65b83-acd5-4790-8df0-0dfa903d2f84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:01 np0005465987 nova_compute[230713]: 2025-10-02 12:34:01.089 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:34:01 np0005465987 nova_compute[230713]: 2025-10-02 12:34:01.116 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:01 np0005465987 nova_compute[230713]: 2025-10-02 12:34:01.121 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:34:01 np0005465987 nova_compute[230713]: 2025-10-02 12:34:01.153 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:34:01 np0005465987 nova_compute[230713]: 2025-10-02 12:34:01.154 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408441.0911677, 6ec65b83-acd5-4790-8df0-0dfa903d2f84 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:01 np0005465987 nova_compute[230713]: 2025-10-02 12:34:01.154 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] VM Started (Lifecycle Event)#033[00m
Oct  2 08:34:01 np0005465987 nova_compute[230713]: 2025-10-02 12:34:01.175 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:01 np0005465987 nova_compute[230713]: 2025-10-02 12:34:01.178 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:34:01 np0005465987 nova_compute[230713]: 2025-10-02 12:34:01.203 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:34:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:01.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.122 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.645 2 DEBUG nova.compute.manager [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.646 2 DEBUG oslo_concurrency.lockutils [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.646 2 DEBUG oslo_concurrency.lockutils [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.646 2 DEBUG oslo_concurrency.lockutils [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.647 2 DEBUG nova.compute.manager [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] No waiting events found dispatching network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.647 2 WARNING nova.compute.manager [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received unexpected event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.647 2 DEBUG nova.compute.manager [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.648 2 DEBUG oslo_concurrency.lockutils [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.648 2 DEBUG oslo_concurrency.lockutils [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.648 2 DEBUG oslo_concurrency.lockutils [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.649 2 DEBUG nova.compute.manager [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] No waiting events found dispatching network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.649 2 WARNING nova.compute.manager [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received unexpected event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.649 2 DEBUG nova.compute.manager [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.650 2 DEBUG oslo_concurrency.lockutils [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.650 2 DEBUG oslo_concurrency.lockutils [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.651 2 DEBUG oslo_concurrency.lockutils [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.651 2 DEBUG nova.compute.manager [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] No waiting events found dispatching network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:02 np0005465987 nova_compute[230713]: 2025-10-02 12:34:02.651 2 WARNING nova.compute.manager [req-78cd5ce1-e938-4c6c-88b9-ded554512c49 req-fbc02b68-0350-44f1-b08c-3a32408d32a2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received unexpected event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:34:03 np0005465987 nova_compute[230713]: 2025-10-02 12:34:03.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:03.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:03.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:04 np0005465987 ceph-mgr[81180]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3158772141
Oct  2 08:34:04 np0005465987 nova_compute[230713]: 2025-10-02 12:34:04.967 2 DEBUG nova.compute.manager [None req-49b5098b-ca42-44d2-8519-bdd7f97b9206 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:05.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:05 np0005465987 nova_compute[230713]: 2025-10-02 12:34:05.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:34:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:05.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:34:05 np0005465987 nova_compute[230713]: 2025-10-02 12:34:05.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:05 np0005465987 nova_compute[230713]: 2025-10-02 12:34:05.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:06 np0005465987 nova_compute[230713]: 2025-10-02 12:34:06.023 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:06 np0005465987 nova_compute[230713]: 2025-10-02 12:34:06.024 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:06 np0005465987 nova_compute[230713]: 2025-10-02 12:34:06.024 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:06 np0005465987 nova_compute[230713]: 2025-10-02 12:34:06.025 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:34:06 np0005465987 nova_compute[230713]: 2025-10-02 12:34:06.025 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4114615551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:06 np0005465987 nova_compute[230713]: 2025-10-02 12:34:06.492 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:06 np0005465987 nova_compute[230713]: 2025-10-02 12:34:06.626 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:34:06 np0005465987 nova_compute[230713]: 2025-10-02 12:34:06.627 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:34:06 np0005465987 nova_compute[230713]: 2025-10-02 12:34:06.790 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:34:06 np0005465987 nova_compute[230713]: 2025-10-02 12:34:06.791 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4284MB free_disk=20.768699645996094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:34:06 np0005465987 nova_compute[230713]: 2025-10-02 12:34:06.791 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:06 np0005465987 nova_compute[230713]: 2025-10-02 12:34:06.791 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:07.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:07 np0005465987 nova_compute[230713]: 2025-10-02 12:34:07.114 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 6ec65b83-acd5-4790-8df0-0dfa903d2f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:34:07 np0005465987 nova_compute[230713]: 2025-10-02 12:34:07.115 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:34:07 np0005465987 nova_compute[230713]: 2025-10-02 12:34:07.115 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:34:07 np0005465987 nova_compute[230713]: 2025-10-02 12:34:07.232 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:07.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4001065409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:07 np0005465987 nova_compute[230713]: 2025-10-02 12:34:07.728 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:07 np0005465987 nova_compute[230713]: 2025-10-02 12:34:07.734 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:34:07 np0005465987 nova_compute[230713]: 2025-10-02 12:34:07.751 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:34:07 np0005465987 nova_compute[230713]: 2025-10-02 12:34:07.774 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:34:07 np0005465987 nova_compute[230713]: 2025-10-02 12:34:07.774 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:08 np0005465987 nova_compute[230713]: 2025-10-02 12:34:08.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:09.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:09.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:09 np0005465987 nova_compute[230713]: 2025-10-02 12:34:09.775 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:09 np0005465987 nova_compute[230713]: 2025-10-02 12:34:09.775 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:09 np0005465987 nova_compute[230713]: 2025-10-02 12:34:09.775 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:10 np0005465987 nova_compute[230713]: 2025-10-02 12:34:10.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.052 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.052 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:11.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.078 2 DEBUG nova.compute.manager [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.167 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.168 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.175 2 DEBUG nova.virt.hardware [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.176 2 INFO nova.compute.claims [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.303 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:11.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:11 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1876702206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.824 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.834 2 DEBUG nova.compute.provider_tree [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.903 2 DEBUG nova.scheduler.client.report [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.967 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:11 np0005465987 nova_compute[230713]: 2025-10-02 12:34:11.968 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.035 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.036 2 DEBUG nova.compute.manager [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.049 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.127 2 DEBUG nova.compute.manager [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.129 2 DEBUG nova.network.neutron [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.178 2 INFO nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.225 2 DEBUG nova.compute.manager [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.347 2 DEBUG nova.policy [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fe9cc788734f406d826446a848700331', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bc0d63d3b4404ef8858166e8836dd0af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.498 2 DEBUG nova.compute.manager [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.500 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.501 2 INFO nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Creating image(s)#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.537 2 DEBUG nova.storage.rbd_utils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.575 2 DEBUG nova.storage.rbd_utils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.610 2 DEBUG nova.storage.rbd_utils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.615 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.685 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.687 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.688 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.689 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.732 2 DEBUG nova.storage.rbd_utils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:12 np0005465987 nova_compute[230713]: 2025-10-02 12:34:12.736 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:13 np0005465987 nova_compute[230713]: 2025-10-02 12:34:13.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:13.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:13.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:14 np0005465987 nova_compute[230713]: 2025-10-02 12:34:14.150 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:14 np0005465987 nova_compute[230713]: 2025-10-02 12:34:14.222 2 DEBUG nova.storage.rbd_utils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] resizing rbd image 58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:34:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:14 np0005465987 nova_compute[230713]: 2025-10-02 12:34:14.738 2 DEBUG nova.network.neutron [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Successfully created port: 41486814-fe68-4ee4-acaa-661e4bca279a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:34:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:14Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:e7:27 10.100.0.9
Oct  2 08:34:14 np0005465987 nova_compute[230713]: 2025-10-02 12:34:14.864 2 DEBUG nova.objects.instance [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'migration_context' on Instance uuid 58c789e4-1ba5-4001-913d-9aa19bd9c59f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:14 np0005465987 nova_compute[230713]: 2025-10-02 12:34:14.902 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:34:14 np0005465987 nova_compute[230713]: 2025-10-02 12:34:14.903 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Ensure instance console log exists: /var/lib/nova/instances/58c789e4-1ba5-4001-913d-9aa19bd9c59f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:34:14 np0005465987 nova_compute[230713]: 2025-10-02 12:34:14.903 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:14 np0005465987 nova_compute[230713]: 2025-10-02 12:34:14.904 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:14 np0005465987 nova_compute[230713]: 2025-10-02 12:34:14.904 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:34:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:15.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:34:15 np0005465987 nova_compute[230713]: 2025-10-02 12:34:15.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:15.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:16 np0005465987 nova_compute[230713]: 2025-10-02 12:34:16.723 2 DEBUG nova.network.neutron [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Successfully updated port: 41486814-fe68-4ee4-acaa-661e4bca279a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:34:16 np0005465987 nova_compute[230713]: 2025-10-02 12:34:16.800 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:34:16 np0005465987 nova_compute[230713]: 2025-10-02 12:34:16.801 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquired lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:34:16 np0005465987 nova_compute[230713]: 2025-10-02 12:34:16.801 2 DEBUG nova.network.neutron [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:34:16 np0005465987 nova_compute[230713]: 2025-10-02 12:34:16.885 2 DEBUG nova.compute.manager [req-11f6e9b7-c0b0-4137-b923-2581cbef60c7 req-bdd63562-ef2d-459c-a743-0b13890a0431 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Received event network-changed-41486814-fe68-4ee4-acaa-661e4bca279a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:16 np0005465987 nova_compute[230713]: 2025-10-02 12:34:16.885 2 DEBUG nova.compute.manager [req-11f6e9b7-c0b0-4137-b923-2581cbef60c7 req-bdd63562-ef2d-459c-a743-0b13890a0431 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Refreshing instance network info cache due to event network-changed-41486814-fe68-4ee4-acaa-661e4bca279a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:34:16 np0005465987 nova_compute[230713]: 2025-10-02 12:34:16.885 2 DEBUG oslo_concurrency.lockutils [req-11f6e9b7-c0b0-4137-b923-2581cbef60c7 req-bdd63562-ef2d-459c-a743-0b13890a0431 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:34:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:17.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:17 np0005465987 nova_compute[230713]: 2025-10-02 12:34:17.208 2 DEBUG nova.network.neutron [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:34:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:17.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:17 np0005465987 podman[278714]: 2025-10-02 12:34:17.85645598 +0000 UTC m=+0.059758947 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.577 2 DEBUG nova.network.neutron [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Updating instance_info_cache with network_info: [{"id": "41486814-fe68-4ee4-acaa-661e4bca279a", "address": "fa:16:3e:44:0d:be", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41486814-fe", "ovs_interfaceid": "41486814-fe68-4ee4-acaa-661e4bca279a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.709 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Releasing lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.709 2 DEBUG nova.compute.manager [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Instance network_info: |[{"id": "41486814-fe68-4ee4-acaa-661e4bca279a", "address": "fa:16:3e:44:0d:be", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41486814-fe", "ovs_interfaceid": "41486814-fe68-4ee4-acaa-661e4bca279a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.710 2 DEBUG oslo_concurrency.lockutils [req-11f6e9b7-c0b0-4137-b923-2581cbef60c7 req-bdd63562-ef2d-459c-a743-0b13890a0431 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.710 2 DEBUG nova.network.neutron [req-11f6e9b7-c0b0-4137-b923-2581cbef60c7 req-bdd63562-ef2d-459c-a743-0b13890a0431 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Refreshing network info cache for port 41486814-fe68-4ee4-acaa-661e4bca279a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.713 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Start _get_guest_xml network_info=[{"id": "41486814-fe68-4ee4-acaa-661e4bca279a", "address": "fa:16:3e:44:0d:be", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41486814-fe", "ovs_interfaceid": "41486814-fe68-4ee4-acaa-661e4bca279a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.717 2 WARNING nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.731 2 DEBUG nova.virt.libvirt.host [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.732 2 DEBUG nova.virt.libvirt.host [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.745 2 DEBUG nova.virt.libvirt.host [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.746 2 DEBUG nova.virt.libvirt.host [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.747 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.747 2 DEBUG nova.virt.hardware [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.747 2 DEBUG nova.virt.hardware [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.747 2 DEBUG nova.virt.hardware [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.748 2 DEBUG nova.virt.hardware [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.748 2 DEBUG nova.virt.hardware [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.748 2 DEBUG nova.virt.hardware [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.748 2 DEBUG nova.virt.hardware [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.749 2 DEBUG nova.virt.hardware [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.749 2 DEBUG nova.virt.hardware [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.749 2 DEBUG nova.virt.hardware [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.749 2 DEBUG nova.virt.hardware [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:34:18 np0005465987 nova_compute[230713]: 2025-10-02 12:34:18.752 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:34:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:19.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:34:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1609700800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.217 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.245 2 DEBUG nova.storage.rbd_utils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.248 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:34:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:19.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:34:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2995388052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.648 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.650 2 DEBUG nova.virt.libvirt.vif [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1772536260',display_name='tempest-₡-1772536260',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest--1772536260',id=131,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bc0d63d3b4404ef8858166e8836dd0af',ramdisk_id='',reservation_id='r-j9whf17i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-80077074',owner_user_name='tempest-ServersTestJSON-80077074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:12Z,user_data=None,user_id='fe9cc788734f406d826446a848700331',uuid=58c789e4-1ba5-4001-913d-9aa19bd9c59f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41486814-fe68-4ee4-acaa-661e4bca279a", "address": "fa:16:3e:44:0d:be", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41486814-fe", "ovs_interfaceid": "41486814-fe68-4ee4-acaa-661e4bca279a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.650 2 DEBUG nova.network.os_vif_util [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converting VIF {"id": "41486814-fe68-4ee4-acaa-661e4bca279a", "address": "fa:16:3e:44:0d:be", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41486814-fe", "ovs_interfaceid": "41486814-fe68-4ee4-acaa-661e4bca279a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.651 2 DEBUG nova.network.os_vif_util [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:0d:be,bridge_name='br-int',has_traffic_filtering=True,id=41486814-fe68-4ee4-acaa-661e4bca279a,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41486814-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.652 2 DEBUG nova.objects.instance [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'pci_devices' on Instance uuid 58c789e4-1ba5-4001-913d-9aa19bd9c59f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.696 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  <uuid>58c789e4-1ba5-4001-913d-9aa19bd9c59f</uuid>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  <name>instance-00000083</name>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <nova:name>tempest-₡-1772536260</nova:name>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:34:18</nova:creationTime>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <nova:user uuid="fe9cc788734f406d826446a848700331">tempest-ServersTestJSON-80077074-project-member</nova:user>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <nova:project uuid="bc0d63d3b4404ef8858166e8836dd0af">tempest-ServersTestJSON-80077074</nova:project>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <nova:port uuid="41486814-fe68-4ee4-acaa-661e4bca279a">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <entry name="serial">58c789e4-1ba5-4001-913d-9aa19bd9c59f</entry>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <entry name="uuid">58c789e4-1ba5-4001-913d-9aa19bd9c59f</entry>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk.config">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:44:0d:be"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <target dev="tap41486814-fe"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/58c789e4-1ba5-4001-913d-9aa19bd9c59f/console.log" append="off"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:34:19 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:34:19 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:34:19 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:34:19 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.699 2 DEBUG nova.compute.manager [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Preparing to wait for external event network-vif-plugged-41486814-fe68-4ee4-acaa-661e4bca279a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.699 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.700 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.700 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.702 2 DEBUG nova.virt.libvirt.vif [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1772536260',display_name='tempest-₡-1772536260',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest--1772536260',id=131,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bc0d63d3b4404ef8858166e8836dd0af',ramdisk_id='',reservation_id='r-j9whf17i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-80077074',owner_user_name='tempest-ServersTestJSON-80077074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:12Z,user_data=None,user_id='fe9cc788734f406d826446a848700331',uuid=58c789e4-1ba5-4001-913d-9aa19bd9c59f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41486814-fe68-4ee4-acaa-661e4bca279a", "address": "fa:16:3e:44:0d:be", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41486814-fe", "ovs_interfaceid": "41486814-fe68-4ee4-acaa-661e4bca279a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.702 2 DEBUG nova.network.os_vif_util [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converting VIF {"id": "41486814-fe68-4ee4-acaa-661e4bca279a", "address": "fa:16:3e:44:0d:be", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41486814-fe", "ovs_interfaceid": "41486814-fe68-4ee4-acaa-661e4bca279a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.703 2 DEBUG nova.network.os_vif_util [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:0d:be,bridge_name='br-int',has_traffic_filtering=True,id=41486814-fe68-4ee4-acaa-661e4bca279a,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41486814-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.704 2 DEBUG os_vif [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:0d:be,bridge_name='br-int',has_traffic_filtering=True,id=41486814-fe68-4ee4-acaa-661e4bca279a,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41486814-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.707 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.708 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41486814-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.715 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41486814-fe, col_values=(('external_ids', {'iface-id': '41486814-fe68-4ee4-acaa-661e4bca279a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:0d:be', 'vm-uuid': '58c789e4-1ba5-4001-913d-9aa19bd9c59f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:19 np0005465987 NetworkManager[44910]: <info>  [1759408459.7176] manager: (tap41486814-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.725 2 INFO os_vif [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:0d:be,bridge_name='br-int',has_traffic_filtering=True,id=41486814-fe68-4ee4-acaa-661e4bca279a,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41486814-fe')#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.820 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.820 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.821 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] No VIF found with MAC fa:16:3e:44:0d:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.821 2 INFO nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Using config drive#033[00m
Oct  2 08:34:19 np0005465987 nova_compute[230713]: 2025-10-02 12:34:19.849 2 DEBUG nova.storage.rbd_utils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:20 np0005465987 nova_compute[230713]: 2025-10-02 12:34:20.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:20 np0005465987 nova_compute[230713]: 2025-10-02 12:34:20.897 2 INFO nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Creating config drive at /var/lib/nova/instances/58c789e4-1ba5-4001-913d-9aa19bd9c59f/disk.config#033[00m
Oct  2 08:34:20 np0005465987 nova_compute[230713]: 2025-10-02 12:34:20.902 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58c789e4-1ba5-4001-913d-9aa19bd9c59f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7caphfzl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:20 np0005465987 nova_compute[230713]: 2025-10-02 12:34:20.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:20 np0005465987 nova_compute[230713]: 2025-10-02 12:34:20.991 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.039 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58c789e4-1ba5-4001-913d-9aa19bd9c59f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7caphfzl" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.068 2 DEBUG nova.storage.rbd_utils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:21.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.072 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58c789e4-1ba5-4001-913d-9aa19bd9c59f/disk.config 58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.103 2 DEBUG nova.network.neutron [req-11f6e9b7-c0b0-4137-b923-2581cbef60c7 req-bdd63562-ef2d-459c-a743-0b13890a0431 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Updated VIF entry in instance network info cache for port 41486814-fe68-4ee4-acaa-661e4bca279a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.103 2 DEBUG nova.network.neutron [req-11f6e9b7-c0b0-4137-b923-2581cbef60c7 req-bdd63562-ef2d-459c-a743-0b13890a0431 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Updating instance_info_cache with network_info: [{"id": "41486814-fe68-4ee4-acaa-661e4bca279a", "address": "fa:16:3e:44:0d:be", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41486814-fe", "ovs_interfaceid": "41486814-fe68-4ee4-acaa-661e4bca279a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.118 2 DEBUG oslo_concurrency.lockutils [req-11f6e9b7-c0b0-4137-b923-2581cbef60c7 req-bdd63562-ef2d-459c-a743-0b13890a0431 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.362 2 DEBUG oslo_concurrency.processutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58c789e4-1ba5-4001-913d-9aa19bd9c59f/disk.config 58c789e4-1ba5-4001-913d-9aa19bd9c59f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.363 2 INFO nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Deleting local config drive /var/lib/nova/instances/58c789e4-1ba5-4001-913d-9aa19bd9c59f/disk.config because it was imported into RBD.#033[00m
Oct  2 08:34:21 np0005465987 kernel: tap41486814-fe: entered promiscuous mode
Oct  2 08:34:21 np0005465987 NetworkManager[44910]: <info>  [1759408461.4072] manager: (tap41486814-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:21Z|00506|binding|INFO|Claiming lport 41486814-fe68-4ee4-acaa-661e4bca279a for this chassis.
Oct  2 08:34:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:21Z|00507|binding|INFO|41486814-fe68-4ee4-acaa-661e4bca279a: Claiming fa:16:3e:44:0d:be 10.100.0.7
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.421 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:0d:be 10.100.0.7'], port_security=['fa:16:3e:44:0d:be 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '58c789e4-1ba5-4001-913d-9aa19bd9c59f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc0d63d3b4404ef8858166e8836dd0af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2a11ff87-bec6-4638-b302-adcd655efba9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=797b6af2-473b-4626-9e97-a0a489119419, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=41486814-fe68-4ee4-acaa-661e4bca279a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.423 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 41486814-fe68-4ee4-acaa-661e4bca279a in datapath d7203b00-e5e4-402e-b777-ac6280fa23ac bound to our chassis#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.425 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7203b00-e5e4-402e-b777-ac6280fa23ac#033[00m
Oct  2 08:34:21 np0005465987 systemd-udevd[278869]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.439 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[abbcae83-35af-48e6-895a-7f6224d21a9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.440 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7203b00-e1 in ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:34:21 np0005465987 systemd-machined[188335]: New machine qemu-59-instance-00000083.
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.444 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7203b00-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.444 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0231e78d-a9af-43dd-a4b1-d4d589f6b769]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.445 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dacd2496-3164-44fb-ad79-327f08937604]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 NetworkManager[44910]: <info>  [1759408461.4541] device (tap41486814-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:34:21 np0005465987 NetworkManager[44910]: <info>  [1759408461.4548] device (tap41486814-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.460 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[101db64a-55de-4c45-9ec3-b8e8420e7179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 systemd[1]: Started Virtual Machine qemu-59-instance-00000083.
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.485 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0f1f54-dda9-441c-96e2-43e3dd27eba9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:21Z|00508|binding|INFO|Setting lport 41486814-fe68-4ee4-acaa-661e4bca279a ovn-installed in OVS
Oct  2 08:34:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:21Z|00509|binding|INFO|Setting lport 41486814-fe68-4ee4-acaa-661e4bca279a up in Southbound
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.514 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd1cfc6-1494-4001-bd9e-fde8c9465260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.519 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ea095d28-0def-4eca-ae96-d666f89486b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 NetworkManager[44910]: <info>  [1759408461.5207] manager: (tapd7203b00-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/238)
Oct  2 08:34:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:21.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.552 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5516d094-ecae-471d-8e4b-bc9c8c6ece24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.555 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[7248b1e0-9321-4b98-b0ca-3ecdee8524f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 NetworkManager[44910]: <info>  [1759408461.5780] device (tapd7203b00-e0): carrier: link connected
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.587 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[d538fabd-9c13-4597-a41e-a81f0e07e987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.608 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bebe49cb-b1f7-4e9d-8a36-04b5df4dbdeb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7203b00-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:c4:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653015, 'reachable_time': 15901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278901, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.624 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1e691ad6-a1e2-453f-af95-99ac1259058c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6a:c4e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653015, 'tstamp': 653015}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278902, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.642 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f71a708e-8960-4a45-852c-b4953ab33121]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7203b00-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:c4:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653015, 'reachable_time': 15901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278903, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.684 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[82246716-fb28-4b9b-883b-a27e29f93c5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.761 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a02fe7-cabf-41d9-97d4-62fe2f8d5d7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.762 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7203b00-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.762 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.763 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7203b00-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005465987 NetworkManager[44910]: <info>  [1759408461.7659] manager: (tapd7203b00-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Oct  2 08:34:21 np0005465987 kernel: tapd7203b00-e0: entered promiscuous mode
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.768 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7203b00-e0, col_values=(('external_ids', {'iface-id': '6f9d54ba-3cfb-48b9-bef7-b2077e6931d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:21Z|00510|binding|INFO|Releasing lport 6f9d54ba-3cfb-48b9-bef7-b2077e6931d7 from this chassis (sb_readonly=0)
Oct  2 08:34:21 np0005465987 nova_compute[230713]: 2025-10-02 12:34:21.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.801 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7203b00-e5e4-402e-b777-ac6280fa23ac.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7203b00-e5e4-402e-b777-ac6280fa23ac.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.802 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb258e5-e705-4820-8dc2-44835fb84501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.803 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-d7203b00-e5e4-402e-b777-ac6280fa23ac
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/d7203b00-e5e4-402e-b777-ac6280fa23ac.pid.haproxy
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID d7203b00-e5e4-402e-b777-ac6280fa23ac
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  2 08:34:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:21.804 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'env', 'PROCESS_TAG=haproxy-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7203b00-e5e4-402e-b777-ac6280fa23ac.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.116 2 DEBUG nova.compute.manager [req-86f04381-e646-4239-aafa-1d84339b2cdd req-524a6983-4d2d-49a8-9c8d-cec07eba6de4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Received event network-vif-plugged-41486814-fe68-4ee4-acaa-661e4bca279a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.117 2 DEBUG oslo_concurrency.lockutils [req-86f04381-e646-4239-aafa-1d84339b2cdd req-524a6983-4d2d-49a8-9c8d-cec07eba6de4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.117 2 DEBUG oslo_concurrency.lockutils [req-86f04381-e646-4239-aafa-1d84339b2cdd req-524a6983-4d2d-49a8-9c8d-cec07eba6de4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.118 2 DEBUG oslo_concurrency.lockutils [req-86f04381-e646-4239-aafa-1d84339b2cdd req-524a6983-4d2d-49a8-9c8d-cec07eba6de4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.118 2 DEBUG nova.compute.manager [req-86f04381-e646-4239-aafa-1d84339b2cdd req-524a6983-4d2d-49a8-9c8d-cec07eba6de4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Processing event network-vif-plugged-41486814-fe68-4ee4-acaa-661e4bca279a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 08:34:22 np0005465987 podman[278977]: 2025-10-02 12:34:22.243410462 +0000 UTC m=+0.051318004 container create 5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:34:22 np0005465987 systemd[1]: Started libpod-conmon-5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3.scope.
Oct  2 08:34:22 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:34:22 np0005465987 podman[278977]: 2025-10-02 12:34:22.218191208 +0000 UTC m=+0.026098770 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:34:22 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ebc52595e9e89d09fcf283d5fe2687c0c007e6464125bf77a71d404cf5b91d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:34:22 np0005465987 podman[278977]: 2025-10-02 12:34:22.333954425 +0000 UTC m=+0.141861987 container init 5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:34:22 np0005465987 podman[278977]: 2025-10-02 12:34:22.339167069 +0000 UTC m=+0.147074611 container start 5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:34:22 np0005465987 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[278993]: [NOTICE]   (278997) : New worker (278999) forked
Oct  2 08:34:22 np0005465987 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[278993]: [NOTICE]   (278997) : Loading success.
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.486 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408462.4859374, 58c789e4-1ba5-4001-913d-9aa19bd9c59f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.486 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] VM Started (Lifecycle Event)
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.488 2 DEBUG nova.compute.manager [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.492 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.494 2 INFO nova.virt.libvirt.driver [-] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Instance spawned successfully.
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.494 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.509 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.514 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.519 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.519 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.520 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.520 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.521 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.521 2 DEBUG nova.virt.libvirt.driver [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.602 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.602 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408462.4879541, 58c789e4-1ba5-4001-913d-9aa19bd9c59f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.602 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] VM Paused (Lifecycle Event)
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.636 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.639 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408462.4918938, 58c789e4-1ba5-4001-913d-9aa19bd9c59f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.639 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] VM Resumed (Lifecycle Event)
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.657 2 INFO nova.compute.manager [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Took 10.16 seconds to spawn the instance on the hypervisor.
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.658 2 DEBUG nova.compute.manager [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.674 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.680 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.706 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.742 2 INFO nova.compute.manager [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Took 11.60 seconds to build instance.
Oct  2 08:34:22 np0005465987 nova_compute[230713]: 2025-10-02 12:34:22.775 2 DEBUG oslo_concurrency.lockutils [None req-a438d0de-c502-47c2-bcce-1ece68574b75 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:22 np0005465987 podman[279010]: 2025-10-02 12:34:22.858350014 +0000 UTC m=+0.061150145 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 08:34:22 np0005465987 podman[279009]: 2025-10-02 12:34:22.883977099 +0000 UTC m=+0.089886626 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct  2 08:34:22 np0005465987 podman[279008]: 2025-10-02 12:34:22.884819533 +0000 UTC m=+0.091780588 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller)
Oct  2 08:34:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:23.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:34:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:23.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:34:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:24 np0005465987 nova_compute[230713]: 2025-10-02 12:34:24.544 2 DEBUG nova.compute.manager [req-0a861d01-b92a-4256-ae59-325e3bc29dab req-99ac2c0e-4352-4227-9903-4c514d1010b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Received event network-vif-plugged-41486814-fe68-4ee4-acaa-661e4bca279a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:34:24 np0005465987 nova_compute[230713]: 2025-10-02 12:34:24.544 2 DEBUG oslo_concurrency.lockutils [req-0a861d01-b92a-4256-ae59-325e3bc29dab req-99ac2c0e-4352-4227-9903-4c514d1010b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:24 np0005465987 nova_compute[230713]: 2025-10-02 12:34:24.545 2 DEBUG oslo_concurrency.lockutils [req-0a861d01-b92a-4256-ae59-325e3bc29dab req-99ac2c0e-4352-4227-9903-4c514d1010b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:24 np0005465987 nova_compute[230713]: 2025-10-02 12:34:24.545 2 DEBUG oslo_concurrency.lockutils [req-0a861d01-b92a-4256-ae59-325e3bc29dab req-99ac2c0e-4352-4227-9903-4c514d1010b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:24 np0005465987 nova_compute[230713]: 2025-10-02 12:34:24.545 2 DEBUG nova.compute.manager [req-0a861d01-b92a-4256-ae59-325e3bc29dab req-99ac2c0e-4352-4227-9903-4c514d1010b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] No waiting events found dispatching network-vif-plugged-41486814-fe68-4ee4-acaa-661e4bca279a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:34:24 np0005465987 nova_compute[230713]: 2025-10-02 12:34:24.545 2 WARNING nova.compute.manager [req-0a861d01-b92a-4256-ae59-325e3bc29dab req-99ac2c0e-4352-4227-9903-4c514d1010b8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Received unexpected event network-vif-plugged-41486814-fe68-4ee4-acaa-661e4bca279a for instance with vm_state active and task_state None.
Oct  2 08:34:24 np0005465987 nova_compute[230713]: 2025-10-02 12:34:24.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:25.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:25 np0005465987 nova_compute[230713]: 2025-10-02 12:34:25.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:34:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:25.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:34:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:34:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:27.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:34:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:27.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:27.959 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:27.960 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:27.961 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:29.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:34:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:29.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:34:29 np0005465987 nova_compute[230713]: 2025-10-02 12:34:29.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:30 np0005465987 nova_compute[230713]: 2025-10-02 12:34:30.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:34:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:31.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:34:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:31.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:32 np0005465987 NetworkManager[44910]: <info>  [1759408472.1052] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Oct  2 08:34:32 np0005465987 nova_compute[230713]: 2025-10-02 12:34:32.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:32 np0005465987 NetworkManager[44910]: <info>  [1759408472.1059] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Oct  2 08:34:32 np0005465987 nova_compute[230713]: 2025-10-02 12:34:32.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:32 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:32Z|00511|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct  2 08:34:32 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:32Z|00512|binding|INFO|Releasing lport 6f9d54ba-3cfb-48b9-bef7-b2077e6931d7 from this chassis (sb_readonly=0)
Oct  2 08:34:32 np0005465987 nova_compute[230713]: 2025-10-02 12:34:32.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:33 np0005465987 nova_compute[230713]: 2025-10-02 12:34:33.058 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:33 np0005465987 nova_compute[230713]: 2025-10-02 12:34:33.081 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Triggering sync for uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:34:33 np0005465987 nova_compute[230713]: 2025-10-02 12:34:33.081 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Triggering sync for uuid 58c789e4-1ba5-4001-913d-9aa19bd9c59f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:34:33 np0005465987 nova_compute[230713]: 2025-10-02 12:34:33.081 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:33 np0005465987 nova_compute[230713]: 2025-10-02 12:34:33.081 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:33 np0005465987 nova_compute[230713]: 2025-10-02 12:34:33.082 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:33 np0005465987 nova_compute[230713]: 2025-10-02 12:34:33.082 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:33.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:33 np0005465987 nova_compute[230713]: 2025-10-02 12:34:33.109 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:33 np0005465987 nova_compute[230713]: 2025-10-02 12:34:33.111 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:33.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:34 np0005465987 nova_compute[230713]: 2025-10-02 12:34:34.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:34 np0005465987 nova_compute[230713]: 2025-10-02 12:34:34.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:35.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:35 np0005465987 nova_compute[230713]: 2025-10-02 12:34:35.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:35.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:36 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:36Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:0d:be 10.100.0.7
Oct  2 08:34:36 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:36Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:0d:be 10.100.0.7
Oct  2 08:34:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:37.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:37.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:34:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:39.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:34:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000055s ======
Oct  2 08:34:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:39.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Oct  2 08:34:39 np0005465987 nova_compute[230713]: 2025-10-02 12:34:39.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:40 np0005465987 nova_compute[230713]: 2025-10-02 12:34:40.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:40 np0005465987 nova_compute[230713]: 2025-10-02 12:34:40.169 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:40 np0005465987 nova_compute[230713]: 2025-10-02 12:34:40.169 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:40 np0005465987 nova_compute[230713]: 2025-10-02 12:34:40.230 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:34:40 np0005465987 nova_compute[230713]: 2025-10-02 12:34:40.412 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:40 np0005465987 nova_compute[230713]: 2025-10-02 12:34:40.412 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:40 np0005465987 nova_compute[230713]: 2025-10-02 12:34:40.422 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:34:40 np0005465987 nova_compute[230713]: 2025-10-02 12:34:40.422 2 INFO nova.compute.claims [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:34:40 np0005465987 nova_compute[230713]: 2025-10-02 12:34:40.712 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:41.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:41 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/112747279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.165 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.171 2 DEBUG nova.compute.provider_tree [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.195 2 DEBUG nova.scheduler.client.report [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.266 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.268 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.507 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.508 2 DEBUG nova.network.neutron [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:34:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:41.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.595 2 INFO nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.625 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.719 2 DEBUG nova.policy [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f02a0ac23d9e44d5a6205e853818fa50', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c3b4ed8f5ff54d4cb9f232e285155ca0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.806 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.807 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.808 2 INFO nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Creating image(s)#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.833 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.869 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.893 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.897 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.960 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.961 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.962 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.962 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.984 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:41 np0005465987 nova_compute[230713]: 2025-10-02 12:34:41.987 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:42 np0005465987 nova_compute[230713]: 2025-10-02 12:34:42.802 2 DEBUG nova.network.neutron [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Successfully created port: 4c98a77a-e20a-42e7-9e16-76a3d7c48e5e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:34:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:34:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:43.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:34:43 np0005465987 nova_compute[230713]: 2025-10-02 12:34:43.194 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:43 np0005465987 nova_compute[230713]: 2025-10-02 12:34:43.259 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] resizing rbd image cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:34:43 np0005465987 nova_compute[230713]: 2025-10-02 12:34:43.512 2 DEBUG nova.objects.instance [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lazy-loading 'migration_context' on Instance uuid cb3a492d-7de2-4412-934f-72c7c2e43d7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:43 np0005465987 nova_compute[230713]: 2025-10-02 12:34:43.546 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:34:43 np0005465987 nova_compute[230713]: 2025-10-02 12:34:43.547 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Ensure instance console log exists: /var/lib/nova/instances/cb3a492d-7de2-4412-934f-72c7c2e43d7a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:34:43 np0005465987 nova_compute[230713]: 2025-10-02 12:34:43.547 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:43 np0005465987 nova_compute[230713]: 2025-10-02 12:34:43.548 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:43 np0005465987 nova_compute[230713]: 2025-10-02 12:34:43.548 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:43.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:44 np0005465987 nova_compute[230713]: 2025-10-02 12:34:44.247 2 DEBUG nova.network.neutron [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Successfully updated port: 4c98a77a-e20a-42e7-9e16-76a3d7c48e5e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:34:44 np0005465987 nova_compute[230713]: 2025-10-02 12:34:44.309 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "refresh_cache-cb3a492d-7de2-4412-934f-72c7c2e43d7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:34:44 np0005465987 nova_compute[230713]: 2025-10-02 12:34:44.309 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquired lock "refresh_cache-cb3a492d-7de2-4412-934f-72c7c2e43d7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:34:44 np0005465987 nova_compute[230713]: 2025-10-02 12:34:44.310 2 DEBUG nova.network.neutron [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:34:44 np0005465987 nova_compute[230713]: 2025-10-02 12:34:44.416 2 DEBUG nova.compute.manager [req-ee2aa234-3716-4cf4-969c-665c657c1288 req-33cb49fb-0c08-4c14-8e51-b881e4b2bd62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Received event network-changed-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:44 np0005465987 nova_compute[230713]: 2025-10-02 12:34:44.417 2 DEBUG nova.compute.manager [req-ee2aa234-3716-4cf4-969c-665c657c1288 req-33cb49fb-0c08-4c14-8e51-b881e4b2bd62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Refreshing instance network info cache due to event network-changed-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:34:44 np0005465987 nova_compute[230713]: 2025-10-02 12:34:44.418 2 DEBUG oslo_concurrency.lockutils [req-ee2aa234-3716-4cf4-969c-665c657c1288 req-33cb49fb-0c08-4c14-8e51-b881e4b2bd62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-cb3a492d-7de2-4412-934f-72c7c2e43d7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:34:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:44 np0005465987 nova_compute[230713]: 2025-10-02 12:34:44.656 2 DEBUG nova.network.neutron [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:34:44 np0005465987 nova_compute[230713]: 2025-10-02 12:34:44.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:45.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:45.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.758 2 DEBUG nova.network.neutron [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Updating instance_info_cache with network_info: [{"id": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "address": "fa:16:3e:e0:75:b2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c98a77a-e2", "ovs_interfaceid": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.787 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Releasing lock "refresh_cache-cb3a492d-7de2-4412-934f-72c7c2e43d7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.787 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Instance network_info: |[{"id": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "address": "fa:16:3e:e0:75:b2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c98a77a-e2", "ovs_interfaceid": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.787 2 DEBUG oslo_concurrency.lockutils [req-ee2aa234-3716-4cf4-969c-665c657c1288 req-33cb49fb-0c08-4c14-8e51-b881e4b2bd62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-cb3a492d-7de2-4412-934f-72c7c2e43d7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.788 2 DEBUG nova.network.neutron [req-ee2aa234-3716-4cf4-969c-665c657c1288 req-33cb49fb-0c08-4c14-8e51-b881e4b2bd62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Refreshing network info cache for port 4c98a77a-e20a-42e7-9e16-76a3d7c48e5e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.791 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Start _get_guest_xml network_info=[{"id": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "address": "fa:16:3e:e0:75:b2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c98a77a-e2", "ovs_interfaceid": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.797 2 WARNING nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.802 2 DEBUG nova.virt.libvirt.host [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.803 2 DEBUG nova.virt.libvirt.host [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.806 2 DEBUG nova.virt.libvirt.host [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.806 2 DEBUG nova.virt.libvirt.host [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.807 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.808 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.808 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.808 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.809 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.809 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.809 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.809 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.810 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.810 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.810 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.810 2 DEBUG nova.virt.hardware [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:34:45 np0005465987 nova_compute[230713]: 2025-10-02 12:34:45.814 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/41007933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.366 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.401 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.405 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/698035241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.835 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.839 2 DEBUG nova.virt.libvirt.vif [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783180319',display_name='tempest-tempest.common.compute-instance-783180319-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783180319-1',id=134,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3b4ed8f5ff54d4cb9f232e285155ca0',ramdisk_id='',reservation_id='r-bdoqg484',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-822408607',owner_user_name='tempest-MultipleCreateTestJSON-822408607-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:41Z,user_data=None,user_id='f02a0ac23d9e44d5a6205e853818fa50',uuid=cb3a492d-7de2-4412-934f-72c7c2e43d7a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "address": "fa:16:3e:e0:75:b2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c98a77a-e2", "ovs_interfaceid": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.840 2 DEBUG nova.network.os_vif_util [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converting VIF {"id": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "address": "fa:16:3e:e0:75:b2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c98a77a-e2", "ovs_interfaceid": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.842 2 DEBUG nova.network.os_vif_util [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:75:b2,bridge_name='br-int',has_traffic_filtering=True,id=4c98a77a-e20a-42e7-9e16-76a3d7c48e5e,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c98a77a-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.844 2 DEBUG nova.objects.instance [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb3a492d-7de2-4412-934f-72c7c2e43d7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.870 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  <uuid>cb3a492d-7de2-4412-934f-72c7c2e43d7a</uuid>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  <name>instance-00000086</name>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <nova:name>tempest-tempest.common.compute-instance-783180319-1</nova:name>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:34:45</nova:creationTime>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <nova:user uuid="f02a0ac23d9e44d5a6205e853818fa50">tempest-MultipleCreateTestJSON-822408607-project-member</nova:user>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <nova:project uuid="c3b4ed8f5ff54d4cb9f232e285155ca0">tempest-MultipleCreateTestJSON-822408607</nova:project>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <nova:port uuid="4c98a77a-e20a-42e7-9e16-76a3d7c48e5e">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <entry name="serial">cb3a492d-7de2-4412-934f-72c7c2e43d7a</entry>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <entry name="uuid">cb3a492d-7de2-4412-934f-72c7c2e43d7a</entry>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk.config">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:e0:75:b2"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <target dev="tap4c98a77a-e2"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/cb3a492d-7de2-4412-934f-72c7c2e43d7a/console.log" append="off"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:34:46 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:34:46 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:34:46 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:34:46 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.871 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Preparing to wait for external event network-vif-plugged-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.872 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.872 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.873 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.874 2 DEBUG nova.virt.libvirt.vif [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783180319',display_name='tempest-tempest.common.compute-instance-783180319-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783180319-1',id=134,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3b4ed8f5ff54d4cb9f232e285155ca0',ramdisk_id='',reservation_id='r-bdoqg484',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-822408607',owner_user_name='tempest-MultipleCreateTestJSON-822408607-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:41Z,user_data=None,user_id='f02a0ac23d9e44d5a6205e853818fa50',uuid=cb3a492d-7de2-4412-934f-72c7c2e43d7a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "address": "fa:16:3e:e0:75:b2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c98a77a-e2", "ovs_interfaceid": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.874 2 DEBUG nova.network.os_vif_util [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converting VIF {"id": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "address": "fa:16:3e:e0:75:b2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c98a77a-e2", "ovs_interfaceid": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.875 2 DEBUG nova.network.os_vif_util [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:75:b2,bridge_name='br-int',has_traffic_filtering=True,id=4c98a77a-e20a-42e7-9e16-76a3d7c48e5e,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c98a77a-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.876 2 DEBUG os_vif [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:75:b2,bridge_name='br-int',has_traffic_filtering=True,id=4c98a77a-e20a-42e7-9e16-76a3d7c48e5e,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c98a77a-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.878 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.879 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.884 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c98a77a-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.885 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c98a77a-e2, col_values=(('external_ids', {'iface-id': '4c98a77a-e20a-42e7-9e16-76a3d7c48e5e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:75:b2', 'vm-uuid': 'cb3a492d-7de2-4412-934f-72c7c2e43d7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:46 np0005465987 NetworkManager[44910]: <info>  [1759408486.8892] manager: (tap4c98a77a-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.897 2 INFO os_vif [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:75:b2,bridge_name='br-int',has_traffic_filtering=True,id=4c98a77a-e20a-42e7-9e16-76a3d7c48e5e,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c98a77a-e2')#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.985 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.986 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.986 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] No VIF found with MAC fa:16:3e:e0:75:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:34:46 np0005465987 nova_compute[230713]: 2025-10-02 12:34:46.987 2 INFO nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Using config drive#033[00m
Oct  2 08:34:47 np0005465987 nova_compute[230713]: 2025-10-02 12:34:47.014 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:47.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:47.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:47 np0005465987 nova_compute[230713]: 2025-10-02 12:34:47.612 2 INFO nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Creating config drive at /var/lib/nova/instances/cb3a492d-7de2-4412-934f-72c7c2e43d7a/disk.config#033[00m
Oct  2 08:34:47 np0005465987 nova_compute[230713]: 2025-10-02 12:34:47.622 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb3a492d-7de2-4412-934f-72c7c2e43d7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph1biv6xd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:47 np0005465987 nova_compute[230713]: 2025-10-02 12:34:47.776 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb3a492d-7de2-4412-934f-72c7c2e43d7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph1biv6xd" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:47 np0005465987 nova_compute[230713]: 2025-10-02 12:34:47.813 2 DEBUG nova.storage.rbd_utils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:47 np0005465987 nova_compute[230713]: 2025-10-02 12:34:47.817 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb3a492d-7de2-4412-934f-72c7c2e43d7a/disk.config cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:48 np0005465987 nova_compute[230713]: 2025-10-02 12:34:48.172 2 DEBUG oslo_concurrency.processutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb3a492d-7de2-4412-934f-72c7c2e43d7a/disk.config cb3a492d-7de2-4412-934f-72c7c2e43d7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:48 np0005465987 nova_compute[230713]: 2025-10-02 12:34:48.173 2 INFO nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Deleting local config drive /var/lib/nova/instances/cb3a492d-7de2-4412-934f-72c7c2e43d7a/disk.config because it was imported into RBD.#033[00m
Oct  2 08:34:48 np0005465987 kernel: tap4c98a77a-e2: entered promiscuous mode
Oct  2 08:34:48 np0005465987 NetworkManager[44910]: <info>  [1759408488.2287] manager: (tap4c98a77a-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Oct  2 08:34:48 np0005465987 nova_compute[230713]: 2025-10-02 12:34:48.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:48Z|00513|binding|INFO|Claiming lport 4c98a77a-e20a-42e7-9e16-76a3d7c48e5e for this chassis.
Oct  2 08:34:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:48Z|00514|binding|INFO|4c98a77a-e20a-42e7-9e16-76a3d7c48e5e: Claiming fa:16:3e:e0:75:b2 10.100.0.5
Oct  2 08:34:48 np0005465987 systemd-machined[188335]: New machine qemu-60-instance-00000086.
Oct  2 08:34:48 np0005465987 systemd-udevd[279401]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:34:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:48Z|00515|binding|INFO|Setting lport 4c98a77a-e20a-42e7-9e16-76a3d7c48e5e ovn-installed in OVS
Oct  2 08:34:48 np0005465987 nova_compute[230713]: 2025-10-02 12:34:48.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005465987 nova_compute[230713]: 2025-10-02 12:34:48.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005465987 nova_compute[230713]: 2025-10-02 12:34:48.276 2 DEBUG nova.network.neutron [req-ee2aa234-3716-4cf4-969c-665c657c1288 req-33cb49fb-0c08-4c14-8e51-b881e4b2bd62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Updated VIF entry in instance network info cache for port 4c98a77a-e20a-42e7-9e16-76a3d7c48e5e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:34:48 np0005465987 nova_compute[230713]: 2025-10-02 12:34:48.276 2 DEBUG nova.network.neutron [req-ee2aa234-3716-4cf4-969c-665c657c1288 req-33cb49fb-0c08-4c14-8e51-b881e4b2bd62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Updating instance_info_cache with network_info: [{"id": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "address": "fa:16:3e:e0:75:b2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c98a77a-e2", "ovs_interfaceid": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:48 np0005465987 systemd[1]: Started Virtual Machine qemu-60-instance-00000086.
Oct  2 08:34:48 np0005465987 NetworkManager[44910]: <info>  [1759408488.2833] device (tap4c98a77a-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:34:48 np0005465987 NetworkManager[44910]: <info>  [1759408488.2843] device (tap4c98a77a-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:34:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:48Z|00516|binding|INFO|Setting lport 4c98a77a-e20a-42e7-9e16-76a3d7c48e5e up in Southbound
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.305 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:75:b2 10.100.0.5'], port_security=['fa:16:3e:e0:75:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'cb3a492d-7de2-4412-934f-72c7c2e43d7a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3b4ed8f5ff54d4cb9f232e285155ca0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fe054fa7-d7e9-483c-baf2-6f0a11c4cd59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3fe18a99-aad3-484c-abd2-6e2374240160, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=4c98a77a-e20a-42e7-9e16-76a3d7c48e5e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.306 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 4c98a77a-e20a-42e7-9e16-76a3d7c48e5e in datapath 71fb4dcc-12bb-458e-9241-e19e223ca96d bound to our chassis#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.310 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71fb4dcc-12bb-458e-9241-e19e223ca96d#033[00m
Oct  2 08:34:48 np0005465987 podman[279388]: 2025-10-02 12:34:48.315654776 +0000 UTC m=+0.064714404 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.323 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9a5f92-ca1e-4c1f-8ac5-374743f04f2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.323 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap71fb4dcc-11 in ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.327 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap71fb4dcc-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.327 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3cefa587-0188-45e2-bc99-672cc2d6fed3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.328 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0723971a-da77-443d-828d-7472ff85fa64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.338 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[2346ff6d-2a9e-41b8-aa26-14fb2916e020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.362 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b63ddc8e-7834-4f8d-a323-08d43407369d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e326 e326: 3 total, 3 up, 3 in
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.397 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[2f028f98-586a-4da3-8177-1fa6d6845ef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.402 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cf099621-7e0d-4b1a-9ebf-493033ca92d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 NetworkManager[44910]: <info>  [1759408488.4029] manager: (tap71fb4dcc-10): new Veth device (/org/freedesktop/NetworkManager/Devices/244)
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.442 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[53a98549-16cc-4ba0-904a-84d9b143a980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 nova_compute[230713]: 2025-10-02 12:34:48.445 2 DEBUG oslo_concurrency.lockutils [req-ee2aa234-3716-4cf4-969c-665c657c1288 req-33cb49fb-0c08-4c14-8e51-b881e4b2bd62 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-cb3a492d-7de2-4412-934f-72c7c2e43d7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.447 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9a97f9a7-164a-4e2b-8f4c-cbd50a2ef7d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 NetworkManager[44910]: <info>  [1759408488.4776] device (tap71fb4dcc-10): carrier: link connected
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.483 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6ec304-cc7f-4438-9b98-13bc64bab6da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.497 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9de6ec46-8c3c-4ed4-bd83-3b49486dbf4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71fb4dcc-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:02:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655705, 'reachable_time': 28857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279442, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.508 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[33f65176-3052-4595-b61a-49f3323324a3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:264'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655705, 'tstamp': 655705}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279443, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.520 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[73353c0c-22e2-41b0-b9d7-2e0bd10bfa66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71fb4dcc-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:02:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655705, 'reachable_time': 28857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279444, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.551 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6524ff07-3d14-446a-a79a-8387ad332446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.612 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc6efce-8b9b-46cc-853a-31eb04987c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.613 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71fb4dcc-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.614 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.614 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71fb4dcc-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:48 np0005465987 nova_compute[230713]: 2025-10-02 12:34:48.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005465987 NetworkManager[44910]: <info>  [1759408488.6170] manager: (tap71fb4dcc-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Oct  2 08:34:48 np0005465987 kernel: tap71fb4dcc-10: entered promiscuous mode
Oct  2 08:34:48 np0005465987 nova_compute[230713]: 2025-10-02 12:34:48.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.619 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71fb4dcc-10, col_values=(('external_ids', {'iface-id': 'c32f3592-2fbc-4d81-bf91-681fef1a2dd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:48 np0005465987 nova_compute[230713]: 2025-10-02 12:34:48.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:48Z|00517|binding|INFO|Releasing lport c32f3592-2fbc-4d81-bf91-681fef1a2dd1 from this chassis (sb_readonly=0)
Oct  2 08:34:48 np0005465987 nova_compute[230713]: 2025-10-02 12:34:48.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.641 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/71fb4dcc-12bb-458e-9241-e19e223ca96d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/71fb4dcc-12bb-458e-9241-e19e223ca96d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.642 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd66ef6-7b2f-4f5e-96d9-cca1789fb155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.642 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-71fb4dcc-12bb-458e-9241-e19e223ca96d
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/71fb4dcc-12bb-458e-9241-e19e223ca96d.pid.haproxy
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 71fb4dcc-12bb-458e-9241-e19e223ca96d
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:34:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:48.643 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'env', 'PROCESS_TAG=haproxy-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/71fb4dcc-12bb-458e-9241-e19e223ca96d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:34:49 np0005465987 podman[279476]: 2025-10-02 12:34:49.018681872 +0000 UTC m=+0.057374380 container create 44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:34:49 np0005465987 systemd[1]: Started libpod-conmon-44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6.scope.
Oct  2 08:34:49 np0005465987 podman[279476]: 2025-10-02 12:34:48.982062334 +0000 UTC m=+0.020754842 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:34:49 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:34:49 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2dd156d0ecbb70e2dd0f938dc1f51bea05459032e4d94db910f5445fdeab5a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:34:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:49.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:49 np0005465987 podman[279476]: 2025-10-02 12:34:49.133422131 +0000 UTC m=+0.172114639 container init 44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:34:49 np0005465987 podman[279476]: 2025-10-02 12:34:49.14099084 +0000 UTC m=+0.179683328 container start 44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:34:49 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[279507]: [NOTICE]   (279513) : New worker (279515) forked
Oct  2 08:34:49 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[279507]: [NOTICE]   (279513) : Loading success.
Oct  2 08:34:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:49.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.811 2 DEBUG nova.compute.manager [req-00463a9c-230e-46d3-8a6a-07182579a866 req-7a4c329f-1805-4ad5-8eba-03b7f6a28fba d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Received event network-vif-plugged-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.812 2 DEBUG oslo_concurrency.lockutils [req-00463a9c-230e-46d3-8a6a-07182579a866 req-7a4c329f-1805-4ad5-8eba-03b7f6a28fba d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.812 2 DEBUG oslo_concurrency.lockutils [req-00463a9c-230e-46d3-8a6a-07182579a866 req-7a4c329f-1805-4ad5-8eba-03b7f6a28fba d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.813 2 DEBUG oslo_concurrency.lockutils [req-00463a9c-230e-46d3-8a6a-07182579a866 req-7a4c329f-1805-4ad5-8eba-03b7f6a28fba d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.813 2 DEBUG nova.compute.manager [req-00463a9c-230e-46d3-8a6a-07182579a866 req-7a4c329f-1805-4ad5-8eba-03b7f6a28fba d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Processing event network-vif-plugged-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.817 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408489.817481, cb3a492d-7de2-4412-934f-72c7c2e43d7a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.818 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] VM Started (Lifecycle Event)#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.821 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.826 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.830 2 INFO nova.virt.libvirt.driver [-] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Instance spawned successfully.#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.831 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:49.839 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:49.840 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.912 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.918 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.930 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.930 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.931 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.931 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.932 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:49 np0005465987 nova_compute[230713]: 2025-10-02 12:34:49.932 2 DEBUG nova.virt.libvirt.driver [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:50 np0005465987 nova_compute[230713]: 2025-10-02 12:34:50.020 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:34:50 np0005465987 nova_compute[230713]: 2025-10-02 12:34:50.020 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408489.8204465, cb3a492d-7de2-4412-934f-72c7c2e43d7a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:50 np0005465987 nova_compute[230713]: 2025-10-02 12:34:50.021 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:34:50 np0005465987 nova_compute[230713]: 2025-10-02 12:34:50.088 2 INFO nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Took 8.28 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:34:50 np0005465987 nova_compute[230713]: 2025-10-02 12:34:50.089 2 DEBUG nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:50 np0005465987 nova_compute[230713]: 2025-10-02 12:34:50.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:50 np0005465987 nova_compute[230713]: 2025-10-02 12:34:50.174 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:50 np0005465987 nova_compute[230713]: 2025-10-02 12:34:50.179 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408489.8249068, cb3a492d-7de2-4412-934f-72c7c2e43d7a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:50 np0005465987 nova_compute[230713]: 2025-10-02 12:34:50.179 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:34:50 np0005465987 nova_compute[230713]: 2025-10-02 12:34:50.242 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:50 np0005465987 nova_compute[230713]: 2025-10-02 12:34:50.247 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:34:50 np0005465987 nova_compute[230713]: 2025-10-02 12:34:50.282 2 INFO nova.compute.manager [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Took 9.90 seconds to build instance.#033[00m
Oct  2 08:34:50 np0005465987 nova_compute[230713]: 2025-10-02 12:34:50.337 2 DEBUG oslo_concurrency.lockutils [None req-44452340-b56c-41fd-8e29-ee3124710922 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e327 e327: 3 total, 3 up, 3 in
Oct  2 08:34:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:51.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:51.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:51 np0005465987 nova_compute[230713]: 2025-10-02 12:34:51.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:52 np0005465987 nova_compute[230713]: 2025-10-02 12:34:52.212 2 DEBUG nova.compute.manager [req-c39864fd-d5cf-4d12-a877-836be3e44045 req-35020e04-17c5-4e46-98c9-aa35d5c5a256 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Received event network-vif-plugged-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:52 np0005465987 nova_compute[230713]: 2025-10-02 12:34:52.212 2 DEBUG oslo_concurrency.lockutils [req-c39864fd-d5cf-4d12-a877-836be3e44045 req-35020e04-17c5-4e46-98c9-aa35d5c5a256 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:52 np0005465987 nova_compute[230713]: 2025-10-02 12:34:52.213 2 DEBUG oslo_concurrency.lockutils [req-c39864fd-d5cf-4d12-a877-836be3e44045 req-35020e04-17c5-4e46-98c9-aa35d5c5a256 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:52 np0005465987 nova_compute[230713]: 2025-10-02 12:34:52.213 2 DEBUG oslo_concurrency.lockutils [req-c39864fd-d5cf-4d12-a877-836be3e44045 req-35020e04-17c5-4e46-98c9-aa35d5c5a256 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:52 np0005465987 nova_compute[230713]: 2025-10-02 12:34:52.214 2 DEBUG nova.compute.manager [req-c39864fd-d5cf-4d12-a877-836be3e44045 req-35020e04-17c5-4e46-98c9-aa35d5c5a256 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] No waiting events found dispatching network-vif-plugged-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:52 np0005465987 nova_compute[230713]: 2025-10-02 12:34:52.214 2 WARNING nova.compute.manager [req-c39864fd-d5cf-4d12-a877-836be3e44045 req-35020e04-17c5-4e46-98c9-aa35d5c5a256 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Received unexpected event network-vif-plugged-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e for instance with vm_state active and task_state None.#033[00m
Oct  2 08:34:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e328 e328: 3 total, 3 up, 3 in
Oct  2 08:34:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:52.843 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:53.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.307 2 DEBUG oslo_concurrency.lockutils [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.308 2 DEBUG oslo_concurrency.lockutils [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.309 2 DEBUG oslo_concurrency.lockutils [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.309 2 DEBUG oslo_concurrency.lockutils [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.310 2 DEBUG oslo_concurrency.lockutils [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.313 2 INFO nova.compute.manager [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Terminating instance#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.315 2 DEBUG nova.compute.manager [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:34:53 np0005465987 kernel: tap4c98a77a-e2 (unregistering): left promiscuous mode
Oct  2 08:34:53 np0005465987 NetworkManager[44910]: <info>  [1759408493.3727] device (tap4c98a77a-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:53 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:53Z|00518|binding|INFO|Releasing lport 4c98a77a-e20a-42e7-9e16-76a3d7c48e5e from this chassis (sb_readonly=0)
Oct  2 08:34:53 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:53Z|00519|binding|INFO|Setting lport 4c98a77a-e20a-42e7-9e16-76a3d7c48e5e down in Southbound
Oct  2 08:34:53 np0005465987 ovn_controller[129172]: 2025-10-02T12:34:53Z|00520|binding|INFO|Removing iface tap4c98a77a-e2 ovn-installed in OVS
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.394 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:75:b2 10.100.0.5'], port_security=['fa:16:3e:e0:75:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'cb3a492d-7de2-4412-934f-72c7c2e43d7a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3b4ed8f5ff54d4cb9f232e285155ca0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe054fa7-d7e9-483c-baf2-6f0a11c4cd59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3fe18a99-aad3-484c-abd2-6e2374240160, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=4c98a77a-e20a-42e7-9e16-76a3d7c48e5e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.396 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 4c98a77a-e20a-42e7-9e16-76a3d7c48e5e in datapath 71fb4dcc-12bb-458e-9241-e19e223ca96d unbound from our chassis#033[00m
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.398 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71fb4dcc-12bb-458e-9241-e19e223ca96d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.401 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf19f11-432c-4c70-95cf-e40246b77b44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.402 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d namespace which is not needed anymore#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:53 np0005465987 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000086.scope: Deactivated successfully.
Oct  2 08:34:53 np0005465987 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000086.scope: Consumed 4.802s CPU time.
Oct  2 08:34:53 np0005465987 systemd-machined[188335]: Machine qemu-60-instance-00000086 terminated.
Oct  2 08:34:53 np0005465987 podman[279556]: 2025-10-02 12:34:53.494636265 +0000 UTC m=+0.084465107 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:34:53 np0005465987 podman[279553]: 2025-10-02 12:34:53.502894832 +0000 UTC m=+0.086497102 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:34:53 np0005465987 podman[279548]: 2025-10-02 12:34:53.535286715 +0000 UTC m=+0.117353563 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:53 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[279507]: [NOTICE]   (279513) : haproxy version is 2.8.14-c23fe91
Oct  2 08:34:53 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[279507]: [NOTICE]   (279513) : path to executable is /usr/sbin/haproxy
Oct  2 08:34:53 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[279507]: [WARNING]  (279513) : Exiting Master process...
Oct  2 08:34:53 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[279507]: [WARNING]  (279513) : Exiting Master process...
Oct  2 08:34:53 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[279507]: [ALERT]    (279513) : Current worker (279515) exited with code 143 (Terminated)
Oct  2 08:34:53 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[279507]: [WARNING]  (279513) : All workers exited. Exiting... (0)
Oct  2 08:34:53 np0005465987 systemd[1]: libpod-44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6.scope: Deactivated successfully.
Oct  2 08:34:53 np0005465987 podman[279656]: 2025-10-02 12:34:53.561358082 +0000 UTC m=+0.057216136 container died 44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.562 2 INFO nova.virt.libvirt.driver [-] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Instance destroyed successfully.#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.562 2 DEBUG nova.objects.instance [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lazy-loading 'resources' on Instance uuid cb3a492d-7de2-4412-934f-72c7c2e43d7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:34:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:53.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.585 2 DEBUG nova.virt.libvirt.vif [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783180319',display_name='tempest-tempest.common.compute-instance-783180319-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783180319-1',id=134,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c3b4ed8f5ff54d4cb9f232e285155ca0',ramdisk_id='',reservation_id='r-bdoqg484',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-822408607',owner_user_name='tempest-MultipleCreateTestJSON-822408607-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:50Z,user_data=None,user_id='f02a0ac23d9e44d5a6205e853818fa50',uuid=cb3a492d-7de2-4412-934f-72c7c2e43d7a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "address": "fa:16:3e:e0:75:b2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c98a77a-e2", "ovs_interfaceid": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.586 2 DEBUG nova.network.os_vif_util [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converting VIF {"id": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "address": "fa:16:3e:e0:75:b2", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c98a77a-e2", "ovs_interfaceid": "4c98a77a-e20a-42e7-9e16-76a3d7c48e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.586 2 DEBUG nova.network.os_vif_util [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:75:b2,bridge_name='br-int',has_traffic_filtering=True,id=4c98a77a-e20a-42e7-9e16-76a3d7c48e5e,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c98a77a-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.587 2 DEBUG os_vif [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:75:b2,bridge_name='br-int',has_traffic_filtering=True,id=4c98a77a-e20a-42e7-9e16-76a3d7c48e5e,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c98a77a-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.588 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c98a77a-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:34:53 np0005465987 systemd[1]: var-lib-containers-storage-overlay-a2dd156d0ecbb70e2dd0f938dc1f51bea05459032e4d94db910f5445fdeab5a0-merged.mount: Deactivated successfully.
Oct  2 08:34:53 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6-userdata-shm.mount: Deactivated successfully.
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.595 2 INFO os_vif [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:75:b2,bridge_name='br-int',has_traffic_filtering=True,id=4c98a77a-e20a-42e7-9e16-76a3d7c48e5e,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c98a77a-e2')#033[00m
Oct  2 08:34:53 np0005465987 podman[279656]: 2025-10-02 12:34:53.606398392 +0000 UTC m=+0.102256446 container cleanup 44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:34:53 np0005465987 systemd[1]: libpod-conmon-44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6.scope: Deactivated successfully.
Oct  2 08:34:53 np0005465987 podman[279758]: 2025-10-02 12:34:53.671418452 +0000 UTC m=+0.039987541 container remove 44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.677 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[69627043-f7fe-4d3a-8d95-dc70076c703f]: (4, ('Thu Oct  2 12:34:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d (44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6)\n44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6\nThu Oct  2 12:34:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d (44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6)\n44091f0aa2f938cabb04ea04e59a3e97389ac4b77c2d7cb258fc146e7a1e03b6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.679 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c4591137-7034-4741-9392-4f1019f5aa0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.680 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71fb4dcc-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:53 np0005465987 kernel: tap71fb4dcc-10: left promiscuous mode
Oct  2 08:34:53 np0005465987 nova_compute[230713]: 2025-10-02 12:34:53.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.705 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4dc9f9-28df-46e6-ac84-d5397dd0e1bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.742 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6eb42e-69db-4c8b-9ced-56927dc289de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.743 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b9af6620-b488-472d-8d28-8b58815b2763]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.757 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f06bf52f-c8f4-4b1a-914e-05e6716e3bff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655696, 'reachable_time': 20630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279801, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:53 np0005465987 systemd[1]: run-netns-ovnmeta\x2d71fb4dcc\x2d12bb\x2d458e\x2d9241\x2de19e223ca96d.mount: Deactivated successfully.
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.761 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:34:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:34:53.761 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd26f3b-07f5-4cbc-afff-87ff6c5dd594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.332 2 DEBUG nova.compute.manager [req-dee87fc8-6b97-4cef-8b85-77d63ed108d0 req-d41e5b68-bc27-42f2-ac78-7b56bd642d67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Received event network-vif-unplugged-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.332 2 DEBUG oslo_concurrency.lockutils [req-dee87fc8-6b97-4cef-8b85-77d63ed108d0 req-d41e5b68-bc27-42f2-ac78-7b56bd642d67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.332 2 DEBUG oslo_concurrency.lockutils [req-dee87fc8-6b97-4cef-8b85-77d63ed108d0 req-d41e5b68-bc27-42f2-ac78-7b56bd642d67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.332 2 DEBUG oslo_concurrency.lockutils [req-dee87fc8-6b97-4cef-8b85-77d63ed108d0 req-d41e5b68-bc27-42f2-ac78-7b56bd642d67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.333 2 DEBUG nova.compute.manager [req-dee87fc8-6b97-4cef-8b85-77d63ed108d0 req-d41e5b68-bc27-42f2-ac78-7b56bd642d67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] No waiting events found dispatching network-vif-unplugged-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.333 2 DEBUG nova.compute.manager [req-dee87fc8-6b97-4cef-8b85-77d63ed108d0 req-d41e5b68-bc27-42f2-ac78-7b56bd642d67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Received event network-vif-unplugged-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.333 2 DEBUG nova.compute.manager [req-dee87fc8-6b97-4cef-8b85-77d63ed108d0 req-d41e5b68-bc27-42f2-ac78-7b56bd642d67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Received event network-vif-plugged-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.333 2 DEBUG oslo_concurrency.lockutils [req-dee87fc8-6b97-4cef-8b85-77d63ed108d0 req-d41e5b68-bc27-42f2-ac78-7b56bd642d67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.333 2 DEBUG oslo_concurrency.lockutils [req-dee87fc8-6b97-4cef-8b85-77d63ed108d0 req-d41e5b68-bc27-42f2-ac78-7b56bd642d67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.333 2 DEBUG oslo_concurrency.lockutils [req-dee87fc8-6b97-4cef-8b85-77d63ed108d0 req-d41e5b68-bc27-42f2-ac78-7b56bd642d67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.333 2 DEBUG nova.compute.manager [req-dee87fc8-6b97-4cef-8b85-77d63ed108d0 req-d41e5b68-bc27-42f2-ac78-7b56bd642d67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] No waiting events found dispatching network-vif-plugged-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.334 2 WARNING nova.compute.manager [req-dee87fc8-6b97-4cef-8b85-77d63ed108d0 req-d41e5b68-bc27-42f2-ac78-7b56bd642d67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Received unexpected event network-vif-plugged-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:34:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.849 2 INFO nova.virt.libvirt.driver [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Deleting instance files /var/lib/nova/instances/cb3a492d-7de2-4412-934f-72c7c2e43d7a_del#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.850 2 INFO nova.virt.libvirt.driver [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Deletion of /var/lib/nova/instances/cb3a492d-7de2-4412-934f-72c7c2e43d7a_del complete#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.926 2 INFO nova.compute.manager [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Took 1.61 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.926 2 DEBUG oslo.service.loopingcall [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.926 2 DEBUG nova.compute.manager [-] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:34:54 np0005465987 nova_compute[230713]: 2025-10-02 12:34:54.926 2 DEBUG nova.network.neutron [-] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:34:54 np0005465987 podman[279972]: 2025-10-02 12:34:54.977394362 +0000 UTC m=+0.053888155 container create 62e4665523d952703a173e514f0d1534bd857221b2409587deb8da38c17c3574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williamson, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 08:34:55 np0005465987 systemd[1]: Started libpod-conmon-62e4665523d952703a173e514f0d1534bd857221b2409587deb8da38c17c3574.scope.
Oct  2 08:34:55 np0005465987 podman[279972]: 2025-10-02 12:34:54.958948044 +0000 UTC m=+0.035441807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 08:34:55 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:34:55 np0005465987 podman[279972]: 2025-10-02 12:34:55.07247985 +0000 UTC m=+0.148973593 container init 62e4665523d952703a173e514f0d1534bd857221b2409587deb8da38c17c3574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williamson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:34:55 np0005465987 podman[279972]: 2025-10-02 12:34:55.079176875 +0000 UTC m=+0.155670618 container start 62e4665523d952703a173e514f0d1534bd857221b2409587deb8da38c17c3574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 08:34:55 np0005465987 podman[279972]: 2025-10-02 12:34:55.082468585 +0000 UTC m=+0.158962348 container attach 62e4665523d952703a173e514f0d1534bd857221b2409587deb8da38c17c3574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williamson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 08:34:55 np0005465987 pensive_williamson[279988]: 167 167
Oct  2 08:34:55 np0005465987 systemd[1]: libpod-62e4665523d952703a173e514f0d1534bd857221b2409587deb8da38c17c3574.scope: Deactivated successfully.
Oct  2 08:34:55 np0005465987 conmon[279988]: conmon 62e4665523d952703a17 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62e4665523d952703a173e514f0d1534bd857221b2409587deb8da38c17c3574.scope/container/memory.events
Oct  2 08:34:55 np0005465987 podman[279972]: 2025-10-02 12:34:55.087189585 +0000 UTC m=+0.163683328 container died 62e4665523d952703a173e514f0d1534bd857221b2409587deb8da38c17c3574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 08:34:55 np0005465987 systemd[1]: var-lib-containers-storage-overlay-3e2a9959ea1e4fdea81f16553cd44210cbc31506e6710d9b6ed80e418086d000-merged.mount: Deactivated successfully.
Oct  2 08:34:55 np0005465987 podman[279972]: 2025-10-02 12:34:55.126405475 +0000 UTC m=+0.202899218 container remove 62e4665523d952703a173e514f0d1534bd857221b2409587deb8da38c17c3574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 08:34:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:55.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:55 np0005465987 nova_compute[230713]: 2025-10-02 12:34:55.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:55 np0005465987 systemd[1]: libpod-conmon-62e4665523d952703a173e514f0d1534bd857221b2409587deb8da38c17c3574.scope: Deactivated successfully.
Oct  2 08:34:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:34:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/654346124' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:34:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:34:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/654346124' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:34:55 np0005465987 podman[280012]: 2025-10-02 12:34:55.335890812 +0000 UTC m=+0.051633992 container create 0afd9bc28264f742fa1bd6ec8605cabcd546c7bb485ddb74a98ac5e57a4cf6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 08:34:55 np0005465987 systemd[1]: Started libpod-conmon-0afd9bc28264f742fa1bd6ec8605cabcd546c7bb485ddb74a98ac5e57a4cf6ae.scope.
Oct  2 08:34:55 np0005465987 podman[280012]: 2025-10-02 12:34:55.312634672 +0000 UTC m=+0.028377902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 08:34:55 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:34:55 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa04e7aebec81ad4addf82f8ae710d191fe64e5c46a8c1d55e28906322c0119/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 08:34:55 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa04e7aebec81ad4addf82f8ae710d191fe64e5c46a8c1d55e28906322c0119/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 08:34:55 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa04e7aebec81ad4addf82f8ae710d191fe64e5c46a8c1d55e28906322c0119/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 08:34:55 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa04e7aebec81ad4addf82f8ae710d191fe64e5c46a8c1d55e28906322c0119/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 08:34:55 np0005465987 podman[280012]: 2025-10-02 12:34:55.445552993 +0000 UTC m=+0.161296223 container init 0afd9bc28264f742fa1bd6ec8605cabcd546c7bb485ddb74a98ac5e57a4cf6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 08:34:55 np0005465987 podman[280012]: 2025-10-02 12:34:55.457349427 +0000 UTC m=+0.173092607 container start 0afd9bc28264f742fa1bd6ec8605cabcd546c7bb485ddb74a98ac5e57a4cf6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 08:34:55 np0005465987 podman[280012]: 2025-10-02 12:34:55.460689779 +0000 UTC m=+0.176433009 container attach 0afd9bc28264f742fa1bd6ec8605cabcd546c7bb485ddb74a98ac5e57a4cf6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:34:55 np0005465987 nova_compute[230713]: 2025-10-02 12:34:55.570 2 DEBUG nova.network.neutron [-] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:55 np0005465987 nova_compute[230713]: 2025-10-02 12:34:55.603 2 INFO nova.compute.manager [-] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Took 0.68 seconds to deallocate network for instance.#033[00m
Oct  2 08:34:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:34:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:55.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:34:55 np0005465987 nova_compute[230713]: 2025-10-02 12:34:55.837 2 DEBUG oslo_concurrency.lockutils [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:55 np0005465987 nova_compute[230713]: 2025-10-02 12:34:55.838 2 DEBUG oslo_concurrency.lockutils [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:55 np0005465987 nova_compute[230713]: 2025-10-02 12:34:55.950 2 DEBUG oslo_concurrency.processutils [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:56 np0005465987 nova_compute[230713]: 2025-10-02 12:34:56.054 2 DEBUG nova.compute.manager [req-3b172a8f-95c8-43c7-8aa7-f2ff379cccfe req-2fb6152c-8fc4-43f2-a521-f257ee556e2d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Received event network-vif-deleted-4c98a77a-e20a-42e7-9e16-76a3d7c48e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:56 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:34:56 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:34:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2013956393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:56 np0005465987 nova_compute[230713]: 2025-10-02 12:34:56.492 2 DEBUG oslo_concurrency.processutils [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:56 np0005465987 nova_compute[230713]: 2025-10-02 12:34:56.498 2 DEBUG nova.compute.provider_tree [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:34:56 np0005465987 nova_compute[230713]: 2025-10-02 12:34:56.645 2 DEBUG nova.scheduler.client.report [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]: [
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:    {
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:        "available": false,
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:        "ceph_device": false,
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:        "lsm_data": {},
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:        "lvs": [],
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:        "path": "/dev/sr0",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:        "rejected_reasons": [
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "Insufficient space (<5GB)",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "Has a FileSystem"
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:        ],
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:        "sys_api": {
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "actuators": null,
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "device_nodes": "sr0",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "devname": "sr0",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "human_readable_size": "482.00 KB",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "id_bus": "ata",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "model": "QEMU DVD-ROM",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "nr_requests": "2",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "parent": "/dev/sr0",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "partitions": {},
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "path": "/dev/sr0",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "removable": "1",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "rev": "2.5+",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "ro": "0",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "rotational": "0",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "sas_address": "",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "sas_device_handle": "",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "scheduler_mode": "mq-deadline",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "sectors": 0,
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "sectorsize": "2048",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "size": 493568.0,
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "support_discard": "2048",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "type": "disk",
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:            "vendor": "QEMU"
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:        }
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]:    }
Oct  2 08:34:56 np0005465987 elegant_mendel[280028]: ]
Oct  2 08:34:56 np0005465987 systemd[1]: libpod-0afd9bc28264f742fa1bd6ec8605cabcd546c7bb485ddb74a98ac5e57a4cf6ae.scope: Deactivated successfully.
Oct  2 08:34:56 np0005465987 systemd[1]: libpod-0afd9bc28264f742fa1bd6ec8605cabcd546c7bb485ddb74a98ac5e57a4cf6ae.scope: Consumed 1.280s CPU time.
Oct  2 08:34:56 np0005465987 podman[281452]: 2025-10-02 12:34:56.811702408 +0000 UTC m=+0.043412806 container died 0afd9bc28264f742fa1bd6ec8605cabcd546c7bb485ddb74a98ac5e57a4cf6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 08:34:56 np0005465987 systemd[1]: var-lib-containers-storage-overlay-8fa04e7aebec81ad4addf82f8ae710d191fe64e5c46a8c1d55e28906322c0119-merged.mount: Deactivated successfully.
Oct  2 08:34:56 np0005465987 nova_compute[230713]: 2025-10-02 12:34:56.865 2 DEBUG oslo_concurrency.lockutils [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:56 np0005465987 podman[281452]: 2025-10-02 12:34:56.899503186 +0000 UTC m=+0.131213514 container remove 0afd9bc28264f742fa1bd6ec8605cabcd546c7bb485ddb74a98ac5e57a4cf6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 08:34:56 np0005465987 systemd[1]: libpod-conmon-0afd9bc28264f742fa1bd6ec8605cabcd546c7bb485ddb74a98ac5e57a4cf6ae.scope: Deactivated successfully.
Oct  2 08:34:56 np0005465987 nova_compute[230713]: 2025-10-02 12:34:56.940 2 INFO nova.scheduler.client.report [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Deleted allocations for instance cb3a492d-7de2-4412-934f-72c7c2e43d7a#033[00m
Oct  2 08:34:56 np0005465987 nova_compute[230713]: 2025-10-02 12:34:56.988 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:57.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:57 np0005465987 nova_compute[230713]: 2025-10-02 12:34:57.317 2 DEBUG oslo_concurrency.lockutils [None req-41c89288-b127-4999-bd14-12ddf61ed577 f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "cb3a492d-7de2-4412-934f-72c7c2e43d7a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:57.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:57 np0005465987 nova_compute[230713]: 2025-10-02 12:34:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:57 np0005465987 nova_compute[230713]: 2025-10-02 12:34:57.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:34:57 np0005465987 nova_compute[230713]: 2025-10-02 12:34:57.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:34:58 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:34:58 np0005465987 nova_compute[230713]: 2025-10-02 12:34:58.190 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:34:58 np0005465987 nova_compute[230713]: 2025-10-02 12:34:58.191 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:34:58 np0005465987 nova_compute[230713]: 2025-10-02 12:34:58.191 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:34:58 np0005465987 nova_compute[230713]: 2025-10-02 12:34:58.191 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:58 np0005465987 nova_compute[230713]: 2025-10-02 12:34:58.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:59.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e329 e329: 3 total, 3 up, 3 in
Oct  2 08:34:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:34:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:34:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:34:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:34:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:59.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:00 np0005465987 nova_compute[230713]: 2025-10-02 12:35:00.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:35:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:35:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:35:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:01.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:01 np0005465987 nova_compute[230713]: 2025-10-02 12:35:01.280 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Updating instance_info_cache with network_info: [{"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:01 np0005465987 nova_compute[230713]: 2025-10-02 12:35:01.553 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-6ec65b83-acd5-4790-8df0-0dfa903d2f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:35:01 np0005465987 nova_compute[230713]: 2025-10-02 12:35:01.554 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:35:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:01.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:03.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:03 np0005465987 nova_compute[230713]: 2025-10-02 12:35:03.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:03.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:04 np0005465987 nova_compute[230713]: 2025-10-02 12:35:04.372 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:04 np0005465987 nova_compute[230713]: 2025-10-02 12:35:04.372 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:04 np0005465987 nova_compute[230713]: 2025-10-02 12:35:04.452 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:35:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:04 np0005465987 nova_compute[230713]: 2025-10-02 12:35:04.665 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:04 np0005465987 nova_compute[230713]: 2025-10-02 12:35:04.665 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:04 np0005465987 nova_compute[230713]: 2025-10-02 12:35:04.671 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:35:04 np0005465987 nova_compute[230713]: 2025-10-02 12:35:04.672 2 INFO nova.compute.claims [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:35:04 np0005465987 nova_compute[230713]: 2025-10-02 12:35:04.936 2 DEBUG nova.scheduler.client.report [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:35:04 np0005465987 nova_compute[230713]: 2025-10-02 12:35:04.954 2 DEBUG nova.scheduler.client.report [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:35:04 np0005465987 nova_compute[230713]: 2025-10-02 12:35:04.955 2 DEBUG nova.compute.provider_tree [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:35:04 np0005465987 nova_compute[230713]: 2025-10-02 12:35:04.983 2 DEBUG nova.scheduler.client.report [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:35:05 np0005465987 nova_compute[230713]: 2025-10-02 12:35:05.003 2 DEBUG nova.scheduler.client.report [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:35:05 np0005465987 nova_compute[230713]: 2025-10-02 12:35:05.082 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:05 np0005465987 nova_compute[230713]: 2025-10-02 12:35:05.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:05.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2464211780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:05 np0005465987 nova_compute[230713]: 2025-10-02 12:35:05.531 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:05 np0005465987 nova_compute[230713]: 2025-10-02 12:35:05.537 2 DEBUG nova.compute.provider_tree [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:35:05 np0005465987 nova_compute[230713]: 2025-10-02 12:35:05.547 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:05 np0005465987 nova_compute[230713]: 2025-10-02 12:35:05.567 2 DEBUG nova.scheduler.client.report [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:35:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:35:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:05.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:35:05 np0005465987 nova_compute[230713]: 2025-10-02 12:35:05.723 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:05 np0005465987 nova_compute[230713]: 2025-10-02 12:35:05.724 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:35:05 np0005465987 nova_compute[230713]: 2025-10-02 12:35:05.866 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:35:05 np0005465987 nova_compute[230713]: 2025-10-02 12:35:05.868 2 DEBUG nova.network.neutron [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:35:05 np0005465987 nova_compute[230713]: 2025-10-02 12:35:05.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.052 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.053 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.053 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.053 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.054 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.094 2 INFO nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.150 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.361 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.363 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.364 2 INFO nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Creating image(s)#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.405 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.444 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.478 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.483 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2365189100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.521 2 DEBUG nova.policy [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f02a0ac23d9e44d5a6205e853818fa50', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c3b4ed8f5ff54d4cb9f232e285155ca0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.528 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.566 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.566 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.567 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.567 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.594 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.598 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.828 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.829 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.834 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:35:06 np0005465987 nova_compute[230713]: 2025-10-02 12:35:06.835 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.046 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.048 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4023MB free_disk=20.74102783203125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.048 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.049 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:07.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.319 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 6ec65b83-acd5-4790-8df0-0dfa903d2f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.320 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 58c789e4-1ba5-4001-913d-9aa19bd9c59f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.321 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance c9cc27fa-48be-4f4e-9573-40e15b1b8fad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.321 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.322 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.409 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:35:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:07.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.836 2 DEBUG nova.network.neutron [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Successfully created port: 6ef549b3-677c-4a0e-b5ad-dbd433f77137 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:35:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1937057779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.901 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.906 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:35:07 np0005465987 nova_compute[230713]: 2025-10-02 12:35:07.991 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:35:08 np0005465987 nova_compute[230713]: 2025-10-02 12:35:08.132 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:35:08 np0005465987 nova_compute[230713]: 2025-10-02 12:35:08.133 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:08 np0005465987 nova_compute[230713]: 2025-10-02 12:35:08.554 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408493.5532942, cb3a492d-7de2-4412-934f-72c7c2e43d7a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:08 np0005465987 nova_compute[230713]: 2025-10-02 12:35:08.554 2 INFO nova.compute.manager [-] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:35:08 np0005465987 nova_compute[230713]: 2025-10-02 12:35:08.590 2 DEBUG nova.compute.manager [None req-24451885-ae14-49b4-b333-3d0046beecfa - - - - - -] [instance: cb3a492d-7de2-4412-934f-72c7c2e43d7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:08 np0005465987 nova_compute[230713]: 2025-10-02 12:35:08.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:09 np0005465987 nova_compute[230713]: 2025-10-02 12:35:09.133 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:09 np0005465987 nova_compute[230713]: 2025-10-02 12:35:09.134 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:09 np0005465987 nova_compute[230713]: 2025-10-02 12:35:09.134 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:09.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:09 np0005465987 nova_compute[230713]: 2025-10-02 12:35:09.373 2 DEBUG nova.network.neutron [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Successfully updated port: 6ef549b3-677c-4a0e-b5ad-dbd433f77137 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:35:09 np0005465987 nova_compute[230713]: 2025-10-02 12:35:09.408 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "refresh_cache-c9cc27fa-48be-4f4e-9573-40e15b1b8fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:35:09 np0005465987 nova_compute[230713]: 2025-10-02 12:35:09.408 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquired lock "refresh_cache-c9cc27fa-48be-4f4e-9573-40e15b1b8fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:35:09 np0005465987 nova_compute[230713]: 2025-10-02 12:35:09.408 2 DEBUG nova.network.neutron [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:35:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:09.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:09 np0005465987 nova_compute[230713]: 2025-10-02 12:35:09.717 2 DEBUG nova.network.neutron [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:35:09 np0005465987 nova_compute[230713]: 2025-10-02 12:35:09.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:10 np0005465987 nova_compute[230713]: 2025-10-02 12:35:10.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:10 np0005465987 nova_compute[230713]: 2025-10-02 12:35:10.499 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.900s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:10 np0005465987 nova_compute[230713]: 2025-10-02 12:35:10.603 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] resizing rbd image c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:35:10 np0005465987 nova_compute[230713]: 2025-10-02 12:35:10.745 2 DEBUG nova.objects.instance [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lazy-loading 'migration_context' on Instance uuid c9cc27fa-48be-4f4e-9573-40e15b1b8fad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:10 np0005465987 nova_compute[230713]: 2025-10-02 12:35:10.782 2 DEBUG nova.compute.manager [req-0cd837bb-ac20-4f58-a1f6-1cad78c84d4c req-bd6e92d6-4a8e-4faf-a337-95ea56b1040b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Received event network-changed-6ef549b3-677c-4a0e-b5ad-dbd433f77137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:10 np0005465987 nova_compute[230713]: 2025-10-02 12:35:10.782 2 DEBUG nova.compute.manager [req-0cd837bb-ac20-4f58-a1f6-1cad78c84d4c req-bd6e92d6-4a8e-4faf-a337-95ea56b1040b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Refreshing instance network info cache due to event network-changed-6ef549b3-677c-4a0e-b5ad-dbd433f77137. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:35:10 np0005465987 nova_compute[230713]: 2025-10-02 12:35:10.782 2 DEBUG oslo_concurrency.lockutils [req-0cd837bb-ac20-4f58-a1f6-1cad78c84d4c req-bd6e92d6-4a8e-4faf-a337-95ea56b1040b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-c9cc27fa-48be-4f4e-9573-40e15b1b8fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:35:10 np0005465987 nova_compute[230713]: 2025-10-02 12:35:10.785 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:35:10 np0005465987 nova_compute[230713]: 2025-10-02 12:35:10.785 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Ensure instance console log exists: /var/lib/nova/instances/c9cc27fa-48be-4f4e-9573-40e15b1b8fad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:35:10 np0005465987 nova_compute[230713]: 2025-10-02 12:35:10.786 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:10 np0005465987 nova_compute[230713]: 2025-10-02 12:35:10.786 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:10 np0005465987 nova_compute[230713]: 2025-10-02 12:35:10.786 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:35:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:35:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:35:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:11.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:35:12 np0005465987 nova_compute[230713]: 2025-10-02 12:35:12.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:12 np0005465987 nova_compute[230713]: 2025-10-02 12:35:12.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:35:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:13.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.340 2 DEBUG nova.network.neutron [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Updating instance_info_cache with network_info: [{"id": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "address": "fa:16:3e:96:8a:9f", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ef549b3-67", "ovs_interfaceid": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.492 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Releasing lock "refresh_cache-c9cc27fa-48be-4f4e-9573-40e15b1b8fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.492 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Instance network_info: |[{"id": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "address": "fa:16:3e:96:8a:9f", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ef549b3-67", "ovs_interfaceid": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.493 2 DEBUG oslo_concurrency.lockutils [req-0cd837bb-ac20-4f58-a1f6-1cad78c84d4c req-bd6e92d6-4a8e-4faf-a337-95ea56b1040b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-c9cc27fa-48be-4f4e-9573-40e15b1b8fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.493 2 DEBUG nova.network.neutron [req-0cd837bb-ac20-4f58-a1f6-1cad78c84d4c req-bd6e92d6-4a8e-4faf-a337-95ea56b1040b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Refreshing network info cache for port 6ef549b3-677c-4a0e-b5ad-dbd433f77137 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.498 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Start _get_guest_xml network_info=[{"id": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "address": "fa:16:3e:96:8a:9f", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ef549b3-67", "ovs_interfaceid": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.504 2 WARNING nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.510 2 DEBUG nova.virt.libvirt.host [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.511 2 DEBUG nova.virt.libvirt.host [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.517 2 DEBUG nova.virt.libvirt.host [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.518 2 DEBUG nova.virt.libvirt.host [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.520 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.520 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.521 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.521 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.522 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.522 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.523 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.523 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.524 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.524 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.525 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.525 2 DEBUG nova.virt.hardware [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.530 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:13.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.966 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.995 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:13 np0005465987 nova_compute[230713]: 2025-10-02 12:35:13.999 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:35:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1491830790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.442 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.445 2 DEBUG nova.virt.libvirt.vif [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:35:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-27244558',display_name='tempest-MultipleCreateTestJSON-server-27244558-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-27244558-2',id=138,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3b4ed8f5ff54d4cb9f232e285155ca0',ramdisk_id='',reservation_id='r-hd86sqju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-822408607',owner_user_name='tempest-MultipleCreateTestJSON
-822408607-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:06Z,user_data=None,user_id='f02a0ac23d9e44d5a6205e853818fa50',uuid=c9cc27fa-48be-4f4e-9573-40e15b1b8fad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "address": "fa:16:3e:96:8a:9f", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ef549b3-67", "ovs_interfaceid": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.446 2 DEBUG nova.network.os_vif_util [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converting VIF {"id": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "address": "fa:16:3e:96:8a:9f", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ef549b3-67", "ovs_interfaceid": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.448 2 DEBUG nova.network.os_vif_util [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:8a:9f,bridge_name='br-int',has_traffic_filtering=True,id=6ef549b3-677c-4a0e-b5ad-dbd433f77137,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ef549b3-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.450 2 DEBUG nova.objects.instance [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lazy-loading 'pci_devices' on Instance uuid c9cc27fa-48be-4f4e-9573-40e15b1b8fad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.528 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  <uuid>c9cc27fa-48be-4f4e-9573-40e15b1b8fad</uuid>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  <name>instance-0000008a</name>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <nova:name>tempest-MultipleCreateTestJSON-server-27244558-2</nova:name>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:35:13</nova:creationTime>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <nova:user uuid="f02a0ac23d9e44d5a6205e853818fa50">tempest-MultipleCreateTestJSON-822408607-project-member</nova:user>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <nova:project uuid="c3b4ed8f5ff54d4cb9f232e285155ca0">tempest-MultipleCreateTestJSON-822408607</nova:project>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <nova:port uuid="6ef549b3-677c-4a0e-b5ad-dbd433f77137">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <entry name="serial">c9cc27fa-48be-4f4e-9573-40e15b1b8fad</entry>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <entry name="uuid">c9cc27fa-48be-4f4e-9573-40e15b1b8fad</entry>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk.config">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:96:8a:9f"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <target dev="tap6ef549b3-67"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/c9cc27fa-48be-4f4e-9573-40e15b1b8fad/console.log" append="off"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:35:14 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:35:14 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:35:14 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:35:14 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.531 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Preparing to wait for external event network-vif-plugged-6ef549b3-677c-4a0e-b5ad-dbd433f77137 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.532 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.532 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.533 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.534 2 DEBUG nova.virt.libvirt.vif [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:35:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-27244558',display_name='tempest-MultipleCreateTestJSON-server-27244558-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-27244558-2',id=138,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3b4ed8f5ff54d4cb9f232e285155ca0',ramdisk_id='',reservation_id='r-hd86sqju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-822408607',owner_user_name='tempest-MultipleCreateTestJSON-822408607-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:06Z,user_data=None,user_id='f02a0ac23d9e44d5a6205e853818fa50',uuid=c9cc27fa-48be-4f4e-9573-40e15b1b8fad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "address": "fa:16:3e:96:8a:9f", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ef549b3-67", "ovs_interfaceid": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.535 2 DEBUG nova.network.os_vif_util [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converting VIF {"id": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "address": "fa:16:3e:96:8a:9f", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ef549b3-67", "ovs_interfaceid": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.536 2 DEBUG nova.network.os_vif_util [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:8a:9f,bridge_name='br-int',has_traffic_filtering=True,id=6ef549b3-677c-4a0e-b5ad-dbd433f77137,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ef549b3-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.537 2 DEBUG os_vif [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:8a:9f,bridge_name='br-int',has_traffic_filtering=True,id=6ef549b3-677c-4a0e-b5ad-dbd433f77137,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ef549b3-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.538 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.539 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ef549b3-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.545 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6ef549b3-67, col_values=(('external_ids', {'iface-id': '6ef549b3-677c-4a0e-b5ad-dbd433f77137', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:8a:9f', 'vm-uuid': 'c9cc27fa-48be-4f4e-9573-40e15b1b8fad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:14 np0005465987 NetworkManager[44910]: <info>  [1759408514.5482] manager: (tap6ef549b3-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.556 2 INFO os_vif [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:8a:9f,bridge_name='br-int',has_traffic_filtering=True,id=6ef549b3-677c-4a0e-b5ad-dbd433f77137,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ef549b3-67')#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.918 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.919 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.919 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] No VIF found with MAC fa:16:3e:96:8a:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.920 2 INFO nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Using config drive#033[00m
Oct  2 08:35:14 np0005465987 nova_compute[230713]: 2025-10-02 12:35:14.950 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:15 np0005465987 nova_compute[230713]: 2025-10-02 12:35:15.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:35:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:15.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:35:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:15.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:16 np0005465987 nova_compute[230713]: 2025-10-02 12:35:16.349 2 INFO nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Creating config drive at /var/lib/nova/instances/c9cc27fa-48be-4f4e-9573-40e15b1b8fad/disk.config#033[00m
Oct  2 08:35:16 np0005465987 nova_compute[230713]: 2025-10-02 12:35:16.353 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c9cc27fa-48be-4f4e-9573-40e15b1b8fad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsdlvwzwl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:16 np0005465987 nova_compute[230713]: 2025-10-02 12:35:16.484 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c9cc27fa-48be-4f4e-9573-40e15b1b8fad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsdlvwzwl" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:16 np0005465987 nova_compute[230713]: 2025-10-02 12:35:16.512 2 DEBUG nova.storage.rbd_utils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] rbd image c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:16 np0005465987 nova_compute[230713]: 2025-10-02 12:35:16.516 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c9cc27fa-48be-4f4e-9573-40e15b1b8fad/disk.config c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:17 np0005465987 nova_compute[230713]: 2025-10-02 12:35:17.157 2 DEBUG nova.network.neutron [req-0cd837bb-ac20-4f58-a1f6-1cad78c84d4c req-bd6e92d6-4a8e-4faf-a337-95ea56b1040b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Updated VIF entry in instance network info cache for port 6ef549b3-677c-4a0e-b5ad-dbd433f77137. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:35:17 np0005465987 nova_compute[230713]: 2025-10-02 12:35:17.157 2 DEBUG nova.network.neutron [req-0cd837bb-ac20-4f58-a1f6-1cad78c84d4c req-bd6e92d6-4a8e-4faf-a337-95ea56b1040b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Updating instance_info_cache with network_info: [{"id": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "address": "fa:16:3e:96:8a:9f", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ef549b3-67", "ovs_interfaceid": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:17.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:17 np0005465987 nova_compute[230713]: 2025-10-02 12:35:17.522 2 DEBUG oslo_concurrency.lockutils [req-0cd837bb-ac20-4f58-a1f6-1cad78c84d4c req-bd6e92d6-4a8e-4faf-a337-95ea56b1040b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-c9cc27fa-48be-4f4e-9573-40e15b1b8fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:35:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:17.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:18 np0005465987 podman[281822]: 2025-10-02 12:35:18.852633742 +0000 UTC m=+0.062364048 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:35:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:19.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:19 np0005465987 nova_compute[230713]: 2025-10-02 12:35:19.404 2 DEBUG oslo_concurrency.processutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c9cc27fa-48be-4f4e-9573-40e15b1b8fad/disk.config c9cc27fa-48be-4f4e-9573-40e15b1b8fad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.888s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:19 np0005465987 nova_compute[230713]: 2025-10-02 12:35:19.404 2 INFO nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Deleting local config drive /var/lib/nova/instances/c9cc27fa-48be-4f4e-9573-40e15b1b8fad/disk.config because it was imported into RBD.#033[00m
Oct  2 08:35:19 np0005465987 kernel: tap6ef549b3-67: entered promiscuous mode
Oct  2 08:35:19 np0005465987 NetworkManager[44910]: <info>  [1759408519.4713] manager: (tap6ef549b3-67): new Tun device (/org/freedesktop/NetworkManager/Devices/247)
Oct  2 08:35:19 np0005465987 nova_compute[230713]: 2025-10-02 12:35:19.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:35:19Z|00521|binding|INFO|Claiming lport 6ef549b3-677c-4a0e-b5ad-dbd433f77137 for this chassis.
Oct  2 08:35:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:35:19Z|00522|binding|INFO|6ef549b3-677c-4a0e-b5ad-dbd433f77137: Claiming fa:16:3e:96:8a:9f 10.100.0.7
Oct  2 08:35:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:35:19Z|00523|binding|INFO|Setting lport 6ef549b3-677c-4a0e-b5ad-dbd433f77137 ovn-installed in OVS
Oct  2 08:35:19 np0005465987 nova_compute[230713]: 2025-10-02 12:35:19.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005465987 nova_compute[230713]: 2025-10-02 12:35:19.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005465987 systemd-machined[188335]: New machine qemu-61-instance-0000008a.
Oct  2 08:35:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:19 np0005465987 systemd[1]: Started Virtual Machine qemu-61-instance-0000008a.
Oct  2 08:35:19 np0005465987 nova_compute[230713]: 2025-10-02 12:35:19.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005465987 systemd-udevd[281855]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:35:19 np0005465987 NetworkManager[44910]: <info>  [1759408519.5752] device (tap6ef549b3-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:35:19 np0005465987 NetworkManager[44910]: <info>  [1759408519.5766] device (tap6ef549b3-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:35:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:35:19Z|00524|binding|INFO|Setting lport 6ef549b3-677c-4a0e-b5ad-dbd433f77137 up in Southbound
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.594 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:8a:9f 10.100.0.7'], port_security=['fa:16:3e:96:8a:9f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c9cc27fa-48be-4f4e-9573-40e15b1b8fad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3b4ed8f5ff54d4cb9f232e285155ca0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fe054fa7-d7e9-483c-baf2-6f0a11c4cd59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3fe18a99-aad3-484c-abd2-6e2374240160, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=6ef549b3-677c-4a0e-b5ad-dbd433f77137) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.597 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 6ef549b3-677c-4a0e-b5ad-dbd433f77137 in datapath 71fb4dcc-12bb-458e-9241-e19e223ca96d bound to our chassis#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.599 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71fb4dcc-12bb-458e-9241-e19e223ca96d#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.612 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a50504b3-0591-438a-9ca3-6dc992210baa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.613 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap71fb4dcc-11 in ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.616 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap71fb4dcc-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.617 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4c21a3-44c9-481e-92e6-e0874573a291]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.618 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[54862fde-1148-4a9e-b791-3dbce1a4576e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.635 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[411ff94c-14a1-4216-a64c-e641f935cc59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:35:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:19.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.658 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e12e3ef8-d4f7-4012-a254-2e5d38016791]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.691 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[f6eb8b01-5bd6-469c-9bce-b8fbf2613ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 NetworkManager[44910]: <info>  [1759408519.6992] manager: (tap71fb4dcc-10): new Veth device (/org/freedesktop/NetworkManager/Devices/248)
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.697 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c2ab95-ca41-42d9-bf68-5f2417bfe895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 systemd-udevd[281857]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.743 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5f02dcde-aba8-4f4e-91a3-503024760f7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.746 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5085c108-d091-4057-9c58-49aa24287628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 NetworkManager[44910]: <info>  [1759408519.7725] device (tap71fb4dcc-10): carrier: link connected
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.778 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa669fe-d5d0-4937-baea-28ae8abb5cc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.795 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c0777cda-57f3-4b85-a1d9-b403f0767299]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71fb4dcc-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:02:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658834, 'reachable_time': 26508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281890, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.811 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b72fe98d-5898-40c9-aaa4-e8b46071ce04]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:264'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658834, 'tstamp': 658834}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281891, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.831 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ae434e19-662b-49c7-802d-6decff8d641f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71fb4dcc-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:02:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658834, 'reachable_time': 26508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281892, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.862 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb163eb-c4b6-47d6-bc3c-e9d35413c97e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.917 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7a24e417-b992-49b7-9190-939819153402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.918 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71fb4dcc-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.919 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.919 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71fb4dcc-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:19 np0005465987 nova_compute[230713]: 2025-10-02 12:35:19.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005465987 kernel: tap71fb4dcc-10: entered promiscuous mode
Oct  2 08:35:19 np0005465987 NetworkManager[44910]: <info>  [1759408519.9232] manager: (tap71fb4dcc-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Oct  2 08:35:19 np0005465987 nova_compute[230713]: 2025-10-02 12:35:19.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.935 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71fb4dcc-10, col_values=(('external_ids', {'iface-id': 'c32f3592-2fbc-4d81-bf91-681fef1a2dd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:19 np0005465987 nova_compute[230713]: 2025-10-02 12:35:19.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:35:19Z|00525|binding|INFO|Releasing lport c32f3592-2fbc-4d81-bf91-681fef1a2dd1 from this chassis (sb_readonly=0)
Oct  2 08:35:19 np0005465987 nova_compute[230713]: 2025-10-02 12:35:19.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.955 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/71fb4dcc-12bb-458e-9241-e19e223ca96d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/71fb4dcc-12bb-458e-9241-e19e223ca96d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.956 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd1910f-0072-4bf5-aef7-930b265a192c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.957 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-71fb4dcc-12bb-458e-9241-e19e223ca96d
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/71fb4dcc-12bb-458e-9241-e19e223ca96d.pid.haproxy
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 71fb4dcc-12bb-458e-9241-e19e223ca96d
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:35:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:19.957 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'env', 'PROCESS_TAG=haproxy-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/71fb4dcc-12bb-458e-9241-e19e223ca96d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:35:20 np0005465987 nova_compute[230713]: 2025-10-02 12:35:20.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:20 np0005465987 podman[281974]: 2025-10-02 12:35:20.317507616 +0000 UTC m=+0.052107356 container create b56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:35:20 np0005465987 systemd[1]: Started libpod-conmon-b56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618.scope.
Oct  2 08:35:20 np0005465987 podman[281974]: 2025-10-02 12:35:20.287722336 +0000 UTC m=+0.022322096 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:35:20 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:35:20 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c38abea662251b2068c6b261fecc6a65a773b89efa80fa026bd3edb752163d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:35:20 np0005465987 podman[281974]: 2025-10-02 12:35:20.406188038 +0000 UTC m=+0.140787798 container init b56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:35:20 np0005465987 podman[281974]: 2025-10-02 12:35:20.411596517 +0000 UTC m=+0.146196257 container start b56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:35:20 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[281997]: [NOTICE]   (282012) : New worker (282032) forked
Oct  2 08:35:20 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[281997]: [NOTICE]   (282012) : Loading success.
Oct  2 08:35:20 np0005465987 nova_compute[230713]: 2025-10-02 12:35:20.669 2 DEBUG nova.compute.manager [req-3ad9bdb2-86b4-4ff3-9a0b-4b57af5c5d70 req-534da64e-015b-41b8-8187-ef2b612ddfa5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Received event network-vif-plugged-6ef549b3-677c-4a0e-b5ad-dbd433f77137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:20 np0005465987 nova_compute[230713]: 2025-10-02 12:35:20.669 2 DEBUG oslo_concurrency.lockutils [req-3ad9bdb2-86b4-4ff3-9a0b-4b57af5c5d70 req-534da64e-015b-41b8-8187-ef2b612ddfa5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:20 np0005465987 nova_compute[230713]: 2025-10-02 12:35:20.669 2 DEBUG oslo_concurrency.lockutils [req-3ad9bdb2-86b4-4ff3-9a0b-4b57af5c5d70 req-534da64e-015b-41b8-8187-ef2b612ddfa5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:20 np0005465987 nova_compute[230713]: 2025-10-02 12:35:20.670 2 DEBUG oslo_concurrency.lockutils [req-3ad9bdb2-86b4-4ff3-9a0b-4b57af5c5d70 req-534da64e-015b-41b8-8187-ef2b612ddfa5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:20 np0005465987 nova_compute[230713]: 2025-10-02 12:35:20.670 2 DEBUG nova.compute.manager [req-3ad9bdb2-86b4-4ff3-9a0b-4b57af5c5d70 req-534da64e-015b-41b8-8187-ef2b612ddfa5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Processing event network-vif-plugged-6ef549b3-677c-4a0e-b5ad-dbd433f77137 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:35:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:35:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:35:20 np0005465987 nova_compute[230713]: 2025-10-02 12:35:20.989 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408520.9891756, c9cc27fa-48be-4f4e-9573-40e15b1b8fad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:20 np0005465987 nova_compute[230713]: 2025-10-02 12:35:20.990 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] VM Started (Lifecycle Event)#033[00m
Oct  2 08:35:20 np0005465987 nova_compute[230713]: 2025-10-02 12:35:20.992 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:35:20 np0005465987 nova_compute[230713]: 2025-10-02 12:35:20.995 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:20.998 2 INFO nova.virt.libvirt.driver [-] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Instance spawned successfully.#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:20.999 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.151 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.154 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:35:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:35:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:21.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.184 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.184 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.185 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.185 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.185 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.186 2 DEBUG nova.virt.libvirt.driver [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.292 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.292 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408520.9894607, c9cc27fa-48be-4f4e-9573-40e15b1b8fad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.293 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.407 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.410 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408520.9951253, c9cc27fa-48be-4f4e-9573-40e15b1b8fad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.410 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.486 2 INFO nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Took 15.12 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.487 2 DEBUG nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.554 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.556 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:35:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:21.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:21 np0005465987 nova_compute[230713]: 2025-10-02 12:35:21.692 2 INFO nova.compute.manager [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Took 17.05 seconds to build instance.#033[00m
Oct  2 08:35:22 np0005465987 nova_compute[230713]: 2025-10-02 12:35:22.665 2 DEBUG oslo_concurrency.lockutils [None req-3d24b700-7f6c-4458-9c75-dfce883fbcdb f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:23.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:23 np0005465987 nova_compute[230713]: 2025-10-02 12:35:23.456 2 DEBUG nova.compute.manager [req-41e95fd0-b9ad-4df9-8918-04718840390c req-1f1596c7-e461-45bd-8e76-502d8c8b5323 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Received event network-vif-plugged-6ef549b3-677c-4a0e-b5ad-dbd433f77137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:23 np0005465987 nova_compute[230713]: 2025-10-02 12:35:23.457 2 DEBUG oslo_concurrency.lockutils [req-41e95fd0-b9ad-4df9-8918-04718840390c req-1f1596c7-e461-45bd-8e76-502d8c8b5323 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:23 np0005465987 nova_compute[230713]: 2025-10-02 12:35:23.457 2 DEBUG oslo_concurrency.lockutils [req-41e95fd0-b9ad-4df9-8918-04718840390c req-1f1596c7-e461-45bd-8e76-502d8c8b5323 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:23 np0005465987 nova_compute[230713]: 2025-10-02 12:35:23.457 2 DEBUG oslo_concurrency.lockutils [req-41e95fd0-b9ad-4df9-8918-04718840390c req-1f1596c7-e461-45bd-8e76-502d8c8b5323 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:23 np0005465987 nova_compute[230713]: 2025-10-02 12:35:23.457 2 DEBUG nova.compute.manager [req-41e95fd0-b9ad-4df9-8918-04718840390c req-1f1596c7-e461-45bd-8e76-502d8c8b5323 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] No waiting events found dispatching network-vif-plugged-6ef549b3-677c-4a0e-b5ad-dbd433f77137 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:23 np0005465987 nova_compute[230713]: 2025-10-02 12:35:23.457 2 WARNING nova.compute.manager [req-41e95fd0-b9ad-4df9-8918-04718840390c req-1f1596c7-e461-45bd-8e76-502d8c8b5323 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Received unexpected event network-vif-plugged-6ef549b3-677c-4a0e-b5ad-dbd433f77137 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:35:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:35:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:23.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:35:23 np0005465987 podman[282048]: 2025-10-02 12:35:23.857472477 +0000 UTC m=+0.067522450 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:35:23 np0005465987 podman[282049]: 2025-10-02 12:35:23.869330994 +0000 UTC m=+0.072098567 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd)
Oct  2 08:35:23 np0005465987 podman[282047]: 2025-10-02 12:35:23.898837507 +0000 UTC m=+0.111024749 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 08:35:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:24 np0005465987 nova_compute[230713]: 2025-10-02 12:35:24.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:25 np0005465987 nova_compute[230713]: 2025-10-02 12:35:25.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:25.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:25.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:26 np0005465987 nova_compute[230713]: 2025-10-02 12:35:26.984 2 DEBUG oslo_concurrency.lockutils [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:26 np0005465987 nova_compute[230713]: 2025-10-02 12:35:26.985 2 DEBUG oslo_concurrency.lockutils [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:26 np0005465987 nova_compute[230713]: 2025-10-02 12:35:26.991 2 DEBUG oslo_concurrency.lockutils [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:26 np0005465987 nova_compute[230713]: 2025-10-02 12:35:26.991 2 DEBUG oslo_concurrency.lockutils [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:26 np0005465987 nova_compute[230713]: 2025-10-02 12:35:26.992 2 DEBUG oslo_concurrency.lockutils [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:26 np0005465987 nova_compute[230713]: 2025-10-02 12:35:26.993 2 INFO nova.compute.manager [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Terminating instance#033[00m
Oct  2 08:35:26 np0005465987 nova_compute[230713]: 2025-10-02 12:35:26.994 2 DEBUG nova.compute.manager [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:35:27 np0005465987 kernel: tap6ef549b3-67 (unregistering): left promiscuous mode
Oct  2 08:35:27 np0005465987 NetworkManager[44910]: <info>  [1759408527.1746] device (tap6ef549b3-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:35:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:27.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:35:27Z|00526|binding|INFO|Releasing lport 6ef549b3-677c-4a0e-b5ad-dbd433f77137 from this chassis (sb_readonly=0)
Oct  2 08:35:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:35:27Z|00527|binding|INFO|Setting lport 6ef549b3-677c-4a0e-b5ad-dbd433f77137 down in Southbound
Oct  2 08:35:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:35:27Z|00528|binding|INFO|Removing iface tap6ef549b3-67 ovn-installed in OVS
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005465987 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Oct  2 08:35:27 np0005465987 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000008a.scope: Consumed 7.569s CPU time.
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.227 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:8a:9f 10.100.0.7'], port_security=['fa:16:3e:96:8a:9f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c9cc27fa-48be-4f4e-9573-40e15b1b8fad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3b4ed8f5ff54d4cb9f232e285155ca0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe054fa7-d7e9-483c-baf2-6f0a11c4cd59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3fe18a99-aad3-484c-abd2-6e2374240160, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=6ef549b3-677c-4a0e-b5ad-dbd433f77137) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.230 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 6ef549b3-677c-4a0e-b5ad-dbd433f77137 in datapath 71fb4dcc-12bb-458e-9241-e19e223ca96d unbound from our chassis#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.232 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71fb4dcc-12bb-458e-9241-e19e223ca96d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.234 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[af46ef12-2501-4bbf-a57a-3cb111025ff6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.234 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d namespace which is not needed anymore#033[00m
Oct  2 08:35:27 np0005465987 systemd-machined[188335]: Machine qemu-61-instance-0000008a terminated.
Oct  2 08:35:27 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[281997]: [NOTICE]   (282012) : haproxy version is 2.8.14-c23fe91
Oct  2 08:35:27 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[281997]: [NOTICE]   (282012) : path to executable is /usr/sbin/haproxy
Oct  2 08:35:27 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[281997]: [WARNING]  (282012) : Exiting Master process...
Oct  2 08:35:27 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[281997]: [WARNING]  (282012) : Exiting Master process...
Oct  2 08:35:27 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[281997]: [ALERT]    (282012) : Current worker (282032) exited with code 143 (Terminated)
Oct  2 08:35:27 np0005465987 neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d[281997]: [WARNING]  (282012) : All workers exited. Exiting... (0)
Oct  2 08:35:27 np0005465987 systemd[1]: libpod-b56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618.scope: Deactivated successfully.
Oct  2 08:35:27 np0005465987 podman[282135]: 2025-10-02 12:35:27.384334877 +0000 UTC m=+0.053092212 container died b56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:35:27 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618-userdata-shm.mount: Deactivated successfully.
Oct  2 08:35:27 np0005465987 systemd[1]: var-lib-containers-storage-overlay-34c38abea662251b2068c6b261fecc6a65a773b89efa80fa026bd3edb752163d-merged.mount: Deactivated successfully.
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005465987 podman[282135]: 2025-10-02 12:35:27.424328858 +0000 UTC m=+0.093086183 container cleanup b56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.431 2 INFO nova.virt.libvirt.driver [-] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Instance destroyed successfully.#033[00m
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.433 2 DEBUG nova.objects.instance [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lazy-loading 'resources' on Instance uuid c9cc27fa-48be-4f4e-9573-40e15b1b8fad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:27 np0005465987 systemd[1]: libpod-conmon-b56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618.scope: Deactivated successfully.
Oct  2 08:35:27 np0005465987 podman[282171]: 2025-10-02 12:35:27.505482133 +0000 UTC m=+0.051495849 container remove b56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.513 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[691bc926-62f7-43d6-936b-899bd7fab00c]: (4, ('Thu Oct  2 12:35:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d (b56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618)\nb56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618\nThu Oct  2 12:35:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d (b56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618)\nb56b54f34487dbe7adc42e1d19f563d5efd8e8ffb7fa5f3cdd8a2cff2606f618\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.515 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d8a0d8-0e6b-4874-8168-b223b8a8e2ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.516 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71fb4dcc-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005465987 kernel: tap71fb4dcc-10: left promiscuous mode
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.542 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0da3dc95-aa3c-4e59-b8c4-7658ea02e260]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.571 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3d7b10e0-cf4a-4153-96de-0b584cfe6513]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.573 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[366e84c5-ef8e-4e49-81c6-250b217e3b2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.592 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9e978b04-f6dc-4a57-b2f0-385899720b2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658826, 'reachable_time': 30191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282190, 'error': None, 'target': 'ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.595 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-71fb4dcc-12bb-458e-9241-e19e223ca96d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.595 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae899f9-5cef-47a2-881e-490e6e1aa63d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005465987 systemd[1]: run-netns-ovnmeta\x2d71fb4dcc\x2d12bb\x2d458e\x2d9241\x2de19e223ca96d.mount: Deactivated successfully.
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.659 2 DEBUG nova.virt.libvirt.vif [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-27244558',display_name='tempest-MultipleCreateTestJSON-server-27244558-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-27244558-2',id=138,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-02T12:35:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c3b4ed8f5ff54d4cb9f232e285155ca0',ramdisk_id='',reservation_id='r-hd86sqju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-822408607',owner_user_name='tempest-MultipleCreateTestJSON-822408607-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:35:21Z,user_data=None,user_id='f02a0ac23d9e44d5a6205e853818fa50',uuid=c9cc27fa-48be-4f4e-9573-40e15b1b8fad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "address": "fa:16:3e:96:8a:9f", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ef549b3-67", "ovs_interfaceid": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.660 2 DEBUG nova.network.os_vif_util [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converting VIF {"id": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "address": "fa:16:3e:96:8a:9f", "network": {"id": "71fb4dcc-12bb-458e-9241-e19e223ca96d", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1258573921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3b4ed8f5ff54d4cb9f232e285155ca0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ef549b3-67", "ovs_interfaceid": "6ef549b3-677c-4a0e-b5ad-dbd433f77137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.661 2 DEBUG nova.network.os_vif_util [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:8a:9f,bridge_name='br-int',has_traffic_filtering=True,id=6ef549b3-677c-4a0e-b5ad-dbd433f77137,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ef549b3-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.661 2 DEBUG os_vif [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:8a:9f,bridge_name='br-int',has_traffic_filtering=True,id=6ef549b3-677c-4a0e-b5ad-dbd433f77137,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ef549b3-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.663 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ef549b3-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:27.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:27 np0005465987 nova_compute[230713]: 2025-10-02 12:35:27.673 2 INFO os_vif [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:8a:9f,bridge_name='br-int',has_traffic_filtering=True,id=6ef549b3-677c-4a0e-b5ad-dbd433f77137,network=Network(71fb4dcc-12bb-458e-9241-e19e223ca96d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ef549b3-67')#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.960 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.961 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:27.962 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:28 np0005465987 nova_compute[230713]: 2025-10-02 12:35:28.819 2 DEBUG nova.compute.manager [req-75d60890-ba01-4a0e-b52f-907dc299e563 req-efbd037e-fcaf-4d3a-9a4c-1e2122727dfe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Received event network-vif-unplugged-6ef549b3-677c-4a0e-b5ad-dbd433f77137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:28 np0005465987 nova_compute[230713]: 2025-10-02 12:35:28.819 2 DEBUG oslo_concurrency.lockutils [req-75d60890-ba01-4a0e-b52f-907dc299e563 req-efbd037e-fcaf-4d3a-9a4c-1e2122727dfe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:28 np0005465987 nova_compute[230713]: 2025-10-02 12:35:28.820 2 DEBUG oslo_concurrency.lockutils [req-75d60890-ba01-4a0e-b52f-907dc299e563 req-efbd037e-fcaf-4d3a-9a4c-1e2122727dfe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:28 np0005465987 nova_compute[230713]: 2025-10-02 12:35:28.820 2 DEBUG oslo_concurrency.lockutils [req-75d60890-ba01-4a0e-b52f-907dc299e563 req-efbd037e-fcaf-4d3a-9a4c-1e2122727dfe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:28 np0005465987 nova_compute[230713]: 2025-10-02 12:35:28.820 2 DEBUG nova.compute.manager [req-75d60890-ba01-4a0e-b52f-907dc299e563 req-efbd037e-fcaf-4d3a-9a4c-1e2122727dfe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] No waiting events found dispatching network-vif-unplugged-6ef549b3-677c-4a0e-b5ad-dbd433f77137 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:28 np0005465987 nova_compute[230713]: 2025-10-02 12:35:28.820 2 DEBUG nova.compute.manager [req-75d60890-ba01-4a0e-b52f-907dc299e563 req-efbd037e-fcaf-4d3a-9a4c-1e2122727dfe d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Received event network-vif-unplugged-6ef549b3-677c-4a0e-b5ad-dbd433f77137 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:35:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:29.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:35:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:29.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:35:30 np0005465987 nova_compute[230713]: 2025-10-02 12:35:30.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:31 np0005465987 nova_compute[230713]: 2025-10-02 12:35:31.084 2 DEBUG nova.compute.manager [req-b789103f-6496-4aff-b430-d75372d9111d req-b0289b4e-78fa-45d4-90b5-118df3ae475d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Received event network-vif-plugged-6ef549b3-677c-4a0e-b5ad-dbd433f77137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:31 np0005465987 nova_compute[230713]: 2025-10-02 12:35:31.085 2 DEBUG oslo_concurrency.lockutils [req-b789103f-6496-4aff-b430-d75372d9111d req-b0289b4e-78fa-45d4-90b5-118df3ae475d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:31 np0005465987 nova_compute[230713]: 2025-10-02 12:35:31.085 2 DEBUG oslo_concurrency.lockutils [req-b789103f-6496-4aff-b430-d75372d9111d req-b0289b4e-78fa-45d4-90b5-118df3ae475d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:31 np0005465987 nova_compute[230713]: 2025-10-02 12:35:31.086 2 DEBUG oslo_concurrency.lockutils [req-b789103f-6496-4aff-b430-d75372d9111d req-b0289b4e-78fa-45d4-90b5-118df3ae475d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:31 np0005465987 nova_compute[230713]: 2025-10-02 12:35:31.086 2 DEBUG nova.compute.manager [req-b789103f-6496-4aff-b430-d75372d9111d req-b0289b4e-78fa-45d4-90b5-118df3ae475d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] No waiting events found dispatching network-vif-plugged-6ef549b3-677c-4a0e-b5ad-dbd433f77137 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:31 np0005465987 nova_compute[230713]: 2025-10-02 12:35:31.086 2 WARNING nova.compute.manager [req-b789103f-6496-4aff-b430-d75372d9111d req-b0289b4e-78fa-45d4-90b5-118df3ae475d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Received unexpected event network-vif-plugged-6ef549b3-677c-4a0e-b5ad-dbd433f77137 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:35:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:31.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:31.583 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:31 np0005465987 nova_compute[230713]: 2025-10-02 12:35:31.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:31.584 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:35:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:31.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:32 np0005465987 nova_compute[230713]: 2025-10-02 12:35:32.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:33.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:35:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:33.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:35:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:35 np0005465987 nova_compute[230713]: 2025-10-02 12:35:35.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:35.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:35.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:35 np0005465987 nova_compute[230713]: 2025-10-02 12:35:35.942 2 INFO nova.virt.libvirt.driver [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Deleting instance files /var/lib/nova/instances/c9cc27fa-48be-4f4e-9573-40e15b1b8fad_del#033[00m
Oct  2 08:35:35 np0005465987 nova_compute[230713]: 2025-10-02 12:35:35.943 2 INFO nova.virt.libvirt.driver [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Deletion of /var/lib/nova/instances/c9cc27fa-48be-4f4e-9573-40e15b1b8fad_del complete#033[00m
Oct  2 08:35:36 np0005465987 nova_compute[230713]: 2025-10-02 12:35:36.241 2 INFO nova.compute.manager [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Took 9.25 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:35:36 np0005465987 nova_compute[230713]: 2025-10-02 12:35:36.242 2 DEBUG oslo.service.loopingcall [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:35:36 np0005465987 nova_compute[230713]: 2025-10-02 12:35:36.242 2 DEBUG nova.compute.manager [-] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:35:36 np0005465987 nova_compute[230713]: 2025-10-02 12:35:36.242 2 DEBUG nova.network.neutron [-] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:35:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:37.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:37 np0005465987 nova_compute[230713]: 2025-10-02 12:35:37.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:35:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:37.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:35:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:35:38.586 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:39.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:39 np0005465987 nova_compute[230713]: 2025-10-02 12:35:39.318 2 DEBUG nova.network.neutron [-] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:39 np0005465987 nova_compute[230713]: 2025-10-02 12:35:39.438 2 DEBUG nova.compute.manager [req-f9b2f7bf-394f-4b1f-b09c-d68c0c146887 req-d6b86dd7-8187-4718-b126-f5aa1e746fd4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Received event network-vif-deleted-6ef549b3-677c-4a0e-b5ad-dbd433f77137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:39 np0005465987 nova_compute[230713]: 2025-10-02 12:35:39.438 2 INFO nova.compute.manager [req-f9b2f7bf-394f-4b1f-b09c-d68c0c146887 req-d6b86dd7-8187-4718-b126-f5aa1e746fd4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Neutron deleted interface 6ef549b3-677c-4a0e-b5ad-dbd433f77137; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:35:39 np0005465987 nova_compute[230713]: 2025-10-02 12:35:39.438 2 DEBUG nova.network.neutron [req-f9b2f7bf-394f-4b1f-b09c-d68c0c146887 req-d6b86dd7-8187-4718-b126-f5aa1e746fd4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:39 np0005465987 nova_compute[230713]: 2025-10-02 12:35:39.583 2 INFO nova.compute.manager [-] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Took 3.34 seconds to deallocate network for instance.#033[00m
Oct  2 08:35:39 np0005465987 nova_compute[230713]: 2025-10-02 12:35:39.587 2 DEBUG nova.compute.manager [req-f9b2f7bf-394f-4b1f-b09c-d68c0c146887 req-d6b86dd7-8187-4718-b126-f5aa1e746fd4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Detach interface failed, port_id=6ef549b3-677c-4a0e-b5ad-dbd433f77137, reason: Instance c9cc27fa-48be-4f4e-9573-40e15b1b8fad could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:35:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:35:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:39.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:35:39 np0005465987 nova_compute[230713]: 2025-10-02 12:35:39.794 2 DEBUG oslo_concurrency.lockutils [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:39 np0005465987 nova_compute[230713]: 2025-10-02 12:35:39.795 2 DEBUG oslo_concurrency.lockutils [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:39 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct  2 08:35:39 np0005465987 nova_compute[230713]: 2025-10-02 12:35:39.913 2 DEBUG oslo_concurrency.processutils [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:40 np0005465987 nova_compute[230713]: 2025-10-02 12:35:40.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:40 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3987054870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:40 np0005465987 nova_compute[230713]: 2025-10-02 12:35:40.314 2 DEBUG oslo_concurrency.processutils [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:40 np0005465987 nova_compute[230713]: 2025-10-02 12:35:40.320 2 DEBUG nova.compute.provider_tree [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:35:40 np0005465987 nova_compute[230713]: 2025-10-02 12:35:40.544 2 DEBUG nova.scheduler.client.report [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:35:40 np0005465987 nova_compute[230713]: 2025-10-02 12:35:40.621 2 DEBUG oslo_concurrency.lockutils [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:40 np0005465987 nova_compute[230713]: 2025-10-02 12:35:40.708 2 INFO nova.scheduler.client.report [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Deleted allocations for instance c9cc27fa-48be-4f4e-9573-40e15b1b8fad#033[00m
Oct  2 08:35:41 np0005465987 nova_compute[230713]: 2025-10-02 12:35:41.075 2 DEBUG oslo_concurrency.lockutils [None req-1efacad4-9161-42d8-9e1d-9faf3756f73c f02a0ac23d9e44d5a6205e853818fa50 c3b4ed8f5ff54d4cb9f232e285155ca0 - - default default] Lock "c9cc27fa-48be-4f4e-9573-40e15b1b8fad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:41.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:41.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:42 np0005465987 nova_compute[230713]: 2025-10-02 12:35:42.430 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408527.4295182, c9cc27fa-48be-4f4e-9573-40e15b1b8fad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:42 np0005465987 nova_compute[230713]: 2025-10-02 12:35:42.431 2 INFO nova.compute.manager [-] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:35:42 np0005465987 nova_compute[230713]: 2025-10-02 12:35:42.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:42 np0005465987 nova_compute[230713]: 2025-10-02 12:35:42.885 2 DEBUG nova.compute.manager [None req-389e650a-caa8-4641-8d79-fb2fb52a3de0 - - - - - -] [instance: c9cc27fa-48be-4f4e-9573-40e15b1b8fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:43.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:43.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:45 np0005465987 nova_compute[230713]: 2025-10-02 12:35:45.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:45.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:35:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:45.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:35:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:47.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:47 np0005465987 nova_compute[230713]: 2025-10-02 12:35:47.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:47.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:49.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:49.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:35:49Z|00529|binding|INFO|Releasing lport 81a5a13b-b81c-444a-8751-b35a35cdf3dc from this chassis (sb_readonly=0)
Oct  2 08:35:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:35:49Z|00530|binding|INFO|Releasing lport 6f9d54ba-3cfb-48b9-bef7-b2077e6931d7 from this chassis (sb_readonly=0)
Oct  2 08:35:49 np0005465987 nova_compute[230713]: 2025-10-02 12:35:49.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:49 np0005465987 podman[282233]: 2025-10-02 12:35:49.845739738 +0000 UTC m=+0.050821291 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 08:35:50 np0005465987 nova_compute[230713]: 2025-10-02 12:35:50.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:51.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:52 np0005465987 nova_compute[230713]: 2025-10-02 12:35:52.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:53.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:53.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e330 e330: 3 total, 3 up, 3 in
Oct  2 08:35:54 np0005465987 podman[282255]: 2025-10-02 12:35:54.868635409 +0000 UTC m=+0.063170441 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:35:54 np0005465987 podman[282254]: 2025-10-02 12:35:54.869867563 +0000 UTC m=+0.066487732 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:35:54 np0005465987 podman[282253]: 2025-10-02 12:35:54.906692796 +0000 UTC m=+0.104290732 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:35:55 np0005465987 nova_compute[230713]: 2025-10-02 12:35:55.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:55.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:35:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:55.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:35:56 np0005465987 nova_compute[230713]: 2025-10-02 12:35:56.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:35:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:57.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:35:57 np0005465987 nova_compute[230713]: 2025-10-02 12:35:57.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:57.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:57 np0005465987 nova_compute[230713]: 2025-10-02 12:35:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:57 np0005465987 nova_compute[230713]: 2025-10-02 12:35:57.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:35:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:59.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:35:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:59.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:00 np0005465987 nova_compute[230713]: 2025-10-02 12:36:00.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:00 np0005465987 nova_compute[230713]: 2025-10-02 12:36:00.833 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:36:00 np0005465987 nova_compute[230713]: 2025-10-02 12:36:00.834 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:36:00 np0005465987 nova_compute[230713]: 2025-10-02 12:36:00.834 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:36:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:01.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:01.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:02 np0005465987 nova_compute[230713]: 2025-10-02 12:36:02.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:03.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:03.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:05 np0005465987 nova_compute[230713]: 2025-10-02 12:36:05.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:05.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:05 np0005465987 nova_compute[230713]: 2025-10-02 12:36:05.258 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Updating instance_info_cache with network_info: [{"id": "41486814-fe68-4ee4-acaa-661e4bca279a", "address": "fa:16:3e:44:0d:be", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41486814-fe", "ovs_interfaceid": "41486814-fe68-4ee4-acaa-661e4bca279a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 e331: 3 total, 3 up, 3 in
Oct  2 08:36:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:05.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:06 np0005465987 nova_compute[230713]: 2025-10-02 12:36:06.916 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:36:06 np0005465987 nova_compute[230713]: 2025-10-02 12:36:06.917 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:36:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:07.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:07 np0005465987 nova_compute[230713]: 2025-10-02 12:36:07.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:07.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:07 np0005465987 nova_compute[230713]: 2025-10-02 12:36:07.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:07 np0005465987 nova_compute[230713]: 2025-10-02 12:36:07.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:07 np0005465987 nova_compute[230713]: 2025-10-02 12:36:07.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:07 np0005465987 nova_compute[230713]: 2025-10-02 12:36:07.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.076 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.076 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.077 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.077 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.077 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1570981439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.517 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.658 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.659 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.663 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.663 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.805 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.806 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4020MB free_disk=20.718544006347656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.806 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.806 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.992 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 6ec65b83-acd5-4790-8df0-0dfa903d2f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.992 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 58c789e4-1ba5-4001-913d-9aa19bd9c59f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.992 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:36:08 np0005465987 nova_compute[230713]: 2025-10-02 12:36:08.993 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:36:09 np0005465987 nova_compute[230713]: 2025-10-02 12:36:09.066 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:09.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/875864746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:09 np0005465987 nova_compute[230713]: 2025-10-02 12:36:09.532 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:09 np0005465987 nova_compute[230713]: 2025-10-02 12:36:09.539 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:36:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:09 np0005465987 nova_compute[230713]: 2025-10-02 12:36:09.580 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:36:09 np0005465987 nova_compute[230713]: 2025-10-02 12:36:09.673 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:36:09 np0005465987 nova_compute[230713]: 2025-10-02 12:36:09.674 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:09.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:10 np0005465987 nova_compute[230713]: 2025-10-02 12:36:10.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:10 np0005465987 nova_compute[230713]: 2025-10-02 12:36:10.674 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:10 np0005465987 nova_compute[230713]: 2025-10-02 12:36:10.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:11.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:11.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:12 np0005465987 nova_compute[230713]: 2025-10-02 12:36:12.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:12 np0005465987 nova_compute[230713]: 2025-10-02 12:36:12.810 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "4d77c34a-2c0a-4826-946c-7470e45491fa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:12 np0005465987 nova_compute[230713]: 2025-10-02 12:36:12.811 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:12 np0005465987 nova_compute[230713]: 2025-10-02 12:36:12.889 2 DEBUG nova.compute.manager [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:36:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:13.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:13 np0005465987 nova_compute[230713]: 2025-10-02 12:36:13.258 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:13 np0005465987 nova_compute[230713]: 2025-10-02 12:36:13.259 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:13 np0005465987 nova_compute[230713]: 2025-10-02 12:36:13.265 2 DEBUG nova.virt.hardware [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:36:13 np0005465987 nova_compute[230713]: 2025-10-02 12:36:13.265 2 INFO nova.compute.claims [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:36:13 np0005465987 nova_compute[230713]: 2025-10-02 12:36:13.638 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:13.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2177103198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.044 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.049 2 DEBUG nova.compute.provider_tree [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.064 2 DEBUG nova.scheduler.client.report [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.094 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.095 2 DEBUG nova.compute.manager [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.138 2 DEBUG nova.compute.manager [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.138 2 DEBUG nova.network.neutron [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.163 2 INFO nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.183 2 DEBUG nova.compute.manager [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.271 2 DEBUG nova.compute.manager [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.273 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.273 2 INFO nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Creating image(s)#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.301 2 DEBUG nova.storage.rbd_utils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 4d77c34a-2c0a-4826-946c-7470e45491fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.337 2 DEBUG nova.storage.rbd_utils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 4d77c34a-2c0a-4826-946c-7470e45491fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.368 2 DEBUG nova.storage.rbd_utils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 4d77c34a-2c0a-4826-946c-7470e45491fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.373 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.404 2 DEBUG nova.policy [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fe9cc788734f406d826446a848700331', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bc0d63d3b4404ef8858166e8836dd0af', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.441 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
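The CMD line above shows nova probing the cached base image with `qemu-img info --force-share --output=json`, wrapped in `oslo_concurrency.prlimit` to cap address space and CPU time. A sketch of consuming that JSON output follows; the field names (`virtual-size`, `format`, `actual-size`) are real qemu-img JSON keys, but the sample values are illustrative, not read from this host:

```python
import json

# Hypothetical `qemu-img info --output=json` output for a small qcow2 image;
# keys match qemu-img's JSON schema, values are made up for illustration.
sample = """
{
  "virtual-size": 1073741824,
  "filename": "/var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968",
  "format": "qcow2",
  "actual-size": 21430272
}
"""

info = json.loads(sample)
virtual_gib = info["virtual-size"] / 1024**3  # logical size exposed to the guest
on_disk_mib = info["actual-size"] / 1024**2   # space the file actually occupies
```

The `--force-share` flag lets the probe run even while another process holds the image open, which matters here since the base image is shared across instances.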
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.442 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.443 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.443 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.479 2 DEBUG nova.storage.rbd_utils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 4d77c34a-2c0a-4826-946c-7470e45491fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.483 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 4d77c34a-2c0a-4826-946c-7470e45491fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:14.527 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:14.528 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:36:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.942 2 DEBUG nova.network.neutron [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Successfully created port: 1e30bae6-0607-4bc0-9ad1-f722bc33c687 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:14 np0005465987 nova_compute[230713]: 2025-10-02 12:36:14.964 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:36:15 np0005465987 nova_compute[230713]: 2025-10-02 12:36:15.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:15.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:15.529 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:15.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
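The `beast:` lines are radosgw's access log: client IP, user, timestamp, request line, status, bytes, and latency. A best-effort regex for this one line shape (not an official grammar), exercised against the line logged above:

```python
import re

# Best-effort parse of the radosgw "beast" access-log line shape seen above.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f1025df76f0: 192.168.122.100 - anonymous '
        '[02/Oct/2025:12:36:15.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')

fields = BEAST_RE.search(line).groupdict()
# fields["ip"] -> "192.168.122.100", fields["status"] -> "200"
```

These particular requests are anonymous `HEAD /` probes from 192.168.122.100 and .102 every two seconds, i.e. load-balancer health checks rather than S3 traffic.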
Oct  2 08:36:15 np0005465987 nova_compute[230713]: 2025-10-02 12:36:15.782 2 DEBUG nova.network.neutron [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Successfully updated port: 1e30bae6-0607-4bc0-9ad1-f722bc33c687 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:36:15 np0005465987 nova_compute[230713]: 2025-10-02 12:36:15.825 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "refresh_cache-4d77c34a-2c0a-4826-946c-7470e45491fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:36:15 np0005465987 nova_compute[230713]: 2025-10-02 12:36:15.826 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquired lock "refresh_cache-4d77c34a-2c0a-4826-946c-7470e45491fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:36:15 np0005465987 nova_compute[230713]: 2025-10-02 12:36:15.826 2 DEBUG nova.network.neutron [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:36:15 np0005465987 nova_compute[230713]: 2025-10-02 12:36:15.967 2 DEBUG nova.compute.manager [req-a1aa7545-466a-4859-a125-368418a0f288 req-66b9800c-001d-44bc-b9b9-d919c9b8a2ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Received event network-changed-1e30bae6-0607-4bc0-9ad1-f722bc33c687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:15 np0005465987 nova_compute[230713]: 2025-10-02 12:36:15.967 2 DEBUG nova.compute.manager [req-a1aa7545-466a-4859-a125-368418a0f288 req-66b9800c-001d-44bc-b9b9-d919c9b8a2ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Refreshing instance network info cache due to event network-changed-1e30bae6-0607-4bc0-9ad1-f722bc33c687. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:36:15 np0005465987 nova_compute[230713]: 2025-10-02 12:36:15.967 2 DEBUG oslo_concurrency.lockutils [req-a1aa7545-466a-4859-a125-368418a0f288 req-66b9800c-001d-44bc-b9b9-d919c9b8a2ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-4d77c34a-2c0a-4826-946c-7470e45491fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:36:16 np0005465987 nova_compute[230713]: 2025-10-02 12:36:16.090 2 DEBUG nova.network.neutron [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:36:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:17.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:17 np0005465987 nova_compute[230713]: 2025-10-02 12:36:17.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:17.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:17 np0005465987 nova_compute[230713]: 2025-10-02 12:36:17.908 2 DEBUG nova.network.neutron [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Updating instance_info_cache with network_info: [{"id": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "address": "fa:16:3e:16:78:40", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e30bae6-06", "ovs_interfaceid": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:17 np0005465987 nova_compute[230713]: 2025-10-02 12:36:17.919 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 4d77c34a-2c0a-4826-946c-7470e45491fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:18 np0005465987 nova_compute[230713]: 2025-10-02 12:36:18.035 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Releasing lock "refresh_cache-4d77c34a-2c0a-4826-946c-7470e45491fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:36:18 np0005465987 nova_compute[230713]: 2025-10-02 12:36:18.035 2 DEBUG nova.compute.manager [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Instance network_info: |[{"id": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "address": "fa:16:3e:16:78:40", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e30bae6-06", "ovs_interfaceid": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
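The `network_info` blob logged above is a list of VIF dicts, each nesting a network with subnets and IPs. A sketch of walking that structure, using a trimmed copy that keeps only the fields the sketch reads:

```python
# Trimmed copy of the network_info structure logged above, keeping only
# the fields this sketch reads.
network_info = [{
    "id": "1e30bae6-0607-4bc0-9ad1-f722bc33c687",
    "address": "fa:16:3e:16:78:40",
    "network": {
        "label": "tempest-ServersTestJSON-1524378232-network",
        "subnets": [{
            "cidr": "10.100.0.0/28",
            "gateway": {"address": "10.100.0.1"},
            "ips": [{"address": "10.100.0.14", "type": "fixed"}],
        }],
    },
    "devname": "tap1e30bae6-06",
}]

def fixed_ips(vifs):
    """Yield (mac, ip) for every fixed IP across all VIFs."""
    for vif in vifs:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip["type"] == "fixed":
                    yield vif["address"], ip["address"]

pairs = list(fixed_ips(network_info))  # [("fa:16:3e:16:78:40", "10.100.0.14")]
```

At this point in the log the port is still `"active": false`; it flips active only once OVN binds the tap device after the guest starts.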
Oct  2 08:36:18 np0005465987 nova_compute[230713]: 2025-10-02 12:36:18.036 2 DEBUG oslo_concurrency.lockutils [req-a1aa7545-466a-4859-a125-368418a0f288 req-66b9800c-001d-44bc-b9b9-d919c9b8a2ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-4d77c34a-2c0a-4826-946c-7470e45491fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:36:18 np0005465987 nova_compute[230713]: 2025-10-02 12:36:18.036 2 DEBUG nova.network.neutron [req-a1aa7545-466a-4859-a125-368418a0f288 req-66b9800c-001d-44bc-b9b9-d919c9b8a2ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Refreshing network info cache for port 1e30bae6-0607-4bc0-9ad1-f722bc33c687 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:36:18 np0005465987 nova_compute[230713]: 2025-10-02 12:36:18.081 2 DEBUG nova.storage.rbd_utils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] resizing rbd image 4d77c34a-2c0a-4826-946c-7470e45491fa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
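The resize target of 1073741824 bytes is the flavor's `root_gb=1` converted from GiB: the imported base image is only ~21 MB, so nova grows the RBD image to the flavor's root disk size. The conversion, as a one-line sketch:

```python
def root_disk_bytes(root_gb: int) -> int:
    # Nova sizes the root disk from the flavor's root_gb, in binary GiB.
    return root_gb * 1024**3

target = root_disk_bytes(1)  # 1073741824, matching the resize logged above
```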
Oct  2 08:36:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:36:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:19.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:36:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:19.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.189 2 DEBUG nova.objects.instance [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'migration_context' on Instance uuid 4d77c34a-2c0a-4826-946c-7470e45491fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.204 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.205 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Ensure instance console log exists: /var/lib/nova/instances/4d77c34a-2c0a-4826-946c-7470e45491fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.205 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.206 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.206 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.209 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Start _get_guest_xml network_info=[{"id": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "address": "fa:16:3e:16:78:40", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e30bae6-06", "ovs_interfaceid": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.216 2 WARNING nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.222 2 DEBUG nova.virt.libvirt.host [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.223 2 DEBUG nova.virt.libvirt.host [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.227 2 DEBUG nova.virt.libvirt.host [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.228 2 DEBUG nova.virt.libvirt.host [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.230 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.231 2 DEBUG nova.virt.hardware [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.231 2 DEBUG nova.virt.hardware [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.232 2 DEBUG nova.virt.hardware [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.232 2 DEBUG nova.virt.hardware [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.233 2 DEBUG nova.virt.hardware [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.233 2 DEBUG nova.virt.hardware [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.234 2 DEBUG nova.virt.hardware [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.234 2 DEBUG nova.virt.hardware [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.235 2 DEBUG nova.virt.hardware [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.235 2 DEBUG nova.virt.hardware [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.236 2 DEBUG nova.virt.hardware [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.240 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:20 np0005465987 podman[282573]: 2025-10-02 12:36:20.466699394 +0000 UTC m=+0.089226814 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:36:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:36:20 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3348269603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:36:20 np0005465987 nova_compute[230713]: 2025-10-02 12:36:20.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:21 np0005465987 nova_compute[230713]: 2025-10-02 12:36:21.091 2 DEBUG nova.network.neutron [req-a1aa7545-466a-4859-a125-368418a0f288 req-66b9800c-001d-44bc-b9b9-d919c9b8a2ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Updated VIF entry in instance network info cache for port 1e30bae6-0607-4bc0-9ad1-f722bc33c687. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:36:21 np0005465987 nova_compute[230713]: 2025-10-02 12:36:21.091 2 DEBUG nova.network.neutron [req-a1aa7545-466a-4859-a125-368418a0f288 req-66b9800c-001d-44bc-b9b9-d919c9b8a2ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Updating instance_info_cache with network_info: [{"id": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "address": "fa:16:3e:16:78:40", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e30bae6-06", "ovs_interfaceid": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:21 np0005465987 nova_compute[230713]: 2025-10-02 12:36:21.104 2 DEBUG oslo_concurrency.lockutils [req-a1aa7545-466a-4859-a125-368418a0f288 req-66b9800c-001d-44bc-b9b9-d919c9b8a2ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-4d77c34a-2c0a-4826-946c-7470e45491fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:36:21 np0005465987 nova_compute[230713]: 2025-10-02 12:36:21.145 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.905s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:21 np0005465987 nova_compute[230713]: 2025-10-02 12:36:21.178 2 DEBUG nova.storage.rbd_utils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 4d77c34a-2c0a-4826-946c-7470e45491fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:21 np0005465987 nova_compute[230713]: 2025-10-02 12:36:21.182 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:21.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:36:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3760426406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:36:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:21.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.011 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.829s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.015 2 DEBUG nova.virt.libvirt.vif [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:36:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1334211739',display_name='tempest-ServersTestJSON-server-1334211739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1334211739',id=140,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bc0d63d3b4404ef8858166e8836dd0af',ramdisk_id='',reservation_id='r-xjyxzdzg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-80077074',owner_user_name='tempest-ServersTestJSON-80077074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:14Z,user_data=None,user_id='fe9cc788734f406d826446a848700331',uuid=4d77c34a-2c0a-4826-946c-7470e45491fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "address": "fa:16:3e:16:78:40", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e30bae6-06", "ovs_interfaceid": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.015 2 DEBUG nova.network.os_vif_util [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converting VIF {"id": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "address": "fa:16:3e:16:78:40", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e30bae6-06", "ovs_interfaceid": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.017 2 DEBUG nova.network.os_vif_util [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:78:40,bridge_name='br-int',has_traffic_filtering=True,id=1e30bae6-0607-4bc0-9ad1-f722bc33c687,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e30bae6-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.018 2 DEBUG nova.objects.instance [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d77c34a-2c0a-4826-946c-7470e45491fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.034 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  <uuid>4d77c34a-2c0a-4826-946c-7470e45491fa</uuid>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  <name>instance-0000008c</name>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServersTestJSON-server-1334211739</nova:name>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:36:20</nova:creationTime>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <nova:user uuid="fe9cc788734f406d826446a848700331">tempest-ServersTestJSON-80077074-project-member</nova:user>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <nova:project uuid="bc0d63d3b4404ef8858166e8836dd0af">tempest-ServersTestJSON-80077074</nova:project>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <nova:port uuid="1e30bae6-0607-4bc0-9ad1-f722bc33c687">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <entry name="serial">4d77c34a-2c0a-4826-946c-7470e45491fa</entry>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <entry name="uuid">4d77c34a-2c0a-4826-946c-7470e45491fa</entry>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/4d77c34a-2c0a-4826-946c-7470e45491fa_disk">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/4d77c34a-2c0a-4826-946c-7470e45491fa_disk.config">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:16:78:40"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <target dev="tap1e30bae6-06"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/4d77c34a-2c0a-4826-946c-7470e45491fa/console.log" append="off"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:36:22 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:36:22 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:36:22 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:36:22 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.040 2 DEBUG nova.compute.manager [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Preparing to wait for external event network-vif-plugged-1e30bae6-0607-4bc0-9ad1-f722bc33c687 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.041 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.042 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.042 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.044 2 DEBUG nova.virt.libvirt.vif [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:36:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1334211739',display_name='tempest-ServersTestJSON-server-1334211739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1334211739',id=140,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bc0d63d3b4404ef8858166e8836dd0af',ramdisk_id='',reservation_id='r-xjyxzdzg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-80077074',owner_user_name='tempest-ServersTestJSON-80077074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:14Z,user_data=None,user_id='fe9cc788734f406d826446a848700331',uuid=4d77c34a-2c0a-4826-946c-7470e45491fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "address": "fa:16:3e:16:78:40", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e30bae6-06", "ovs_interfaceid": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.045 2 DEBUG nova.network.os_vif_util [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converting VIF {"id": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "address": "fa:16:3e:16:78:40", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e30bae6-06", "ovs_interfaceid": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.046 2 DEBUG nova.network.os_vif_util [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:78:40,bridge_name='br-int',has_traffic_filtering=True,id=1e30bae6-0607-4bc0-9ad1-f722bc33c687,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e30bae6-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.047 2 DEBUG os_vif [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:78:40,bridge_name='br-int',has_traffic_filtering=True,id=1e30bae6-0607-4bc0-9ad1-f722bc33c687,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e30bae6-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e30bae6-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.056 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e30bae6-06, col_values=(('external_ids', {'iface-id': '1e30bae6-0607-4bc0-9ad1-f722bc33c687', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:78:40', 'vm-uuid': '4d77c34a-2c0a-4826-946c-7470e45491fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:22 np0005465987 NetworkManager[44910]: <info>  [1759408582.0622] manager: (tap1e30bae6-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.071 2 INFO os_vif [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:78:40,bridge_name='br-int',has_traffic_filtering=True,id=1e30bae6-0607-4bc0-9ad1-f722bc33c687,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e30bae6-06')#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.120 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.120 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.121 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] No VIF found with MAC fa:16:3e:16:78:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.122 2 INFO nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Using config drive#033[00m
Oct  2 08:36:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.166 2 DEBUG nova.storage.rbd_utils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 4d77c34a-2c0a-4826-946c-7470e45491fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.788 2 INFO nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Creating config drive at /var/lib/nova/instances/4d77c34a-2c0a-4826-946c-7470e45491fa/disk.config#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.805 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d77c34a-2c0a-4826-946c-7470e45491fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm0fst9f0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:22 np0005465987 nova_compute[230713]: 2025-10-02 12:36:22.963 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d77c34a-2c0a-4826-946c-7470e45491fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm0fst9f0" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:23 np0005465987 nova_compute[230713]: 2025-10-02 12:36:23.006 2 DEBUG nova.storage.rbd_utils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 4d77c34a-2c0a-4826-946c-7470e45491fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:36:23 np0005465987 nova_compute[230713]: 2025-10-02 12:36:23.011 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d77c34a-2c0a-4826-946c-7470e45491fa/disk.config 4d77c34a-2c0a-4826-946c-7470e45491fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:23.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:36:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:36:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:36:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:23.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:23 np0005465987 nova_compute[230713]: 2025-10-02 12:36:23.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:25 np0005465987 nova_compute[230713]: 2025-10-02 12:36:25.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:25.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:25.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:25 np0005465987 podman[282939]: 2025-10-02 12:36:25.842807777 +0000 UTC m=+0.059481595 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:36:25 np0005465987 podman[282940]: 2025-10-02 12:36:25.866734785 +0000 UTC m=+0.073184262 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:36:25 np0005465987 podman[282938]: 2025-10-02 12:36:25.903756423 +0000 UTC m=+0.114170390 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:36:27 np0005465987 nova_compute[230713]: 2025-10-02 12:36:27.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:27.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:27.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:27.961 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:27.962 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:27.963 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:28 np0005465987 nova_compute[230713]: 2025-10-02 12:36:28.129 2 DEBUG oslo_concurrency.processutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d77c34a-2c0a-4826-946c-7470e45491fa/disk.config 4d77c34a-2c0a-4826-946c-7470e45491fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:28 np0005465987 nova_compute[230713]: 2025-10-02 12:36:28.129 2 INFO nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Deleting local config drive /var/lib/nova/instances/4d77c34a-2c0a-4826-946c-7470e45491fa/disk.config because it was imported into RBD.#033[00m
Oct  2 08:36:28 np0005465987 kernel: tap1e30bae6-06: entered promiscuous mode
Oct  2 08:36:28 np0005465987 NetworkManager[44910]: <info>  [1759408588.1904] manager: (tap1e30bae6-06): new Tun device (/org/freedesktop/NetworkManager/Devices/251)
Oct  2 08:36:28 np0005465987 nova_compute[230713]: 2025-10-02 12:36:28.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:36:28Z|00531|binding|INFO|Claiming lport 1e30bae6-0607-4bc0-9ad1-f722bc33c687 for this chassis.
Oct  2 08:36:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:36:28Z|00532|binding|INFO|1e30bae6-0607-4bc0-9ad1-f722bc33c687: Claiming fa:16:3e:16:78:40 10.100.0.14
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.199 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:78:40 10.100.0.14'], port_security=['fa:16:3e:16:78:40 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4d77c34a-2c0a-4826-946c-7470e45491fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc0d63d3b4404ef8858166e8836dd0af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2a11ff87-bec6-4638-b302-adcd655efba9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=797b6af2-473b-4626-9e97-a0a489119419, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=1e30bae6-0607-4bc0-9ad1-f722bc33c687) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.200 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 1e30bae6-0607-4bc0-9ad1-f722bc33c687 in datapath d7203b00-e5e4-402e-b777-ac6280fa23ac bound to our chassis#033[00m
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.202 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7203b00-e5e4-402e-b777-ac6280fa23ac#033[00m
Oct  2 08:36:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:36:28Z|00533|binding|INFO|Setting lport 1e30bae6-0607-4bc0-9ad1-f722bc33c687 ovn-installed in OVS
Oct  2 08:36:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:36:28Z|00534|binding|INFO|Setting lport 1e30bae6-0607-4bc0-9ad1-f722bc33c687 up in Southbound
Oct  2 08:36:28 np0005465987 nova_compute[230713]: 2025-10-02 12:36:28.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:28 np0005465987 nova_compute[230713]: 2025-10-02 12:36:28.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.217 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b39527d1-c64f-4566-94a5-b706352fac0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:28 np0005465987 systemd-machined[188335]: New machine qemu-62-instance-0000008c.
Oct  2 08:36:28 np0005465987 systemd-udevd[283017]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:36:28 np0005465987 NetworkManager[44910]: <info>  [1759408588.2428] device (tap1e30bae6-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:36:28 np0005465987 systemd[1]: Started Virtual Machine qemu-62-instance-0000008c.
Oct  2 08:36:28 np0005465987 NetworkManager[44910]: <info>  [1759408588.2441] device (tap1e30bae6-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.255 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a74a365b-8d2a-424d-83bb-671d865fe8e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.259 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[f56a8ff7-af4a-4072-ae95-8f2bdff0f854]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.290 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[82114199-383d-4e39-b5e7-0d4937bd7163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.307 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[788094d7-67da-4c95-9bd2-0fe6c0733f89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7203b00-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:c4:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653015, 'reachable_time': 17336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283028, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.322 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9f63a5a6-d504-4da0-bbab-67afcc7478d8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd7203b00-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653029, 'tstamp': 653029}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283029, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7203b00-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653032, 'tstamp': 653032}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283029, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.324 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7203b00-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:28 np0005465987 nova_compute[230713]: 2025-10-02 12:36:28.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:28 np0005465987 nova_compute[230713]: 2025-10-02 12:36:28.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.327 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7203b00-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.328 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.328 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7203b00-e0, col_values=(('external_ids', {'iface-id': '6f9d54ba-3cfb-48b9-bef7-b2077e6931d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:28.328 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:36:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:29.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:36:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:36:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:36:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:36:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:36:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:29 np0005465987 nova_compute[230713]: 2025-10-02 12:36:29.713 2 DEBUG nova.compute.manager [req-91c3b814-e97d-4a94-9179-e24d2feacf36 req-d0baacce-ce73-4e88-a377-7d0a4a7e4417 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Received event network-vif-plugged-1e30bae6-0607-4bc0-9ad1-f722bc33c687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:29 np0005465987 nova_compute[230713]: 2025-10-02 12:36:29.714 2 DEBUG oslo_concurrency.lockutils [req-91c3b814-e97d-4a94-9179-e24d2feacf36 req-d0baacce-ce73-4e88-a377-7d0a4a7e4417 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:29 np0005465987 nova_compute[230713]: 2025-10-02 12:36:29.715 2 DEBUG oslo_concurrency.lockutils [req-91c3b814-e97d-4a94-9179-e24d2feacf36 req-d0baacce-ce73-4e88-a377-7d0a4a7e4417 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:29 np0005465987 nova_compute[230713]: 2025-10-02 12:36:29.715 2 DEBUG oslo_concurrency.lockutils [req-91c3b814-e97d-4a94-9179-e24d2feacf36 req-d0baacce-ce73-4e88-a377-7d0a4a7e4417 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:29 np0005465987 nova_compute[230713]: 2025-10-02 12:36:29.716 2 DEBUG nova.compute.manager [req-91c3b814-e97d-4a94-9179-e24d2feacf36 req-d0baacce-ce73-4e88-a377-7d0a4a7e4417 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Processing event network-vif-plugged-1e30bae6-0607-4bc0-9ad1-f722bc33c687 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:36:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:29.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.005 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408590.0045826, 4d77c34a-2c0a-4826-946c-7470e45491fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.006 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] VM Started (Lifecycle Event)#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.008 2 DEBUG nova.compute.manager [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.014 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.026 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.029 2 INFO nova.virt.libvirt.driver [-] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Instance spawned successfully.#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.030 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.036 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.057 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.057 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.058 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.059 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.059 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.060 2 DEBUG nova.virt.libvirt.driver [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.063 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.064 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408590.0084534, 4d77c34a-2c0a-4826-946c-7470e45491fa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.064 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.099 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.103 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408590.0133593, 4d77c34a-2c0a-4826-946c-7470e45491fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.104 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.124 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.128 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.132 2 INFO nova.compute.manager [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Took 15.86 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.132 2 DEBUG nova.compute.manager [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.154 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.212 2 INFO nova.compute.manager [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Took 16.98 seconds to build instance.#033[00m
Oct  2 08:36:30 np0005465987 nova_compute[230713]: 2025-10-02 12:36:30.238 2 DEBUG oslo_concurrency.lockutils [None req-8c8a2e36-059b-4c1c-b682-b6dd85146ef7 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:31.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:31.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:31 np0005465987 nova_compute[230713]: 2025-10-02 12:36:31.836 2 DEBUG nova.compute.manager [req-1f74557b-c296-416c-89eb-8e6e0debaf42 req-c2e34712-29af-483e-bbd5-dedad784b60d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Received event network-vif-plugged-1e30bae6-0607-4bc0-9ad1-f722bc33c687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:31 np0005465987 nova_compute[230713]: 2025-10-02 12:36:31.836 2 DEBUG oslo_concurrency.lockutils [req-1f74557b-c296-416c-89eb-8e6e0debaf42 req-c2e34712-29af-483e-bbd5-dedad784b60d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:31 np0005465987 nova_compute[230713]: 2025-10-02 12:36:31.836 2 DEBUG oslo_concurrency.lockutils [req-1f74557b-c296-416c-89eb-8e6e0debaf42 req-c2e34712-29af-483e-bbd5-dedad784b60d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:31 np0005465987 nova_compute[230713]: 2025-10-02 12:36:31.836 2 DEBUG oslo_concurrency.lockutils [req-1f74557b-c296-416c-89eb-8e6e0debaf42 req-c2e34712-29af-483e-bbd5-dedad784b60d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:31 np0005465987 nova_compute[230713]: 2025-10-02 12:36:31.837 2 DEBUG nova.compute.manager [req-1f74557b-c296-416c-89eb-8e6e0debaf42 req-c2e34712-29af-483e-bbd5-dedad784b60d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] No waiting events found dispatching network-vif-plugged-1e30bae6-0607-4bc0-9ad1-f722bc33c687 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:31 np0005465987 nova_compute[230713]: 2025-10-02 12:36:31.837 2 WARNING nova.compute.manager [req-1f74557b-c296-416c-89eb-8e6e0debaf42 req-c2e34712-29af-483e-bbd5-dedad784b60d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Received unexpected event network-vif-plugged-1e30bae6-0607-4bc0-9ad1-f722bc33c687 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:36:32 np0005465987 nova_compute[230713]: 2025-10-02 12:36:32.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:33.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:33 np0005465987 nova_compute[230713]: 2025-10-02 12:36:33.580 2 DEBUG oslo_concurrency.lockutils [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "4d77c34a-2c0a-4826-946c-7470e45491fa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:33 np0005465987 nova_compute[230713]: 2025-10-02 12:36:33.581 2 DEBUG oslo_concurrency.lockutils [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:33 np0005465987 nova_compute[230713]: 2025-10-02 12:36:33.581 2 DEBUG oslo_concurrency.lockutils [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:33 np0005465987 nova_compute[230713]: 2025-10-02 12:36:33.581 2 DEBUG oslo_concurrency.lockutils [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:33 np0005465987 nova_compute[230713]: 2025-10-02 12:36:33.582 2 DEBUG oslo_concurrency.lockutils [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:33 np0005465987 nova_compute[230713]: 2025-10-02 12:36:33.583 2 INFO nova.compute.manager [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Terminating instance#033[00m
Oct  2 08:36:33 np0005465987 nova_compute[230713]: 2025-10-02 12:36:33.583 2 DEBUG nova.compute.manager [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:36:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:33.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:34 np0005465987 kernel: tap1e30bae6-06 (unregistering): left promiscuous mode
Oct  2 08:36:34 np0005465987 NetworkManager[44910]: <info>  [1759408594.7107] device (tap1e30bae6-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:36:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:36:34Z|00535|binding|INFO|Releasing lport 1e30bae6-0607-4bc0-9ad1-f722bc33c687 from this chassis (sb_readonly=0)
Oct  2 08:36:34 np0005465987 nova_compute[230713]: 2025-10-02 12:36:34.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:36:34Z|00536|binding|INFO|Setting lport 1e30bae6-0607-4bc0-9ad1-f722bc33c687 down in Southbound
Oct  2 08:36:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:36:34Z|00537|binding|INFO|Removing iface tap1e30bae6-06 ovn-installed in OVS
Oct  2 08:36:34 np0005465987 nova_compute[230713]: 2025-10-02 12:36:34.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.732 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:78:40 10.100.0.14'], port_security=['fa:16:3e:16:78:40 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4d77c34a-2c0a-4826-946c-7470e45491fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc0d63d3b4404ef8858166e8836dd0af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2a11ff87-bec6-4638-b302-adcd655efba9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=797b6af2-473b-4626-9e97-a0a489119419, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=1e30bae6-0607-4bc0-9ad1-f722bc33c687) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.734 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 1e30bae6-0607-4bc0-9ad1-f722bc33c687 in datapath d7203b00-e5e4-402e-b777-ac6280fa23ac unbound from our chassis#033[00m
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.736 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7203b00-e5e4-402e-b777-ac6280fa23ac#033[00m
Oct  2 08:36:34 np0005465987 nova_compute[230713]: 2025-10-02 12:36:34.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.750 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[451fa586-3e8f-441f-b36b-3e7f28fe4f5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.777 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[017a2fac-2ffe-4634-83f9-2a0f03d2b768]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.780 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ff14ae-a11a-4270-ae68-d9367eda1d8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:34 np0005465987 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Oct  2 08:36:34 np0005465987 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000008c.scope: Consumed 4.743s CPU time.
Oct  2 08:36:34 np0005465987 systemd-machined[188335]: Machine qemu-62-instance-0000008c terminated.
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.805 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[c80424cf-26ca-4421-8254-58143b422226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.825 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[602bb889-8c86-4a7d-8d1c-2d53b80a97bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7203b00-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:c4:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653015, 'reachable_time': 17336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283084, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.849 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ce75a987-0daa-4160-9990-f671cb604121]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd7203b00-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653029, 'tstamp': 653029}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283085, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7203b00-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653032, 'tstamp': 653032}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283085, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.852 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7203b00-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:34 np0005465987 nova_compute[230713]: 2025-10-02 12:36:34.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:34 np0005465987 nova_compute[230713]: 2025-10-02 12:36:34.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.860 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7203b00-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.860 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.861 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7203b00-e0, col_values=(('external_ids', {'iface-id': '6f9d54ba-3cfb-48b9-bef7-b2077e6931d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:34.862 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.026 2 INFO nova.virt.libvirt.driver [-] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Instance destroyed successfully.#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.026 2 DEBUG nova.objects.instance [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'resources' on Instance uuid 4d77c34a-2c0a-4826-946c-7470e45491fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.040 2 DEBUG nova.virt.libvirt.vif [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:36:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1334211739',display_name='tempest-ServersTestJSON-server-1334211739',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1334211739',id=140,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:36:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bc0d63d3b4404ef8858166e8836dd0af',ramdisk_id='',reservation_id='r-xjyxzdzg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-80077074',owner_user_name='tempest-ServersTestJSON-80077074-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:36:30Z,user_data=None,user_id='fe9cc788734f406d826446a848700331',uuid=4d77c34a-2c0a-4826-946c-7470e45491fa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "address": "fa:16:3e:16:78:40", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e30bae6-06", "ovs_interfaceid": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.041 2 DEBUG nova.network.os_vif_util [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converting VIF {"id": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "address": "fa:16:3e:16:78:40", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e30bae6-06", "ovs_interfaceid": "1e30bae6-0607-4bc0-9ad1-f722bc33c687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.042 2 DEBUG nova.network.os_vif_util [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:78:40,bridge_name='br-int',has_traffic_filtering=True,id=1e30bae6-0607-4bc0-9ad1-f722bc33c687,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e30bae6-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.042 2 DEBUG os_vif [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:78:40,bridge_name='br-int',has_traffic_filtering=True,id=1e30bae6-0607-4bc0-9ad1-f722bc33c687,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e30bae6-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.044 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e30bae6-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.051 2 DEBUG nova.compute.manager [req-96137f3f-8237-4cec-8076-fb70185d1375 req-c452358d-84a4-409a-8daa-56edaa15f7e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Received event network-vif-unplugged-1e30bae6-0607-4bc0-9ad1-f722bc33c687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.051 2 DEBUG oslo_concurrency.lockutils [req-96137f3f-8237-4cec-8076-fb70185d1375 req-c452358d-84a4-409a-8daa-56edaa15f7e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.052 2 DEBUG oslo_concurrency.lockutils [req-96137f3f-8237-4cec-8076-fb70185d1375 req-c452358d-84a4-409a-8daa-56edaa15f7e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.052 2 DEBUG oslo_concurrency.lockutils [req-96137f3f-8237-4cec-8076-fb70185d1375 req-c452358d-84a4-409a-8daa-56edaa15f7e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.052 2 DEBUG nova.compute.manager [req-96137f3f-8237-4cec-8076-fb70185d1375 req-c452358d-84a4-409a-8daa-56edaa15f7e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] No waiting events found dispatching network-vif-unplugged-1e30bae6-0607-4bc0-9ad1-f722bc33c687 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.052 2 DEBUG nova.compute.manager [req-96137f3f-8237-4cec-8076-fb70185d1375 req-c452358d-84a4-409a-8daa-56edaa15f7e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Received event network-vif-unplugged-1e30bae6-0607-4bc0-9ad1-f722bc33c687 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.053 2 INFO os_vif [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:78:40,bridge_name='br-int',has_traffic_filtering=True,id=1e30bae6-0607-4bc0-9ad1-f722bc33c687,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e30bae6-06')#033[00m
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.066942) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408595067000, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1895, "num_deletes": 257, "total_data_size": 4475138, "memory_usage": 4537880, "flush_reason": "Manual Compaction"}
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408595087524, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 2909933, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52582, "largest_seqno": 54472, "table_properties": {"data_size": 2901821, "index_size": 4862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17419, "raw_average_key_size": 20, "raw_value_size": 2885420, "raw_average_value_size": 3406, "num_data_blocks": 211, "num_entries": 847, "num_filter_entries": 847, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408438, "oldest_key_time": 1759408438, "file_creation_time": 1759408595, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 20622 microseconds, and 9149 cpu microseconds.
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.087568) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 2909933 bytes OK
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.087586) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.088899) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.088912) EVENT_LOG_v1 {"time_micros": 1759408595088907, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.088928) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 4466488, prev total WAL file size 4466488, number of live WAL files 2.
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.089963) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373536' seq:72057594037927935, type:22 .. '6C6F676D0032303037' seq:0, type:0; will stop at (end)
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(2841KB)], [102(10MB)]
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408595090005, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 13501411, "oldest_snapshot_seqno": -1}
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 7989 keys, 13336683 bytes, temperature: kUnknown
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408595142850, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 13336683, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13281132, "index_size": 34447, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20037, "raw_key_size": 206176, "raw_average_key_size": 25, "raw_value_size": 13136672, "raw_average_value_size": 1644, "num_data_blocks": 1361, "num_entries": 7989, "num_filter_entries": 7989, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759408595, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.143093) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 13336683 bytes
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.144323) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 255.2 rd, 252.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 10.1 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 8524, records dropped: 535 output_compression: NoCompression
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.144343) EVENT_LOG_v1 {"time_micros": 1759408595144333, "job": 64, "event": "compaction_finished", "compaction_time_micros": 52912, "compaction_time_cpu_micros": 27938, "output_level": 6, "num_output_files": 1, "total_output_size": 13336683, "num_input_records": 8524, "num_output_records": 7989, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408595145126, "job": 64, "event": "table_file_deletion", "file_number": 104}
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408595147528, "job": 64, "event": "table_file_deletion", "file_number": 102}
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.089875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.147579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.147584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.147586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.147588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:36:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:36:35.147589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:36:35 np0005465987 nova_compute[230713]: 2025-10-02 12:36:35.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:35.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:35.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:37.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:37 np0005465987 nova_compute[230713]: 2025-10-02 12:36:37.571 2 DEBUG nova.compute.manager [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Received event network-vif-plugged-1e30bae6-0607-4bc0-9ad1-f722bc33c687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:37 np0005465987 nova_compute[230713]: 2025-10-02 12:36:37.571 2 DEBUG oslo_concurrency.lockutils [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:37 np0005465987 nova_compute[230713]: 2025-10-02 12:36:37.572 2 DEBUG oslo_concurrency.lockutils [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:37 np0005465987 nova_compute[230713]: 2025-10-02 12:36:37.572 2 DEBUG oslo_concurrency.lockutils [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:37 np0005465987 nova_compute[230713]: 2025-10-02 12:36:37.572 2 DEBUG nova.compute.manager [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] No waiting events found dispatching network-vif-plugged-1e30bae6-0607-4bc0-9ad1-f722bc33c687 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:37 np0005465987 nova_compute[230713]: 2025-10-02 12:36:37.573 2 WARNING nova.compute.manager [req-77bb5514-342a-4122-a9cf-c778fdb284c5 req-257f07bd-f9a7-4788-9fd9-add7182eedfd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Received unexpected event network-vif-plugged-1e30bae6-0607-4bc0-9ad1-f722bc33c687 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:36:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:37.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:39.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:39.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:40 np0005465987 nova_compute[230713]: 2025-10-02 12:36:40.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:40 np0005465987 nova_compute[230713]: 2025-10-02 12:36:40.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:36:40 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1825524569' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:36:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:36:40 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1825524569' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:36:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:36:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:41.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:36:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:41.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:36:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:43.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:36:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:36:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:43.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:36:43 np0005465987 nova_compute[230713]: 2025-10-02 12:36:43.936 2 INFO nova.virt.libvirt.driver [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Deleting instance files /var/lib/nova/instances/4d77c34a-2c0a-4826-946c-7470e45491fa_del#033[00m
Oct  2 08:36:43 np0005465987 nova_compute[230713]: 2025-10-02 12:36:43.937 2 INFO nova.virt.libvirt.driver [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Deletion of /var/lib/nova/instances/4d77c34a-2c0a-4826-946c-7470e45491fa_del complete#033[00m
Oct  2 08:36:44 np0005465987 nova_compute[230713]: 2025-10-02 12:36:44.038 2 INFO nova.compute.manager [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Took 10.45 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:36:44 np0005465987 nova_compute[230713]: 2025-10-02 12:36:44.038 2 DEBUG oslo.service.loopingcall [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:36:44 np0005465987 nova_compute[230713]: 2025-10-02 12:36:44.039 2 DEBUG nova.compute.manager [-] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:36:44 np0005465987 nova_compute[230713]: 2025-10-02 12:36:44.039 2 DEBUG nova.network.neutron [-] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:36:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:36:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:36:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:45 np0005465987 nova_compute[230713]: 2025-10-02 12:36:45.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:45 np0005465987 nova_compute[230713]: 2025-10-02 12:36:45.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:45.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:45.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:47.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:47 np0005465987 nova_compute[230713]: 2025-10-02 12:36:47.618 2 DEBUG nova.network.neutron [-] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:47 np0005465987 nova_compute[230713]: 2025-10-02 12:36:47.642 2 INFO nova.compute.manager [-] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Took 3.60 seconds to deallocate network for instance.#033[00m
Oct  2 08:36:47 np0005465987 nova_compute[230713]: 2025-10-02 12:36:47.691 2 DEBUG oslo_concurrency.lockutils [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:47 np0005465987 nova_compute[230713]: 2025-10-02 12:36:47.691 2 DEBUG oslo_concurrency.lockutils [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:47 np0005465987 nova_compute[230713]: 2025-10-02 12:36:47.776 2 DEBUG nova.compute.manager [req-a7e00566-f4b4-45bf-b839-f04ab4284f86 req-e3372069-9c3b-4c7d-96ab-477c6d2c85df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Received event network-vif-deleted-1e30bae6-0607-4bc0-9ad1-f722bc33c687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:47 np0005465987 nova_compute[230713]: 2025-10-02 12:36:47.778 2 DEBUG oslo_concurrency.processutils [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:47.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:48 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3186242937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.224 2 DEBUG oslo_concurrency.processutils [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.230 2 DEBUG nova.compute.provider_tree [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.252 2 DEBUG nova.scheduler.client.report [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.276 2 DEBUG oslo_concurrency.lockutils [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.314 2 INFO nova.scheduler.client.report [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Deleted allocations for instance 4d77c34a-2c0a-4826-946c-7470e45491fa#033[00m
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.405 2 DEBUG oslo_concurrency.lockutils [None req-8f57816c-12a9-4213-8398-d12cdf53e3a6 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "4d77c34a-2c0a-4826-946c-7470e45491fa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.876 2 DEBUG oslo_concurrency.lockutils [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.877 2 DEBUG oslo_concurrency.lockutils [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.878 2 DEBUG oslo_concurrency.lockutils [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.878 2 DEBUG oslo_concurrency.lockutils [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.879 2 DEBUG oslo_concurrency.lockutils [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.881 2 INFO nova.compute.manager [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Terminating instance#033[00m
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.883 2 DEBUG nova.compute.manager [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:36:48 np0005465987 kernel: tap738a4094-d8 (unregistering): left promiscuous mode
Oct  2 08:36:48 np0005465987 NetworkManager[44910]: <info>  [1759408608.9431] device (tap738a4094-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:36:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:36:48Z|00538|binding|INFO|Releasing lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 from this chassis (sb_readonly=0)
Oct  2 08:36:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:36:48Z|00539|binding|INFO|Setting lport 738a4094-d8e6-4b4c-9cfc-440659391fa0 down in Southbound
Oct  2 08:36:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:36:48Z|00540|binding|INFO|Removing iface tap738a4094-d8 ovn-installed in OVS
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:48.973 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:e7:27 10.100.0.9'], port_security=['fa:16:3e:a7:e7:27 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6ec65b83-acd5-4790-8df0-0dfa903d2f84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e58f4ba2-c72c-42b8-acea-ca6241431726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc4d8f857b2d42bf9ae477fc5f514216', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3110ab08-53b5-412f-abb1-fdd400b42e71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff6421fa-d014-4140-8a8b-1356d60478c0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=738a4094-d8e6-4b4c-9cfc-440659391fa0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:36:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:48.975 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 738a4094-d8e6-4b4c-9cfc-440659391fa0 in datapath e58f4ba2-c72c-42b8-acea-ca6241431726 unbound from our chassis#033[00m
Oct  2 08:36:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:48.977 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e58f4ba2-c72c-42b8-acea-ca6241431726, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:36:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:48.979 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f5dbc535-a809-4039-936a-c08f291e5544]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:48.981 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 namespace which is not needed anymore#033[00m
Oct  2 08:36:48 np0005465987 nova_compute[230713]: 2025-10-02 12:36:48.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:49 np0005465987 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Oct  2 08:36:49 np0005465987 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007d.scope: Consumed 20.263s CPU time.
Oct  2 08:36:49 np0005465987 systemd-machined[188335]: Machine qemu-58-instance-0000007d terminated.
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:49 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278444]: [NOTICE]   (278448) : haproxy version is 2.8.14-c23fe91
Oct  2 08:36:49 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278444]: [NOTICE]   (278448) : path to executable is /usr/sbin/haproxy
Oct  2 08:36:49 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278444]: [WARNING]  (278448) : Exiting Master process...
Oct  2 08:36:49 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278444]: [WARNING]  (278448) : Exiting Master process...
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.121 2 INFO nova.virt.libvirt.driver [-] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Instance destroyed successfully.#033[00m
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.121 2 DEBUG nova.objects.instance [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lazy-loading 'resources' on Instance uuid 6ec65b83-acd5-4790-8df0-0dfa903d2f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:49 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278444]: [ALERT]    (278448) : Current worker (278450) exited with code 143 (Terminated)
Oct  2 08:36:49 np0005465987 neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726[278444]: [WARNING]  (278448) : All workers exited. Exiting... (0)
Oct  2 08:36:49 np0005465987 systemd[1]: libpod-bc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365.scope: Deactivated successfully.
Oct  2 08:36:49 np0005465987 podman[283215]: 2025-10-02 12:36:49.134527228 +0000 UTC m=+0.050240492 container died bc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.139 2 DEBUG nova.virt.libvirt.vif [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:32:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1934445766',display_name='tempest-ServerRescueNegativeTestJSON-server-1934445766',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1934445766',id=125,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:33:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cc4d8f857b2d42bf9ae477fc5f514216',ramdisk_id='',reservation_id='r-a8p10ujd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-959216005',owner_user_name='tempest-ServerRescueNegativeTestJSON-959216005-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:05Z,user_data=None,user_id='6c932f0d0e594f00855572fbe06ee3aa',uuid=6ec65b83-acd5-4790-8df0-0dfa903d2f84,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.139 2 DEBUG nova.network.os_vif_util [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converting VIF {"id": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "address": "fa:16:3e:a7:e7:27", "network": {"id": "e58f4ba2-c72c-42b8-acea-ca6241431726", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1915611894-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc4d8f857b2d42bf9ae477fc5f514216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap738a4094-d8", "ovs_interfaceid": "738a4094-d8e6-4b4c-9cfc-440659391fa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.140 2 DEBUG nova.network.os_vif_util [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:e7:27,bridge_name='br-int',has_traffic_filtering=True,id=738a4094-d8e6-4b4c-9cfc-440659391fa0,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a4094-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.140 2 DEBUG os_vif [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:e7:27,bridge_name='br-int',has_traffic_filtering=True,id=738a4094-d8e6-4b4c-9cfc-440659391fa0,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a4094-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.142 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738a4094-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.149 2 INFO os_vif [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:e7:27,bridge_name='br-int',has_traffic_filtering=True,id=738a4094-d8e6-4b4c-9cfc-440659391fa0,network=Network(e58f4ba2-c72c-42b8-acea-ca6241431726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap738a4094-d8')#033[00m
Oct  2 08:36:49 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365-userdata-shm.mount: Deactivated successfully.
Oct  2 08:36:49 np0005465987 systemd[1]: var-lib-containers-storage-overlay-e6d43223992921cd704841b4fcb74d44e85107e6df5b6e5d8ed0a513a5eb2e96-merged.mount: Deactivated successfully.
Oct  2 08:36:49 np0005465987 podman[283215]: 2025-10-02 12:36:49.181728916 +0000 UTC m=+0.097442160 container cleanup bc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:36:49 np0005465987 systemd[1]: libpod-conmon-bc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365.scope: Deactivated successfully.
Oct  2 08:36:49 np0005465987 podman[283267]: 2025-10-02 12:36:49.238482066 +0000 UTC m=+0.035375344 container remove bc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:36:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:49.244 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d32fd66d-16bc-4e2f-8245-d6fe941fb4c7]: (4, ('Thu Oct  2 12:36:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 (bc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365)\nbc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365\nThu Oct  2 12:36:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 (bc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365)\nbc114d8b45b9b7bef0b9edceb0351d4c399d0ed4ebf70293571aa7b5c81a0365\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:49.246 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb59648-6690-4a8c-880b-3d77c493ac1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:49.247 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58f4ba2-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:49 np0005465987 kernel: tape58f4ba2-c0: left promiscuous mode
Oct  2 08:36:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:36:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:49.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:49.312 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[74a1164a-924c-4be5-a282-6f0b7d056184]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:49.341 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b243d8-36f3-4b22-a7fe-4aaed7a55aa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:49.342 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[04e6b417-4790-4f0b-9c05-18c96e4df8b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:49.357 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[85d22386-d21f-4f5c-b37d-f3828ad30903]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650889, 'reachable_time': 24365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283284, 'error': None, 'target': 'ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:49.359 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e58f4ba2-c72c-42b8-acea-ca6241431726 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:36:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:36:49.359 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e7bf67-3ec8-4e3c-9b6e-3fa4d62eeb65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:49 np0005465987 systemd[1]: run-netns-ovnmeta\x2de58f4ba2\x2dc72c\x2d42b8\x2dacea\x2dca6241431726.mount: Deactivated successfully.
Oct  2 08:36:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:36:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:49.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.942 2 INFO nova.virt.libvirt.driver [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Deleting instance files /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84_del#033[00m
Oct  2 08:36:49 np0005465987 nova_compute[230713]: 2025-10-02 12:36:49.945 2 INFO nova.virt.libvirt.driver [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Deletion of /var/lib/nova/instances/6ec65b83-acd5-4790-8df0-0dfa903d2f84_del complete#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.025 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408595.0232236, 4d77c34a-2c0a-4826-946c-7470e45491fa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.029 2 INFO nova.compute.manager [-] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.039 2 INFO nova.compute.manager [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Took 1.16 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.040 2 DEBUG oslo.service.loopingcall [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.041 2 DEBUG nova.compute.manager [-] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.041 2 DEBUG nova.network.neutron [-] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.136 2 DEBUG nova.compute.manager [None req-4d362848-679c-455f-ad1e-b22112c03ed9 - - - - - -] [instance: 4d77c34a-2c0a-4826-946c-7470e45491fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.431 2 DEBUG nova.compute.manager [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-unplugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.431 2 DEBUG oslo_concurrency.lockutils [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.432 2 DEBUG oslo_concurrency.lockutils [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.432 2 DEBUG oslo_concurrency.lockutils [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.432 2 DEBUG nova.compute.manager [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] No waiting events found dispatching network-vif-unplugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:50 np0005465987 nova_compute[230713]: 2025-10-02 12:36:50.432 2 DEBUG nova.compute.manager [req-7d352f9c-0dbf-49bd-9783-3121eab2a8a3 req-21110024-be7b-4143-b256-8cdfb4fd61ef d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-unplugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:36:50 np0005465987 podman[283286]: 2025-10-02 12:36:50.857418259 +0000 UTC m=+0.061671597 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:36:51 np0005465987 nova_compute[230713]: 2025-10-02 12:36:51.028 2 DEBUG nova.network.neutron [-] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:51 np0005465987 nova_compute[230713]: 2025-10-02 12:36:51.203 2 DEBUG nova.compute.manager [req-fea7c369-4f0d-4bd2-a169-5ed9911da091 req-91bfe0b8-4c90-49d0-87a1-2c1195e7bd7f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-deleted-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:51 np0005465987 nova_compute[230713]: 2025-10-02 12:36:51.203 2 INFO nova.compute.manager [req-fea7c369-4f0d-4bd2-a169-5ed9911da091 req-91bfe0b8-4c90-49d0-87a1-2c1195e7bd7f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Neutron deleted interface 738a4094-d8e6-4b4c-9cfc-440659391fa0; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:36:51 np0005465987 nova_compute[230713]: 2025-10-02 12:36:51.203 2 DEBUG nova.network.neutron [req-fea7c369-4f0d-4bd2-a169-5ed9911da091 req-91bfe0b8-4c90-49d0-87a1-2c1195e7bd7f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:51 np0005465987 nova_compute[230713]: 2025-10-02 12:36:51.212 2 INFO nova.compute.manager [-] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Took 1.17 seconds to deallocate network for instance.#033[00m
Oct  2 08:36:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:51.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:51 np0005465987 nova_compute[230713]: 2025-10-02 12:36:51.411 2 DEBUG nova.compute.manager [req-fea7c369-4f0d-4bd2-a169-5ed9911da091 req-91bfe0b8-4c90-49d0-87a1-2c1195e7bd7f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Detach interface failed, port_id=738a4094-d8e6-4b4c-9cfc-440659391fa0, reason: Instance 6ec65b83-acd5-4790-8df0-0dfa903d2f84 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:36:51 np0005465987 nova_compute[230713]: 2025-10-02 12:36:51.544 2 DEBUG oslo_concurrency.lockutils [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:51 np0005465987 nova_compute[230713]: 2025-10-02 12:36:51.544 2 DEBUG oslo_concurrency.lockutils [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:51 np0005465987 nova_compute[230713]: 2025-10-02 12:36:51.636 2 DEBUG oslo_concurrency.processutils [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:51.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:52 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3136633686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:52 np0005465987 nova_compute[230713]: 2025-10-02 12:36:52.068 2 DEBUG oslo_concurrency.processutils [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:52 np0005465987 nova_compute[230713]: 2025-10-02 12:36:52.076 2 DEBUG nova.compute.provider_tree [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:36:52 np0005465987 nova_compute[230713]: 2025-10-02 12:36:52.096 2 DEBUG nova.scheduler.client.report [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:36:52 np0005465987 nova_compute[230713]: 2025-10-02 12:36:52.122 2 DEBUG oslo_concurrency.lockutils [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:36:52 np0005465987 nova_compute[230713]: 2025-10-02 12:36:52.149 2 INFO nova.scheduler.client.report [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Deleted allocations for instance 6ec65b83-acd5-4790-8df0-0dfa903d2f84
Oct  2 08:36:52 np0005465987 nova_compute[230713]: 2025-10-02 12:36:52.217 2 DEBUG oslo_concurrency.lockutils [None req-a3f65480-dec8-4d53-8a00-0c6031d25e5a 6c932f0d0e594f00855572fbe06ee3aa cc4d8f857b2d42bf9ae477fc5f514216 - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:36:52 np0005465987 nova_compute[230713]: 2025-10-02 12:36:52.614 2 DEBUG nova.compute.manager [req-55f1c07e-70c5-466b-b1f3-62d0ff624486 req-b9b485a2-e61d-44d6-9872-58c6d360e9ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:36:52 np0005465987 nova_compute[230713]: 2025-10-02 12:36:52.615 2 DEBUG oslo_concurrency.lockutils [req-55f1c07e-70c5-466b-b1f3-62d0ff624486 req-b9b485a2-e61d-44d6-9872-58c6d360e9ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:36:52 np0005465987 nova_compute[230713]: 2025-10-02 12:36:52.615 2 DEBUG oslo_concurrency.lockutils [req-55f1c07e-70c5-466b-b1f3-62d0ff624486 req-b9b485a2-e61d-44d6-9872-58c6d360e9ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:36:52 np0005465987 nova_compute[230713]: 2025-10-02 12:36:52.616 2 DEBUG oslo_concurrency.lockutils [req-55f1c07e-70c5-466b-b1f3-62d0ff624486 req-b9b485a2-e61d-44d6-9872-58c6d360e9ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6ec65b83-acd5-4790-8df0-0dfa903d2f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:36:52 np0005465987 nova_compute[230713]: 2025-10-02 12:36:52.616 2 DEBUG nova.compute.manager [req-55f1c07e-70c5-466b-b1f3-62d0ff624486 req-b9b485a2-e61d-44d6-9872-58c6d360e9ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] No waiting events found dispatching network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:36:52 np0005465987 nova_compute[230713]: 2025-10-02 12:36:52.617 2 WARNING nova.compute.manager [req-55f1c07e-70c5-466b-b1f3-62d0ff624486 req-b9b485a2-e61d-44d6-9872-58c6d360e9ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Received unexpected event network-vif-plugged-738a4094-d8e6-4b4c-9cfc-440659391fa0 for instance with vm_state deleted and task_state None.
Oct  2 08:36:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:53.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:53.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:54 np0005465987 nova_compute[230713]: 2025-10-02 12:36:54.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:36:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:55 np0005465987 nova_compute[230713]: 2025-10-02 12:36:55.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:36:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:55.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:55.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:56 np0005465987 nova_compute[230713]: 2025-10-02 12:36:56.206 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:36:56 np0005465987 nova_compute[230713]: 2025-10-02 12:36:56.207 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:36:56 np0005465987 nova_compute[230713]: 2025-10-02 12:36:56.223 2 DEBUG nova.compute.manager [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:36:56 np0005465987 nova_compute[230713]: 2025-10-02 12:36:56.289 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:36:56 np0005465987 nova_compute[230713]: 2025-10-02 12:36:56.290 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:36:56 np0005465987 nova_compute[230713]: 2025-10-02 12:36:56.297 2 DEBUG nova.virt.hardware [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:36:56 np0005465987 nova_compute[230713]: 2025-10-02 12:36:56.298 2 INFO nova.compute.claims [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:36:56 np0005465987 nova_compute[230713]: 2025-10-02 12:36:56.445 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:36:56 np0005465987 podman[283347]: 2025-10-02 12:36:56.866504334 +0000 UTC m=+0.083200889 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:36:56 np0005465987 podman[283346]: 2025-10-02 12:36:56.876163779 +0000 UTC m=+0.086687134 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:36:56 np0005465987 podman[283345]: 2025-10-02 12:36:56.879513691 +0000 UTC m=+0.091763373 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:36:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/243347859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:56 np0005465987 nova_compute[230713]: 2025-10-02 12:36:56.934 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:36:56 np0005465987 nova_compute[230713]: 2025-10-02 12:36:56.940 2 DEBUG nova.compute.provider_tree [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:36:56 np0005465987 nova_compute[230713]: 2025-10-02 12:36:56.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:36:56 np0005465987 nova_compute[230713]: 2025-10-02 12:36:56.987 2 DEBUG nova.scheduler.client.report [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.023 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.024 2 DEBUG nova.compute.manager [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.080 2 DEBUG nova.compute.manager [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.080 2 DEBUG nova.network.neutron [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.098 2 INFO nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.116 2 DEBUG nova.compute.manager [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.188 2 DEBUG nova.compute.manager [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.189 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.190 2 INFO nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Creating image(s)
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.217 2 DEBUG nova.storage.rbd_utils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.247 2 DEBUG nova.storage.rbd_utils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.279 2 DEBUG nova.storage.rbd_utils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.284 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:36:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:57.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.317 2 DEBUG nova.policy [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fe9cc788734f406d826446a848700331', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bc0d63d3b4404ef8858166e8836dd0af', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.366 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.367 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.367 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.367 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.396 2 DEBUG nova.storage.rbd_utils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.400 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:36:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:36:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:57.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:36:57 np0005465987 nova_compute[230713]: 2025-10-02 12:36:57.988 2 DEBUG nova.network.neutron [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Successfully created port: 14612808-d896-4fb0-87fd-b7d7fa7855b1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:36:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:36:58Z|00541|binding|INFO|Releasing lport 6f9d54ba-3cfb-48b9-bef7-b2077e6931d7 from this chassis (sb_readonly=0)
Oct  2 08:36:58 np0005465987 nova_compute[230713]: 2025-10-02 12:36:58.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:36:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:36:58Z|00542|binding|INFO|Releasing lport 6f9d54ba-3cfb-48b9-bef7-b2077e6931d7 from this chassis (sb_readonly=0)
Oct  2 08:36:58 np0005465987 nova_compute[230713]: 2025-10-02 12:36:58.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:36:58 np0005465987 nova_compute[230713]: 2025-10-02 12:36:58.439 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:36:58 np0005465987 nova_compute[230713]: 2025-10-02 12:36:58.518 2 DEBUG nova.storage.rbd_utils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] resizing rbd image 768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:36:58 np0005465987 nova_compute[230713]: 2025-10-02 12:36:58.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:36:58 np0005465987 nova_compute[230713]: 2025-10-02 12:36:58.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:36:58 np0005465987 nova_compute[230713]: 2025-10-02 12:36:58.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:36:58 np0005465987 nova_compute[230713]: 2025-10-02 12:36:58.992 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.206 2 DEBUG nova.objects.instance [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'migration_context' on Instance uuid 768b572a-8d38-4e11-a6c3-367d4ddc79b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.225 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.225 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Ensure instance console log exists: /var/lib/nova/instances/768b572a-8d38-4e11-a6c3-367d4ddc79b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.226 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.226 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.226 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.284 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.285 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.285 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.285 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 58c789e4-1ba5-4001-913d-9aa19bd9c59f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:36:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:59.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.684 2 DEBUG nova.network.neutron [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Successfully updated port: 14612808-d896-4fb0-87fd-b7d7fa7855b1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.704 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "refresh_cache-768b572a-8d38-4e11-a6c3-367d4ddc79b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.705 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquired lock "refresh_cache-768b572a-8d38-4e11-a6c3-367d4ddc79b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.705 2 DEBUG nova.network.neutron [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:36:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:36:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:59.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.843 2 DEBUG nova.compute.manager [req-a5d45916-2738-48e8-bd48-73dd82d892df req-d34dd738-131c-4579-b80c-bab82e38263b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Received event network-changed-14612808-d896-4fb0-87fd-b7d7fa7855b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.844 2 DEBUG nova.compute.manager [req-a5d45916-2738-48e8-bd48-73dd82d892df req-d34dd738-131c-4579-b80c-bab82e38263b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Refreshing instance network info cache due to event network-changed-14612808-d896-4fb0-87fd-b7d7fa7855b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.844 2 DEBUG oslo_concurrency.lockutils [req-a5d45916-2738-48e8-bd48-73dd82d892df req-d34dd738-131c-4579-b80c-bab82e38263b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-768b572a-8d38-4e11-a6c3-367d4ddc79b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:36:59 np0005465987 nova_compute[230713]: 2025-10-02 12:36:59.924 2 DEBUG nova.network.neutron [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:37:00 np0005465987 nova_compute[230713]: 2025-10-02 12:37:00.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:01 np0005465987 nova_compute[230713]: 2025-10-02 12:37:01.273 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Updating instance_info_cache with network_info: [{"id": "41486814-fe68-4ee4-acaa-661e4bca279a", "address": "fa:16:3e:44:0d:be", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41486814-fe", "ovs_interfaceid": "41486814-fe68-4ee4-acaa-661e4bca279a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:01 np0005465987 nova_compute[230713]: 2025-10-02 12:37:01.294 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:37:01 np0005465987 nova_compute[230713]: 2025-10-02 12:37:01.294 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:37:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:01.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:01.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.013 2 DEBUG nova.network.neutron [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Updating instance_info_cache with network_info: [{"id": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "address": "fa:16:3e:0c:42:c7", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14612808-d8", "ovs_interfaceid": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.044 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Releasing lock "refresh_cache-768b572a-8d38-4e11-a6c3-367d4ddc79b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.044 2 DEBUG nova.compute.manager [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Instance network_info: |[{"id": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "address": "fa:16:3e:0c:42:c7", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14612808-d8", "ovs_interfaceid": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.045 2 DEBUG oslo_concurrency.lockutils [req-a5d45916-2738-48e8-bd48-73dd82d892df req-d34dd738-131c-4579-b80c-bab82e38263b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-768b572a-8d38-4e11-a6c3-367d4ddc79b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.045 2 DEBUG nova.network.neutron [req-a5d45916-2738-48e8-bd48-73dd82d892df req-d34dd738-131c-4579-b80c-bab82e38263b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Refreshing network info cache for port 14612808-d896-4fb0-87fd-b7d7fa7855b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.051 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Start _get_guest_xml network_info=[{"id": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "address": "fa:16:3e:0c:42:c7", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14612808-d8", "ovs_interfaceid": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.057 2 WARNING nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.063 2 DEBUG nova.virt.libvirt.host [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.064 2 DEBUG nova.virt.libvirt.host [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.068 2 DEBUG nova.virt.libvirt.host [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.069 2 DEBUG nova.virt.libvirt.host [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.071 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.072 2 DEBUG nova.virt.hardware [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.072 2 DEBUG nova.virt.hardware [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.073 2 DEBUG nova.virt.hardware [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.073 2 DEBUG nova.virt.hardware [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.074 2 DEBUG nova.virt.hardware [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.074 2 DEBUG nova.virt.hardware [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.075 2 DEBUG nova.virt.hardware [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.075 2 DEBUG nova.virt.hardware [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.076 2 DEBUG nova.virt.hardware [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.076 2 DEBUG nova.virt.hardware [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.077 2 DEBUG nova.virt.hardware [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.082 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:02 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3648069516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.502 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.524 2 DEBUG nova.storage.rbd_utils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.527 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:02 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1656805135' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.932 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.934 2 DEBUG nova.virt.libvirt.vif [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:36:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-744365491',display_name='tempest-ServersTestJSON-server-744365491',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-744365491',id=143,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bc0d63d3b4404ef8858166e8836dd0af',ramdisk_id='',reservation_id='r-yy21j9uo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-80077074',owner_user_name='tempest-ServersTestJSON-80077074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:57Z,user_data=None,user_id='fe9cc788734f406d826446a848700331',uuid=768b572a-8d38-4e11-a6c3-367d4ddc79b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "address": "fa:16:3e:0c:42:c7", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14612808-d8", "ovs_interfaceid": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.934 2 DEBUG nova.network.os_vif_util [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converting VIF {"id": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "address": "fa:16:3e:0c:42:c7", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14612808-d8", "ovs_interfaceid": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.935 2 DEBUG nova.network.os_vif_util [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:42:c7,bridge_name='br-int',has_traffic_filtering=True,id=14612808-d896-4fb0-87fd-b7d7fa7855b1,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14612808-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.936 2 DEBUG nova.objects.instance [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'pci_devices' on Instance uuid 768b572a-8d38-4e11-a6c3-367d4ddc79b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.954 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  <uuid>768b572a-8d38-4e11-a6c3-367d4ddc79b2</uuid>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  <name>instance-0000008f</name>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServersTestJSON-server-744365491</nova:name>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:37:02</nova:creationTime>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <nova:user uuid="fe9cc788734f406d826446a848700331">tempest-ServersTestJSON-80077074-project-member</nova:user>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <nova:project uuid="bc0d63d3b4404ef8858166e8836dd0af">tempest-ServersTestJSON-80077074</nova:project>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <nova:port uuid="14612808-d896-4fb0-87fd-b7d7fa7855b1">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <entry name="serial">768b572a-8d38-4e11-a6c3-367d4ddc79b2</entry>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <entry name="uuid">768b572a-8d38-4e11-a6c3-367d4ddc79b2</entry>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk.config">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:0c:42:c7"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <target dev="tap14612808-d8"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/768b572a-8d38-4e11-a6c3-367d4ddc79b2/console.log" append="off"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:37:02 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:37:02 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:37:02 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:37:02 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.956 2 DEBUG nova.compute.manager [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Preparing to wait for external event network-vif-plugged-14612808-d896-4fb0-87fd-b7d7fa7855b1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.956 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.957 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.957 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.958 2 DEBUG nova.virt.libvirt.vif [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:36:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-744365491',display_name='tempest-ServersTestJSON-server-744365491',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-744365491',id=143,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bc0d63d3b4404ef8858166e8836dd0af',ramdisk_id='',reservation_id='r-yy21j9uo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-80077074',owner_user_name='tempest-ServersTestJSON-80077074-project-member'},
tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:57Z,user_data=None,user_id='fe9cc788734f406d826446a848700331',uuid=768b572a-8d38-4e11-a6c3-367d4ddc79b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "address": "fa:16:3e:0c:42:c7", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14612808-d8", "ovs_interfaceid": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.958 2 DEBUG nova.network.os_vif_util [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converting VIF {"id": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "address": "fa:16:3e:0c:42:c7", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14612808-d8", "ovs_interfaceid": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.959 2 DEBUG nova.network.os_vif_util [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:42:c7,bridge_name='br-int',has_traffic_filtering=True,id=14612808-d896-4fb0-87fd-b7d7fa7855b1,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14612808-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.959 2 DEBUG os_vif [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:42:c7,bridge_name='br-int',has_traffic_filtering=True,id=14612808-d896-4fb0-87fd-b7d7fa7855b1,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14612808-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.967 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14612808-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.968 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap14612808-d8, col_values=(('external_ids', {'iface-id': '14612808-d896-4fb0-87fd-b7d7fa7855b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:42:c7', 'vm-uuid': '768b572a-8d38-4e11-a6c3-367d4ddc79b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:37:02 np0005465987 NetworkManager[44910]: <info>  [1759408622.9717] manager: (tap14612808-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:02 np0005465987 nova_compute[230713]: 2025-10-02 12:37:02.976 2 INFO os_vif [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:42:c7,bridge_name='br-int',has_traffic_filtering=True,id=14612808-d896-4fb0-87fd-b7d7fa7855b1,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14612808-d8')
Oct  2 08:37:03 np0005465987 nova_compute[230713]: 2025-10-02 12:37:03.028 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:37:03 np0005465987 nova_compute[230713]: 2025-10-02 12:37:03.029 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:37:03 np0005465987 nova_compute[230713]: 2025-10-02 12:37:03.029 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] No VIF found with MAC fa:16:3e:0c:42:c7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 08:37:03 np0005465987 nova_compute[230713]: 2025-10-02 12:37:03.030 2 INFO nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Using config drive
Oct  2 08:37:03 np0005465987 nova_compute[230713]: 2025-10-02 12:37:03.064 2 DEBUG nova.storage.rbd_utils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:37:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:03.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:03 np0005465987 nova_compute[230713]: 2025-10-02 12:37:03.419 2 INFO nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Creating config drive at /var/lib/nova/instances/768b572a-8d38-4e11-a6c3-367d4ddc79b2/disk.config
Oct  2 08:37:03 np0005465987 nova_compute[230713]: 2025-10-02 12:37:03.431 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/768b572a-8d38-4e11-a6c3-367d4ddc79b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz7h8wqil execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:37:03 np0005465987 nova_compute[230713]: 2025-10-02 12:37:03.476 2 DEBUG nova.network.neutron [req-a5d45916-2738-48e8-bd48-73dd82d892df req-d34dd738-131c-4579-b80c-bab82e38263b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Updated VIF entry in instance network info cache for port 14612808-d896-4fb0-87fd-b7d7fa7855b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:37:03 np0005465987 nova_compute[230713]: 2025-10-02 12:37:03.477 2 DEBUG nova.network.neutron [req-a5d45916-2738-48e8-bd48-73dd82d892df req-d34dd738-131c-4579-b80c-bab82e38263b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Updating instance_info_cache with network_info: [{"id": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "address": "fa:16:3e:0c:42:c7", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14612808-d8", "ovs_interfaceid": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:03 np0005465987 nova_compute[230713]: 2025-10-02 12:37:03.495 2 DEBUG oslo_concurrency.lockutils [req-a5d45916-2738-48e8-bd48-73dd82d892df req-d34dd738-131c-4579-b80c-bab82e38263b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-768b572a-8d38-4e11-a6c3-367d4ddc79b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:37:03 np0005465987 nova_compute[230713]: 2025-10-02 12:37:03.590 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/768b572a-8d38-4e11-a6c3-367d4ddc79b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz7h8wqil" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:37:03 np0005465987 nova_compute[230713]: 2025-10-02 12:37:03.622 2 DEBUG nova.storage.rbd_utils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] rbd image 768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:37:03 np0005465987 nova_compute[230713]: 2025-10-02 12:37:03.626 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/768b572a-8d38-4e11-a6c3-367d4ddc79b2/disk.config 768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:37:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:03.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:04 np0005465987 nova_compute[230713]: 2025-10-02 12:37:04.120 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408609.1188695, 6ec65b83-acd5-4790-8df0-0dfa903d2f84 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:37:04 np0005465987 nova_compute[230713]: 2025-10-02 12:37:04.122 2 INFO nova.compute.manager [-] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] VM Stopped (Lifecycle Event)
Oct  2 08:37:04 np0005465987 nova_compute[230713]: 2025-10-02 12:37:04.150 2 DEBUG nova.compute.manager [None req-949f736c-9b46-466e-ad8f-4c29eeeb1685 - - - - - -] [instance: 6ec65b83-acd5-4790-8df0-0dfa903d2f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:37:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:05 np0005465987 nova_compute[230713]: 2025-10-02 12:37:05.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:05.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:05 np0005465987 nova_compute[230713]: 2025-10-02 12:37:05.545 2 DEBUG oslo_concurrency.processutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/768b572a-8d38-4e11-a6c3-367d4ddc79b2/disk.config 768b572a-8d38-4e11-a6c3-367d4ddc79b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.919s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:05 np0005465987 nova_compute[230713]: 2025-10-02 12:37:05.546 2 INFO nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Deleting local config drive /var/lib/nova/instances/768b572a-8d38-4e11-a6c3-367d4ddc79b2/disk.config because it was imported into RBD.#033[00m
Oct  2 08:37:05 np0005465987 kernel: tap14612808-d8: entered promiscuous mode
Oct  2 08:37:05 np0005465987 NetworkManager[44910]: <info>  [1759408625.6068] manager: (tap14612808-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Oct  2 08:37:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:05Z|00543|binding|INFO|Claiming lport 14612808-d896-4fb0-87fd-b7d7fa7855b1 for this chassis.
Oct  2 08:37:05 np0005465987 nova_compute[230713]: 2025-10-02 12:37:05.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:05Z|00544|binding|INFO|14612808-d896-4fb0-87fd-b7d7fa7855b1: Claiming fa:16:3e:0c:42:c7 10.100.0.3
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.614 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:42:c7 10.100.0.3'], port_security=['fa:16:3e:0c:42:c7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '768b572a-8d38-4e11-a6c3-367d4ddc79b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc0d63d3b4404ef8858166e8836dd0af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2a11ff87-bec6-4638-b302-adcd655efba9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=797b6af2-473b-4626-9e97-a0a489119419, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=14612808-d896-4fb0-87fd-b7d7fa7855b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.615 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 14612808-d896-4fb0-87fd-b7d7fa7855b1 in datapath d7203b00-e5e4-402e-b777-ac6280fa23ac bound to our chassis#033[00m
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.616 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7203b00-e5e4-402e-b777-ac6280fa23ac#033[00m
Oct  2 08:37:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:05Z|00545|binding|INFO|Setting lport 14612808-d896-4fb0-87fd-b7d7fa7855b1 up in Southbound
Oct  2 08:37:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:05Z|00546|binding|INFO|Setting lport 14612808-d896-4fb0-87fd-b7d7fa7855b1 ovn-installed in OVS
Oct  2 08:37:05 np0005465987 nova_compute[230713]: 2025-10-02 12:37:05.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:05 np0005465987 nova_compute[230713]: 2025-10-02 12:37:05.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.639 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd843cc-b9e8-4004-bc17-5975c533ecfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:05 np0005465987 systemd-machined[188335]: New machine qemu-63-instance-0000008f.
Oct  2 08:37:05 np0005465987 systemd-udevd[283714]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:37:05 np0005465987 systemd[1]: Started Virtual Machine qemu-63-instance-0000008f.
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.668 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5f33fa21-1c25-4be7-b2f6-62a8780ca647]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.671 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[0d704b1d-df3b-4964-ae6c-e05babe4de2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:05 np0005465987 NetworkManager[44910]: <info>  [1759408625.6735] device (tap14612808-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:37:05 np0005465987 NetworkManager[44910]: <info>  [1759408625.6749] device (tap14612808-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.696 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa2adf8-b8c7-491e-8902-e1d2ef45399f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.711 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc7a20a-11a9-429f-9a3a-70d2e84f5bf0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7203b00-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:c4:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653015, 'reachable_time': 17336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283720, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.726 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[905b9f19-e100-42da-82fb-8ee2977ebfcc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd7203b00-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653029, 'tstamp': 653029}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283724, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7203b00-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653032, 'tstamp': 653032}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283724, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.728 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7203b00-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:05 np0005465987 nova_compute[230713]: 2025-10-02 12:37:05.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:05 np0005465987 nova_compute[230713]: 2025-10-02 12:37:05.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.732 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7203b00-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.732 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.733 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7203b00-e0, col_values=(('external_ids', {'iface-id': '6f9d54ba-3cfb-48b9-bef7-b2077e6931d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:05.733 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:05.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.371 2 DEBUG nova.compute.manager [req-f8e8cf7e-55a0-491f-b95d-d6f25ffe0ec0 req-f05b8e81-13e2-4f21-96af-b8e0f6792c95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Received event network-vif-plugged-14612808-d896-4fb0-87fd-b7d7fa7855b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.373 2 DEBUG oslo_concurrency.lockutils [req-f8e8cf7e-55a0-491f-b95d-d6f25ffe0ec0 req-f05b8e81-13e2-4f21-96af-b8e0f6792c95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.373 2 DEBUG oslo_concurrency.lockutils [req-f8e8cf7e-55a0-491f-b95d-d6f25ffe0ec0 req-f05b8e81-13e2-4f21-96af-b8e0f6792c95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.374 2 DEBUG oslo_concurrency.lockutils [req-f8e8cf7e-55a0-491f-b95d-d6f25ffe0ec0 req-f05b8e81-13e2-4f21-96af-b8e0f6792c95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.374 2 DEBUG nova.compute.manager [req-f8e8cf7e-55a0-491f-b95d-d6f25ffe0ec0 req-f05b8e81-13e2-4f21-96af-b8e0f6792c95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Processing event network-vif-plugged-14612808-d896-4fb0-87fd-b7d7fa7855b1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.779 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408626.7791686, 768b572a-8d38-4e11-a6c3-367d4ddc79b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.780 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] VM Started (Lifecycle Event)#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.783 2 DEBUG nova.compute.manager [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.790 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.794 2 INFO nova.virt.libvirt.driver [-] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Instance spawned successfully.#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.794 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.815 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.819 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.836 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.837 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.837 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.838 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.838 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.839 2 DEBUG nova.virt.libvirt.driver [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.843 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.844 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408626.7804747, 768b572a-8d38-4e11-a6c3-367d4ddc79b2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.844 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.882 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.885 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408626.7901082, 768b572a-8d38-4e11-a6c3-367d4ddc79b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.886 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.913 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.917 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.938 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.944 2 INFO nova.compute.manager [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Took 9.76 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:37:06 np0005465987 nova_compute[230713]: 2025-10-02 12:37:06.944 2 DEBUG nova.compute.manager [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:07 np0005465987 nova_compute[230713]: 2025-10-02 12:37:07.040 2 INFO nova.compute.manager [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Took 10.78 seconds to build instance.#033[00m
Oct  2 08:37:07 np0005465987 nova_compute[230713]: 2025-10-02 12:37:07.064 2 DEBUG oslo_concurrency.lockutils [None req-e76854e5-b829-4df0-9755-d04724e17ad8 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:07 np0005465987 nova_compute[230713]: 2025-10-02 12:37:07.288 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:07.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:07.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:07 np0005465987 nova_compute[230713]: 2025-10-02 12:37:07.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:07 np0005465987 nova_compute[230713]: 2025-10-02 12:37:07.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:08 np0005465987 nova_compute[230713]: 2025-10-02 12:37:08.568 2 DEBUG nova.compute.manager [req-0ca5dcaa-e5f2-427a-ab6c-b8e8b4f322ed req-ce0a7e27-e4e0-44f6-9429-2c698858ef11 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Received event network-vif-plugged-14612808-d896-4fb0-87fd-b7d7fa7855b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:08 np0005465987 nova_compute[230713]: 2025-10-02 12:37:08.569 2 DEBUG oslo_concurrency.lockutils [req-0ca5dcaa-e5f2-427a-ab6c-b8e8b4f322ed req-ce0a7e27-e4e0-44f6-9429-2c698858ef11 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:08 np0005465987 nova_compute[230713]: 2025-10-02 12:37:08.569 2 DEBUG oslo_concurrency.lockutils [req-0ca5dcaa-e5f2-427a-ab6c-b8e8b4f322ed req-ce0a7e27-e4e0-44f6-9429-2c698858ef11 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:08 np0005465987 nova_compute[230713]: 2025-10-02 12:37:08.569 2 DEBUG oslo_concurrency.lockutils [req-0ca5dcaa-e5f2-427a-ab6c-b8e8b4f322ed req-ce0a7e27-e4e0-44f6-9429-2c698858ef11 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:08 np0005465987 nova_compute[230713]: 2025-10-02 12:37:08.570 2 DEBUG nova.compute.manager [req-0ca5dcaa-e5f2-427a-ab6c-b8e8b4f322ed req-ce0a7e27-e4e0-44f6-9429-2c698858ef11 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] No waiting events found dispatching network-vif-plugged-14612808-d896-4fb0-87fd-b7d7fa7855b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:08 np0005465987 nova_compute[230713]: 2025-10-02 12:37:08.570 2 WARNING nova.compute.manager [req-0ca5dcaa-e5f2-427a-ab6c-b8e8b4f322ed req-ce0a7e27-e4e0-44f6-9429-2c698858ef11 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Received unexpected event network-vif-plugged-14612808-d896-4fb0-87fd-b7d7fa7855b1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:37:08 np0005465987 nova_compute[230713]: 2025-10-02 12:37:08.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:08 np0005465987 nova_compute[230713]: 2025-10-02 12:37:08.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:08 np0005465987 nova_compute[230713]: 2025-10-02 12:37:08.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:08 np0005465987 nova_compute[230713]: 2025-10-02 12:37:08.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:08 np0005465987 nova_compute[230713]: 2025-10-02 12:37:08.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:08 np0005465987 nova_compute[230713]: 2025-10-02 12:37:08.996 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:37:08 np0005465987 nova_compute[230713]: 2025-10-02 12:37:08.996 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:09.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:37:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/545548618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.505 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.588 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.589 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.593 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.593 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.792 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.793 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4104MB free_disk=20.835155487060547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.794 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.794 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:09.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.902 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 58c789e4-1ba5-4001-913d-9aa19bd9c59f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.902 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 768b572a-8d38-4e11-a6c3-367d4ddc79b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.903 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.903 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:37:09 np0005465987 nova_compute[230713]: 2025-10-02 12:37:09.988 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:10 np0005465987 nova_compute[230713]: 2025-10-02 12:37:10.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:37:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1902971222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:37:10 np0005465987 nova_compute[230713]: 2025-10-02 12:37:10.419 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:10 np0005465987 nova_compute[230713]: 2025-10-02 12:37:10.426 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:37:10 np0005465987 nova_compute[230713]: 2025-10-02 12:37:10.443 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:37:10 np0005465987 nova_compute[230713]: 2025-10-02 12:37:10.468 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:37:10 np0005465987 nova_compute[230713]: 2025-10-02 12:37:10.469 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:11.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.469 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.470 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.489 2 DEBUG oslo_concurrency.lockutils [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.490 2 DEBUG oslo_concurrency.lockutils [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.490 2 DEBUG oslo_concurrency.lockutils [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.490 2 DEBUG oslo_concurrency.lockutils [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.491 2 DEBUG oslo_concurrency.lockutils [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.492 2 INFO nova.compute.manager [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Terminating instance#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.493 2 DEBUG nova.compute.manager [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:37:11 np0005465987 kernel: tap14612808-d8 (unregistering): left promiscuous mode
Oct  2 08:37:11 np0005465987 NetworkManager[44910]: <info>  [1759408631.6035] device (tap14612808-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:11 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:11Z|00547|binding|INFO|Releasing lport 14612808-d896-4fb0-87fd-b7d7fa7855b1 from this chassis (sb_readonly=0)
Oct  2 08:37:11 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:11Z|00548|binding|INFO|Setting lport 14612808-d896-4fb0-87fd-b7d7fa7855b1 down in Southbound
Oct  2 08:37:11 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:11Z|00549|binding|INFO|Removing iface tap14612808-d8 ovn-installed in OVS
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.615 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:42:c7 10.100.0.3'], port_security=['fa:16:3e:0c:42:c7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '768b572a-8d38-4e11-a6c3-367d4ddc79b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc0d63d3b4404ef8858166e8836dd0af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2a11ff87-bec6-4638-b302-adcd655efba9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=797b6af2-473b-4626-9e97-a0a489119419, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=14612808-d896-4fb0-87fd-b7d7fa7855b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.616 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 14612808-d896-4fb0-87fd-b7d7fa7855b1 in datapath d7203b00-e5e4-402e-b777-ac6280fa23ac unbound from our chassis#033[00m
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.618 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7203b00-e5e4-402e-b777-ac6280fa23ac#033[00m
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.637 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3aa766-6285-498b-9672-d2e5da930f1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.678 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2e4ad7-0a80-438c-9b20-1bc07873a8f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.683 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e502fcc6-a27d-4c1f-8c1d-0e84402f73f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:11 np0005465987 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Oct  2 08:37:11 np0005465987 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000008f.scope: Consumed 5.747s CPU time.
Oct  2 08:37:11 np0005465987 systemd-machined[188335]: Machine qemu-63-instance-0000008f terminated.
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.714 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ce715c2c-faf2-4708-b22d-461010a13da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.738 2 INFO nova.virt.libvirt.driver [-] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Instance destroyed successfully.#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.739 2 DEBUG nova.objects.instance [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'resources' on Instance uuid 768b572a-8d38-4e11-a6c3-367d4ddc79b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.740 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[77a064dc-0a30-4376-a8de-efb086a6221c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7203b00-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:c4:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653015, 'reachable_time': 17336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283833, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.755 2 DEBUG nova.virt.libvirt.vif [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:36:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-744365491',display_name='tempest-ServersTestJSON-server-744365491',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-744365491',id=143,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bc0d63d3b4404ef8858166e8836dd0af',ramdisk_id='',reservation_id='r-yy21j9uo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestJSON-80077074',owner_user_name='tempest-ServersTestJSON-80077074-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:37:08Z,user_data=None,user_id='fe9cc788734f406d826446a848700331',uuid=768b572a-8d38-4e11-a6c3-367d4ddc79b2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "address": "fa:16:3e:0c:42:c7", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14612808-d8", "ovs_interfaceid": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.756 2 DEBUG nova.network.os_vif_util [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converting VIF {"id": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "address": "fa:16:3e:0c:42:c7", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14612808-d8", "ovs_interfaceid": "14612808-d896-4fb0-87fd-b7d7fa7855b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.757 2 DEBUG nova.network.os_vif_util [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:42:c7,bridge_name='br-int',has_traffic_filtering=True,id=14612808-d896-4fb0-87fd-b7d7fa7855b1,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14612808-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.757 2 DEBUG os_vif [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:42:c7,bridge_name='br-int',has_traffic_filtering=True,id=14612808-d896-4fb0-87fd-b7d7fa7855b1,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14612808-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.759 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14612808-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.758 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c669ab3e-4442-49a1-98cb-7423a2ad9742]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd7203b00-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653029, 'tstamp': 653029}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283838, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7203b00-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653032, 'tstamp': 653032}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283838, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.759 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7203b00-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.763 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7203b00-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.763 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.763 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7203b00-e0, col_values=(('external_ids', {'iface-id': '6f9d54ba-3cfb-48b9-bef7-b2077e6931d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:11.764 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.765 2 INFO os_vif [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:42:c7,bridge_name='br-int',has_traffic_filtering=True,id=14612808-d896-4fb0-87fd-b7d7fa7855b1,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14612808-d8')#033[00m
Oct  2 08:37:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:11.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.946 2 DEBUG nova.compute.manager [req-c329732a-6a83-416e-895d-aea34eb8aed3 req-148e7032-65e9-49df-9295-ea9f9f7a12ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Received event network-vif-unplugged-14612808-d896-4fb0-87fd-b7d7fa7855b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.946 2 DEBUG oslo_concurrency.lockutils [req-c329732a-6a83-416e-895d-aea34eb8aed3 req-148e7032-65e9-49df-9295-ea9f9f7a12ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.946 2 DEBUG oslo_concurrency.lockutils [req-c329732a-6a83-416e-895d-aea34eb8aed3 req-148e7032-65e9-49df-9295-ea9f9f7a12ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.947 2 DEBUG oslo_concurrency.lockutils [req-c329732a-6a83-416e-895d-aea34eb8aed3 req-148e7032-65e9-49df-9295-ea9f9f7a12ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.947 2 DEBUG nova.compute.manager [req-c329732a-6a83-416e-895d-aea34eb8aed3 req-148e7032-65e9-49df-9295-ea9f9f7a12ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] No waiting events found dispatching network-vif-unplugged-14612808-d896-4fb0-87fd-b7d7fa7855b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:11 np0005465987 nova_compute[230713]: 2025-10-02 12:37:11.947 2 DEBUG nova.compute.manager [req-c329732a-6a83-416e-895d-aea34eb8aed3 req-148e7032-65e9-49df-9295-ea9f9f7a12ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Received event network-vif-unplugged-14612808-d896-4fb0-87fd-b7d7fa7855b1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:37:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:13.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:13.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.048 2 DEBUG nova.compute.manager [req-a70db812-03fb-4a78-bf7e-1a2afb53751d req-35433228-85cf-4b89-8560-565e3cfe4ff1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Received event network-vif-plugged-14612808-d896-4fb0-87fd-b7d7fa7855b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.048 2 DEBUG oslo_concurrency.lockutils [req-a70db812-03fb-4a78-bf7e-1a2afb53751d req-35433228-85cf-4b89-8560-565e3cfe4ff1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.049 2 DEBUG oslo_concurrency.lockutils [req-a70db812-03fb-4a78-bf7e-1a2afb53751d req-35433228-85cf-4b89-8560-565e3cfe4ff1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.050 2 DEBUG oslo_concurrency.lockutils [req-a70db812-03fb-4a78-bf7e-1a2afb53751d req-35433228-85cf-4b89-8560-565e3cfe4ff1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.050 2 DEBUG nova.compute.manager [req-a70db812-03fb-4a78-bf7e-1a2afb53751d req-35433228-85cf-4b89-8560-565e3cfe4ff1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] No waiting events found dispatching network-vif-plugged-14612808-d896-4fb0-87fd-b7d7fa7855b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.050 2 WARNING nova.compute.manager [req-a70db812-03fb-4a78-bf7e-1a2afb53751d req-35433228-85cf-4b89-8560-565e3cfe4ff1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Received unexpected event network-vif-plugged-14612808-d896-4fb0-87fd-b7d7fa7855b1 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.239 2 INFO nova.virt.libvirt.driver [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Deleting instance files /var/lib/nova/instances/768b572a-8d38-4e11-a6c3-367d4ddc79b2_del#033[00m
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.240 2 INFO nova.virt.libvirt.driver [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Deletion of /var/lib/nova/instances/768b572a-8d38-4e11-a6c3-367d4ddc79b2_del complete#033[00m
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.448 2 INFO nova.compute.manager [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Took 2.95 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.448 2 DEBUG oslo.service.loopingcall [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.449 2 DEBUG nova.compute.manager [-] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.449 2 DEBUG nova.network.neutron [-] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:37:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:14 np0005465987 nova_compute[230713]: 2025-10-02 12:37:14.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:37:15 np0005465987 nova_compute[230713]: 2025-10-02 12:37:15.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:15.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:15 np0005465987 nova_compute[230713]: 2025-10-02 12:37:15.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:15.443 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:15.444 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:37:15 np0005465987 nova_compute[230713]: 2025-10-02 12:37:15.451 2 DEBUG nova.network.neutron [-] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:15 np0005465987 nova_compute[230713]: 2025-10-02 12:37:15.479 2 INFO nova.compute.manager [-] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Took 1.03 seconds to deallocate network for instance.#033[00m
Oct  2 08:37:15 np0005465987 nova_compute[230713]: 2025-10-02 12:37:15.567 2 DEBUG oslo_concurrency.lockutils [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:15 np0005465987 nova_compute[230713]: 2025-10-02 12:37:15.567 2 DEBUG oslo_concurrency.lockutils [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:15 np0005465987 nova_compute[230713]: 2025-10-02 12:37:15.570 2 DEBUG nova.compute.manager [req-31d46b1d-c7b3-4db0-98ef-5e130c1c69a3 req-61e10b8c-af0e-4def-bca7-904b9c596cf9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Received event network-vif-deleted-14612808-d896-4fb0-87fd-b7d7fa7855b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:15 np0005465987 nova_compute[230713]: 2025-10-02 12:37:15.623 2 DEBUG oslo_concurrency.processutils [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:15.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:37:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3471199299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:37:16 np0005465987 nova_compute[230713]: 2025-10-02 12:37:16.043 2 DEBUG oslo_concurrency.processutils [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:16 np0005465987 nova_compute[230713]: 2025-10-02 12:37:16.049 2 DEBUG nova.compute.provider_tree [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:37:16 np0005465987 nova_compute[230713]: 2025-10-02 12:37:16.068 2 DEBUG nova.scheduler.client.report [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:37:16 np0005465987 nova_compute[230713]: 2025-10-02 12:37:16.089 2 DEBUG oslo_concurrency.lockutils [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:16 np0005465987 nova_compute[230713]: 2025-10-02 12:37:16.121 2 INFO nova.scheduler.client.report [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Deleted allocations for instance 768b572a-8d38-4e11-a6c3-367d4ddc79b2#033[00m
Oct  2 08:37:16 np0005465987 nova_compute[230713]: 2025-10-02 12:37:16.229 2 DEBUG oslo_concurrency.lockutils [None req-44e81e3e-3a3b-4e1f-ac9b-b38fb6727011 fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "768b572a-8d38-4e11-a6c3-367d4ddc79b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:16 np0005465987 nova_compute[230713]: 2025-10-02 12:37:16.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:17.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:17.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:19.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:19.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:20 np0005465987 nova_compute[230713]: 2025-10-02 12:37:20.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:21.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:21 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:21.445 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:21 np0005465987 nova_compute[230713]: 2025-10-02 12:37:21.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:21.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:21 np0005465987 podman[283880]: 2025-10-02 12:37:21.884109587 +0000 UTC m=+0.088912315 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct  2 08:37:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:23.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:23.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:25 np0005465987 nova_compute[230713]: 2025-10-02 12:37:25.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:25.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:25.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.034 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.034 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.071 2 DEBUG nova.compute.manager [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.187 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.188 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.195 2 DEBUG nova.virt.hardware [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.195 2 INFO nova.compute.claims [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.369 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.735 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408631.7340589, 768b572a-8d38-4e11-a6c3-367d4ddc79b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.736 2 INFO nova.compute.manager [-] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.757 2 DEBUG nova.compute.manager [None req-d21b55bc-64ea-4ae6-b4c7-2f96f9182060 - - - - - -] [instance: 768b572a-8d38-4e11-a6c3-367d4ddc79b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:37:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2525186184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.827 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.836 2 DEBUG nova.compute.provider_tree [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.872 2 DEBUG nova.scheduler.client.report [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.903 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.904 2 DEBUG nova.compute.manager [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.983 2 DEBUG nova.compute.manager [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:37:26 np0005465987 nova_compute[230713]: 2025-10-02 12:37:26.983 2 DEBUG nova.network.neutron [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.009 2 INFO nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.046 2 DEBUG nova.compute.manager [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.160 2 DEBUG nova.compute.manager [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.162 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.162 2 INFO nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Creating image(s)#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.201 2 DEBUG nova.storage.rbd_utils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.242 2 DEBUG nova.storage.rbd_utils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.282 2 DEBUG nova.storage.rbd_utils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.288 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.332 2 DEBUG nova.policy [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c02d1dcc10ea4e57bbc6b7a3c100dc7b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6822f02d5ca04c659329a75d487054cf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:37:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:27.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.391 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.392 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.392 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.393 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.425 2 DEBUG nova.storage.rbd_utils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:27 np0005465987 nova_compute[230713]: 2025-10-02 12:37:27.430 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 d2dd3328-2833-429e-9f0d-3ffff2c72587_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:27.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:27 np0005465987 podman[284015]: 2025-10-02 12:37:27.873798148 +0000 UTC m=+0.073180733 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:37:27 np0005465987 podman[284014]: 2025-10-02 12:37:27.874183378 +0000 UTC m=+0.073339167 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:37:27 np0005465987 podman[284013]: 2025-10-02 12:37:27.932353467 +0000 UTC m=+0.132683858 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:37:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:27.963 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:27.963 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:27.964 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:28 np0005465987 nova_compute[230713]: 2025-10-02 12:37:28.064 2 DEBUG nova.network.neutron [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Successfully created port: c775ce5e-fd60-4ae8-a429-77f19ee2e858 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:37:28 np0005465987 nova_compute[230713]: 2025-10-02 12:37:28.538 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 d2dd3328-2833-429e-9f0d-3ffff2c72587_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:28 np0005465987 nova_compute[230713]: 2025-10-02 12:37:28.599 2 DEBUG nova.storage.rbd_utils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] resizing rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:37:28 np0005465987 nova_compute[230713]: 2025-10-02 12:37:28.845 2 DEBUG nova.network.neutron [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Successfully updated port: c775ce5e-fd60-4ae8-a429-77f19ee2e858 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:37:28 np0005465987 nova_compute[230713]: 2025-10-02 12:37:28.901 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "refresh_cache-d2dd3328-2833-429e-9f0d-3ffff2c72587" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:37:28 np0005465987 nova_compute[230713]: 2025-10-02 12:37:28.901 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquired lock "refresh_cache-d2dd3328-2833-429e-9f0d-3ffff2c72587" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:37:28 np0005465987 nova_compute[230713]: 2025-10-02 12:37:28.902 2 DEBUG nova.network.neutron [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:37:28 np0005465987 nova_compute[230713]: 2025-10-02 12:37:28.911 2 DEBUG nova.objects.instance [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'migration_context' on Instance uuid d2dd3328-2833-429e-9f0d-3ffff2c72587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:28 np0005465987 nova_compute[230713]: 2025-10-02 12:37:28.932 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:37:28 np0005465987 nova_compute[230713]: 2025-10-02 12:37:28.933 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Ensure instance console log exists: /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:37:28 np0005465987 nova_compute[230713]: 2025-10-02 12:37:28.934 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:28 np0005465987 nova_compute[230713]: 2025-10-02 12:37:28.934 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:28 np0005465987 nova_compute[230713]: 2025-10-02 12:37:28.934 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:29 np0005465987 nova_compute[230713]: 2025-10-02 12:37:29.066 2 DEBUG nova.compute.manager [req-10b8b803-e8bf-47d5-bd30-06b255dca6c2 req-d99c9040-6e72-499a-b8e2-11bdae5bae98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received event network-changed-c775ce5e-fd60-4ae8-a429-77f19ee2e858 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:29 np0005465987 nova_compute[230713]: 2025-10-02 12:37:29.067 2 DEBUG nova.compute.manager [req-10b8b803-e8bf-47d5-bd30-06b255dca6c2 req-d99c9040-6e72-499a-b8e2-11bdae5bae98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Refreshing instance network info cache due to event network-changed-c775ce5e-fd60-4ae8-a429-77f19ee2e858. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:37:29 np0005465987 nova_compute[230713]: 2025-10-02 12:37:29.068 2 DEBUG oslo_concurrency.lockutils [req-10b8b803-e8bf-47d5-bd30-06b255dca6c2 req-d99c9040-6e72-499a-b8e2-11bdae5bae98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-d2dd3328-2833-429e-9f0d-3ffff2c72587" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:37:29 np0005465987 nova_compute[230713]: 2025-10-02 12:37:29.263 2 DEBUG nova.network.neutron [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:37:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:29.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:29.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.909364) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408649909660, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 823, "num_deletes": 251, "total_data_size": 1506884, "memory_usage": 1524656, "flush_reason": "Manual Compaction"}
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408649918926, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 993471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54477, "largest_seqno": 55295, "table_properties": {"data_size": 989649, "index_size": 1602, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9032, "raw_average_key_size": 19, "raw_value_size": 981854, "raw_average_value_size": 2143, "num_data_blocks": 70, "num_entries": 458, "num_filter_entries": 458, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408595, "oldest_key_time": 1759408595, "file_creation_time": 1759408649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 9591 microseconds, and 4872 cpu microseconds.
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.918968) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 993471 bytes OK
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.918986) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.920550) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.920566) EVENT_LOG_v1 {"time_micros": 1759408649920561, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.920584) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1502612, prev total WAL file size 1502612, number of live WAL files 2.
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.921691) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(970KB)], [105(12MB)]
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408649921794, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 14330154, "oldest_snapshot_seqno": -1}
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 7934 keys, 12464789 bytes, temperature: kUnknown
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408649973603, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 12464789, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12410479, "index_size": 33398, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 205821, "raw_average_key_size": 25, "raw_value_size": 12267799, "raw_average_value_size": 1546, "num_data_blocks": 1312, "num_entries": 7934, "num_filter_entries": 7934, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759408649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.973899) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 12464789 bytes
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.975265) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 276.2 rd, 240.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.7 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(27.0) write-amplify(12.5) OK, records in: 8447, records dropped: 513 output_compression: NoCompression
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.975286) EVENT_LOG_v1 {"time_micros": 1759408649975277, "job": 66, "event": "compaction_finished", "compaction_time_micros": 51877, "compaction_time_cpu_micros": 24879, "output_level": 6, "num_output_files": 1, "total_output_size": 12464789, "num_input_records": 8447, "num_output_records": 7934, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408649975582, "job": 66, "event": "table_file_deletion", "file_number": 107}
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408649978422, "job": 66, "event": "table_file_deletion", "file_number": 105}
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.921511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.978504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.978512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.978516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.978520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:37:29.978524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.455 2 DEBUG nova.network.neutron [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Updating instance_info_cache with network_info: [{"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.477 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Releasing lock "refresh_cache-d2dd3328-2833-429e-9f0d-3ffff2c72587" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.477 2 DEBUG nova.compute.manager [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance network_info: |[{"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.478 2 DEBUG oslo_concurrency.lockutils [req-10b8b803-e8bf-47d5-bd30-06b255dca6c2 req-d99c9040-6e72-499a-b8e2-11bdae5bae98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-d2dd3328-2833-429e-9f0d-3ffff2c72587" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.478 2 DEBUG nova.network.neutron [req-10b8b803-e8bf-47d5-bd30-06b255dca6c2 req-d99c9040-6e72-499a-b8e2-11bdae5bae98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Refreshing network info cache for port c775ce5e-fd60-4ae8-a429-77f19ee2e858 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.481 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Start _get_guest_xml network_info=[{"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.485 2 WARNING nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.491 2 DEBUG nova.virt.libvirt.host [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.492 2 DEBUG nova.virt.libvirt.host [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.497 2 DEBUG nova.virt.libvirt.host [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.497 2 DEBUG nova.virt.libvirt.host [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.499 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.500 2 DEBUG nova.virt.hardware [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.501 2 DEBUG nova.virt.hardware [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.501 2 DEBUG nova.virt.hardware [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.502 2 DEBUG nova.virt.hardware [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.502 2 DEBUG nova.virt.hardware [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.503 2 DEBUG nova.virt.hardware [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.503 2 DEBUG nova.virt.hardware [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.504 2 DEBUG nova.virt.hardware [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.504 2 DEBUG nova.virt.hardware [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.505 2 DEBUG nova.virt.hardware [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.505 2 DEBUG nova.virt.hardware [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:37:30 np0005465987 nova_compute[230713]: 2025-10-02 12:37:30.510 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:30 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3185048013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.035 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.064 2 DEBUG nova.storage.rbd_utils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.068 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:31.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:31 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3789338036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.514 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.518 2 DEBUG nova.virt.libvirt.vif [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1524407385',display_name='tempest-tempest.common.compute-instance-1524407385',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1524407385',id=147,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-2ey4o6h2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:27Z,user_data=None,user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=d2dd3328-2833-429e-9f0d-3ffff2c72587,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.519 2 DEBUG nova.network.os_vif_util [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.521 2 DEBUG nova.network.os_vif_util [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.523 2 DEBUG nova.objects.instance [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'pci_devices' on Instance uuid d2dd3328-2833-429e-9f0d-3ffff2c72587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.574 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  <uuid>d2dd3328-2833-429e-9f0d-3ffff2c72587</uuid>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  <name>instance-00000093</name>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <nova:name>tempest-tempest.common.compute-instance-1524407385</nova:name>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:37:30</nova:creationTime>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <nova:user uuid="c02d1dcc10ea4e57bbc6b7a3c100dc7b">tempest-ServerActionsTestOtherA-1680083910-project-member</nova:user>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <nova:project uuid="6822f02d5ca04c659329a75d487054cf">tempest-ServerActionsTestOtherA-1680083910</nova:project>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <nova:port uuid="c775ce5e-fd60-4ae8-a429-77f19ee2e858">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <entry name="serial">d2dd3328-2833-429e-9f0d-3ffff2c72587</entry>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <entry name="uuid">d2dd3328-2833-429e-9f0d-3ffff2c72587</entry>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/d2dd3328-2833-429e-9f0d-3ffff2c72587_disk">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/d2dd3328-2833-429e-9f0d-3ffff2c72587_disk.config">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:ad:b4:8f"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <target dev="tapc775ce5e-fd"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/console.log" append="off"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:37:31 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:37:31 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:37:31 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:37:31 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.576 2 DEBUG nova.compute.manager [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Preparing to wait for external event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.576 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.577 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.577 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.578 2 DEBUG nova.virt.libvirt.vif [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1524407385',display_name='tempest-tempest.common.compute-instance-1524407385',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1524407385',id=147,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-2ey4o6h2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:27Z,user_data=None,user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=d2dd3328-2833-429e-9f0d-3ffff2c72587,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.578 2 DEBUG nova.network.os_vif_util [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.579 2 DEBUG nova.network.os_vif_util [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.580 2 DEBUG os_vif [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.582 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc775ce5e-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc775ce5e-fd, col_values=(('external_ids', {'iface-id': 'c775ce5e-fd60-4ae8-a429-77f19ee2e858', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:b4:8f', 'vm-uuid': 'd2dd3328-2833-429e-9f0d-3ffff2c72587'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:31 np0005465987 NetworkManager[44910]: <info>  [1759408651.5903] manager: (tapc775ce5e-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.599 2 INFO os_vif [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd')#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.682 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.683 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.684 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No VIF found with MAC fa:16:3e:ad:b4:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.686 2 INFO nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Using config drive#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.710 2 DEBUG nova.storage.rbd_utils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:31.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.988 2 DEBUG nova.network.neutron [req-10b8b803-e8bf-47d5-bd30-06b255dca6c2 req-d99c9040-6e72-499a-b8e2-11bdae5bae98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Updated VIF entry in instance network info cache for port c775ce5e-fd60-4ae8-a429-77f19ee2e858. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:37:31 np0005465987 nova_compute[230713]: 2025-10-02 12:37:31.988 2 DEBUG nova.network.neutron [req-10b8b803-e8bf-47d5-bd30-06b255dca6c2 req-d99c9040-6e72-499a-b8e2-11bdae5bae98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Updating instance_info_cache with network_info: [{"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:32 np0005465987 nova_compute[230713]: 2025-10-02 12:37:32.006 2 DEBUG oslo_concurrency.lockutils [req-10b8b803-e8bf-47d5-bd30-06b255dca6c2 req-d99c9040-6e72-499a-b8e2-11bdae5bae98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-d2dd3328-2833-429e-9f0d-3ffff2c72587" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:37:32 np0005465987 nova_compute[230713]: 2025-10-02 12:37:32.113 2 INFO nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Creating config drive at /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/disk.config
Oct  2 08:37:32 np0005465987 nova_compute[230713]: 2025-10-02 12:37:32.121 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiku9y74u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:37:32 np0005465987 nova_compute[230713]: 2025-10-02 12:37:32.262 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiku9y74u" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:37:32 np0005465987 nova_compute[230713]: 2025-10-02 12:37:32.296 2 DEBUG nova.storage.rbd_utils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:37:32 np0005465987 nova_compute[230713]: 2025-10-02 12:37:32.299 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/disk.config d2dd3328-2833-429e-9f0d-3ffff2c72587_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:37:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e332 e332: 3 total, 3 up, 3 in
Oct  2 08:37:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:33.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:33.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:34 np0005465987 nova_compute[230713]: 2025-10-02 12:37:34.115 2 DEBUG oslo_concurrency.processutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/disk.config d2dd3328-2833-429e-9f0d-3ffff2c72587_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.816s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:37:34 np0005465987 nova_compute[230713]: 2025-10-02 12:37:34.116 2 INFO nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Deleting local config drive /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/disk.config because it was imported into RBD.
Oct  2 08:37:34 np0005465987 kernel: tapc775ce5e-fd: entered promiscuous mode
Oct  2 08:37:34 np0005465987 NetworkManager[44910]: <info>  [1759408654.1800] manager: (tapc775ce5e-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/255)
Oct  2 08:37:34 np0005465987 nova_compute[230713]: 2025-10-02 12:37:34.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:34 np0005465987 nova_compute[230713]: 2025-10-02 12:37:34.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:34 np0005465987 nova_compute[230713]: 2025-10-02 12:37:34.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:34Z|00550|binding|INFO|Claiming lport c775ce5e-fd60-4ae8-a429-77f19ee2e858 for this chassis.
Oct  2 08:37:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:34Z|00551|binding|INFO|c775ce5e-fd60-4ae8-a429-77f19ee2e858: Claiming fa:16:3e:ad:b4:8f 10.100.0.10
Oct  2 08:37:34 np0005465987 nova_compute[230713]: 2025-10-02 12:37:34.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:34 np0005465987 NetworkManager[44910]: <info>  [1759408654.2045] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Oct  2 08:37:34 np0005465987 NetworkManager[44910]: <info>  [1759408654.2051] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Oct  2 08:37:34 np0005465987 systemd-udevd[284286]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.208 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:b4:8f 10.100.0.10'], port_security=['fa:16:3e:ad:b4:8f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd2dd3328-2833-429e-9f0d-3ffff2c72587', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00455285-97a7-4fa2-ba83-e8060936877e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6822f02d5ca04c659329a75d487054cf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '02b03dd3-c87c-43e1-9317-5837e2c47d47', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f978d0a7-f86b-440f-a8b5-5432c3a4bc91, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c775ce5e-fd60-4ae8-a429-77f19ee2e858) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.210 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c775ce5e-fd60-4ae8-a429-77f19ee2e858 in datapath 00455285-97a7-4fa2-ba83-e8060936877e bound to our chassis
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.212 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00455285-97a7-4fa2-ba83-e8060936877e
Oct  2 08:37:34 np0005465987 NetworkManager[44910]: <info>  [1759408654.2204] device (tapc775ce5e-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:37:34 np0005465987 NetworkManager[44910]: <info>  [1759408654.2210] device (tapc775ce5e-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:37:34 np0005465987 systemd-machined[188335]: New machine qemu-64-instance-00000093.
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.238 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[87a5bfa6-ed19-48ba-a91d-e8521b1c722a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.239 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00455285-91 in ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.241 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00455285-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.241 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6eceb13e-43ac-4066-b28f-b43b498bfee7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.242 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcc89ec-c3d2-4c7b-a66e-4ea5d66bc786]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.255 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[2f45dc46-9d1e-4cfa-9806-c0f191516401]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 systemd[1]: Started Virtual Machine qemu-64-instance-00000093.
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.281 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5e16853f-43a7-4e1c-a44b-46243c31fdc4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.330 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb376c3-b898-4d65-8aae-db38efdee6b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 NetworkManager[44910]: <info>  [1759408654.3561] manager: (tap00455285-90): new Veth device (/org/freedesktop/NetworkManager/Devices/258)
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.355 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[27034e1a-5ae9-4120-91a4-1609c8940e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 systemd-udevd[284290]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:37:34 np0005465987 nova_compute[230713]: 2025-10-02 12:37:34.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:34Z|00552|binding|INFO|Releasing lport 6f9d54ba-3cfb-48b9-bef7-b2077e6931d7 from this chassis (sb_readonly=0)
Oct  2 08:37:34 np0005465987 nova_compute[230713]: 2025-10-02 12:37:34.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.394 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[7a538091-b371-4f2a-95eb-edcfd91f41ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.397 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[86a8c354-2030-4add-9f24-de9645074b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:34Z|00553|binding|INFO|Setting lport c775ce5e-fd60-4ae8-a429-77f19ee2e858 ovn-installed in OVS
Oct  2 08:37:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:34Z|00554|binding|INFO|Setting lport c775ce5e-fd60-4ae8-a429-77f19ee2e858 up in Southbound
Oct  2 08:37:34 np0005465987 nova_compute[230713]: 2025-10-02 12:37:34.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:34 np0005465987 NetworkManager[44910]: <info>  [1759408654.4255] device (tap00455285-90): carrier: link connected
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.433 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[4535c4a3-f9f6-412f-832f-b8166378327c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.458 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea80030-d2f7-4bc2-8d0e-fd9176580d09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00455285-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:8a:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672300, 'reachable_time': 28881, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284320, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.476 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[457f7e16-daa9-4679-9a60-8edc1ca60f9f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef6:8a3c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672300, 'tstamp': 672300}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284321, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.495 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[564ff4e7-1c6e-4808-bc0f-e468b72ad19f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00455285-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:8a:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672300, 'reachable_time': 28881, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284322, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.524 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c1031e76-e9b6-45c4-9152-af53f2eb3112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.581 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd3cf50-81c7-4cd2-98e8-d114d649b748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.583 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00455285-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.583 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.583 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00455285-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:37:34 np0005465987 nova_compute[230713]: 2025-10-02 12:37:34.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:34 np0005465987 NetworkManager[44910]: <info>  [1759408654.5862] manager: (tap00455285-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Oct  2 08:37:34 np0005465987 kernel: tap00455285-90: entered promiscuous mode
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.590 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00455285-90, col_values=(('external_ids', {'iface-id': '293fb87a-10df-4698-a69e-3023bca5a6a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:37:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:34Z|00555|binding|INFO|Releasing lport 293fb87a-10df-4698-a69e-3023bca5a6a3 from this chassis (sb_readonly=0)
Oct  2 08:37:34 np0005465987 nova_compute[230713]: 2025-10-02 12:37:34.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:34 np0005465987 nova_compute[230713]: 2025-10-02 12:37:34.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.614 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00455285-97a7-4fa2-ba83-e8060936877e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00455285-97a7-4fa2-ba83-e8060936877e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.615 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[72742889-4f5e-45cc-9b08-b1f52e18faeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.617 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-00455285-97a7-4fa2-ba83-e8060936877e
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/00455285-97a7-4fa2-ba83-e8060936877e.pid.haproxy
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 00455285-97a7-4fa2-ba83-e8060936877e
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  2 08:37:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:37:34.618 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'env', 'PROCESS_TAG=haproxy-00455285-97a7-4fa2-ba83-e8060936877e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00455285-97a7-4fa2-ba83-e8060936877e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  2 08:37:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e333 e333: 3 total, 3 up, 3 in
Oct  2 08:37:34 np0005465987 podman[284387]: 2025-10-02 12:37:34.964410751 +0000 UTC m=+0.054224681 container create c0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:37:35 np0005465987 systemd[1]: Started libpod-conmon-c0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a.scope.
Oct  2 08:37:35 np0005465987 podman[284387]: 2025-10-02 12:37:34.933818991 +0000 UTC m=+0.023632951 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:37:35 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:37:35 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84cd65d33a7d98b382109a0ee2f165ab364413d5e9065b9a158d566fa3fa727a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:37:35 np0005465987 podman[284387]: 2025-10-02 12:37:35.073357396 +0000 UTC m=+0.163171326 container init c0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:37:35 np0005465987 podman[284387]: 2025-10-02 12:37:35.081378687 +0000 UTC m=+0.171192597 container start c0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:37:35 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[284403]: [NOTICE]   (284407) : New worker (284409) forked
Oct  2 08:37:35 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[284403]: [NOTICE]   (284407) : Loading success.
Oct  2 08:37:35 np0005465987 nova_compute[230713]: 2025-10-02 12:37:35.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:35.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:35 np0005465987 nova_compute[230713]: 2025-10-02 12:37:35.623 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408655.623204, d2dd3328-2833-429e-9f0d-3ffff2c72587 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:35 np0005465987 nova_compute[230713]: 2025-10-02 12:37:35.624 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] VM Started (Lifecycle Event)#033[00m
Oct  2 08:37:35 np0005465987 nova_compute[230713]: 2025-10-02 12:37:35.642 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:35 np0005465987 nova_compute[230713]: 2025-10-02 12:37:35.646 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408655.6243105, d2dd3328-2833-429e-9f0d-3ffff2c72587 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:35 np0005465987 nova_compute[230713]: 2025-10-02 12:37:35.646 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:37:35 np0005465987 nova_compute[230713]: 2025-10-02 12:37:35.662 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:35 np0005465987 nova_compute[230713]: 2025-10-02 12:37:35.666 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:37:35 np0005465987 nova_compute[230713]: 2025-10-02 12:37:35.685 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:37:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:35.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.373 2 DEBUG nova.compute.manager [req-1d878952-5425-493c-8ad1-4ee68750c2d6 req-8ce632e9-ef5b-4293-ac9d-00b1851acc56 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.373 2 DEBUG oslo_concurrency.lockutils [req-1d878952-5425-493c-8ad1-4ee68750c2d6 req-8ce632e9-ef5b-4293-ac9d-00b1851acc56 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.374 2 DEBUG oslo_concurrency.lockutils [req-1d878952-5425-493c-8ad1-4ee68750c2d6 req-8ce632e9-ef5b-4293-ac9d-00b1851acc56 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.374 2 DEBUG oslo_concurrency.lockutils [req-1d878952-5425-493c-8ad1-4ee68750c2d6 req-8ce632e9-ef5b-4293-ac9d-00b1851acc56 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.374 2 DEBUG nova.compute.manager [req-1d878952-5425-493c-8ad1-4ee68750c2d6 req-8ce632e9-ef5b-4293-ac9d-00b1851acc56 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Processing event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.374 2 DEBUG nova.compute.manager [req-1d878952-5425-493c-8ad1-4ee68750c2d6 req-8ce632e9-ef5b-4293-ac9d-00b1851acc56 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.374 2 DEBUG oslo_concurrency.lockutils [req-1d878952-5425-493c-8ad1-4ee68750c2d6 req-8ce632e9-ef5b-4293-ac9d-00b1851acc56 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.375 2 DEBUG oslo_concurrency.lockutils [req-1d878952-5425-493c-8ad1-4ee68750c2d6 req-8ce632e9-ef5b-4293-ac9d-00b1851acc56 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.375 2 DEBUG oslo_concurrency.lockutils [req-1d878952-5425-493c-8ad1-4ee68750c2d6 req-8ce632e9-ef5b-4293-ac9d-00b1851acc56 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.375 2 DEBUG nova.compute.manager [req-1d878952-5425-493c-8ad1-4ee68750c2d6 req-8ce632e9-ef5b-4293-ac9d-00b1851acc56 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] No waiting events found dispatching network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.375 2 WARNING nova.compute.manager [req-1d878952-5425-493c-8ad1-4ee68750c2d6 req-8ce632e9-ef5b-4293-ac9d-00b1851acc56 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received unexpected event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.375 2 DEBUG nova.compute.manager [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.379 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.379 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408656.3789067, d2dd3328-2833-429e-9f0d-3ffff2c72587 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.379 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.383 2 INFO nova.virt.libvirt.driver [-] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance spawned successfully.#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.383 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.408 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e334 e334: 3 total, 3 up, 3 in
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.411 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.412 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.412 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.413 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.413 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.414 2 DEBUG nova.virt.libvirt.driver [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.418 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.444 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.477 2 INFO nova.compute.manager [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Took 9.32 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.477 2 DEBUG nova.compute.manager [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.601 2 INFO nova.compute.manager [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Took 10.45 seconds to build instance.#033[00m
Oct  2 08:37:36 np0005465987 nova_compute[230713]: 2025-10-02 12:37:36.625 2 DEBUG oslo_concurrency.lockutils [None req-2346d8b0-c298-4094-9096-2ca7fc91d982 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:37.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:37.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:38 np0005465987 nova_compute[230713]: 2025-10-02 12:37:38.518 2 DEBUG oslo_concurrency.lockutils [None req-6486a235-39bc-4409-b789-cc13700fa687 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:38 np0005465987 nova_compute[230713]: 2025-10-02 12:37:38.519 2 DEBUG oslo_concurrency.lockutils [None req-6486a235-39bc-4409-b789-cc13700fa687 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:38 np0005465987 nova_compute[230713]: 2025-10-02 12:37:38.519 2 DEBUG nova.compute.manager [None req-6486a235-39bc-4409-b789-cc13700fa687 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:38 np0005465987 nova_compute[230713]: 2025-10-02 12:37:38.522 2 DEBUG nova.compute.manager [None req-6486a235-39bc-4409-b789-cc13700fa687 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 08:37:38 np0005465987 nova_compute[230713]: 2025-10-02 12:37:38.523 2 DEBUG nova.objects.instance [None req-6486a235-39bc-4409-b789-cc13700fa687 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'flavor' on Instance uuid d2dd3328-2833-429e-9f0d-3ffff2c72587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:38 np0005465987 nova_compute[230713]: 2025-10-02 12:37:38.565 2 DEBUG nova.virt.libvirt.driver [None req-6486a235-39bc-4409-b789-cc13700fa687 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:37:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:39.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:39.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e335 e335: 3 total, 3 up, 3 in
Oct  2 08:37:40 np0005465987 nova_compute[230713]: 2025-10-02 12:37:40.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:41.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:41 np0005465987 nova_compute[230713]: 2025-10-02 12:37:41.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:41.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:43.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:43.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:45 np0005465987 nova_compute[230713]: 2025-10-02 12:37:45.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:45.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:45.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:37:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:37:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:37:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:37:46 np0005465987 nova_compute[230713]: 2025-10-02 12:37:46.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:47.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:37:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:37:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:37:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:37:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:47.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:48 np0005465987 nova_compute[230713]: 2025-10-02 12:37:48.609 2 DEBUG nova.virt.libvirt.driver [None req-6486a235-39bc-4409-b789-cc13700fa687 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:37:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:49.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:49.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:50 np0005465987 nova_compute[230713]: 2025-10-02 12:37:50.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:51.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:51 np0005465987 nova_compute[230713]: 2025-10-02 12:37:51.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:51Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:b4:8f 10.100.0.10
Oct  2 08:37:51 np0005465987 ovn_controller[129172]: 2025-10-02T12:37:51Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:b4:8f 10.100.0.10
Oct  2 08:37:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:51.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:52 np0005465987 podman[284554]: 2025-10-02 12:37:52.842994333 +0000 UTC m=+0.051743044 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 08:37:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:53.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:53.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:55 np0005465987 nova_compute[230713]: 2025-10-02 12:37:55.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:55.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:55.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:56 np0005465987 nova_compute[230713]: 2025-10-02 12:37:56.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:56 np0005465987 nova_compute[230713]: 2025-10-02 12:37:56.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:57.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:37:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:57.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:58 np0005465987 podman[284573]: 2025-10-02 12:37:58.913419273 +0000 UTC m=+0.108114003 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:37:58 np0005465987 podman[284574]: 2025-10-02 12:37:58.925755002 +0000 UTC m=+0.114505309 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:37:58 np0005465987 podman[284572]: 2025-10-02 12:37:58.937366301 +0000 UTC m=+0.138822917 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:37:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:59.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:59 np0005465987 nova_compute[230713]: 2025-10-02 12:37:59.668 2 DEBUG nova.virt.libvirt.driver [None req-6486a235-39bc-4409-b789-cc13700fa687 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:37:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:37:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:37:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:59.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:00 np0005465987 nova_compute[230713]: 2025-10-02 12:38:00.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:00 np0005465987 nova_compute[230713]: 2025-10-02 12:38:00.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:00 np0005465987 nova_compute[230713]: 2025-10-02 12:38:00.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:38:00 np0005465987 nova_compute[230713]: 2025-10-02 12:38:00.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:38:01 np0005465987 nova_compute[230713]: 2025-10-02 12:38:01.173 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:38:01 np0005465987 nova_compute[230713]: 2025-10-02 12:38:01.174 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:38:01 np0005465987 nova_compute[230713]: 2025-10-02 12:38:01.174 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:38:01 np0005465987 nova_compute[230713]: 2025-10-02 12:38:01.174 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 58c789e4-1ba5-4001-913d-9aa19bd9c59f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:01.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:01 np0005465987 nova_compute[230713]: 2025-10-02 12:38:01.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:01.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:02 np0005465987 nova_compute[230713]: 2025-10-02 12:38:02.292 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Updating instance_info_cache with network_info: [{"id": "41486814-fe68-4ee4-acaa-661e4bca279a", "address": "fa:16:3e:44:0d:be", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41486814-fe", "ovs_interfaceid": "41486814-fe68-4ee4-acaa-661e4bca279a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:38:02 np0005465987 nova_compute[230713]: 2025-10-02 12:38:02.405 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-58c789e4-1ba5-4001-913d-9aa19bd9c59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:38:02 np0005465987 nova_compute[230713]: 2025-10-02 12:38:02.406 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:38:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:03.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:03 np0005465987 nova_compute[230713]: 2025-10-02 12:38:03.686 2 INFO nova.virt.libvirt.driver [None req-6486a235-39bc-4409-b789-cc13700fa687 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance shutdown successfully after 25 seconds.#033[00m
Oct  2 08:38:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:03.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:04 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:04Z|00556|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct  2 08:38:05 np0005465987 kernel: tapc775ce5e-fd (unregistering): left promiscuous mode
Oct  2 08:38:05 np0005465987 NetworkManager[44910]: <info>  [1759408685.1502] device (tapc775ce5e-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:38:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:05Z|00557|binding|INFO|Releasing lport c775ce5e-fd60-4ae8-a429-77f19ee2e858 from this chassis (sb_readonly=0)
Oct  2 08:38:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:05Z|00558|binding|INFO|Setting lport c775ce5e-fd60-4ae8-a429-77f19ee2e858 down in Southbound
Oct  2 08:38:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:05Z|00559|binding|INFO|Removing iface tapc775ce5e-fd ovn-installed in OVS
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.170 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:b4:8f 10.100.0.10'], port_security=['fa:16:3e:ad:b4:8f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd2dd3328-2833-429e-9f0d-3ffff2c72587', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00455285-97a7-4fa2-ba83-e8060936877e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6822f02d5ca04c659329a75d487054cf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '02b03dd3-c87c-43e1-9317-5837e2c47d47', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f978d0a7-f86b-440f-a8b5-5432c3a4bc91, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c775ce5e-fd60-4ae8-a429-77f19ee2e858) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.172 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c775ce5e-fd60-4ae8-a429-77f19ee2e858 in datapath 00455285-97a7-4fa2-ba83-e8060936877e unbound from our chassis#033[00m
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.174 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00455285-97a7-4fa2-ba83-e8060936877e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.176 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2c8ad690-a28d-4a23-aa32-9c38eb60269f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.176 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e namespace which is not needed anymore#033[00m
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:05 np0005465987 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000093.scope: Deactivated successfully.
Oct  2 08:38:05 np0005465987 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000093.scope: Consumed 14.563s CPU time.
Oct  2 08:38:05 np0005465987 systemd-machined[188335]: Machine qemu-64-instance-00000093 terminated.
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.320 2 INFO nova.virt.libvirt.driver [-] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance destroyed successfully.#033[00m
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.320 2 DEBUG nova.objects.instance [None req-6486a235-39bc-4409-b789-cc13700fa687 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'numa_topology' on Instance uuid d2dd3328-2833-429e-9f0d-3ffff2c72587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:05 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[284403]: [NOTICE]   (284407) : haproxy version is 2.8.14-c23fe91
Oct  2 08:38:05 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[284403]: [NOTICE]   (284407) : path to executable is /usr/sbin/haproxy
Oct  2 08:38:05 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[284403]: [WARNING]  (284407) : Exiting Master process...
Oct  2 08:38:05 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[284403]: [WARNING]  (284407) : Exiting Master process...
Oct  2 08:38:05 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[284403]: [ALERT]    (284407) : Current worker (284409) exited with code 143 (Terminated)
Oct  2 08:38:05 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[284403]: [WARNING]  (284407) : All workers exited. Exiting... (0)
Oct  2 08:38:05 np0005465987 systemd[1]: libpod-c0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a.scope: Deactivated successfully.
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.336 2 DEBUG nova.compute.manager [None req-6486a235-39bc-4409-b789-cc13700fa687 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:05 np0005465987 podman[284653]: 2025-10-02 12:38:05.341661741 +0000 UTC m=+0.043763415 container died c0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:38:05 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a-userdata-shm.mount: Deactivated successfully.
Oct  2 08:38:05 np0005465987 systemd[1]: var-lib-containers-storage-overlay-84cd65d33a7d98b382109a0ee2f165ab364413d5e9065b9a158d566fa3fa727a-merged.mount: Deactivated successfully.
Oct  2 08:38:05 np0005465987 podman[284653]: 2025-10-02 12:38:05.385049854 +0000 UTC m=+0.087151518 container cleanup c0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:38:05 np0005465987 systemd[1]: libpod-conmon-c0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a.scope: Deactivated successfully.
Oct  2 08:38:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:05.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.413 2 DEBUG oslo_concurrency.lockutils [None req-6486a235-39bc-4409-b789-cc13700fa687 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 26.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:05 np0005465987 podman[284692]: 2025-10-02 12:38:05.456273161 +0000 UTC m=+0.048344169 container remove c0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.461 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[160b47f8-8041-4418-8f22-69829cb8db5b]: (4, ('Thu Oct  2 12:38:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e (c0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a)\nc0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a\nThu Oct  2 12:38:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e (c0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a)\nc0a6aab40eb787b05a1e51862cd37d4f2a2faa7951c2f0a7d22e0bc2cf3fd52a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.463 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b51f5cfa-b5ff-40e1-844f-865f35a26ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.464 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00455285-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:05 np0005465987 kernel: tap00455285-90: left promiscuous mode
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.490 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9a71ec40-1866-4987-b1be-51d3fec3e027]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.518 2 DEBUG nova.compute.manager [req-fc3b2a4c-7b0c-45f8-a82e-a335c7712784 req-26f4aa6a-edcd-4146-a140-103c739d146e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received event network-vif-unplugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.518 2 DEBUG oslo_concurrency.lockutils [req-fc3b2a4c-7b0c-45f8-a82e-a335c7712784 req-26f4aa6a-edcd-4146-a140-103c739d146e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.519 2 DEBUG oslo_concurrency.lockutils [req-fc3b2a4c-7b0c-45f8-a82e-a335c7712784 req-26f4aa6a-edcd-4146-a140-103c739d146e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.519 2 DEBUG oslo_concurrency.lockutils [req-fc3b2a4c-7b0c-45f8-a82e-a335c7712784 req-26f4aa6a-edcd-4146-a140-103c739d146e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.519 2 DEBUG nova.compute.manager [req-fc3b2a4c-7b0c-45f8-a82e-a335c7712784 req-26f4aa6a-edcd-4146-a140-103c739d146e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] No waiting events found dispatching network-vif-unplugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:38:05 np0005465987 nova_compute[230713]: 2025-10-02 12:38:05.519 2 WARNING nova.compute.manager [req-fc3b2a4c-7b0c-45f8-a82e-a335c7712784 req-26f4aa6a-edcd-4146-a140-103c739d146e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received unexpected event network-vif-unplugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.527 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[66aa5f71-48df-40e9-a3df-757aecf8273b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.528 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3aac9d-950a-48ed-a6a9-60a0ec2667e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.542 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d07d326b-2fcc-4965-938b-361792d0542c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672289, 'reachable_time': 28154, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284711, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:05 np0005465987 systemd[1]: run-netns-ovnmeta\x2d00455285\x2d97a7\x2d4fa2\x2dba83\x2de8060936877e.mount: Deactivated successfully.
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.545 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:38:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:05.545 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[022bae37-1be1-4670-836e-e26d07bd5b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:05.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:06 np0005465987 nova_compute[230713]: 2025-10-02 12:38:06.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:07.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:07 np0005465987 nova_compute[230713]: 2025-10-02 12:38:07.702 2 DEBUG nova.compute.manager [req-6fd6f859-467a-4fdf-be46-811291f15b15 req-8c061662-049a-4ba7-8f15-55f74ea2345d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:38:07 np0005465987 nova_compute[230713]: 2025-10-02 12:38:07.703 2 DEBUG oslo_concurrency.lockutils [req-6fd6f859-467a-4fdf-be46-811291f15b15 req-8c061662-049a-4ba7-8f15-55f74ea2345d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:07 np0005465987 nova_compute[230713]: 2025-10-02 12:38:07.703 2 DEBUG oslo_concurrency.lockutils [req-6fd6f859-467a-4fdf-be46-811291f15b15 req-8c061662-049a-4ba7-8f15-55f74ea2345d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:07 np0005465987 nova_compute[230713]: 2025-10-02 12:38:07.703 2 DEBUG oslo_concurrency.lockutils [req-6fd6f859-467a-4fdf-be46-811291f15b15 req-8c061662-049a-4ba7-8f15-55f74ea2345d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:07 np0005465987 nova_compute[230713]: 2025-10-02 12:38:07.704 2 DEBUG nova.compute.manager [req-6fd6f859-467a-4fdf-be46-811291f15b15 req-8c061662-049a-4ba7-8f15-55f74ea2345d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] No waiting events found dispatching network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:38:07 np0005465987 nova_compute[230713]: 2025-10-02 12:38:07.704 2 WARNING nova.compute.manager [req-6fd6f859-467a-4fdf-be46-811291f15b15 req-8c061662-049a-4ba7-8f15-55f74ea2345d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received unexpected event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:38:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:07.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:08 np0005465987 nova_compute[230713]: 2025-10-02 12:38:08.397 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:38:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:38:08 np0005465987 nova_compute[230713]: 2025-10-02 12:38:08.947 2 INFO nova.compute.manager [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Rebuilding instance#033[00m
Oct  2 08:38:08 np0005465987 nova_compute[230713]: 2025-10-02 12:38:08.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.185 2 DEBUG nova.objects.instance [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'trusted_certs' on Instance uuid d2dd3328-2833-429e-9f0d-3ffff2c72587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.206 2 DEBUG nova.compute.manager [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.257 2 DEBUG nova.objects.instance [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'pci_requests' on Instance uuid d2dd3328-2833-429e-9f0d-3ffff2c72587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.271 2 DEBUG nova.objects.instance [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'pci_devices' on Instance uuid d2dd3328-2833-429e-9f0d-3ffff2c72587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.286 2 DEBUG nova.objects.instance [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'resources' on Instance uuid d2dd3328-2833-429e-9f0d-3ffff2c72587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.296 2 DEBUG nova.objects.instance [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'migration_context' on Instance uuid d2dd3328-2833-429e-9f0d-3ffff2c72587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.320 2 DEBUG nova.objects.instance [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.326 2 INFO nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance already shutdown.#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.335 2 INFO nova.virt.libvirt.driver [-] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance destroyed successfully.#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.342 2 INFO nova.virt.libvirt.driver [-] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance destroyed successfully.#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.343 2 DEBUG nova.virt.libvirt.vif [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1524407385',display_name='tempest-tempest.common.compute-instance-1524407385',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1524407385',id=147,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-2ey4o6h2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-memb
er'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:38:08Z,user_data=None,user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=d2dd3328-2833-429e-9f0d-3ffff2c72587,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.343 2 DEBUG nova.network.os_vif_util [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.344 2 DEBUG nova.network.os_vif_util [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.345 2 DEBUG os_vif [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc775ce5e-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.356 2 INFO os_vif [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd')#033[00m
Oct  2 08:38:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:09.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:38:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:09.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.990 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.990 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.990 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.991 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:38:09 np0005465987 nova_compute[230713]: 2025-10-02 12:38:09.991 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:38:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3009170363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.433 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.521 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.521 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.528 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.528 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:38:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:10.599 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:10.600 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.679 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.681 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4211MB free_disk=20.774799346923828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.681 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.682 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.737 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 58c789e4-1ba5-4001-913d-9aa19bd9c59f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.738 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance d2dd3328-2833-429e-9f0d-3ffff2c72587 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.738 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.738 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:38:10 np0005465987 nova_compute[230713]: 2025-10-02 12:38:10.781 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:38:11 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/998240517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:38:11 np0005465987 nova_compute[230713]: 2025-10-02 12:38:11.208 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:11 np0005465987 nova_compute[230713]: 2025-10-02 12:38:11.213 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:38:11 np0005465987 nova_compute[230713]: 2025-10-02 12:38:11.230 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:38:11 np0005465987 nova_compute[230713]: 2025-10-02 12:38:11.251 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:38:11 np0005465987 nova_compute[230713]: 2025-10-02 12:38:11.251 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:11.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:11.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.251 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.252 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.460 2 INFO nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Deleting instance files /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587_del#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.461 2 INFO nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Deletion of /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587_del complete#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.663 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.664 2 INFO nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Creating image(s)#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.705 2 DEBUG nova.storage.rbd_utils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.738 2 DEBUG nova.storage.rbd_utils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.761 2 DEBUG nova.storage.rbd_utils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.764 2 DEBUG oslo_concurrency.processutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.823 2 DEBUG oslo_concurrency.processutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.823 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.824 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.824 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.849 2 DEBUG nova.storage.rbd_utils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:12 np0005465987 nova_compute[230713]: 2025-10-02 12:38:12.852 2 DEBUG oslo_concurrency.processutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 d2dd3328-2833-429e-9f0d-3ffff2c72587_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e336 e336: 3 total, 3 up, 3 in
Oct  2 08:38:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:13.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:13 np0005465987 nova_compute[230713]: 2025-10-02 12:38:13.424 2 DEBUG oslo_concurrency.lockutils [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:13 np0005465987 nova_compute[230713]: 2025-10-02 12:38:13.425 2 DEBUG oslo_concurrency.lockutils [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:13 np0005465987 nova_compute[230713]: 2025-10-02 12:38:13.425 2 DEBUG oslo_concurrency.lockutils [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:13 np0005465987 nova_compute[230713]: 2025-10-02 12:38:13.426 2 DEBUG oslo_concurrency.lockutils [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:13 np0005465987 nova_compute[230713]: 2025-10-02 12:38:13.426 2 DEBUG oslo_concurrency.lockutils [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:13 np0005465987 nova_compute[230713]: 2025-10-02 12:38:13.428 2 INFO nova.compute.manager [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Terminating instance#033[00m
Oct  2 08:38:13 np0005465987 nova_compute[230713]: 2025-10-02 12:38:13.429 2 DEBUG nova.compute.manager [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:38:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:13.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:14 np0005465987 kernel: tap41486814-fe (unregistering): left promiscuous mode
Oct  2 08:38:14 np0005465987 NetworkManager[44910]: <info>  [1759408694.4019] device (tap41486814-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:38:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:14Z|00560|binding|INFO|Releasing lport 41486814-fe68-4ee4-acaa-661e4bca279a from this chassis (sb_readonly=0)
Oct  2 08:38:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:14Z|00561|binding|INFO|Setting lport 41486814-fe68-4ee4-acaa-661e4bca279a down in Southbound
Oct  2 08:38:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:14Z|00562|binding|INFO|Removing iface tap41486814-fe ovn-installed in OVS
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.418 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:0d:be 10.100.0.7'], port_security=['fa:16:3e:44:0d:be 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '58c789e4-1ba5-4001-913d-9aa19bd9c59f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc0d63d3b4404ef8858166e8836dd0af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2a11ff87-bec6-4638-b302-adcd655efba9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=797b6af2-473b-4626-9e97-a0a489119419, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=41486814-fe68-4ee4-acaa-661e4bca279a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.419 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 41486814-fe68-4ee4-acaa-661e4bca279a in datapath d7203b00-e5e4-402e-b777-ac6280fa23ac unbound from our chassis#033[00m
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.420 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7203b00-e5e4-402e-b777-ac6280fa23ac, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.422 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d11f9c-dff1-4961-a4ed-392876678daf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.422 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac namespace which is not needed anymore#033[00m
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:14 np0005465987 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000083.scope: Deactivated successfully.
Oct  2 08:38:14 np0005465987 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000083.scope: Consumed 23.259s CPU time.
Oct  2 08:38:14 np0005465987 systemd-machined[188335]: Machine qemu-59-instance-00000083 terminated.
Oct  2 08:38:14 np0005465987 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[278993]: [NOTICE]   (278997) : haproxy version is 2.8.14-c23fe91
Oct  2 08:38:14 np0005465987 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[278993]: [NOTICE]   (278997) : path to executable is /usr/sbin/haproxy
Oct  2 08:38:14 np0005465987 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[278993]: [WARNING]  (278997) : Exiting Master process...
Oct  2 08:38:14 np0005465987 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[278993]: [ALERT]    (278997) : Current worker (278999) exited with code 143 (Terminated)
Oct  2 08:38:14 np0005465987 neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac[278993]: [WARNING]  (278997) : All workers exited. Exiting... (0)
Oct  2 08:38:14 np0005465987 systemd[1]: libpod-5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3.scope: Deactivated successfully.
Oct  2 08:38:14 np0005465987 conmon[278993]: conmon 5c37738d09dfe14000ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3.scope/container/memory.events
Oct  2 08:38:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:14 np0005465987 podman[284945]: 2025-10-02 12:38:14.560839581 +0000 UTC m=+0.051088275 container died 5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:38:14 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3-userdata-shm.mount: Deactivated successfully.
Oct  2 08:38:14 np0005465987 systemd[1]: var-lib-containers-storage-overlay-00ebc52595e9e89d09fcf283d5fe2687c0c007e6464125bf77a71d404cf5b91d-merged.mount: Deactivated successfully.
Oct  2 08:38:14 np0005465987 podman[284945]: 2025-10-02 12:38:14.61352909 +0000 UTC m=+0.103777784 container cleanup 5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:38:14 np0005465987 systemd[1]: libpod-conmon-5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3.scope: Deactivated successfully.
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.663 2 INFO nova.virt.libvirt.driver [-] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Instance destroyed successfully.#033[00m
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.663 2 DEBUG nova.objects.instance [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lazy-loading 'resources' on Instance uuid 58c789e4-1ba5-4001-913d-9aa19bd9c59f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:14 np0005465987 podman[284976]: 2025-10-02 12:38:14.67610888 +0000 UTC m=+0.042493949 container remove 5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.681 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8c60462b-53f1-46a4-bf9e-f6576b8ebe6e]: (4, ('Thu Oct  2 12:38:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac (5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3)\n5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3\nThu Oct  2 12:38:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac (5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3)\n5c37738d09dfe14000ffeda937ace14183e3073286b3f006fb8bab7cf3c388b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.683 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d0cbeb3b-52b6-4f53-afd1-d464e95c65cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.684 2 DEBUG nova.virt.libvirt.vif [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-1772536260',display_name='tempest-₡-1772536260',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest--1772536260',id=131,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bc0d63d3b4404ef8858166e8836dd0af',ramdisk_id='',reservation_id='r-j9whf17i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-80077074',owner_user_name='tempest-ServersTestJSON-80077074-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:22Z,user_data=None,user_id='fe9cc788734f406d826446a848700331',uuid=58c789e4-1ba5-4001-913d-9aa19bd9c59f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41486814-fe68-4ee4-acaa-661e4bca279a", "address": "fa:16:3e:44:0d:be", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41486814-fe", "ovs_interfaceid": "41486814-fe68-4ee4-acaa-661e4bca279a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.684 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7203b00-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.684 2 DEBUG nova.network.os_vif_util [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converting VIF {"id": "41486814-fe68-4ee4-acaa-661e4bca279a", "address": "fa:16:3e:44:0d:be", "network": {"id": "d7203b00-e5e4-402e-b777-ac6280fa23ac", "bridge": "br-int", "label": "tempest-ServersTestJSON-1524378232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc0d63d3b4404ef8858166e8836dd0af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41486814-fe", "ovs_interfaceid": "41486814-fe68-4ee4-acaa-661e4bca279a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.685 2 DEBUG nova.network.os_vif_util [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:44:0d:be,bridge_name='br-int',has_traffic_filtering=True,id=41486814-fe68-4ee4-acaa-661e4bca279a,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41486814-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.685 2 DEBUG os_vif [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:0d:be,bridge_name='br-int',has_traffic_filtering=True,id=41486814-fe68-4ee4-acaa-661e4bca279a,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41486814-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:38:14 np0005465987 kernel: tapd7203b00-e0: left promiscuous mode
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41486814-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:14 np0005465987 nova_compute[230713]: 2025-10-02 12:38:14.704 2 INFO os_vif [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:0d:be,bridge_name='br-int',has_traffic_filtering=True,id=41486814-fe68-4ee4-acaa-661e4bca279a,network=Network(d7203b00-e5e4-402e-b777-ac6280fa23ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41486814-fe')#033[00m
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.705 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[44c203ee-7cd9-4fcf-b6fc-1d5d266f6850]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.727 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[896ffe0f-1b4e-4bc6-aaa8-b1bec9be411d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.728 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[180c33fd-3b03-4acf-9fbb-0369108d2de8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.746 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b06d001f-dc57-4590-af11-04d10f0f1b30]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653008, 'reachable_time': 17649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285019, 'error': None, 'target': 'ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.749 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7203b00-e5e4-402e-b777-ac6280fa23ac deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:38:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:14.750 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[5b394fb3-3d05-4ce0-87dc-533809b1aba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:14 np0005465987 systemd[1]: run-netns-ovnmeta\x2dd7203b00\x2de5e4\x2d402e\x2db777\x2dac6280fa23ac.mount: Deactivated successfully.
Oct  2 08:38:15 np0005465987 nova_compute[230713]: 2025-10-02 12:38:15.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:15.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:15 np0005465987 nova_compute[230713]: 2025-10-02 12:38:15.519 2 DEBUG oslo_concurrency.processutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 d2dd3328-2833-429e-9f0d-3ffff2c72587_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:15 np0005465987 nova_compute[230713]: 2025-10-02 12:38:15.586 2 DEBUG nova.storage.rbd_utils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] resizing rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:38:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.150 2 DEBUG nova.compute.manager [req-38a48eab-8e0d-4946-9128-962953420459 req-443a8d4b-82b9-4c99-a67b-0c7fa9e55b67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Received event network-vif-unplugged-41486814-fe68-4ee4-acaa-661e4bca279a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.150 2 DEBUG oslo_concurrency.lockutils [req-38a48eab-8e0d-4946-9128-962953420459 req-443a8d4b-82b9-4c99-a67b-0c7fa9e55b67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.151 2 DEBUG oslo_concurrency.lockutils [req-38a48eab-8e0d-4946-9128-962953420459 req-443a8d4b-82b9-4c99-a67b-0c7fa9e55b67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.151 2 DEBUG oslo_concurrency.lockutils [req-38a48eab-8e0d-4946-9128-962953420459 req-443a8d4b-82b9-4c99-a67b-0c7fa9e55b67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.152 2 DEBUG nova.compute.manager [req-38a48eab-8e0d-4946-9128-962953420459 req-443a8d4b-82b9-4c99-a67b-0c7fa9e55b67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] No waiting events found dispatching network-vif-unplugged-41486814-fe68-4ee4-acaa-661e4bca279a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.152 2 DEBUG nova.compute.manager [req-38a48eab-8e0d-4946-9128-962953420459 req-443a8d4b-82b9-4c99-a67b-0c7fa9e55b67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Received event network-vif-unplugged-41486814-fe68-4ee4-acaa-661e4bca279a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.152 2 DEBUG nova.compute.manager [req-38a48eab-8e0d-4946-9128-962953420459 req-443a8d4b-82b9-4c99-a67b-0c7fa9e55b67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Received event network-vif-plugged-41486814-fe68-4ee4-acaa-661e4bca279a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.153 2 DEBUG oslo_concurrency.lockutils [req-38a48eab-8e0d-4946-9128-962953420459 req-443a8d4b-82b9-4c99-a67b-0c7fa9e55b67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.153 2 DEBUG oslo_concurrency.lockutils [req-38a48eab-8e0d-4946-9128-962953420459 req-443a8d4b-82b9-4c99-a67b-0c7fa9e55b67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.153 2 DEBUG oslo_concurrency.lockutils [req-38a48eab-8e0d-4946-9128-962953420459 req-443a8d4b-82b9-4c99-a67b-0c7fa9e55b67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.154 2 DEBUG nova.compute.manager [req-38a48eab-8e0d-4946-9128-962953420459 req-443a8d4b-82b9-4c99-a67b-0c7fa9e55b67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] No waiting events found dispatching network-vif-plugged-41486814-fe68-4ee4-acaa-661e4bca279a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.154 2 WARNING nova.compute.manager [req-38a48eab-8e0d-4946-9128-962953420459 req-443a8d4b-82b9-4c99-a67b-0c7fa9e55b67 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Received unexpected event network-vif-plugged-41486814-fe68-4ee4-acaa-661e4bca279a for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.686 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.687 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Ensure instance console log exists: /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.687 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.688 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.688 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.691 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Start _get_guest_xml network_info=[{"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.694 2 WARNING nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.699 2 DEBUG nova.virt.libvirt.host [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.700 2 DEBUG nova.virt.libvirt.host [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.704 2 DEBUG nova.virt.libvirt.host [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.705 2 DEBUG nova.virt.libvirt.host [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.706 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.706 2 DEBUG nova.virt.hardware [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.707 2 DEBUG nova.virt.hardware [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.707 2 DEBUG nova.virt.hardware [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.707 2 DEBUG nova.virt.hardware [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.707 2 DEBUG nova.virt.hardware [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.707 2 DEBUG nova.virt.hardware [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.708 2 DEBUG nova.virt.hardware [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.708 2 DEBUG nova.virt.hardware [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.708 2 DEBUG nova.virt.hardware [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.708 2 DEBUG nova.virt.hardware [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.709 2 DEBUG nova.virt.hardware [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.709 2 DEBUG nova.objects.instance [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'vcpu_model' on Instance uuid d2dd3328-2833-429e-9f0d-3ffff2c72587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.821 2 DEBUG oslo_concurrency.processutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:16 np0005465987 nova_compute[230713]: 2025-10-02 12:38:16.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:38:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:38:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/192862081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.250 2 DEBUG oslo_concurrency.processutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.272 2 DEBUG nova.storage.rbd_utils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.276 2 DEBUG oslo_concurrency.processutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:17.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:17.602 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:38:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2250064736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.699 2 DEBUG oslo_concurrency.processutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.701 2 DEBUG nova.virt.libvirt.vif [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:37:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1524407385',display_name='tempest-tempest.common.compute-instance-1524407385',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1524407385',id=147,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-2ey4o6h2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:38:12Z,user_data=None,user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=d2dd3328-2833-429e-9f0d-3ffff2c72587,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.701 2 DEBUG nova.network.os_vif_util [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.702 2 DEBUG nova.network.os_vif_util [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.704 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  <uuid>d2dd3328-2833-429e-9f0d-3ffff2c72587</uuid>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  <name>instance-00000093</name>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <nova:name>tempest-tempest.common.compute-instance-1524407385</nova:name>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:38:16</nova:creationTime>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <nova:user uuid="c02d1dcc10ea4e57bbc6b7a3c100dc7b">tempest-ServerActionsTestOtherA-1680083910-project-member</nova:user>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <nova:project uuid="6822f02d5ca04c659329a75d487054cf">tempest-ServerActionsTestOtherA-1680083910</nova:project>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="db05f54c-61f8-42d6-a1e2-da3219a77b12"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <nova:port uuid="c775ce5e-fd60-4ae8-a429-77f19ee2e858">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <entry name="serial">d2dd3328-2833-429e-9f0d-3ffff2c72587</entry>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <entry name="uuid">d2dd3328-2833-429e-9f0d-3ffff2c72587</entry>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/d2dd3328-2833-429e-9f0d-3ffff2c72587_disk">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/d2dd3328-2833-429e-9f0d-3ffff2c72587_disk.config">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:ad:b4:8f"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <target dev="tapc775ce5e-fd"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/console.log" append="off"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:38:17 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:38:17 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:38:17 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:38:17 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.706 2 DEBUG nova.compute.manager [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Preparing to wait for external event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.706 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.707 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.707 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.708 2 DEBUG nova.virt.libvirt.vif [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:37:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1524407385',display_name='tempest-tempest.common.compute-instance-1524407385',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1524407385',id=147,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-2ey4o6h2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:38:12Z,user_data=None,user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=d2dd3328-2833-429e-9f0d-3ffff2c72587,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.708 2 DEBUG nova.network.os_vif_util [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.708 2 DEBUG nova.network.os_vif_util [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.709 2 DEBUG os_vif [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.710 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.710 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc775ce5e-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc775ce5e-fd, col_values=(('external_ids', {'iface-id': 'c775ce5e-fd60-4ae8-a429-77f19ee2e858', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:b4:8f', 'vm-uuid': 'd2dd3328-2833-429e-9f0d-3ffff2c72587'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:17 np0005465987 NetworkManager[44910]: <info>  [1759408697.7508] manager: (tapc775ce5e-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.756 2 INFO os_vif [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd')#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.813 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.814 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.814 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No VIF found with MAC fa:16:3e:ad:b4:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.815 2 INFO nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Using config drive#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.840 2 DEBUG nova.storage.rbd_utils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.868 2 DEBUG nova.objects.instance [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'ec2_ids' on Instance uuid d2dd3328-2833-429e-9f0d-3ffff2c72587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:17 np0005465987 nova_compute[230713]: 2025-10-02 12:38:17.903 2 DEBUG nova.objects.instance [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'keypairs' on Instance uuid d2dd3328-2833-429e-9f0d-3ffff2c72587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:17.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:19 np0005465987 nova_compute[230713]: 2025-10-02 12:38:19.036 2 INFO nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Creating config drive at /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/disk.config#033[00m
Oct  2 08:38:19 np0005465987 nova_compute[230713]: 2025-10-02 12:38:19.041 2 DEBUG oslo_concurrency.processutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_baegaic execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:19 np0005465987 nova_compute[230713]: 2025-10-02 12:38:19.172 2 DEBUG oslo_concurrency.processutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_baegaic" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:19 np0005465987 nova_compute[230713]: 2025-10-02 12:38:19.199 2 DEBUG nova.storage.rbd_utils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image d2dd3328-2833-429e-9f0d-3ffff2c72587_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:19 np0005465987 nova_compute[230713]: 2025-10-02 12:38:19.203 2 DEBUG oslo_concurrency.processutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/disk.config d2dd3328-2833-429e-9f0d-3ffff2c72587_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:19.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:19.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.318 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408685.3177264, d2dd3328-2833-429e-9f0d-3ffff2c72587 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.319 2 INFO nova.compute.manager [-] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.345 2 DEBUG nova.compute.manager [None req-0b5d6c6a-ad2e-489c-8952-233b1a407eb7 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.348 2 DEBUG nova.compute.manager [None req-0b5d6c6a-ad2e-489c-8952-233b1a407eb7 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.374 2 DEBUG oslo_concurrency.processutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/disk.config d2dd3328-2833-429e-9f0d-3ffff2c72587_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.375 2 INFO nova.compute.manager [None req-0b5d6c6a-ad2e-489c-8952-233b1a407eb7 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.375 2 INFO nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Deleting local config drive /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587/disk.config because it was imported into RBD.#033[00m
Oct  2 08:38:20 np0005465987 kernel: tapc775ce5e-fd: entered promiscuous mode
Oct  2 08:38:20 np0005465987 NetworkManager[44910]: <info>  [1759408700.4222] manager: (tapc775ce5e-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:20Z|00563|binding|INFO|Claiming lport c775ce5e-fd60-4ae8-a429-77f19ee2e858 for this chassis.
Oct  2 08:38:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:20Z|00564|binding|INFO|c775ce5e-fd60-4ae8-a429-77f19ee2e858: Claiming fa:16:3e:ad:b4:8f 10.100.0.10
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.431 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:b4:8f 10.100.0.10'], port_security=['fa:16:3e:ad:b4:8f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd2dd3328-2833-429e-9f0d-3ffff2c72587', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00455285-97a7-4fa2-ba83-e8060936877e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6822f02d5ca04c659329a75d487054cf', 'neutron:revision_number': '5', 'neutron:security_group_ids': '02b03dd3-c87c-43e1-9317-5837e2c47d47', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f978d0a7-f86b-440f-a8b5-5432c3a4bc91, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c775ce5e-fd60-4ae8-a429-77f19ee2e858) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.432 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c775ce5e-fd60-4ae8-a429-77f19ee2e858 in datapath 00455285-97a7-4fa2-ba83-e8060936877e bound to our chassis#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.433 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00455285-97a7-4fa2-ba83-e8060936877e#033[00m
Oct  2 08:38:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:20Z|00565|binding|INFO|Setting lport c775ce5e-fd60-4ae8-a429-77f19ee2e858 ovn-installed in OVS
Oct  2 08:38:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:20Z|00566|binding|INFO|Setting lport c775ce5e-fd60-4ae8-a429-77f19ee2e858 up in Southbound
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.447 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bc54bce7-1861-43b1-8bf5-73af40d866b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.448 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00455285-91 in ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.450 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00455285-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.450 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[160d5206-defc-4eb2-872b-a07a1d1b539d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 systemd-machined[188335]: New machine qemu-65-instance-00000093.
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.451 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f50d5b98-4eb0-47d1-a286-e80983b9bf75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 systemd[1]: Started Virtual Machine qemu-65-instance-00000093.
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.463 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[69e902d7-6872-4403-b982-f9c707d65df0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 systemd-udevd[285234]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:38:20 np0005465987 NetworkManager[44910]: <info>  [1759408700.4850] device (tapc775ce5e-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:38:20 np0005465987 NetworkManager[44910]: <info>  [1759408700.4858] device (tapc775ce5e-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.489 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b351e635-7339-447d-92ef-067e5512d054]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.521 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[02da6b20-090b-4a31-bbef-abd1854d8f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 NetworkManager[44910]: <info>  [1759408700.5261] manager: (tap00455285-90): new Veth device (/org/freedesktop/NetworkManager/Devices/262)
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.526 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d44be71f-65a4-4ddf-951c-9dc704e2872e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.559 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4fa8dc-d893-4546-999c-a1849444701b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.563 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe37a30-1d44-4f22-9f50-4052344089eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 NetworkManager[44910]: <info>  [1759408700.5872] device (tap00455285-90): carrier: link connected
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.593 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[01cd19bd-98ab-482e-989c-018f45ae1e23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.610 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb42c43-be65-44d1-8f87-8c4a0145d685]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00455285-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:8a:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676916, 'reachable_time': 37278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285264, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.629 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d00768-b5c7-4e62-8bd8-63d6c63dcecf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef6:8a3c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676916, 'tstamp': 676916}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285265, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.647 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad04d3d-fea9-4136-adf8-5f18a0903921]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00455285-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:8a:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676916, 'reachable_time': 37278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285266, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.680 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc00dc6-d746-4e77-a58d-974170b050de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.743 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fecbbaae-b76b-4c87-b593-9dc65bd8283b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.744 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00455285-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.745 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.745 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00455285-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:20 np0005465987 NetworkManager[44910]: <info>  [1759408700.7473] manager: (tap00455285-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:20 np0005465987 kernel: tap00455285-90: entered promiscuous mode
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.752 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00455285-90, col_values=(('external_ids', {'iface-id': '293fb87a-10df-4698-a69e-3023bca5a6a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:20Z|00567|binding|INFO|Releasing lport 293fb87a-10df-4698-a69e-3023bca5a6a3 from this chassis (sb_readonly=0)
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:20 np0005465987 nova_compute[230713]: 2025-10-02 12:38:20.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.778 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00455285-97a7-4fa2-ba83-e8060936877e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00455285-97a7-4fa2-ba83-e8060936877e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.779 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e371be51-b776-45bb-aac3-4c1f68a183e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.780 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-00455285-97a7-4fa2-ba83-e8060936877e
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/00455285-97a7-4fa2-ba83-e8060936877e.pid.haproxy
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 00455285-97a7-4fa2-ba83-e8060936877e
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:38:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:20.780 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'env', 'PROCESS_TAG=haproxy-00455285-97a7-4fa2-ba83-e8060936877e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00455285-97a7-4fa2-ba83-e8060936877e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:38:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e337 e337: 3 total, 3 up, 3 in
Oct  2 08:38:21 np0005465987 podman[285316]: 2025-10-02 12:38:21.141017004 +0000 UTC m=+0.047039274 container create 4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:38:21 np0005465987 systemd[1]: Started libpod-conmon-4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b.scope.
Oct  2 08:38:21 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:38:21 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e238e133e753f45097e7c4e1270024400e764faf20b38182db014b8b0bf36228/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:38:21 np0005465987 podman[285316]: 2025-10-02 12:38:21.11793458 +0000 UTC m=+0.023956880 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:38:21 np0005465987 podman[285316]: 2025-10-02 12:38:21.225117206 +0000 UTC m=+0.131139496 container init 4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:38:21 np0005465987 podman[285316]: 2025-10-02 12:38:21.229936319 +0000 UTC m=+0.135958599 container start 4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:38:21 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[285347]: [NOTICE]   (285351) : New worker (285353) forked
Oct  2 08:38:21 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[285347]: [NOTICE]   (285351) : Loading success.
Oct  2 08:38:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:21.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:21 np0005465987 nova_compute[230713]: 2025-10-02 12:38:21.546 2 DEBUG nova.compute.manager [req-ecbcd9ab-1b36-4115-8ebb-05f6a00881fd req-152ef701-3b7e-414d-9fa1-4da41ae728b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:38:21 np0005465987 nova_compute[230713]: 2025-10-02 12:38:21.547 2 DEBUG oslo_concurrency.lockutils [req-ecbcd9ab-1b36-4115-8ebb-05f6a00881fd req-152ef701-3b7e-414d-9fa1-4da41ae728b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:21 np0005465987 nova_compute[230713]: 2025-10-02 12:38:21.547 2 DEBUG oslo_concurrency.lockutils [req-ecbcd9ab-1b36-4115-8ebb-05f6a00881fd req-152ef701-3b7e-414d-9fa1-4da41ae728b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:21 np0005465987 nova_compute[230713]: 2025-10-02 12:38:21.548 2 DEBUG oslo_concurrency.lockutils [req-ecbcd9ab-1b36-4115-8ebb-05f6a00881fd req-152ef701-3b7e-414d-9fa1-4da41ae728b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:21 np0005465987 nova_compute[230713]: 2025-10-02 12:38:21.548 2 DEBUG nova.compute.manager [req-ecbcd9ab-1b36-4115-8ebb-05f6a00881fd req-152ef701-3b7e-414d-9fa1-4da41ae728b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Processing event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:38:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:21.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.199 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408702.1993527, d2dd3328-2833-429e-9f0d-3ffff2c72587 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.200 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] VM Started (Lifecycle Event)#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.201 2 DEBUG nova.compute.manager [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.204 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.208 2 INFO nova.virt.libvirt.driver [-] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance spawned successfully.#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.208 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.247 2 INFO nova.virt.libvirt.driver [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Deleting instance files /var/lib/nova/instances/58c789e4-1ba5-4001-913d-9aa19bd9c59f_del#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.248 2 INFO nova.virt.libvirt.driver [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Deletion of /var/lib/nova/instances/58c789e4-1ba5-4001-913d-9aa19bd9c59f_del complete#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.286 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.290 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.327 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.327 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.328 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.328 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.329 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.329 2 DEBUG nova.virt.libvirt.driver [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.393 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.393 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408702.199875, d2dd3328-2833-429e-9f0d-3ffff2c72587 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.394 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.454 2 INFO nova.compute.manager [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Took 9.02 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.455 2 DEBUG oslo.service.loopingcall [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.456 2 DEBUG nova.compute.manager [-] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.457 2 DEBUG nova.network.neutron [-] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.463 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.468 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408702.204368, d2dd3328-2833-429e-9f0d-3ffff2c72587 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.469 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.534 2 DEBUG nova.compute.manager [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.537 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.546 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.648 2 INFO nova.compute.manager [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] bringing vm to original state: 'stopped'#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.884 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.884 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.885 2 DEBUG nova.compute.manager [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:22 np0005465987 nova_compute[230713]: 2025-10-02 12:38:22.888 2 DEBUG nova.compute.manager [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 08:38:22 np0005465987 kernel: tapc775ce5e-fd (unregistering): left promiscuous mode
Oct  2 08:38:22 np0005465987 NetworkManager[44910]: <info>  [1759408702.9992] device (tapc775ce5e-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:23Z|00568|binding|INFO|Releasing lport c775ce5e-fd60-4ae8-a429-77f19ee2e858 from this chassis (sb_readonly=0)
Oct  2 08:38:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:23Z|00569|binding|INFO|Setting lport c775ce5e-fd60-4ae8-a429-77f19ee2e858 down in Southbound
Oct  2 08:38:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:38:23Z|00570|binding|INFO|Removing iface tapc775ce5e-fd ovn-installed in OVS
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.013 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:b4:8f 10.100.0.10'], port_security=['fa:16:3e:ad:b4:8f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd2dd3328-2833-429e-9f0d-3ffff2c72587', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00455285-97a7-4fa2-ba83-e8060936877e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6822f02d5ca04c659329a75d487054cf', 'neutron:revision_number': '6', 'neutron:security_group_ids': '02b03dd3-c87c-43e1-9317-5837e2c47d47', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f978d0a7-f86b-440f-a8b5-5432c3a4bc91, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c775ce5e-fd60-4ae8-a429-77f19ee2e858) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.014 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c775ce5e-fd60-4ae8-a429-77f19ee2e858 in datapath 00455285-97a7-4fa2-ba83-e8060936877e unbound from our chassis#033[00m
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.017 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00455285-97a7-4fa2-ba83-e8060936877e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.018 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[20dcf5b9-d635-4244-81af-f43fa4d0c063]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.018 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e namespace which is not needed anymore#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:23 np0005465987 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000093.scope: Deactivated successfully.
Oct  2 08:38:23 np0005465987 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000093.scope: Consumed 1.524s CPU time.
Oct  2 08:38:23 np0005465987 systemd-machined[188335]: Machine qemu-65-instance-00000093 terminated.
Oct  2 08:38:23 np0005465987 podman[285371]: 2025-10-02 12:38:23.098163786 +0000 UTC m=+0.074228242 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:38:23 np0005465987 NetworkManager[44910]: <info>  [1759408703.1067] manager: (tapc775ce5e-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.120 2 INFO nova.virt.libvirt.driver [-] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance destroyed successfully.#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.120 2 DEBUG nova.compute.manager [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.167 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:23 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[285347]: [NOTICE]   (285351) : haproxy version is 2.8.14-c23fe91
Oct  2 08:38:23 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[285347]: [NOTICE]   (285351) : path to executable is /usr/sbin/haproxy
Oct  2 08:38:23 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[285347]: [WARNING]  (285351) : Exiting Master process...
Oct  2 08:38:23 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[285347]: [WARNING]  (285351) : Exiting Master process...
Oct  2 08:38:23 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[285347]: [ALERT]    (285351) : Current worker (285353) exited with code 143 (Terminated)
Oct  2 08:38:23 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[285347]: [WARNING]  (285351) : All workers exited. Exiting... (0)
Oct  2 08:38:23 np0005465987 systemd[1]: libpod-4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b.scope: Deactivated successfully.
Oct  2 08:38:23 np0005465987 podman[285411]: 2025-10-02 12:38:23.180386075 +0000 UTC m=+0.078378385 container died 4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.194 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.194 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.195 2 DEBUG nova.objects.instance [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:38:23 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b-userdata-shm.mount: Deactivated successfully.
Oct  2 08:38:23 np0005465987 systemd[1]: var-lib-containers-storage-overlay-e238e133e753f45097e7c4e1270024400e764faf20b38182db014b8b0bf36228-merged.mount: Deactivated successfully.
Oct  2 08:38:23 np0005465987 podman[285411]: 2025-10-02 12:38:23.214852613 +0000 UTC m=+0.112844923 container cleanup 4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:38:23 np0005465987 systemd[1]: libpod-conmon-4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b.scope: Deactivated successfully.
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.252 2 DEBUG oslo_concurrency.lockutils [None req-5440d927-b444-47d8-9774-bb725d741e11 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:23 np0005465987 podman[285451]: 2025-10-02 12:38:23.275154001 +0000 UTC m=+0.037228445 container remove 4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.280 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[420d65f6-4b1e-40e6-96c4-af155dbffd15]: (4, ('Thu Oct  2 12:38:23 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e (4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b)\n4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b\nThu Oct  2 12:38:23 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e (4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b)\n4b0ea8b8a7e8f05f603d410d43ef6b044116eeb3f495166f6231ce7a11c48c3b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.282 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0f1dfa3e-48a7-4724-8194-1f66eff47bc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.283 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00455285-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:23 np0005465987 kernel: tap00455285-90: left promiscuous mode
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.305 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[713ff337-e347-4332-a6f1-afeecc8f3935]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.343 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[12897486-04c7-409b-90d8-b0d222a40aea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.344 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dff27661-f990-40e0-8d22-37edfe1e4674]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.359 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4d98bf11-5a16-4bfb-9ba5-3fcccf3ac68d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676908, 'reachable_time': 27245, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285473, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.362 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:38:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:23.362 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[8237d36c-a4ff-46df-8858-799156b50761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:38:23 np0005465987 systemd[1]: run-netns-ovnmeta\x2d00455285\x2d97a7\x2d4fa2\x2dba83\x2de8060936877e.mount: Deactivated successfully.
Oct  2 08:38:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:23.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.693 2 DEBUG nova.compute.manager [req-10612750-9431-4780-843b-6b5df35f6f66 req-d538b349-a28a-4714-a9d4-566b5e933c60 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.693 2 DEBUG oslo_concurrency.lockutils [req-10612750-9431-4780-843b-6b5df35f6f66 req-d538b349-a28a-4714-a9d4-566b5e933c60 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.693 2 DEBUG oslo_concurrency.lockutils [req-10612750-9431-4780-843b-6b5df35f6f66 req-d538b349-a28a-4714-a9d4-566b5e933c60 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.693 2 DEBUG oslo_concurrency.lockutils [req-10612750-9431-4780-843b-6b5df35f6f66 req-d538b349-a28a-4714-a9d4-566b5e933c60 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.694 2 DEBUG nova.compute.manager [req-10612750-9431-4780-843b-6b5df35f6f66 req-d538b349-a28a-4714-a9d4-566b5e933c60 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] No waiting events found dispatching network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.694 2 WARNING nova.compute.manager [req-10612750-9431-4780-843b-6b5df35f6f66 req-d538b349-a28a-4714-a9d4-566b5e933c60 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received unexpected event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.694 2 DEBUG nova.compute.manager [req-10612750-9431-4780-843b-6b5df35f6f66 req-d538b349-a28a-4714-a9d4-566b5e933c60 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received event network-vif-unplugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.694 2 DEBUG oslo_concurrency.lockutils [req-10612750-9431-4780-843b-6b5df35f6f66 req-d538b349-a28a-4714-a9d4-566b5e933c60 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.694 2 DEBUG oslo_concurrency.lockutils [req-10612750-9431-4780-843b-6b5df35f6f66 req-d538b349-a28a-4714-a9d4-566b5e933c60 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.695 2 DEBUG oslo_concurrency.lockutils [req-10612750-9431-4780-843b-6b5df35f6f66 req-d538b349-a28a-4714-a9d4-566b5e933c60 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.695 2 DEBUG nova.compute.manager [req-10612750-9431-4780-843b-6b5df35f6f66 req-d538b349-a28a-4714-a9d4-566b5e933c60 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] No waiting events found dispatching network-vif-unplugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.695 2 WARNING nova.compute.manager [req-10612750-9431-4780-843b-6b5df35f6f66 req-d538b349-a28a-4714-a9d4-566b5e933c60 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received unexpected event network-vif-unplugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.779 2 DEBUG nova.network.neutron [-] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.798 2 INFO nova.compute.manager [-] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Took 1.34 seconds to deallocate network for instance.#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.849 2 DEBUG oslo_concurrency.lockutils [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.850 2 DEBUG oslo_concurrency.lockutils [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.870 2 DEBUG nova.compute.manager [req-b2b6cbd3-74b6-47b5-a25b-07df8a434969 req-46bac5bf-7d19-44da-a5cf-799626ed171f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Received event network-vif-deleted-41486814-fe68-4ee4-acaa-661e4bca279a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:38:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:23.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.957 2 DEBUG oslo_concurrency.processutils [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:23 np0005465987 nova_compute[230713]: 2025-10-02 12:38:23.991 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:38:24 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3606411082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:38:24 np0005465987 nova_compute[230713]: 2025-10-02 12:38:24.414 2 DEBUG oslo_concurrency.processutils [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:24 np0005465987 nova_compute[230713]: 2025-10-02 12:38:24.419 2 DEBUG nova.compute.provider_tree [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:38:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:24 np0005465987 nova_compute[230713]: 2025-10-02 12:38:24.563 2 DEBUG nova.scheduler.client.report [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:38:24 np0005465987 nova_compute[230713]: 2025-10-02 12:38:24.649 2 DEBUG oslo_concurrency.lockutils [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:24 np0005465987 nova_compute[230713]: 2025-10-02 12:38:24.672 2 INFO nova.scheduler.client.report [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Deleted allocations for instance 58c789e4-1ba5-4001-913d-9aa19bd9c59f#033[00m
Oct  2 08:38:24 np0005465987 nova_compute[230713]: 2025-10-02 12:38:24.746 2 DEBUG oslo_concurrency.lockutils [None req-d6ba3d23-892c-49b1-9bff-802456c44cbc fe9cc788734f406d826446a848700331 bc0d63d3b4404ef8858166e8836dd0af - - default default] Lock "58c789e4-1ba5-4001-913d-9aa19bd9c59f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:25 np0005465987 nova_compute[230713]: 2025-10-02 12:38:25.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:38:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:25.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:38:25 np0005465987 nova_compute[230713]: 2025-10-02 12:38:25.834 2 DEBUG nova.compute.manager [req-1f2e152b-c11c-430e-a55a-ca9e2913336a req-f4200b80-4aea-49cf-b413-9bdec09503a3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:38:25 np0005465987 nova_compute[230713]: 2025-10-02 12:38:25.834 2 DEBUG oslo_concurrency.lockutils [req-1f2e152b-c11c-430e-a55a-ca9e2913336a req-f4200b80-4aea-49cf-b413-9bdec09503a3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:25 np0005465987 nova_compute[230713]: 2025-10-02 12:38:25.834 2 DEBUG oslo_concurrency.lockutils [req-1f2e152b-c11c-430e-a55a-ca9e2913336a req-f4200b80-4aea-49cf-b413-9bdec09503a3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:25 np0005465987 nova_compute[230713]: 2025-10-02 12:38:25.835 2 DEBUG oslo_concurrency.lockutils [req-1f2e152b-c11c-430e-a55a-ca9e2913336a req-f4200b80-4aea-49cf-b413-9bdec09503a3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:25 np0005465987 nova_compute[230713]: 2025-10-02 12:38:25.835 2 DEBUG nova.compute.manager [req-1f2e152b-c11c-430e-a55a-ca9e2913336a req-f4200b80-4aea-49cf-b413-9bdec09503a3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] No waiting events found dispatching network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:38:25 np0005465987 nova_compute[230713]: 2025-10-02 12:38:25.835 2 WARNING nova.compute.manager [req-1f2e152b-c11c-430e-a55a-ca9e2913336a req-f4200b80-4aea-49cf-b413-9bdec09503a3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received unexpected event network-vif-plugged-c775ce5e-fd60-4ae8-a429-77f19ee2e858 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:38:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:25.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.782 2 DEBUG oslo_concurrency.lockutils [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.782 2 DEBUG oslo_concurrency.lockutils [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.783 2 DEBUG oslo_concurrency.lockutils [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.783 2 DEBUG oslo_concurrency.lockutils [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.783 2 DEBUG oslo_concurrency.lockutils [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.785 2 INFO nova.compute.manager [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Terminating instance#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.786 2 DEBUG nova.compute.manager [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.796 2 INFO nova.virt.libvirt.driver [-] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Instance destroyed successfully.#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.796 2 DEBUG nova.objects.instance [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'resources' on Instance uuid d2dd3328-2833-429e-9f0d-3ffff2c72587 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.846 2 DEBUG nova.virt.libvirt.vif [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:37:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1524407385',display_name='tempest-tempest.common.compute-instance-1524407385',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1524407385',id=147,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:38:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-2ey4o6h2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:38:23Z,user_data=None,user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=d2dd3328-2833-429e-9f0d-3ffff2c72587,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.847 2 DEBUG nova.network.os_vif_util [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "address": "fa:16:3e:ad:b4:8f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775ce5e-fd", "ovs_interfaceid": "c775ce5e-fd60-4ae8-a429-77f19ee2e858", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.848 2 DEBUG nova.network.os_vif_util [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.848 2 DEBUG os_vif [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.851 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc775ce5e-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:38:26 np0005465987 nova_compute[230713]: 2025-10-02 12:38:26.857 2 INFO os_vif [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:b4:8f,bridge_name='br-int',has_traffic_filtering=True,id=c775ce5e-fd60-4ae8-a429-77f19ee2e858,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775ce5e-fd')#033[00m
Oct  2 08:38:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:27.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:27.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:27.963 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:27.964 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:38:27.964 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:27 np0005465987 nova_compute[230713]: 2025-10-02 12:38:27.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:28 np0005465987 nova_compute[230713]: 2025-10-02 12:38:28.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:28 np0005465987 nova_compute[230713]: 2025-10-02 12:38:28.414 2 INFO nova.virt.libvirt.driver [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Deleting instance files /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587_del#033[00m
Oct  2 08:38:28 np0005465987 nova_compute[230713]: 2025-10-02 12:38:28.415 2 INFO nova.virt.libvirt.driver [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Deletion of /var/lib/nova/instances/d2dd3328-2833-429e-9f0d-3ffff2c72587_del complete#033[00m
Oct  2 08:38:28 np0005465987 nova_compute[230713]: 2025-10-02 12:38:28.476 2 INFO nova.compute.manager [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Took 1.69 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:38:28 np0005465987 nova_compute[230713]: 2025-10-02 12:38:28.476 2 DEBUG oslo.service.loopingcall [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:38:28 np0005465987 nova_compute[230713]: 2025-10-02 12:38:28.477 2 DEBUG nova.compute.manager [-] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:38:28 np0005465987 nova_compute[230713]: 2025-10-02 12:38:28.477 2 DEBUG nova.network.neutron [-] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:38:29 np0005465987 nova_compute[230713]: 2025-10-02 12:38:29.221 2 DEBUG nova.network.neutron [-] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:38:29 np0005465987 nova_compute[230713]: 2025-10-02 12:38:29.245 2 INFO nova.compute.manager [-] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Took 0.77 seconds to deallocate network for instance.#033[00m
Oct  2 08:38:29 np0005465987 nova_compute[230713]: 2025-10-02 12:38:29.308 2 DEBUG oslo_concurrency.lockutils [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:29 np0005465987 nova_compute[230713]: 2025-10-02 12:38:29.308 2 DEBUG oslo_concurrency.lockutils [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:29 np0005465987 nova_compute[230713]: 2025-10-02 12:38:29.351 2 DEBUG oslo_concurrency.processutils [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:29 np0005465987 nova_compute[230713]: 2025-10-02 12:38:29.395 2 DEBUG nova.compute.manager [req-cca12ddc-65c3-4058-96e9-722ed67bc117 req-9b5c27ee-da73-4baf-a062-c62e18bc7799 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Received event network-vif-deleted-c775ce5e-fd60-4ae8-a429-77f19ee2e858 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:38:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:29.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:29 np0005465987 nova_compute[230713]: 2025-10-02 12:38:29.661 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408694.6601837, 58c789e4-1ba5-4001-913d-9aa19bd9c59f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:38:29 np0005465987 nova_compute[230713]: 2025-10-02 12:38:29.662 2 INFO nova.compute.manager [-] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:38:29 np0005465987 nova_compute[230713]: 2025-10-02 12:38:29.699 2 DEBUG nova.compute.manager [None req-3d5f46aa-7eaf-4a9a-934c-e28b1e84c2d9 - - - - - -] [instance: 58c789e4-1ba5-4001-913d-9aa19bd9c59f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:38:29 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3666136594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:38:29 np0005465987 nova_compute[230713]: 2025-10-02 12:38:29.834 2 DEBUG oslo_concurrency.processutils [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:29 np0005465987 nova_compute[230713]: 2025-10-02 12:38:29.840 2 DEBUG nova.compute.provider_tree [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:38:29 np0005465987 podman[285537]: 2025-10-02 12:38:29.858645136 +0000 UTC m=+0.070302514 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:38:29 np0005465987 nova_compute[230713]: 2025-10-02 12:38:29.862 2 DEBUG nova.scheduler.client.report [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:38:29 np0005465987 podman[285538]: 2025-10-02 12:38:29.884548498 +0000 UTC m=+0.086439568 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 08:38:29 np0005465987 podman[285536]: 2025-10-02 12:38:29.955461167 +0000 UTC m=+0.166930419 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:38:29 np0005465987 nova_compute[230713]: 2025-10-02 12:38:29.955 2 DEBUG oslo_concurrency.lockutils [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:29.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:30 np0005465987 nova_compute[230713]: 2025-10-02 12:38:30.015 2 INFO nova.scheduler.client.report [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Deleted allocations for instance d2dd3328-2833-429e-9f0d-3ffff2c72587#033[00m
Oct  2 08:38:30 np0005465987 nova_compute[230713]: 2025-10-02 12:38:30.131 2 DEBUG oslo_concurrency.lockutils [None req-4a8a9c6a-4d26-48a1-959a-e8e22be69360 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "d2dd3328-2833-429e-9f0d-3ffff2c72587" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:30 np0005465987 nova_compute[230713]: 2025-10-02 12:38:30.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:31.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:31 np0005465987 nova_compute[230713]: 2025-10-02 12:38:31.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:31.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:33.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:33.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:34 np0005465987 nova_compute[230713]: 2025-10-02 12:38:34.264 2 DEBUG oslo_concurrency.lockutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Acquiring lock "0290d502-43fd-4dbe-9981-4589da91ed42" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:34 np0005465987 nova_compute[230713]: 2025-10-02 12:38:34.264 2 DEBUG oslo_concurrency.lockutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "0290d502-43fd-4dbe-9981-4589da91ed42" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:34 np0005465987 nova_compute[230713]: 2025-10-02 12:38:34.279 2 DEBUG nova.compute.manager [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:38:34 np0005465987 nova_compute[230713]: 2025-10-02 12:38:34.332 2 DEBUG oslo_concurrency.lockutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:34 np0005465987 nova_compute[230713]: 2025-10-02 12:38:34.333 2 DEBUG oslo_concurrency.lockutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:34 np0005465987 nova_compute[230713]: 2025-10-02 12:38:34.338 2 DEBUG nova.virt.hardware [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:38:34 np0005465987 nova_compute[230713]: 2025-10-02 12:38:34.339 2 INFO nova.compute.claims [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:38:34 np0005465987 nova_compute[230713]: 2025-10-02 12:38:34.445 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:38:34 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1974365964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:38:34 np0005465987 nova_compute[230713]: 2025-10-02 12:38:34.894 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:38:34 np0005465987 nova_compute[230713]: 2025-10-02 12:38:34.902 2 DEBUG nova.compute.provider_tree [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:38:34 np0005465987 nova_compute[230713]: 2025-10-02 12:38:34.924 2 DEBUG nova.scheduler.client.report [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:38:34 np0005465987 nova_compute[230713]: 2025-10-02 12:38:34.957 2 DEBUG oslo_concurrency.lockutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:38:34 np0005465987 nova_compute[230713]: 2025-10-02 12:38:34.958 2 DEBUG nova.compute.manager [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.015 2 DEBUG nova.compute.manager [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.033 2 INFO nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.063 2 DEBUG nova.compute.manager [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.158 2 DEBUG nova.compute.manager [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.159 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.159 2 INFO nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Creating image(s)
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.186 2 DEBUG nova.storage.rbd_utils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.211 2 DEBUG nova.storage.rbd_utils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.236 2 DEBUG nova.storage.rbd_utils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.240 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.334 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.335 2 DEBUG oslo_concurrency.lockutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.336 2 DEBUG oslo_concurrency.lockutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.337 2 DEBUG oslo_concurrency.lockutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.372 2 DEBUG nova.storage.rbd_utils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.377 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 0290d502-43fd-4dbe-9981-4589da91ed42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:38:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:35.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:35 np0005465987 nova_compute[230713]: 2025-10-02 12:38:35.954 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 0290d502-43fd-4dbe-9981-4589da91ed42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:38:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:35.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.055 2 DEBUG nova.storage.rbd_utils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] resizing rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.202 2 DEBUG nova.objects.instance [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lazy-loading 'migration_context' on Instance uuid 0290d502-43fd-4dbe-9981-4589da91ed42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.218 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.219 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Ensure instance console log exists: /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.219 2 DEBUG oslo_concurrency.lockutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.219 2 DEBUG oslo_concurrency.lockutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.220 2 DEBUG oslo_concurrency.lockutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.221 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.224 2 WARNING nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.229 2 DEBUG nova.virt.libvirt.host [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.230 2 DEBUG nova.virt.libvirt.host [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.232 2 DEBUG nova.virt.libvirt.host [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.232 2 DEBUG nova.virt.libvirt.host [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.234 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.234 2 DEBUG nova.virt.hardware [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.234 2 DEBUG nova.virt.hardware [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.234 2 DEBUG nova.virt.hardware [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.235 2 DEBUG nova.virt.hardware [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.235 2 DEBUG nova.virt.hardware [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.235 2 DEBUG nova.virt.hardware [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.235 2 DEBUG nova.virt.hardware [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.236 2 DEBUG nova.virt.hardware [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.236 2 DEBUG nova.virt.hardware [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.236 2 DEBUG nova.virt.hardware [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.236 2 DEBUG nova.virt.hardware [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.239 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:38:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:38:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1405666715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.653 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.687 2 DEBUG nova.storage.rbd_utils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.690 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:38:36 np0005465987 nova_compute[230713]: 2025-10-02 12:38:36.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:38:37 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/894417205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:38:37 np0005465987 nova_compute[230713]: 2025-10-02 12:38:37.114 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:38:37 np0005465987 nova_compute[230713]: 2025-10-02 12:38:37.116 2 DEBUG nova.objects.instance [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0290d502-43fd-4dbe-9981-4589da91ed42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:38:37 np0005465987 nova_compute[230713]: 2025-10-02 12:38:37.132 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  <uuid>0290d502-43fd-4dbe-9981-4589da91ed42</uuid>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  <name>instance-00000094</name>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerShowV254Test-server-785058548</nova:name>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:38:36</nova:creationTime>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <nova:user uuid="d608dcd0370740e78efce5555136abfd">tempest-ServerShowV254Test-625415953-project-member</nova:user>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <nova:project uuid="0c7abd0bd350497b8968db8ec003111e">tempest-ServerShowV254Test-625415953</nova:project>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <entry name="serial">0290d502-43fd-4dbe-9981-4589da91ed42</entry>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <entry name="uuid">0290d502-43fd-4dbe-9981-4589da91ed42</entry>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/0290d502-43fd-4dbe-9981-4589da91ed42_disk">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/0290d502-43fd-4dbe-9981-4589da91ed42_disk.config">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/console.log" append="off"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:38:37 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:38:37 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:38:37 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:38:37 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:38:37 np0005465987 nova_compute[230713]: 2025-10-02 12:38:37.190 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:38:37 np0005465987 nova_compute[230713]: 2025-10-02 12:38:37.191 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:38:37 np0005465987 nova_compute[230713]: 2025-10-02 12:38:37.192 2 INFO nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Using config drive#033[00m
Oct  2 08:38:37 np0005465987 nova_compute[230713]: 2025-10-02 12:38:37.228 2 DEBUG nova.storage.rbd_utils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:37.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:37 np0005465987 nova_compute[230713]: 2025-10-02 12:38:37.478 2 INFO nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Creating config drive at /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/disk.config#033[00m
Oct  2 08:38:37 np0005465987 nova_compute[230713]: 2025-10-02 12:38:37.487 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt6z3_h6o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:37 np0005465987 nova_compute[230713]: 2025-10-02 12:38:37.627 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt6z3_h6o" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:37 np0005465987 nova_compute[230713]: 2025-10-02 12:38:37.671 2 DEBUG nova.storage.rbd_utils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:37 np0005465987 nova_compute[230713]: 2025-10-02 12:38:37.676 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/disk.config 0290d502-43fd-4dbe-9981-4589da91ed42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:37.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:37.999 2 DEBUG oslo_concurrency.processutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/disk.config 0290d502-43fd-4dbe-9981-4589da91ed42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.000 2 INFO nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Deleting local config drive /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/disk.config because it was imported into RBD.#033[00m
Oct  2 08:38:38 np0005465987 systemd-machined[188335]: New machine qemu-66-instance-00000094.
Oct  2 08:38:38 np0005465987 systemd[1]: Started Virtual Machine qemu-66-instance-00000094.
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.119 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408703.1181176, d2dd3328-2833-429e-9f0d-3ffff2c72587 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.119 2 INFO nova.compute.manager [-] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.142 2 DEBUG nova.compute.manager [None req-24c60336-0f0a-489b-b1e1-9ab3dab15fd2 - - - - - -] [instance: d2dd3328-2833-429e-9f0d-3ffff2c72587] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.879 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408718.8784683, 0290d502-43fd-4dbe-9981-4589da91ed42 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.879 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.882 2 DEBUG nova.compute.manager [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.883 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.888 2 INFO nova.virt.libvirt.driver [-] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Instance spawned successfully.#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.888 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.991 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.998 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.998 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.999 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:38 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.999 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:39 np0005465987 nova_compute[230713]: 2025-10-02 12:38:38.999 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:39 np0005465987 nova_compute[230713]: 2025-10-02 12:38:39.000 2 DEBUG nova.virt.libvirt.driver [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:39 np0005465987 nova_compute[230713]: 2025-10-02 12:38:39.003 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:38:39 np0005465987 nova_compute[230713]: 2025-10-02 12:38:39.108 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:38:39 np0005465987 nova_compute[230713]: 2025-10-02 12:38:39.109 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408718.8808458, 0290d502-43fd-4dbe-9981-4589da91ed42 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:38:39 np0005465987 nova_compute[230713]: 2025-10-02 12:38:39.109 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] VM Started (Lifecycle Event)#033[00m
Oct  2 08:38:39 np0005465987 nova_compute[230713]: 2025-10-02 12:38:39.148 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:39 np0005465987 nova_compute[230713]: 2025-10-02 12:38:39.153 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:38:39 np0005465987 nova_compute[230713]: 2025-10-02 12:38:39.159 2 INFO nova.compute.manager [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Took 4.00 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:38:39 np0005465987 nova_compute[230713]: 2025-10-02 12:38:39.159 2 DEBUG nova.compute.manager [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:39 np0005465987 nova_compute[230713]: 2025-10-02 12:38:39.185 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:38:39 np0005465987 nova_compute[230713]: 2025-10-02 12:38:39.215 2 INFO nova.compute.manager [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Took 4.90 seconds to build instance.#033[00m
Oct  2 08:38:39 np0005465987 nova_compute[230713]: 2025-10-02 12:38:39.238 2 DEBUG oslo_concurrency.lockutils [None req-82ea0af6-8dce-46d7-b540-8c88d372e6c5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "0290d502-43fd-4dbe-9981-4589da91ed42" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:39.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:39.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:40 np0005465987 nova_compute[230713]: 2025-10-02 12:38:40.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:41.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:41 np0005465987 nova_compute[230713]: 2025-10-02 12:38:41.701 2 INFO nova.compute.manager [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Rebuilding instance#033[00m
Oct  2 08:38:41 np0005465987 nova_compute[230713]: 2025-10-02 12:38:41.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:41.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:42 np0005465987 nova_compute[230713]: 2025-10-02 12:38:42.037 2 DEBUG nova.objects.instance [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0290d502-43fd-4dbe-9981-4589da91ed42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:42 np0005465987 nova_compute[230713]: 2025-10-02 12:38:42.055 2 DEBUG nova.compute.manager [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:42 np0005465987 nova_compute[230713]: 2025-10-02 12:38:42.111 2 DEBUG nova.objects.instance [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lazy-loading 'pci_requests' on Instance uuid 0290d502-43fd-4dbe-9981-4589da91ed42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:42 np0005465987 nova_compute[230713]: 2025-10-02 12:38:42.123 2 DEBUG nova.objects.instance [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0290d502-43fd-4dbe-9981-4589da91ed42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:42 np0005465987 nova_compute[230713]: 2025-10-02 12:38:42.135 2 DEBUG nova.objects.instance [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lazy-loading 'resources' on Instance uuid 0290d502-43fd-4dbe-9981-4589da91ed42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:42 np0005465987 nova_compute[230713]: 2025-10-02 12:38:42.147 2 DEBUG nova.objects.instance [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lazy-loading 'migration_context' on Instance uuid 0290d502-43fd-4dbe-9981-4589da91ed42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:42 np0005465987 nova_compute[230713]: 2025-10-02 12:38:42.158 2 DEBUG nova.objects.instance [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:38:42 np0005465987 nova_compute[230713]: 2025-10-02 12:38:42.162 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:38:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:43.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:43.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:38:45 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3006945388' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:38:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:38:45 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3006945388' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:38:45 np0005465987 nova_compute[230713]: 2025-10-02 12:38:45.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:45.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:45.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:46 np0005465987 nova_compute[230713]: 2025-10-02 12:38:46.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:47.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:47.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:49.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:49.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:50 np0005465987 nova_compute[230713]: 2025-10-02 12:38:50.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:50 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Oct  2 08:38:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:51.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:51 np0005465987 nova_compute[230713]: 2025-10-02 12:38:51.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:51.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:52 np0005465987 nova_compute[230713]: 2025-10-02 12:38:52.212 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:38:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:53.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:53 np0005465987 podman[285970]: 2025-10-02 12:38:53.842774 +0000 UTC m=+0.057195212 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:38:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:53.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:55 np0005465987 nova_compute[230713]: 2025-10-02 12:38:55.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:55.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:55.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:56 np0005465987 nova_compute[230713]: 2025-10-02 12:38:56.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:57.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:57 np0005465987 nova_compute[230713]: 2025-10-02 12:38:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:58.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:38:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:38:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:59.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:38:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:00.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:00 np0005465987 nova_compute[230713]: 2025-10-02 12:39:00.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:00 np0005465987 podman[285991]: 2025-10-02 12:39:00.882213297 +0000 UTC m=+0.083588999 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  2 08:39:00 np0005465987 podman[285992]: 2025-10-02 12:39:00.891364868 +0000 UTC m=+0.097941402 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd)
Oct  2 08:39:00 np0005465987 podman[285990]: 2025-10-02 12:39:00.92415898 +0000 UTC m=+0.129043778 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Oct  2 08:39:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:01.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:01 np0005465987 nova_compute[230713]: 2025-10-02 12:39:01.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:01 np0005465987 nova_compute[230713]: 2025-10-02 12:39:01.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:01 np0005465987 nova_compute[230713]: 2025-10-02 12:39:01.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:39:01 np0005465987 nova_compute[230713]: 2025-10-02 12:39:01.984 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:39:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:02.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:02 np0005465987 ceph-mds[84089]: mds.beacon.cephfs.compute-1.zfhmgy missed beacon ack from the monitors
Oct  2 08:39:03 np0005465987 nova_compute[230713]: 2025-10-02 12:39:03.263 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:39:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:03.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:04.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:05 np0005465987 nova_compute[230713]: 2025-10-02 12:39:05.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:39:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:05.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:39:05 np0005465987 nova_compute[230713]: 2025-10-02 12:39:05.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:05 np0005465987 nova_compute[230713]: 2025-10-02 12:39:05.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:39:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:06.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:06 np0005465987 ceph-mds[84089]: mds.beacon.cephfs.compute-1.zfhmgy missed beacon ack from the monitors
Oct  2 08:39:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).paxos(paxos updating c 4770..5379) lease_timeout -- calling new election
Oct  2 08:39:06 np0005465987 ceph-mon[80822]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Oct  2 08:39:06 np0005465987 ceph-mon[80822]: paxos.2).electionLogic(18) init, last seen epoch 18
Oct  2 08:39:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 08:39:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 08:39:06 np0005465987 nova_compute[230713]: 2025-10-02 12:39:06.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:07.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:08.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 08:39:08 np0005465987 nova_compute[230713]: 2025-10-02 12:39:08.972 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 08:39:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:09.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:10.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:10 np0005465987 nova_compute[230713]: 2025-10-02 12:39:10.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e338 e338: 3 total, 3 up, 3 in
Oct  2 08:39:10 np0005465987 ceph-mon[80822]: mon.compute-1 calling monitor election
Oct  2 08:39:10 np0005465987 ceph-mon[80822]: mon.compute-2 calling monitor election
Oct  2 08:39:10 np0005465987 ceph-mon[80822]: mon.compute-0 calling monitor election
Oct  2 08:39:10 np0005465987 ceph-mon[80822]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct  2 08:39:10 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 08:39:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:39:10 np0005465987 nova_compute[230713]: 2025-10-02 12:39:10.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:11.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:11 np0005465987 nova_compute[230713]: 2025-10-02 12:39:11.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:11 np0005465987 nova_compute[230713]: 2025-10-02 12:39:11.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:11 np0005465987 nova_compute[230713]: 2025-10-02 12:39:11.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:11 np0005465987 nova_compute[230713]: 2025-10-02 12:39:11.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:11 np0005465987 nova_compute[230713]: 2025-10-02 12:39:11.991 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:11 np0005465987 nova_compute[230713]: 2025-10-02 12:39:11.992 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:11 np0005465987 nova_compute[230713]: 2025-10-02 12:39:11.992 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:11 np0005465987 nova_compute[230713]: 2025-10-02 12:39:11.992 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:39:11 np0005465987 nova_compute[230713]: 2025-10-02 12:39:11.992 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:39:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:12.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:39:12 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:39:12 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:39:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:39:12 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1424316044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:39:12 np0005465987 nova_compute[230713]: 2025-10-02 12:39:12.424 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:12 np0005465987 nova_compute[230713]: 2025-10-02 12:39:12.502 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:39:12 np0005465987 nova_compute[230713]: 2025-10-02 12:39:12.503 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:39:12 np0005465987 nova_compute[230713]: 2025-10-02 12:39:12.684 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:39:12 np0005465987 nova_compute[230713]: 2025-10-02 12:39:12.685 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4233MB free_disk=20.83548355102539GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:39:12 np0005465987 nova_compute[230713]: 2025-10-02 12:39:12.685 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:12 np0005465987 nova_compute[230713]: 2025-10-02 12:39:12.685 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:12 np0005465987 nova_compute[230713]: 2025-10-02 12:39:12.758 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 0290d502-43fd-4dbe-9981-4589da91ed42 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:39:12 np0005465987 nova_compute[230713]: 2025-10-02 12:39:12.759 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:39:12 np0005465987 nova_compute[230713]: 2025-10-02 12:39:12.759 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:39:12 np0005465987 nova_compute[230713]: 2025-10-02 12:39:12.961 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:39:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1930165027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:39:13 np0005465987 nova_compute[230713]: 2025-10-02 12:39:13.433 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:13 np0005465987 nova_compute[230713]: 2025-10-02 12:39:13.440 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:39:13 np0005465987 nova_compute[230713]: 2025-10-02 12:39:13.459 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:39:13 np0005465987 nova_compute[230713]: 2025-10-02 12:39:13.491 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:39:13 np0005465987 nova_compute[230713]: 2025-10-02 12:39:13.491 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:13.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:14.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:14 np0005465987 nova_compute[230713]: 2025-10-02 12:39:14.310 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:39:14 np0005465987 nova_compute[230713]: 2025-10-02 12:39:14.491 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:15 np0005465987 nova_compute[230713]: 2025-10-02 12:39:15.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:15.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:16.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e339 e339: 3 total, 3 up, 3 in
Oct  2 08:39:16 np0005465987 nova_compute[230713]: 2025-10-02 12:39:16.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:17.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:17 np0005465987 nova_compute[230713]: 2025-10-02 12:39:17.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:17 np0005465987 nova_compute[230713]: 2025-10-02 12:39:17.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:39:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:18.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:19.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:20.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:20 np0005465987 nova_compute[230713]: 2025-10-02 12:39:20.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:20 np0005465987 ovn_controller[129172]: 2025-10-02T12:39:20Z|00571|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct  2 08:39:20 np0005465987 nova_compute[230713]: 2025-10-02 12:39:20.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:20 np0005465987 nova_compute[230713]: 2025-10-02 12:39:20.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:39:21 np0005465987 nova_compute[230713]: 2025-10-02 12:39:21.031 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:39:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:21.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:21 np0005465987 nova_compute[230713]: 2025-10-02 12:39:21.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:21 np0005465987 nova_compute[230713]: 2025-10-02 12:39:21.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:22.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:23 np0005465987 nova_compute[230713]: 2025-10-02 12:39:23.347 2 INFO nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Instance shutdown successfully after 41 seconds.#033[00m
Oct  2 08:39:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:23.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:23 np0005465987 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct  2 08:39:23 np0005465987 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000094.scope: Consumed 14.297s CPU time.
Oct  2 08:39:23 np0005465987 systemd-machined[188335]: Machine qemu-66-instance-00000094 terminated.
Oct  2 08:39:23 np0005465987 nova_compute[230713]: 2025-10-02 12:39:23.774 2 INFO nova.virt.libvirt.driver [-] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Instance destroyed successfully.#033[00m
Oct  2 08:39:23 np0005465987 nova_compute[230713]: 2025-10-02 12:39:23.780 2 INFO nova.virt.libvirt.driver [-] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Instance destroyed successfully.#033[00m
Oct  2 08:39:23 np0005465987 nova_compute[230713]: 2025-10-02 12:39:23.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:39:23.882 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:39:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:39:23.884 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:39:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:24.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:24 np0005465987 podman[286253]: 2025-10-02 12:39:24.870429384 +0000 UTC m=+0.083093614 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 08:39:25 np0005465987 nova_compute[230713]: 2025-10-02 12:39:25.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:25.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:39:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:26.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:39:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:39:26.886 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:26 np0005465987 nova_compute[230713]: 2025-10-02 12:39:26.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e340 e340: 3 total, 3 up, 3 in
Oct  2 08:39:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:27.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:39:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1677275262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:39:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:39:27.964 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:39:27.964 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:39:27.964 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:28.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.277 2 INFO nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Deleting instance files /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42_del#033[00m
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.278 2 INFO nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Deletion of /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42_del complete#033[00m
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.462 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.462 2 INFO nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Creating image(s)#033[00m
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.495 2 DEBUG nova.storage.rbd_utils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.530 2 DEBUG nova.storage.rbd_utils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:29.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.556 2 DEBUG nova.storage.rbd_utils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.559 2 DEBUG oslo_concurrency.processutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.621 2 DEBUG oslo_concurrency.processutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.622 2 DEBUG oslo_concurrency.lockutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Acquiring lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.622 2 DEBUG oslo_concurrency.lockutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.623 2 DEBUG oslo_concurrency.lockutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.646 2 DEBUG nova.storage.rbd_utils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:29 np0005465987 nova_compute[230713]: 2025-10-02 12:39:29.649 2 DEBUG oslo_concurrency.processutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 0290d502-43fd-4dbe-9981-4589da91ed42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:30.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:30 np0005465987 nova_compute[230713]: 2025-10-02 12:39:30.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:31 np0005465987 podman[286393]: 2025-10-02 12:39:31.288697608 +0000 UTC m=+0.051582839 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:39:31 np0005465987 podman[286392]: 2025-10-02 12:39:31.31784337 +0000 UTC m=+0.075973640 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:39:31 np0005465987 podman[286391]: 2025-10-02 12:39:31.323520296 +0000 UTC m=+0.093132582 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:39:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:31.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:39:31 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:39:31 np0005465987 nova_compute[230713]: 2025-10-02 12:39:31.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:31 np0005465987 nova_compute[230713]: 2025-10-02 12:39:31.953 2 DEBUG oslo_concurrency.processutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 0290d502-43fd-4dbe-9981-4589da91ed42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:32 np0005465987 nova_compute[230713]: 2025-10-02 12:39:32.033 2 DEBUG nova.storage.rbd_utils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] resizing rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:39:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:32.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:33.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:34.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.905 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.906 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Ensure instance console log exists: /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.908 2 DEBUG oslo_concurrency.lockutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.908 2 DEBUG oslo_concurrency.lockutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.909 2 DEBUG oslo_concurrency.lockutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.912 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.918 2 WARNING nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.927 2 DEBUG nova.virt.libvirt.host [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.928 2 DEBUG nova.virt.libvirt.host [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.939 2 DEBUG nova.virt.libvirt.host [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.940 2 DEBUG nova.virt.libvirt.host [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.942 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.943 2 DEBUG nova.virt.hardware [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.944 2 DEBUG nova.virt.hardware [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.945 2 DEBUG nova.virt.hardware [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.945 2 DEBUG nova.virt.hardware [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.946 2 DEBUG nova.virt.hardware [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.946 2 DEBUG nova.virt.hardware [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.947 2 DEBUG nova.virt.hardware [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.948 2 DEBUG nova.virt.hardware [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.949 2 DEBUG nova.virt.hardware [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.949 2 DEBUG nova.virt.hardware [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.950 2 DEBUG nova.virt.hardware [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.951 2 DEBUG nova.objects.instance [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0290d502-43fd-4dbe-9981-4589da91ed42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:34 np0005465987 nova_compute[230713]: 2025-10-02 12:39:34.978 2 DEBUG oslo_concurrency.processutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:39:35 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2582492956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:39:35 np0005465987 nova_compute[230713]: 2025-10-02 12:39:35.424 2 DEBUG oslo_concurrency.processutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:35 np0005465987 nova_compute[230713]: 2025-10-02 12:39:35.462 2 DEBUG nova.storage.rbd_utils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:35 np0005465987 nova_compute[230713]: 2025-10-02 12:39:35.467 2 DEBUG oslo_concurrency.processutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:35 np0005465987 nova_compute[230713]: 2025-10-02 12:39:35.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:35.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:39:35 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/457775009' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:39:35 np0005465987 nova_compute[230713]: 2025-10-02 12:39:35.900 2 DEBUG oslo_concurrency.processutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:35 np0005465987 nova_compute[230713]: 2025-10-02 12:39:35.904 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  <uuid>0290d502-43fd-4dbe-9981-4589da91ed42</uuid>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  <name>instance-00000094</name>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerShowV254Test-server-785058548</nova:name>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:39:34</nova:creationTime>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <nova:user uuid="d608dcd0370740e78efce5555136abfd">tempest-ServerShowV254Test-625415953-project-member</nova:user>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <nova:project uuid="0c7abd0bd350497b8968db8ec003111e">tempest-ServerShowV254Test-625415953</nova:project>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="db05f54c-61f8-42d6-a1e2-da3219a77b12"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <nova:ports/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <entry name="serial">0290d502-43fd-4dbe-9981-4589da91ed42</entry>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <entry name="uuid">0290d502-43fd-4dbe-9981-4589da91ed42</entry>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/0290d502-43fd-4dbe-9981-4589da91ed42_disk">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/0290d502-43fd-4dbe-9981-4589da91ed42_disk.config">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/console.log" append="off"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:39:35 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:39:35 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:39:35 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:39:35 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:39:35 np0005465987 nova_compute[230713]: 2025-10-02 12:39:35.995 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:39:35 np0005465987 nova_compute[230713]: 2025-10-02 12:39:35.995 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:39:35 np0005465987 nova_compute[230713]: 2025-10-02 12:39:35.996 2 INFO nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Using config drive#033[00m
Oct  2 08:39:36 np0005465987 nova_compute[230713]: 2025-10-02 12:39:36.023 2 DEBUG nova.storage.rbd_utils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:36 np0005465987 nova_compute[230713]: 2025-10-02 12:39:36.047 2 DEBUG nova.objects.instance [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0290d502-43fd-4dbe-9981-4589da91ed42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:36.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:36 np0005465987 nova_compute[230713]: 2025-10-02 12:39:36.302 2 INFO nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Creating config drive at /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/disk.config#033[00m
Oct  2 08:39:36 np0005465987 nova_compute[230713]: 2025-10-02 12:39:36.310 2 DEBUG oslo_concurrency.processutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1npcqak4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:36 np0005465987 nova_compute[230713]: 2025-10-02 12:39:36.449 2 DEBUG oslo_concurrency.processutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1npcqak4" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:36 np0005465987 nova_compute[230713]: 2025-10-02 12:39:36.489 2 DEBUG nova.storage.rbd_utils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] rbd image 0290d502-43fd-4dbe-9981-4589da91ed42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:36 np0005465987 nova_compute[230713]: 2025-10-02 12:39:36.493 2 DEBUG oslo_concurrency.processutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/disk.config 0290d502-43fd-4dbe-9981-4589da91ed42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:36 np0005465987 nova_compute[230713]: 2025-10-02 12:39:36.792 2 DEBUG oslo_concurrency.processutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/disk.config 0290d502-43fd-4dbe-9981-4589da91ed42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:36 np0005465987 nova_compute[230713]: 2025-10-02 12:39:36.793 2 INFO nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Deleting local config drive /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42/disk.config because it was imported into RBD.#033[00m
Oct  2 08:39:36 np0005465987 systemd-machined[188335]: New machine qemu-67-instance-00000094.
Oct  2 08:39:36 np0005465987 systemd[1]: Started Virtual Machine qemu-67-instance-00000094.
Oct  2 08:39:36 np0005465987 nova_compute[230713]: 2025-10-02 12:39:36.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:37.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:38.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.765 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for 0290d502-43fd-4dbe-9981-4589da91ed42 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.766 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408778.7650578, 0290d502-43fd-4dbe-9981-4589da91ed42 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.766 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.768 2 DEBUG nova.compute.manager [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.769 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.772 2 INFO nova.virt.libvirt.driver [-] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Instance spawned successfully.#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.773 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.871 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.872 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.873 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.873 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.874 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.875 2 DEBUG nova.virt.libvirt.driver [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.881 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.887 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.969 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.969 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408778.765941, 0290d502-43fd-4dbe-9981-4589da91ed42 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:39:38 np0005465987 nova_compute[230713]: 2025-10-02 12:39:38.969 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] VM Started (Lifecycle Event)#033[00m
Oct  2 08:39:39 np0005465987 nova_compute[230713]: 2025-10-02 12:39:39.129 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:39 np0005465987 nova_compute[230713]: 2025-10-02 12:39:39.132 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:39:39 np0005465987 nova_compute[230713]: 2025-10-02 12:39:39.136 2 DEBUG nova.compute.manager [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:39 np0005465987 nova_compute[230713]: 2025-10-02 12:39:39.338 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:39:39 np0005465987 nova_compute[230713]: 2025-10-02 12:39:39.416 2 DEBUG oslo_concurrency.lockutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:39 np0005465987 nova_compute[230713]: 2025-10-02 12:39:39.417 2 DEBUG oslo_concurrency.lockutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:39 np0005465987 nova_compute[230713]: 2025-10-02 12:39:39.417 2 DEBUG nova.objects.instance [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:39:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:39.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:39 np0005465987 nova_compute[230713]: 2025-10-02 12:39:39.629 2 DEBUG oslo_concurrency.lockutils [None req-4a81c35d-3d7f-4410-8e3e-f603f67ad318 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:40.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:40 np0005465987 nova_compute[230713]: 2025-10-02 12:39:40.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:41.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:41 np0005465987 nova_compute[230713]: 2025-10-02 12:39:41.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:42.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:43.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:44.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:45 np0005465987 nova_compute[230713]: 2025-10-02 12:39:45.126 2 DEBUG oslo_concurrency.lockutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Acquiring lock "0290d502-43fd-4dbe-9981-4589da91ed42" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:45 np0005465987 nova_compute[230713]: 2025-10-02 12:39:45.126 2 DEBUG oslo_concurrency.lockutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "0290d502-43fd-4dbe-9981-4589da91ed42" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:45 np0005465987 nova_compute[230713]: 2025-10-02 12:39:45.126 2 DEBUG oslo_concurrency.lockutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Acquiring lock "0290d502-43fd-4dbe-9981-4589da91ed42-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:45 np0005465987 nova_compute[230713]: 2025-10-02 12:39:45.127 2 DEBUG oslo_concurrency.lockutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "0290d502-43fd-4dbe-9981-4589da91ed42-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:45 np0005465987 nova_compute[230713]: 2025-10-02 12:39:45.127 2 DEBUG oslo_concurrency.lockutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "0290d502-43fd-4dbe-9981-4589da91ed42-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:45 np0005465987 nova_compute[230713]: 2025-10-02 12:39:45.128 2 INFO nova.compute.manager [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Terminating instance#033[00m
Oct  2 08:39:45 np0005465987 nova_compute[230713]: 2025-10-02 12:39:45.128 2 DEBUG oslo_concurrency.lockutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Acquiring lock "refresh_cache-0290d502-43fd-4dbe-9981-4589da91ed42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:39:45 np0005465987 nova_compute[230713]: 2025-10-02 12:39:45.129 2 DEBUG oslo_concurrency.lockutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Acquired lock "refresh_cache-0290d502-43fd-4dbe-9981-4589da91ed42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:39:45 np0005465987 nova_compute[230713]: 2025-10-02 12:39:45.129 2 DEBUG nova.network.neutron [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:39:45 np0005465987 nova_compute[230713]: 2025-10-02 12:39:45.530 2 DEBUG nova.network.neutron [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:39:45 np0005465987 nova_compute[230713]: 2025-10-02 12:39:45.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:45.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:39:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/327545541' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:39:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:46.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:46 np0005465987 nova_compute[230713]: 2025-10-02 12:39:46.194 2 DEBUG nova.network.neutron [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:39:46 np0005465987 nova_compute[230713]: 2025-10-02 12:39:46.293 2 DEBUG oslo_concurrency.lockutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Releasing lock "refresh_cache-0290d502-43fd-4dbe-9981-4589da91ed42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:39:46 np0005465987 nova_compute[230713]: 2025-10-02 12:39:46.294 2 DEBUG nova.compute.manager [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:39:46 np0005465987 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct  2 08:39:46 np0005465987 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000094.scope: Consumed 9.444s CPU time.
Oct  2 08:39:46 np0005465987 systemd-machined[188335]: Machine qemu-67-instance-00000094 terminated.
Oct  2 08:39:46 np0005465987 nova_compute[230713]: 2025-10-02 12:39:46.711 2 INFO nova.virt.libvirt.driver [-] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Instance destroyed successfully.#033[00m
Oct  2 08:39:46 np0005465987 nova_compute[230713]: 2025-10-02 12:39:46.712 2 DEBUG nova.objects.instance [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lazy-loading 'resources' on Instance uuid 0290d502-43fd-4dbe-9981-4589da91ed42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:46 np0005465987 nova_compute[230713]: 2025-10-02 12:39:46.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:39:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:47.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:39:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:48.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:49.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:50.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:50 np0005465987 nova_compute[230713]: 2025-10-02 12:39:50.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:51 np0005465987 nova_compute[230713]: 2025-10-02 12:39:51.401 2 INFO nova.virt.libvirt.driver [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Deleting instance files /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42_del#033[00m
Oct  2 08:39:51 np0005465987 nova_compute[230713]: 2025-10-02 12:39:51.402 2 INFO nova.virt.libvirt.driver [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Deletion of /var/lib/nova/instances/0290d502-43fd-4dbe-9981-4589da91ed42_del complete#033[00m
Oct  2 08:39:51 np0005465987 nova_compute[230713]: 2025-10-02 12:39:51.455 2 INFO nova.compute.manager [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Took 5.16 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:39:51 np0005465987 nova_compute[230713]: 2025-10-02 12:39:51.456 2 DEBUG oslo.service.loopingcall [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:39:51 np0005465987 nova_compute[230713]: 2025-10-02 12:39:51.457 2 DEBUG nova.compute.manager [-] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:39:51 np0005465987 nova_compute[230713]: 2025-10-02 12:39:51.457 2 DEBUG nova.network.neutron [-] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:39:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:51.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:51 np0005465987 nova_compute[230713]: 2025-10-02 12:39:51.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:52.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:52 np0005465987 nova_compute[230713]: 2025-10-02 12:39:52.334 2 DEBUG nova.network.neutron [-] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:39:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:53.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:53 np0005465987 nova_compute[230713]: 2025-10-02 12:39:53.769 2 DEBUG nova.network.neutron [-] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:39:53 np0005465987 nova_compute[230713]: 2025-10-02 12:39:53.826 2 INFO nova.compute.manager [-] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Took 2.37 seconds to deallocate network for instance.
Oct  2 08:39:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:54.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:54 np0005465987 nova_compute[230713]: 2025-10-02 12:39:54.259 2 DEBUG oslo_concurrency.lockutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:39:54 np0005465987 nova_compute[230713]: 2025-10-02 12:39:54.259 2 DEBUG oslo_concurrency.lockutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:39:54 np0005465987 nova_compute[230713]: 2025-10-02 12:39:54.349 2 DEBUG oslo_concurrency.processutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:39:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:39:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1960292889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:39:54 np0005465987 nova_compute[230713]: 2025-10-02 12:39:54.758 2 DEBUG oslo_concurrency.processutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:39:54 np0005465987 nova_compute[230713]: 2025-10-02 12:39:54.764 2 DEBUG nova.compute.provider_tree [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:39:55 np0005465987 nova_compute[230713]: 2025-10-02 12:39:55.146 2 DEBUG nova.scheduler.client.report [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:39:55 np0005465987 nova_compute[230713]: 2025-10-02 12:39:55.281 2 DEBUG oslo_concurrency.lockutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:39:55 np0005465987 nova_compute[230713]: 2025-10-02 12:39:55.370 2 INFO nova.scheduler.client.report [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Deleted allocations for instance 0290d502-43fd-4dbe-9981-4589da91ed42
Oct  2 08:39:55 np0005465987 nova_compute[230713]: 2025-10-02 12:39:55.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:55.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:55 np0005465987 podman[286773]: 2025-10-02 12:39:55.844265602 +0000 UTC m=+0.057497642 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 08:39:55 np0005465987 nova_compute[230713]: 2025-10-02 12:39:55.960 2 DEBUG oslo_concurrency.lockutils [None req-b35b0fa1-c750-465c-8eb4-a43321022ca5 d608dcd0370740e78efce5555136abfd 0c7abd0bd350497b8968db8ec003111e - - default default] Lock "0290d502-43fd-4dbe-9981-4589da91ed42" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:39:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:56.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:56 np0005465987 nova_compute[230713]: 2025-10-02 12:39:56.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:39:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:57.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:39:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:58.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:58 np0005465987 nova_compute[230713]: 2025-10-02 12:39:58.999 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:39:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:39:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:59.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:00.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:00 np0005465987 nova_compute[230713]: 2025-10-02 12:40:00.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:00 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 08:40:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:01.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:01 np0005465987 nova_compute[230713]: 2025-10-02 12:40:01.711 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408786.7101743, 0290d502-43fd-4dbe-9981-4589da91ed42 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:40:01 np0005465987 nova_compute[230713]: 2025-10-02 12:40:01.712 2 INFO nova.compute.manager [-] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] VM Stopped (Lifecycle Event)
Oct  2 08:40:01 np0005465987 podman[286795]: 2025-10-02 12:40:01.892151174 +0000 UTC m=+0.100071582 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:40:01 np0005465987 podman[286793]: 2025-10-02 12:40:01.902794666 +0000 UTC m=+0.117519391 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct  2 08:40:01 np0005465987 nova_compute[230713]: 2025-10-02 12:40:01.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:01 np0005465987 podman[286794]: 2025-10-02 12:40:01.922310482 +0000 UTC m=+0.126923629 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:40:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:02.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:02 np0005465987 nova_compute[230713]: 2025-10-02 12:40:02.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:40:02 np0005465987 nova_compute[230713]: 2025-10-02 12:40:02.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:40:02 np0005465987 nova_compute[230713]: 2025-10-02 12:40:02.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:40:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:03.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:04.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:05.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:05 np0005465987 nova_compute[230713]: 2025-10-02 12:40:05.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:06.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:06 np0005465987 nova_compute[230713]: 2025-10-02 12:40:06.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:07.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:08.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:40:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:09.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:40:09 np0005465987 nova_compute[230713]: 2025-10-02 12:40:09.959 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  2 08:40:09 np0005465987 nova_compute[230713]: 2025-10-02 12:40:09.988 2 DEBUG nova.compute.manager [None req-0c2232ed-5bb5-4940-a30f-0ff35f53574a - - - - - -] [instance: 0290d502-43fd-4dbe-9981-4589da91ed42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:40:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:10.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:10 np0005465987 nova_compute[230713]: 2025-10-02 12:40:10.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:10.688 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:40:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:10.689 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:40:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:11.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:11.692 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:40:11 np0005465987 nova_compute[230713]: 2025-10-02 12:40:11.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:40:11 np0005465987 nova_compute[230713]: 2025-10-02 12:40:11.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:40:11 np0005465987 nova_compute[230713]: 2025-10-02 12:40:11.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:40:11 np0005465987 nova_compute[230713]: 2025-10-02 12:40:11.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:40:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:12.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.415885) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408812415955, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1888, "num_deletes": 253, "total_data_size": 4253382, "memory_usage": 4322480, "flush_reason": "Manual Compaction"}
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408812428164, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 1734257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55300, "largest_seqno": 57183, "table_properties": {"data_size": 1728274, "index_size": 2993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16574, "raw_average_key_size": 21, "raw_value_size": 1714755, "raw_average_value_size": 2235, "num_data_blocks": 131, "num_entries": 767, "num_filter_entries": 767, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408650, "oldest_key_time": 1759408650, "file_creation_time": 1759408812, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 12317 microseconds, and 6538 cpu microseconds.
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.428211) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 1734257 bytes OK
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.428230) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.430435) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.430453) EVENT_LOG_v1 {"time_micros": 1759408812430447, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.430473) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 4244833, prev total WAL file size 4244833, number of live WAL files 2.
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.431650) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373539' seq:72057594037927935, type:22 .. '6D6772737461740032303130' seq:0, type:0; will stop at (end)
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(1693KB)], [108(11MB)]
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408812431726, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 14199046, "oldest_snapshot_seqno": -1}
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 8242 keys, 11384226 bytes, temperature: kUnknown
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408812507537, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 11384226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11330483, "index_size": 32063, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20613, "raw_key_size": 212759, "raw_average_key_size": 25, "raw_value_size": 11185128, "raw_average_value_size": 1357, "num_data_blocks": 1257, "num_entries": 8242, "num_filter_entries": 8242, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759408812, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.507898) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 11384226 bytes
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.511979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.1 rd, 150.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 11.9 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(14.8) write-amplify(6.6) OK, records in: 8701, records dropped: 459 output_compression: NoCompression
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.512011) EVENT_LOG_v1 {"time_micros": 1759408812511998, "job": 68, "event": "compaction_finished", "compaction_time_micros": 75887, "compaction_time_cpu_micros": 34730, "output_level": 6, "num_output_files": 1, "total_output_size": 11384226, "num_input_records": 8701, "num_output_records": 8242, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408812512607, "job": 68, "event": "table_file_deletion", "file_number": 110}
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408812514729, "job": 68, "event": "table_file_deletion", "file_number": 108}
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.431546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.514805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.514811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.514812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.514814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:12 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:40:12.514817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:12 np0005465987 nova_compute[230713]: 2025-10-02 12:40:12.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:12 np0005465987 nova_compute[230713]: 2025-10-02 12:40:12.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:13 np0005465987 nova_compute[230713]: 2025-10-02 12:40:13.398 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:13 np0005465987 nova_compute[230713]: 2025-10-02 12:40:13.399 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:13 np0005465987 nova_compute[230713]: 2025-10-02 12:40:13.399 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:13 np0005465987 nova_compute[230713]: 2025-10-02 12:40:13.399 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:40:13 np0005465987 nova_compute[230713]: 2025-10-02 12:40:13.399 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:13.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:40:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/520256467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:40:13 np0005465987 nova_compute[230713]: 2025-10-02 12:40:13.841 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:13 np0005465987 nova_compute[230713]: 2025-10-02 12:40:13.995 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:40:13 np0005465987 nova_compute[230713]: 2025-10-02 12:40:13.996 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4410MB free_disk=20.89044189453125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:40:13 np0005465987 nova_compute[230713]: 2025-10-02 12:40:13.997 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:13 np0005465987 nova_compute[230713]: 2025-10-02 12:40:13.997 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:14.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:14 np0005465987 nova_compute[230713]: 2025-10-02 12:40:14.244 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:40:14 np0005465987 nova_compute[230713]: 2025-10-02 12:40:14.244 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:40:14 np0005465987 nova_compute[230713]: 2025-10-02 12:40:14.263 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:40:14 np0005465987 nova_compute[230713]: 2025-10-02 12:40:14.462 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:40:14 np0005465987 nova_compute[230713]: 2025-10-02 12:40:14.463 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:40:14 np0005465987 nova_compute[230713]: 2025-10-02 12:40:14.502 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:40:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:14 np0005465987 nova_compute[230713]: 2025-10-02 12:40:14.572 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:40:14 np0005465987 nova_compute[230713]: 2025-10-02 12:40:14.663 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:40:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3480621312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:40:15 np0005465987 nova_compute[230713]: 2025-10-02 12:40:15.094 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:15 np0005465987 nova_compute[230713]: 2025-10-02 12:40:15.099 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:40:15 np0005465987 nova_compute[230713]: 2025-10-02 12:40:15.122 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:40:15 np0005465987 nova_compute[230713]: 2025-10-02 12:40:15.242 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:40:15 np0005465987 nova_compute[230713]: 2025-10-02 12:40:15.243 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:15.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:15 np0005465987 nova_compute[230713]: 2025-10-02 12:40:15.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:16.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:16 np0005465987 nova_compute[230713]: 2025-10-02 12:40:16.243 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:17 np0005465987 nova_compute[230713]: 2025-10-02 12:40:17.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:17.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e341 e341: 3 total, 3 up, 3 in
Oct  2 08:40:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:18.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:18 np0005465987 nova_compute[230713]: 2025-10-02 12:40:18.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:18 np0005465987 nova_compute[230713]: 2025-10-02 12:40:18.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:40:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:40:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:19.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:40:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:20.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:20 np0005465987 nova_compute[230713]: 2025-10-02 12:40:20.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:40:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:21.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:40:22 np0005465987 nova_compute[230713]: 2025-10-02 12:40:22.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:22.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:23.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:24.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:25.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:25 np0005465987 nova_compute[230713]: 2025-10-02 12:40:25.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:26.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:26 np0005465987 podman[286902]: 2025-10-02 12:40:26.886131728 +0000 UTC m=+0.083712812 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:40:27 np0005465987 nova_compute[230713]: 2025-10-02 12:40:27.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:40:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:27.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:40:27 np0005465987 nova_compute[230713]: 2025-10-02 12:40:27.779 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:27 np0005465987 nova_compute[230713]: 2025-10-02 12:40:27.780 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:27 np0005465987 nova_compute[230713]: 2025-10-02 12:40:27.804 2 DEBUG nova.compute.manager [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:40:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:27.965 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:27.966 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:27.966 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:27 np0005465987 nova_compute[230713]: 2025-10-02 12:40:27.996 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:27 np0005465987 nova_compute[230713]: 2025-10-02 12:40:27.997 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:28 np0005465987 nova_compute[230713]: 2025-10-02 12:40:28.008 2 DEBUG nova.virt.hardware [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:40:28 np0005465987 nova_compute[230713]: 2025-10-02 12:40:28.008 2 INFO nova.compute.claims [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:40:28 np0005465987 nova_compute[230713]: 2025-10-02 12:40:28.113 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:28.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:28 np0005465987 nova_compute[230713]: 2025-10-02 12:40:28.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:29.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:29 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 8.273301125s
Oct  2 08:40:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:29 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 8.273302078s
Oct  2 08:40:29 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.273507118s, txc = 0x558e0a0ac900
Oct  2 08:40:29 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.791255951s, txc = 0x558e0a638c00
Oct  2 08:40:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:40:30 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2396500098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.097 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.984s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.106 2 DEBUG nova.compute.provider_tree [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.128 2 DEBUG nova.scheduler.client.report [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:40:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:40:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:30.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.171 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.172 2 DEBUG nova.compute.manager [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.239 2 DEBUG nova.compute.manager [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.240 2 DEBUG nova.network.neutron [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.296 2 INFO nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.330 2 DEBUG nova.compute.manager [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.473 2 DEBUG nova.compute.manager [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.475 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.476 2 INFO nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Creating image(s)#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.524 2 DEBUG nova.storage.rbd_utils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.577 2 DEBUG nova.storage.rbd_utils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.622 2 DEBUG nova.storage.rbd_utils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.626 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.656 2 DEBUG nova.policy [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c02d1dcc10ea4e57bbc6b7a3c100dc7b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6822f02d5ca04c659329a75d487054cf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.690 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.691 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.692 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.692 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.725 2 DEBUG nova.storage.rbd_utils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.729 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:30 np0005465987 nova_compute[230713]: 2025-10-02 12:40:30.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:40:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:31.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:40:32 np0005465987 nova_compute[230713]: 2025-10-02 12:40:32.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:32.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:32 np0005465987 nova_compute[230713]: 2025-10-02 12:40:32.142 2 DEBUG nova.network.neutron [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Successfully created port: 933f98f8-c463-4299-8008-a38a3a76a0be _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:40:32 np0005465987 podman[287183]: 2025-10-02 12:40:32.232010812 +0000 UTC m=+0.074542771 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:40:32 np0005465987 podman[287182]: 2025-10-02 12:40:32.252270568 +0000 UTC m=+0.099304440 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 08:40:32 np0005465987 podman[287181]: 2025-10-02 12:40:32.283420374 +0000 UTC m=+0.131075493 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:40:32 np0005465987 podman[287274]: 2025-10-02 12:40:32.604177012 +0000 UTC m=+0.290816345 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 08:40:32 np0005465987 podman[287274]: 2025-10-02 12:40:32.73213795 +0000 UTC m=+0.418777193 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 08:40:33 np0005465987 nova_compute[230713]: 2025-10-02 12:40:33.103 2 DEBUG nova.network.neutron [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Successfully updated port: 933f98f8-c463-4299-8008-a38a3a76a0be _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:40:33 np0005465987 nova_compute[230713]: 2025-10-02 12:40:33.131 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:40:33 np0005465987 nova_compute[230713]: 2025-10-02 12:40:33.131 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquired lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:40:33 np0005465987 nova_compute[230713]: 2025-10-02 12:40:33.132 2 DEBUG nova.network.neutron [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:40:33 np0005465987 nova_compute[230713]: 2025-10-02 12:40:33.243 2 DEBUG nova.compute.manager [req-dbeae9e4-cd5c-4d5a-ac9a-907d15e77bd3 req-d2648248-ef9b-49ff-909a-7cb8e75bd7ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Received event network-changed-933f98f8-c463-4299-8008-a38a3a76a0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:40:33 np0005465987 nova_compute[230713]: 2025-10-02 12:40:33.244 2 DEBUG nova.compute.manager [req-dbeae9e4-cd5c-4d5a-ac9a-907d15e77bd3 req-d2648248-ef9b-49ff-909a-7cb8e75bd7ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Refreshing instance network info cache due to event network-changed-933f98f8-c463-4299-8008-a38a3a76a0be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:40:33 np0005465987 nova_compute[230713]: 2025-10-02 12:40:33.245 2 DEBUG oslo_concurrency.lockutils [req-dbeae9e4-cd5c-4d5a-ac9a-907d15e77bd3 req-d2648248-ef9b-49ff-909a-7cb8e75bd7ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:40:33 np0005465987 nova_compute[230713]: 2025-10-02 12:40:33.320 2 DEBUG nova.network.neutron [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:40:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:33.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:34.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:35 np0005465987 nova_compute[230713]: 2025-10-02 12:40:35.348 2 DEBUG nova.network.neutron [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Updating instance_info_cache with network_info: [{"id": "933f98f8-c463-4299-8008-a38a3a76a0be", "address": "fa:16:3e:8f:19:7f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933f98f8-c4", "ovs_interfaceid": "933f98f8-c463-4299-8008-a38a3a76a0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:40:35 np0005465987 nova_compute[230713]: 2025-10-02 12:40:35.433 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Releasing lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:40:35 np0005465987 nova_compute[230713]: 2025-10-02 12:40:35.433 2 DEBUG nova.compute.manager [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Instance network_info: |[{"id": "933f98f8-c463-4299-8008-a38a3a76a0be", "address": "fa:16:3e:8f:19:7f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933f98f8-c4", "ovs_interfaceid": "933f98f8-c463-4299-8008-a38a3a76a0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:40:35 np0005465987 nova_compute[230713]: 2025-10-02 12:40:35.434 2 DEBUG oslo_concurrency.lockutils [req-dbeae9e4-cd5c-4d5a-ac9a-907d15e77bd3 req-d2648248-ef9b-49ff-909a-7cb8e75bd7ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:40:35 np0005465987 nova_compute[230713]: 2025-10-02 12:40:35.435 2 DEBUG nova.network.neutron [req-dbeae9e4-cd5c-4d5a-ac9a-907d15e77bd3 req-d2648248-ef9b-49ff-909a-7cb8e75bd7ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Refreshing network info cache for port 933f98f8-c463-4299-8008-a38a3a76a0be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:40:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:40:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:35.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:40:35 np0005465987 nova_compute[230713]: 2025-10-02 12:40:35.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:36.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:37 np0005465987 nova_compute[230713]: 2025-10-02 12:40:37.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:37 np0005465987 nova_compute[230713]: 2025-10-02 12:40:37.276 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:37 np0005465987 nova_compute[230713]: 2025-10-02 12:40:37.402 2 DEBUG nova.network.neutron [req-dbeae9e4-cd5c-4d5a-ac9a-907d15e77bd3 req-d2648248-ef9b-49ff-909a-7cb8e75bd7ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Updated VIF entry in instance network info cache for port 933f98f8-c463-4299-8008-a38a3a76a0be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:40:37 np0005465987 nova_compute[230713]: 2025-10-02 12:40:37.403 2 DEBUG nova.network.neutron [req-dbeae9e4-cd5c-4d5a-ac9a-907d15e77bd3 req-d2648248-ef9b-49ff-909a-7cb8e75bd7ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Updating instance_info_cache with network_info: [{"id": "933f98f8-c463-4299-8008-a38a3a76a0be", "address": "fa:16:3e:8f:19:7f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933f98f8-c4", "ovs_interfaceid": "933f98f8-c463-4299-8008-a38a3a76a0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:40:37 np0005465987 nova_compute[230713]: 2025-10-02 12:40:37.408 2 DEBUG nova.storage.rbd_utils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] resizing rbd image 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:40:37 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:40:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:37.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:38 np0005465987 nova_compute[230713]: 2025-10-02 12:40:38.097 2 DEBUG oslo_concurrency.lockutils [req-dbeae9e4-cd5c-4d5a-ac9a-907d15e77bd3 req-d2648248-ef9b-49ff-909a-7cb8e75bd7ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:40:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:38.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:39 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:40:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:39.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:40.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:40 np0005465987 nova_compute[230713]: 2025-10-02 12:40:40.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.338 2 DEBUG nova.objects.instance [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'migration_context' on Instance uuid 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.373 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.373 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Ensure instance console log exists: /var/lib/nova/instances/31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.374 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.374 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.374 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.376 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Start _get_guest_xml network_info=[{"id": "933f98f8-c463-4299-8008-a38a3a76a0be", "address": "fa:16:3e:8f:19:7f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933f98f8-c4", "ovs_interfaceid": "933f98f8-c463-4299-8008-a38a3a76a0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.380 2 WARNING nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.386 2 DEBUG nova.virt.libvirt.host [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.387 2 DEBUG nova.virt.libvirt.host [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.390 2 DEBUG nova.virt.libvirt.host [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.390 2 DEBUG nova.virt.libvirt.host [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.391 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.392 2 DEBUG nova.virt.hardware [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.392 2 DEBUG nova.virt.hardware [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.392 2 DEBUG nova.virt.hardware [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.393 2 DEBUG nova.virt.hardware [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.393 2 DEBUG nova.virt.hardware [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.393 2 DEBUG nova.virt.hardware [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.394 2 DEBUG nova.virt.hardware [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.394 2 DEBUG nova.virt.hardware [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.394 2 DEBUG nova.virt.hardware [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.394 2 DEBUG nova.virt.hardware [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.395 2 DEBUG nova.virt.hardware [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:40:41 np0005465987 nova_compute[230713]: 2025-10-02 12:40:41.397 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:41.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:41 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:40:41 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:40:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:40:41 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2007664150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.096 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.119 2 DEBUG nova.storage.rbd_utils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.123 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:40:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:42.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:40:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:40:42 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2489439363' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.736 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.738 2 DEBUG nova.virt.libvirt.vif [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:40:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1365792511',display_name='tempest-ServerActionsTestOtherA-server-1365792511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1365792511',id=151,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-p6gd7ic0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:40:30Z,user_data=None,user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "933f98f8-c463-4299-8008-a38a3a76a0be", "address": "fa:16:3e:8f:19:7f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933f98f8-c4", "ovs_interfaceid": "933f98f8-c463-4299-8008-a38a3a76a0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.739 2 DEBUG nova.network.os_vif_util [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "933f98f8-c463-4299-8008-a38a3a76a0be", "address": "fa:16:3e:8f:19:7f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933f98f8-c4", "ovs_interfaceid": "933f98f8-c463-4299-8008-a38a3a76a0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.740 2 DEBUG nova.network.os_vif_util [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:19:7f,bridge_name='br-int',has_traffic_filtering=True,id=933f98f8-c463-4299-8008-a38a3a76a0be,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933f98f8-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.741 2 DEBUG nova.objects.instance [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'pci_devices' on Instance uuid 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.765 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  <uuid>31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc</uuid>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  <name>instance-00000097</name>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerActionsTestOtherA-server-1365792511</nova:name>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:40:41</nova:creationTime>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <nova:user uuid="c02d1dcc10ea4e57bbc6b7a3c100dc7b">tempest-ServerActionsTestOtherA-1680083910-project-member</nova:user>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <nova:project uuid="6822f02d5ca04c659329a75d487054cf">tempest-ServerActionsTestOtherA-1680083910</nova:project>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <nova:port uuid="933f98f8-c463-4299-8008-a38a3a76a0be">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <entry name="serial">31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc</entry>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <entry name="uuid">31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc</entry>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk.config">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:8f:19:7f"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <target dev="tap933f98f8-c4"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc/console.log" append="off"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:40:42 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:40:42 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:40:42 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:40:42 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.766 2 DEBUG nova.compute.manager [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Preparing to wait for external event network-vif-plugged-933f98f8-c463-4299-8008-a38a3a76a0be prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.766 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.766 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.767 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.767 2 DEBUG nova.virt.libvirt.vif [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:40:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1365792511',display_name='tempest-ServerActionsTestOtherA-server-1365792511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1365792511',id=151,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-p6gd7ic0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:40:30Z,user_data=None,user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "933f98f8-c463-4299-8008-a38a3a76a0be", "address": "fa:16:3e:8f:19:7f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933f98f8-c4", "ovs_interfaceid": "933f98f8-c463-4299-8008-a38a3a76a0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.768 2 DEBUG nova.network.os_vif_util [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "933f98f8-c463-4299-8008-a38a3a76a0be", "address": "fa:16:3e:8f:19:7f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933f98f8-c4", "ovs_interfaceid": "933f98f8-c463-4299-8008-a38a3a76a0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.768 2 DEBUG nova.network.os_vif_util [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:19:7f,bridge_name='br-int',has_traffic_filtering=True,id=933f98f8-c463-4299-8008-a38a3a76a0be,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933f98f8-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.768 2 DEBUG os_vif [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:19:7f,bridge_name='br-int',has_traffic_filtering=True,id=933f98f8-c463-4299-8008-a38a3a76a0be,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933f98f8-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.769 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.770 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.773 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap933f98f8-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.773 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap933f98f8-c4, col_values=(('external_ids', {'iface-id': '933f98f8-c463-4299-8008-a38a3a76a0be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:19:7f', 'vm-uuid': '31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:42 np0005465987 NetworkManager[44910]: <info>  [1759408842.7761] manager: (tap933f98f8-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.783 2 INFO os_vif [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:19:7f,bridge_name='br-int',has_traffic_filtering=True,id=933f98f8-c463-4299-8008-a38a3a76a0be,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933f98f8-c4')#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.880 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.880 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.881 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No VIF found with MAC fa:16:3e:8f:19:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.881 2 INFO nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Using config drive#033[00m
Oct  2 08:40:42 np0005465987 nova_compute[230713]: 2025-10-02 12:40:42.901 2 DEBUG nova.storage.rbd_utils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:43 np0005465987 nova_compute[230713]: 2025-10-02 12:40:43.306 2 INFO nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Creating config drive at /var/lib/nova/instances/31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc/disk.config#033[00m
Oct  2 08:40:43 np0005465987 nova_compute[230713]: 2025-10-02 12:40:43.314 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpos0tkciw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:43 np0005465987 nova_compute[230713]: 2025-10-02 12:40:43.449 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpos0tkciw" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:43 np0005465987 nova_compute[230713]: 2025-10-02 12:40:43.476 2 DEBUG nova.storage.rbd_utils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:43 np0005465987 nova_compute[230713]: 2025-10-02 12:40:43.480 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc/disk.config 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:43 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:40:43 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:40:43 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:40:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:43.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:44.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:44 np0005465987 nova_compute[230713]: 2025-10-02 12:40:44.549 2 DEBUG oslo_concurrency.processutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc/disk.config 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:44 np0005465987 nova_compute[230713]: 2025-10-02 12:40:44.550 2 INFO nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Deleting local config drive /var/lib/nova/instances/31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc/disk.config because it was imported into RBD.#033[00m
Oct  2 08:40:44 np0005465987 kernel: tap933f98f8-c4: entered promiscuous mode
Oct  2 08:40:44 np0005465987 NetworkManager[44910]: <info>  [1759408844.5938] manager: (tap933f98f8-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Oct  2 08:40:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:40:44Z|00572|binding|INFO|Claiming lport 933f98f8-c463-4299-8008-a38a3a76a0be for this chassis.
Oct  2 08:40:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:40:44Z|00573|binding|INFO|933f98f8-c463-4299-8008-a38a3a76a0be: Claiming fa:16:3e:8f:19:7f 10.100.0.11
Oct  2 08:40:44 np0005465987 nova_compute[230713]: 2025-10-02 12:40:44.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:44 np0005465987 nova_compute[230713]: 2025-10-02 12:40:44.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:44 np0005465987 nova_compute[230713]: 2025-10-02 12:40:44.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:44 np0005465987 NetworkManager[44910]: <info>  [1759408844.6144] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Oct  2 08:40:44 np0005465987 NetworkManager[44910]: <info>  [1759408844.6150] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Oct  2 08:40:44 np0005465987 nova_compute[230713]: 2025-10-02 12:40:44.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.618 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:19:7f 10.100.0.11'], port_security=['fa:16:3e:8f:19:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00455285-97a7-4fa2-ba83-e8060936877e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6822f02d5ca04c659329a75d487054cf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '02b03dd3-c87c-43e1-9317-5837e2c47d47', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f978d0a7-f86b-440f-a8b5-5432c3a4bc91, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=933f98f8-c463-4299-8008-a38a3a76a0be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.619 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 933f98f8-c463-4299-8008-a38a3a76a0be in datapath 00455285-97a7-4fa2-ba83-e8060936877e bound to our chassis#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.620 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00455285-97a7-4fa2-ba83-e8060936877e#033[00m
Oct  2 08:40:44 np0005465987 systemd-udevd[287732]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.632 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c30d20-d12b-4c74-9393-c6796ce79ce3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.633 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00455285-91 in ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:40:44 np0005465987 systemd-machined[188335]: New machine qemu-68-instance-00000097.
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.634 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00455285-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.634 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[65bb6116-3da6-4095-a403-6ba0bbb52776]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.635 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e8de0b07-ca64-49ea-ad59-8a9ec84ce4fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 NetworkManager[44910]: <info>  [1759408844.6376] device (tap933f98f8-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:40:44 np0005465987 NetworkManager[44910]: <info>  [1759408844.6384] device (tap933f98f8-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:40:44 np0005465987 systemd[1]: Started Virtual Machine qemu-68-instance-00000097.
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.651 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[bd30b729-6b4e-43cb-aa5d-15b421cb5632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.676 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd5a87e-4171-4379-b414-2690278ea7a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.703 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[fb787f74-a2d6-492a-80ea-2db79a9adf64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.712 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8f17bbe0-568d-4b27-8326-21d58e4691c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 NetworkManager[44910]: <info>  [1759408844.7135] manager: (tap00455285-90): new Veth device (/org/freedesktop/NetworkManager/Devices/269)
Oct  2 08:40:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.737 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e3cf0944-9744-4333-a79b-11ff60b31f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.740 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6d868024-da8a-4b18-babc-9a4814bff0fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 NetworkManager[44910]: <info>  [1759408844.7600] device (tap00455285-90): carrier: link connected
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.765 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1fcf5633-d0a5-4e5d-ad44-ca5826ba985f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.781 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb50fb9-7472-46bc-9857-55acf2863a00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00455285-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:8a:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691333, 'reachable_time': 29036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287766, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 nova_compute[230713]: 2025-10-02 12:40:44.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.795 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f02453-e9e7-4957-a919-418c2ba805f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef6:8a3c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691333, 'tstamp': 691333}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287767, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.809 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c7724d91-7954-44b2-97cc-c4a4c4aa16a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00455285-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:8a:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691333, 'reachable_time': 29036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287768, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 nova_compute[230713]: 2025-10-02 12:40:44.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:40:44Z|00574|binding|INFO|Setting lport 933f98f8-c463-4299-8008-a38a3a76a0be ovn-installed in OVS
Oct  2 08:40:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:40:44Z|00575|binding|INFO|Setting lport 933f98f8-c463-4299-8008-a38a3a76a0be up in Southbound
Oct  2 08:40:44 np0005465987 nova_compute[230713]: 2025-10-02 12:40:44.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.838 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0b3c46-ba29-4e09-8582-afb9a29ea19a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.887 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6f44b594-2914-4107-8301-cad15406f81f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.889 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00455285-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.889 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.889 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00455285-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:44 np0005465987 nova_compute[230713]: 2025-10-02 12:40:44.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:44 np0005465987 NetworkManager[44910]: <info>  [1759408844.8920] manager: (tap00455285-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Oct  2 08:40:44 np0005465987 kernel: tap00455285-90: entered promiscuous mode
Oct  2 08:40:44 np0005465987 nova_compute[230713]: 2025-10-02 12:40:44.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.894 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00455285-90, col_values=(('external_ids', {'iface-id': '293fb87a-10df-4698-a69e-3023bca5a6a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:44 np0005465987 nova_compute[230713]: 2025-10-02 12:40:44.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:40:44Z|00576|binding|INFO|Releasing lport 293fb87a-10df-4698-a69e-3023bca5a6a3 from this chassis (sb_readonly=0)
Oct  2 08:40:44 np0005465987 nova_compute[230713]: 2025-10-02 12:40:44.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.909 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00455285-97a7-4fa2-ba83-e8060936877e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00455285-97a7-4fa2-ba83-e8060936877e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.910 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d01240c1-fd5d-43e5-af30-872c6565c5d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.911 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-00455285-97a7-4fa2-ba83-e8060936877e
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/00455285-97a7-4fa2-ba83-e8060936877e.pid.haproxy
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 00455285-97a7-4fa2-ba83-e8060936877e
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:40:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:40:44.912 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'env', 'PROCESS_TAG=haproxy-00455285-97a7-4fa2-ba83-e8060936877e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00455285-97a7-4fa2-ba83-e8060936877e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:40:45 np0005465987 podman[287819]: 2025-10-02 12:40:45.25017651 +0000 UTC m=+0.047400054 container create ef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:40:45 np0005465987 systemd[1]: Started libpod-conmon-ef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca.scope.
Oct  2 08:40:45 np0005465987 podman[287819]: 2025-10-02 12:40:45.22251365 +0000 UTC m=+0.019737204 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:40:45 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:40:45 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928277ab5a6c5269999ced4fe1fb97a02e59512abc35780b75e45ad3618d762c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:40:45 np0005465987 podman[287819]: 2025-10-02 12:40:45.347731512 +0000 UTC m=+0.144955086 container init ef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:40:45 np0005465987 podman[287819]: 2025-10-02 12:40:45.353063269 +0000 UTC m=+0.150286803 container start ef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:40:45 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[287834]: [NOTICE]   (287838) : New worker (287840) forked
Oct  2 08:40:45 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[287834]: [NOTICE]   (287838) : Loading success.
Oct  2 08:40:45 np0005465987 nova_compute[230713]: 2025-10-02 12:40:45.466 2 DEBUG nova.compute.manager [req-97956e16-ceb0-4e86-8556-1f7689efcb4f req-1c1a83a2-9d9b-4555-921b-a1f9365ccf40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Received event network-vif-plugged-933f98f8-c463-4299-8008-a38a3a76a0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:40:45 np0005465987 nova_compute[230713]: 2025-10-02 12:40:45.466 2 DEBUG oslo_concurrency.lockutils [req-97956e16-ceb0-4e86-8556-1f7689efcb4f req-1c1a83a2-9d9b-4555-921b-a1f9365ccf40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:45 np0005465987 nova_compute[230713]: 2025-10-02 12:40:45.467 2 DEBUG oslo_concurrency.lockutils [req-97956e16-ceb0-4e86-8556-1f7689efcb4f req-1c1a83a2-9d9b-4555-921b-a1f9365ccf40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:45 np0005465987 nova_compute[230713]: 2025-10-02 12:40:45.467 2 DEBUG oslo_concurrency.lockutils [req-97956e16-ceb0-4e86-8556-1f7689efcb4f req-1c1a83a2-9d9b-4555-921b-a1f9365ccf40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:45 np0005465987 nova_compute[230713]: 2025-10-02 12:40:45.467 2 DEBUG nova.compute.manager [req-97956e16-ceb0-4e86-8556-1f7689efcb4f req-1c1a83a2-9d9b-4555-921b-a1f9365ccf40 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Processing event network-vif-plugged-933f98f8-c463-4299-8008-a38a3a76a0be _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:40:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:45.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:45 np0005465987 nova_compute[230713]: 2025-10-02 12:40:45.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:46.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.495 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408846.4946437, 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.495 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] VM Started (Lifecycle Event)#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.498 2 DEBUG nova.compute.manager [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.500 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.503 2 INFO nova.virt.libvirt.driver [-] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Instance spawned successfully.#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.503 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.523 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.527 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.531 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.531 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.532 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.532 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.532 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.533 2 DEBUG nova.virt.libvirt.driver [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.555 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.556 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408846.4966717, 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.556 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.570 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.573 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408846.4997501, 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.574 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.601 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.604 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.616 2 INFO nova.compute.manager [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Took 16.14 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.616 2 DEBUG nova.compute.manager [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.628 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.686 2 INFO nova.compute.manager [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Took 18.76 seconds to build instance.#033[00m
Oct  2 08:40:46 np0005465987 nova_compute[230713]: 2025-10-02 12:40:46.710 2 DEBUG oslo_concurrency.lockutils [None req-2537da3b-780a-449a-abff-c8052613ef91 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:40:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:47.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:40:47 np0005465987 nova_compute[230713]: 2025-10-02 12:40:47.677 2 DEBUG nova.compute.manager [req-da271ea4-8469-4751-a201-3bbb3d5cf47a req-9761f9e3-4fdb-44f6-adde-18b47e8aeb44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Received event network-vif-plugged-933f98f8-c463-4299-8008-a38a3a76a0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:40:47 np0005465987 nova_compute[230713]: 2025-10-02 12:40:47.678 2 DEBUG oslo_concurrency.lockutils [req-da271ea4-8469-4751-a201-3bbb3d5cf47a req-9761f9e3-4fdb-44f6-adde-18b47e8aeb44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:47 np0005465987 nova_compute[230713]: 2025-10-02 12:40:47.678 2 DEBUG oslo_concurrency.lockutils [req-da271ea4-8469-4751-a201-3bbb3d5cf47a req-9761f9e3-4fdb-44f6-adde-18b47e8aeb44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:47 np0005465987 nova_compute[230713]: 2025-10-02 12:40:47.678 2 DEBUG oslo_concurrency.lockutils [req-da271ea4-8469-4751-a201-3bbb3d5cf47a req-9761f9e3-4fdb-44f6-adde-18b47e8aeb44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:47 np0005465987 nova_compute[230713]: 2025-10-02 12:40:47.678 2 DEBUG nova.compute.manager [req-da271ea4-8469-4751-a201-3bbb3d5cf47a req-9761f9e3-4fdb-44f6-adde-18b47e8aeb44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] No waiting events found dispatching network-vif-plugged-933f98f8-c463-4299-8008-a38a3a76a0be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:40:47 np0005465987 nova_compute[230713]: 2025-10-02 12:40:47.679 2 WARNING nova.compute.manager [req-da271ea4-8469-4751-a201-3bbb3d5cf47a req-9761f9e3-4fdb-44f6-adde-18b47e8aeb44 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Received unexpected event network-vif-plugged-933f98f8-c463-4299-8008-a38a3a76a0be for instance with vm_state active and task_state None.#033[00m
Oct  2 08:40:47 np0005465987 nova_compute[230713]: 2025-10-02 12:40:47.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:40:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:48.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:40:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:49.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:40:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:50.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:40:50 np0005465987 nova_compute[230713]: 2025-10-02 12:40:50.684 2 DEBUG nova.compute.manager [req-57ce378b-b38d-47d0-b8db-1be1f66f4a1b req-74e0e39a-ca13-43a7-add8-077fe1b605c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Received event network-changed-933f98f8-c463-4299-8008-a38a3a76a0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:40:50 np0005465987 nova_compute[230713]: 2025-10-02 12:40:50.685 2 DEBUG nova.compute.manager [req-57ce378b-b38d-47d0-b8db-1be1f66f4a1b req-74e0e39a-ca13-43a7-add8-077fe1b605c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Refreshing instance network info cache due to event network-changed-933f98f8-c463-4299-8008-a38a3a76a0be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:40:50 np0005465987 nova_compute[230713]: 2025-10-02 12:40:50.685 2 DEBUG oslo_concurrency.lockutils [req-57ce378b-b38d-47d0-b8db-1be1f66f4a1b req-74e0e39a-ca13-43a7-add8-077fe1b605c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:40:50 np0005465987 nova_compute[230713]: 2025-10-02 12:40:50.686 2 DEBUG oslo_concurrency.lockutils [req-57ce378b-b38d-47d0-b8db-1be1f66f4a1b req-74e0e39a-ca13-43a7-add8-077fe1b605c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:40:50 np0005465987 nova_compute[230713]: 2025-10-02 12:40:50.686 2 DEBUG nova.network.neutron [req-57ce378b-b38d-47d0-b8db-1be1f66f4a1b req-74e0e39a-ca13-43a7-add8-077fe1b605c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Refreshing network info cache for port 933f98f8-c463-4299-8008-a38a3a76a0be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:40:50 np0005465987 nova_compute[230713]: 2025-10-02 12:40:50.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:51.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:40:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:52.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:40:52 np0005465987 nova_compute[230713]: 2025-10-02 12:40:52.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:52 np0005465987 nova_compute[230713]: 2025-10-02 12:40:52.957 2 DEBUG nova.network.neutron [req-57ce378b-b38d-47d0-b8db-1be1f66f4a1b req-74e0e39a-ca13-43a7-add8-077fe1b605c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Updated VIF entry in instance network info cache for port 933f98f8-c463-4299-8008-a38a3a76a0be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:40:52 np0005465987 nova_compute[230713]: 2025-10-02 12:40:52.958 2 DEBUG nova.network.neutron [req-57ce378b-b38d-47d0-b8db-1be1f66f4a1b req-74e0e39a-ca13-43a7-add8-077fe1b605c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Updating instance_info_cache with network_info: [{"id": "933f98f8-c463-4299-8008-a38a3a76a0be", "address": "fa:16:3e:8f:19:7f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933f98f8-c4", "ovs_interfaceid": "933f98f8-c463-4299-8008-a38a3a76a0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:40:52 np0005465987 nova_compute[230713]: 2025-10-02 12:40:52.982 2 DEBUG oslo_concurrency.lockutils [req-57ce378b-b38d-47d0-b8db-1be1f66f4a1b req-74e0e39a-ca13-43a7-add8-077fe1b605c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:40:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:53.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:54.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:55.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:55 np0005465987 nova_compute[230713]: 2025-10-02 12:40:55.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:40:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:56.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:40:57 np0005465987 nova_compute[230713]: 2025-10-02 12:40:57.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:57.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:57 np0005465987 nova_compute[230713]: 2025-10-02 12:40:57.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:57 np0005465987 podman[287874]: 2025-10-02 12:40:57.838083662 +0000 UTC m=+0.049622714 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:40:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:40:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:58.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:40:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:40:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:59.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:00.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:00 np0005465987 nova_compute[230713]: 2025-10-02 12:41:00.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:00 np0005465987 nova_compute[230713]: 2025-10-02 12:41:00.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:01.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:02.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:02 np0005465987 nova_compute[230713]: 2025-10-02 12:41:02.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:02 np0005465987 nova_compute[230713]: 2025-10-02 12:41:02.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:02 np0005465987 podman[287895]: 2025-10-02 12:41:02.859397246 +0000 UTC m=+0.071720193 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:41:02 np0005465987 podman[287894]: 2025-10-02 12:41:02.867606702 +0000 UTC m=+0.080125684 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:41:02 np0005465987 podman[287893]: 2025-10-02 12:41:02.885913155 +0000 UTC m=+0.103482606 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:41:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:04.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:04 np0005465987 nova_compute[230713]: 2025-10-02 12:41:04.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:04 np0005465987 nova_compute[230713]: 2025-10-02 12:41:04.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:41:04 np0005465987 nova_compute[230713]: 2025-10-02 12:41:04.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:41:05 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:41:05 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:41:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:05.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:05 np0005465987 nova_compute[230713]: 2025-10-02 12:41:05.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:06.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:06 np0005465987 nova_compute[230713]: 2025-10-02 12:41:06.630 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:41:06 np0005465987 nova_compute[230713]: 2025-10-02 12:41:06.631 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:41:06 np0005465987 nova_compute[230713]: 2025-10-02 12:41:06.632 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:41:06 np0005465987 nova_compute[230713]: 2025-10-02 12:41:06.632 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:41:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:07.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:07 np0005465987 nova_compute[230713]: 2025-10-02 12:41:07.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:08.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:41:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1884910576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:41:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:41:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1884910576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:41:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:09.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:10.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:10 np0005465987 nova_compute[230713]: 2025-10-02 12:41:10.340 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Updating instance_info_cache with network_info: [{"id": "933f98f8-c463-4299-8008-a38a3a76a0be", "address": "fa:16:3e:8f:19:7f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933f98f8-c4", "ovs_interfaceid": "933f98f8-c463-4299-8008-a38a3a76a0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:41:10 np0005465987 nova_compute[230713]: 2025-10-02 12:41:10.390 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:41:10 np0005465987 nova_compute[230713]: 2025-10-02 12:41:10.391 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:41:10 np0005465987 nova_compute[230713]: 2025-10-02 12:41:10.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:11.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:12.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:12 np0005465987 nova_compute[230713]: 2025-10-02 12:41:12.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:12 np0005465987 nova_compute[230713]: 2025-10-02 12:41:12.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:12 np0005465987 nova_compute[230713]: 2025-10-02 12:41:12.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:12 np0005465987 nova_compute[230713]: 2025-10-02 12:41:12.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:12 np0005465987 nova_compute[230713]: 2025-10-02 12:41:12.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.015 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.016 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.016 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.016 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.017 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:41:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/480575300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.450 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.575 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.575 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:41:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:13.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.748 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.749 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4225MB free_disk=20.89733123779297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.749 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.749 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.876 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.877 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.877 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:41:13 np0005465987 nova_compute[230713]: 2025-10-02 12:41:13.971 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.003271) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408874003325, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 852, "num_deletes": 251, "total_data_size": 1575641, "memory_usage": 1606120, "flush_reason": "Manual Compaction"}
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408874064433, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 1027815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57188, "largest_seqno": 58035, "table_properties": {"data_size": 1023837, "index_size": 1694, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9658, "raw_average_key_size": 20, "raw_value_size": 1015525, "raw_average_value_size": 2111, "num_data_blocks": 74, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408812, "oldest_key_time": 1759408812, "file_creation_time": 1759408874, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 61221 microseconds, and 4042 cpu microseconds.
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.064493) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 1027815 bytes OK
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.064519) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.076771) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.076798) EVENT_LOG_v1 {"time_micros": 1759408874076789, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.076822) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 1571214, prev total WAL file size 1586657, number of live WAL files 2.
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.078025) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(1003KB)], [111(10MB)]
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408874078095, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 12412041, "oldest_snapshot_seqno": -1}
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 8199 keys, 10536603 bytes, temperature: kUnknown
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408874129796, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 10536603, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10483913, "index_size": 31074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 212666, "raw_average_key_size": 25, "raw_value_size": 10340050, "raw_average_value_size": 1261, "num_data_blocks": 1210, "num_entries": 8199, "num_filter_entries": 8199, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759408874, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.130143) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10536603 bytes
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.131450) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 239.7 rd, 203.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.9 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(22.3) write-amplify(10.3) OK, records in: 8723, records dropped: 524 output_compression: NoCompression
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.131482) EVENT_LOG_v1 {"time_micros": 1759408874131468, "job": 70, "event": "compaction_finished", "compaction_time_micros": 51790, "compaction_time_cpu_micros": 24509, "output_level": 6, "num_output_files": 1, "total_output_size": 10536603, "num_input_records": 8723, "num_output_records": 8199, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408874131978, "job": 70, "event": "table_file_deletion", "file_number": 113}
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408874136550, "job": 70, "event": "table_file_deletion", "file_number": 111}
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.077885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.136638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.136645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.136648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.136650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:41:14.136652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:41:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:14.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3784981824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:41:14 np0005465987 nova_compute[230713]: 2025-10-02 12:41:14.449 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:14 np0005465987 nova_compute[230713]: 2025-10-02 12:41:14.459 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:41:14 np0005465987 nova_compute[230713]: 2025-10-02 12:41:14.529 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:41:14 np0005465987 nova_compute[230713]: 2025-10-02 12:41:14.570 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:41:14 np0005465987 nova_compute[230713]: 2025-10-02 12:41:14.570 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:15 np0005465987 nova_compute[230713]: 2025-10-02 12:41:15.570 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:15 np0005465987 nova_compute[230713]: 2025-10-02 12:41:15.570 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:15 np0005465987 nova_compute[230713]: 2025-10-02 12:41:15.571 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:15.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:15 np0005465987 nova_compute[230713]: 2025-10-02 12:41:15.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:16.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:17.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:17 np0005465987 nova_compute[230713]: 2025-10-02 12:41:17.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:18.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:19 np0005465987 nova_compute[230713]: 2025-10-02 12:41:19.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:19.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:20.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:20 np0005465987 nova_compute[230713]: 2025-10-02 12:41:20.337 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:20 np0005465987 nova_compute[230713]: 2025-10-02 12:41:20.338 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:20 np0005465987 nova_compute[230713]: 2025-10-02 12:41:20.380 2 DEBUG nova.compute.manager [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:41:20 np0005465987 nova_compute[230713]: 2025-10-02 12:41:20.513 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:20 np0005465987 nova_compute[230713]: 2025-10-02 12:41:20.513 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:20 np0005465987 nova_compute[230713]: 2025-10-02 12:41:20.522 2 DEBUG nova.virt.hardware [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:41:20 np0005465987 nova_compute[230713]: 2025-10-02 12:41:20.522 2 INFO nova.compute.claims [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:41:20 np0005465987 nova_compute[230713]: 2025-10-02 12:41:20.738 2 DEBUG oslo_concurrency.processutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:20 np0005465987 nova_compute[230713]: 2025-10-02 12:41:20.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:20 np0005465987 nova_compute[230713]: 2025-10-02 12:41:20.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:20 np0005465987 nova_compute[230713]: 2025-10-02 12:41:20.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:41:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:41:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2986351854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:41:21 np0005465987 nova_compute[230713]: 2025-10-02 12:41:21.208 2 DEBUG oslo_concurrency.processutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:21 np0005465987 nova_compute[230713]: 2025-10-02 12:41:21.214 2 DEBUG nova.compute.provider_tree [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:41:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:21.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:22.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:22 np0005465987 nova_compute[230713]: 2025-10-02 12:41:22.477 2 DEBUG nova.scheduler.client.report [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:41:22 np0005465987 nova_compute[230713]: 2025-10-02 12:41:22.538 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:22 np0005465987 nova_compute[230713]: 2025-10-02 12:41:22.538 2 DEBUG nova.compute.manager [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:41:22 np0005465987 nova_compute[230713]: 2025-10-02 12:41:22.613 2 DEBUG nova.compute.manager [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:41:22 np0005465987 nova_compute[230713]: 2025-10-02 12:41:22.614 2 DEBUG nova.network.neutron [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:41:22 np0005465987 nova_compute[230713]: 2025-10-02 12:41:22.698 2 INFO nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:41:22 np0005465987 nova_compute[230713]: 2025-10-02 12:41:22.754 2 DEBUG nova.compute.manager [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:41:22 np0005465987 nova_compute[230713]: 2025-10-02 12:41:22.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:22 np0005465987 nova_compute[230713]: 2025-10-02 12:41:22.829 2 INFO nova.virt.block_device [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Booting with volume 6916418d-7e23-49fb-b41f-d247b7619f6f at /dev/vda#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.023 2 DEBUG nova.policy [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c02d1dcc10ea4e57bbc6b7a3c100dc7b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6822f02d5ca04c659329a75d487054cf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.158 2 DEBUG os_brick.utils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.160 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.175 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.176 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[a4057e79-55c6-4cf7-a44f-a5137870d9a3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.177 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.189 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.189 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[230256f9-704b-4bb9-853a-663665f6ae42]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.191 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.200 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.200 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[93e1d5f0-5f14-49e7-9056-25d41e4bab8c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.202 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[3fac0453-d78b-415d-b938-3b8105e6f228]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.203 2 DEBUG oslo_concurrency.processutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.236 2 DEBUG oslo_concurrency.processutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.240 2 DEBUG os_brick.initiator.connectors.lightos [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.241 2 DEBUG os_brick.initiator.connectors.lightos [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.241 2 DEBUG os_brick.initiator.connectors.lightos [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.242 2 DEBUG os_brick.utils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] <== get_connector_properties: return (82ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:41:23 np0005465987 nova_compute[230713]: 2025-10-02 12:41:23.243 2 DEBUG nova.virt.block_device [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updating existing volume attachment record: 5dbea56f-5a7c-4179-b71f-9d1d827e5b78 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:41:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e342 e342: 3 total, 3 up, 3 in
Oct  2 08:41:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:23.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:24.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:24 np0005465987 nova_compute[230713]: 2025-10-02 12:41:24.618 2 DEBUG nova.compute.manager [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:41:24 np0005465987 nova_compute[230713]: 2025-10-02 12:41:24.619 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:41:24 np0005465987 nova_compute[230713]: 2025-10-02 12:41:24.620 2 INFO nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Creating image(s)#033[00m
Oct  2 08:41:24 np0005465987 nova_compute[230713]: 2025-10-02 12:41:24.620 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:41:24 np0005465987 nova_compute[230713]: 2025-10-02 12:41:24.620 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Ensure instance console log exists: /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:41:24 np0005465987 nova_compute[230713]: 2025-10-02 12:41:24.620 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:24 np0005465987 nova_compute[230713]: 2025-10-02 12:41:24.621 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:24 np0005465987 nova_compute[230713]: 2025-10-02 12:41:24.621 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:25 np0005465987 nova_compute[230713]: 2025-10-02 12:41:25.133 2 DEBUG nova.network.neutron [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Successfully created port: ff63f6e3-cb88-42d6-98e3-4f78430c7896 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:41:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:25.570 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:41:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:25.572 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:41:25 np0005465987 nova_compute[230713]: 2025-10-02 12:41:25.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:25.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:25 np0005465987 nova_compute[230713]: 2025-10-02 12:41:25.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:26.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:27.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:27 np0005465987 nova_compute[230713]: 2025-10-02 12:41:27.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:27.966 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:27.967 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:27.968 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:28.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:28 np0005465987 podman[288083]: 2025-10-02 12:41:28.868691028 +0000 UTC m=+0.075674391 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:41:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:29.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:30.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:30.575 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:30 np0005465987 nova_compute[230713]: 2025-10-02 12:41:30.666 2 DEBUG nova.network.neutron [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Successfully updated port: ff63f6e3-cb88-42d6-98e3-4f78430c7896 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:41:30 np0005465987 nova_compute[230713]: 2025-10-02 12:41:30.698 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:41:30 np0005465987 nova_compute[230713]: 2025-10-02 12:41:30.699 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquired lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:41:30 np0005465987 nova_compute[230713]: 2025-10-02 12:41:30.699 2 DEBUG nova.network.neutron [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:41:30 np0005465987 nova_compute[230713]: 2025-10-02 12:41:30.895 2 DEBUG nova.compute.manager [req-bc1b6ca3-b765-4a21-9635-0a4129ce5a70 req-ba5c7c50-53f5-44b7-b080-b0b5d693d1bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-changed-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:30 np0005465987 nova_compute[230713]: 2025-10-02 12:41:30.896 2 DEBUG nova.compute.manager [req-bc1b6ca3-b765-4a21-9635-0a4129ce5a70 req-ba5c7c50-53f5-44b7-b080-b0b5d693d1bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Refreshing instance network info cache due to event network-changed-ff63f6e3-cb88-42d6-98e3-4f78430c7896. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:41:30 np0005465987 nova_compute[230713]: 2025-10-02 12:41:30.896 2 DEBUG oslo_concurrency.lockutils [req-bc1b6ca3-b765-4a21-9635-0a4129ce5a70 req-ba5c7c50-53f5-44b7-b080-b0b5d693d1bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:41:30 np0005465987 nova_compute[230713]: 2025-10-02 12:41:30.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:31 np0005465987 nova_compute[230713]: 2025-10-02 12:41:31.020 2 DEBUG nova.network.neutron [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:41:31 np0005465987 nova_compute[230713]: 2025-10-02 12:41:31.591 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:31 np0005465987 nova_compute[230713]: 2025-10-02 12:41:31.592 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:31 np0005465987 nova_compute[230713]: 2025-10-02 12:41:31.616 2 DEBUG nova.compute.manager [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:41:31 np0005465987 nova_compute[230713]: 2025-10-02 12:41:31.714 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:31 np0005465987 nova_compute[230713]: 2025-10-02 12:41:31.714 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:31.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:31 np0005465987 nova_compute[230713]: 2025-10-02 12:41:31.721 2 DEBUG nova.virt.hardware [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:41:31 np0005465987 nova_compute[230713]: 2025-10-02 12:41:31.721 2 INFO nova.compute.claims [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:41:31 np0005465987 nova_compute[230713]: 2025-10-02 12:41:31.948 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:32.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:32 np0005465987 nova_compute[230713]: 2025-10-02 12:41:32.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:41:32 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3687820285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:41:32 np0005465987 nova_compute[230713]: 2025-10-02 12:41:32.366 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:32 np0005465987 nova_compute[230713]: 2025-10-02 12:41:32.372 2 DEBUG nova.compute.provider_tree [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:41:32 np0005465987 nova_compute[230713]: 2025-10-02 12:41:32.436 2 DEBUG nova.network.neutron [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updating instance_info_cache with network_info: [{"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:41:32 np0005465987 nova_compute[230713]: 2025-10-02 12:41:32.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.261 2 DEBUG nova.scheduler.client.report [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.273 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Releasing lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.273 2 DEBUG nova.compute.manager [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Instance network_info: |[{"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.275 2 DEBUG oslo_concurrency.lockutils [req-bc1b6ca3-b765-4a21-9635-0a4129ce5a70 req-ba5c7c50-53f5-44b7-b080-b0b5d693d1bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.275 2 DEBUG nova.network.neutron [req-bc1b6ca3-b765-4a21-9635-0a4129ce5a70 req-ba5c7c50-53f5-44b7-b080-b0b5d693d1bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Refreshing network info cache for port ff63f6e3-cb88-42d6-98e3-4f78430c7896 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.281 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Start _get_guest_xml network_info=[{"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '5dbea56f-5a7c-4179-b71f-9d1d827e5b78', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6916418d-7e23-49fb-b41f-d247b7619f6f', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6916418d-7e23-49fb-b41f-d247b7619f6f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'f3566799-fdd0-46bf-8256-0294a227030a', 'attached_at': '', 'detached_at': '', 'volume_id': '6916418d-7e23-49fb-b41f-d247b7619f6f', 'serial': '6916418d-7e23-49fb-b41f-d247b7619f6f'}, 'guest_format': None, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.291 2 WARNING nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.301 2 DEBUG nova.virt.libvirt.host [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.302 2 DEBUG nova.virt.libvirt.host [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.310 2 DEBUG nova.virt.libvirt.host [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.310 2 DEBUG nova.virt.libvirt.host [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.312 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.313 2 DEBUG nova.virt.hardware [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.314 2 DEBUG nova.virt.hardware [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.314 2 DEBUG nova.virt.hardware [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.315 2 DEBUG nova.virt.hardware [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.315 2 DEBUG nova.virt.hardware [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.316 2 DEBUG nova.virt.hardware [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.317 2 DEBUG nova.virt.hardware [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.317 2 DEBUG nova.virt.hardware [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.318 2 DEBUG nova.virt.hardware [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.318 2 DEBUG nova.virt.hardware [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.319 2 DEBUG nova.virt.hardware [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.359 2 DEBUG nova.storage.rbd_utils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image f3566799-fdd0-46bf-8256-0294a227030a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.364 2 DEBUG oslo_concurrency.processutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.449 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.450 2 DEBUG nova.compute.manager [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:41:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:33.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:41:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2362597005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:41:33 np0005465987 nova_compute[230713]: 2025-10-02 12:41:33.849 2 DEBUG oslo_concurrency.processutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:33 np0005465987 podman[288165]: 2025-10-02 12:41:33.870337339 +0000 UTC m=+0.071722343 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001)
Oct  2 08:41:33 np0005465987 podman[288164]: 2025-10-02 12:41:33.888861208 +0000 UTC m=+0.084075522 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct  2 08:41:33 np0005465987 podman[288163]: 2025-10-02 12:41:33.914800812 +0000 UTC m=+0.116671359 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.003 2 DEBUG nova.virt.libvirt.vif [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-73692290',display_name='tempest-ServerActionsTestOtherA-server-73692290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-73692290',id=154,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMKHZAu99lFpnj9bTKPOhdWg5y6PnKM9AGAnFElgiPr53bQbl7DsEAEg0Hu4Ea2RYl8QhrjFhPMuXkYw2ubt4hnzcTRuj+jAHGGBwDWRc1fX16YtY5a2rZP1IKxVnq/Inw==',key_name='tempest-keypair-26130845',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-la5chqsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=f3566799-fdd0-46bf-8256-0294a227030a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.003 2 DEBUG nova.network.os_vif_util [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.004 2 DEBUG nova.network.os_vif_util [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.005 2 DEBUG nova.objects.instance [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'pci_devices' on Instance uuid f3566799-fdd0-46bf-8256-0294a227030a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.010 2 DEBUG nova.compute.manager [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.010 2 DEBUG nova.network.neutron [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.037 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  <uuid>f3566799-fdd0-46bf-8256-0294a227030a</uuid>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  <name>instance-0000009a</name>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerActionsTestOtherA-server-73692290</nova:name>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:41:33</nova:creationTime>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <nova:user uuid="c02d1dcc10ea4e57bbc6b7a3c100dc7b">tempest-ServerActionsTestOtherA-1680083910-project-member</nova:user>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <nova:project uuid="6822f02d5ca04c659329a75d487054cf">tempest-ServerActionsTestOtherA-1680083910</nova:project>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <nova:port uuid="ff63f6e3-cb88-42d6-98e3-4f78430c7896">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <entry name="serial">f3566799-fdd0-46bf-8256-0294a227030a</entry>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <entry name="uuid">f3566799-fdd0-46bf-8256-0294a227030a</entry>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/f3566799-fdd0-46bf-8256-0294a227030a_disk.config">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="volumes/volume-6916418d-7e23-49fb-b41f-d247b7619f6f">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <serial>6916418d-7e23-49fb-b41f-d247b7619f6f</serial>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:1d:c1:2f"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <target dev="tapff63f6e3-cb"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/console.log" append="off"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:41:34 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:41:34 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:41:34 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:41:34 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.039 2 DEBUG nova.compute.manager [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Preparing to wait for external event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.039 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.040 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.040 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.041 2 DEBUG nova.virt.libvirt.vif [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-73692290',display_name='tempest-ServerActionsTestOtherA-server-73692290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-73692290',id=154,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMKHZAu99lFpnj9bTKPOhdWg5y6PnKM9AGAnFElgiPr53bQbl7DsEAEg0Hu4Ea2RYl8QhrjFhPMuXkYw2ubt4hnzcTRuj+jAHGGBwDWRc1fX16YtY5a2rZP1IKxVnq/Inw==',key_name='tempest-keypair-26130845',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-la5chqsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=f3566799-fdd0-46bf-8256-0294a227030a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.041 2 DEBUG nova.network.os_vif_util [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.042 2 DEBUG nova.network.os_vif_util [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.042 2 DEBUG os_vif [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.043 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.044 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff63f6e3-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:34 np0005465987 NetworkManager[44910]: <info>  [1759408894.0510] manager: (tapff63f6e3-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/271)
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff63f6e3-cb, col_values=(('external_ids', {'iface-id': 'ff63f6e3-cb88-42d6-98e3-4f78430c7896', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:c1:2f', 'vm-uuid': 'f3566799-fdd0-46bf-8256-0294a227030a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.052 2 INFO nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.062 2 INFO os_vif [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb')#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.105 2 DEBUG nova.compute.manager [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.145 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.146 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.146 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] No VIF found with MAC fa:16:3e:1d:c1:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.147 2 INFO nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Using config drive#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.175 2 DEBUG nova.storage.rbd_utils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image f3566799-fdd0-46bf-8256-0294a227030a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:34.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.238 2 DEBUG nova.compute.manager [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.240 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.240 2 INFO nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Creating image(s)#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.267 2 DEBUG nova.storage.rbd_utils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 70956832-8f1d-46df-8eeb-3f75ddf40e84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.301 2 DEBUG nova.storage.rbd_utils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 70956832-8f1d-46df-8eeb-3f75ddf40e84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.335 2 DEBUG nova.storage.rbd_utils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 70956832-8f1d-46df-8eeb-3f75ddf40e84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.339 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "b205c32a62e786c93a2dbd49e2f9a3b4e1110882" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.340 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "b205c32a62e786c93a2dbd49e2f9a3b4e1110882" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.436 2 DEBUG nova.policy [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '734ae44830d540d8ab51c2a3d75ecd80', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2e53064cd4d645f09bd59bbca09b98e0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:41:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:34 np0005465987 nova_compute[230713]: 2025-10-02 12:41:34.983 2 DEBUG nova.virt.libvirt.imagebackend [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/881d7140-1aef-4d5d-b04d-76b7b19d6837/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/881d7140-1aef-4d5d-b04d-76b7b19d6837/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 08:41:35 np0005465987 nova_compute[230713]: 2025-10-02 12:41:35.005 2 INFO nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Creating config drive at /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/disk.config#033[00m
Oct  2 08:41:35 np0005465987 nova_compute[230713]: 2025-10-02 12:41:35.017 2 DEBUG oslo_concurrency.processutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaigkxrhp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:35 np0005465987 nova_compute[230713]: 2025-10-02 12:41:35.163 2 DEBUG oslo_concurrency.processutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaigkxrhp" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:35 np0005465987 nova_compute[230713]: 2025-10-02 12:41:35.203 2 DEBUG nova.storage.rbd_utils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image f3566799-fdd0-46bf-8256-0294a227030a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:35 np0005465987 nova_compute[230713]: 2025-10-02 12:41:35.207 2 DEBUG oslo_concurrency.processutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/disk.config f3566799-fdd0-46bf-8256-0294a227030a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:35 np0005465987 nova_compute[230713]: 2025-10-02 12:41:35.668 2 DEBUG oslo_concurrency.processutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/disk.config f3566799-fdd0-46bf-8256-0294a227030a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:35 np0005465987 nova_compute[230713]: 2025-10-02 12:41:35.670 2 INFO nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Deleting local config drive /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/disk.config because it was imported into RBD.#033[00m
Oct  2 08:41:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:35.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:35 np0005465987 kernel: tapff63f6e3-cb: entered promiscuous mode
Oct  2 08:41:35 np0005465987 NetworkManager[44910]: <info>  [1759408895.7541] manager: (tapff63f6e3-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/272)
Oct  2 08:41:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:35Z|00577|binding|INFO|Claiming lport ff63f6e3-cb88-42d6-98e3-4f78430c7896 for this chassis.
Oct  2 08:41:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:35Z|00578|binding|INFO|ff63f6e3-cb88-42d6-98e3-4f78430c7896: Claiming fa:16:3e:1d:c1:2f 10.100.0.14
Oct  2 08:41:35 np0005465987 nova_compute[230713]: 2025-10-02 12:41:35.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.772 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:c1:2f 10.100.0.14'], port_security=['fa:16:3e:1d:c1:2f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f3566799-fdd0-46bf-8256-0294a227030a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00455285-97a7-4fa2-ba83-e8060936877e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6822f02d5ca04c659329a75d487054cf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b290a23e-28e7-483f-bf5b-c42418308591', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f978d0a7-f86b-440f-a8b5-5432c3a4bc91, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ff63f6e3-cb88-42d6-98e3-4f78430c7896) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:41:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:35Z|00579|binding|INFO|Setting lport ff63f6e3-cb88-42d6-98e3-4f78430c7896 ovn-installed in OVS
Oct  2 08:41:35 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:35Z|00580|binding|INFO|Setting lport ff63f6e3-cb88-42d6-98e3-4f78430c7896 up in Southbound
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.774 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ff63f6e3-cb88-42d6-98e3-4f78430c7896 in datapath 00455285-97a7-4fa2-ba83-e8060936877e bound to our chassis#033[00m
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.778 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00455285-97a7-4fa2-ba83-e8060936877e#033[00m
Oct  2 08:41:35 np0005465987 nova_compute[230713]: 2025-10-02 12:41:35.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:35 np0005465987 nova_compute[230713]: 2025-10-02 12:41:35.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.802 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[637b0422-66c1-4a41-87e7-2a55bae7e58d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:35 np0005465987 systemd-udevd[288353]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:41:35 np0005465987 systemd-machined[188335]: New machine qemu-69-instance-0000009a.
Oct  2 08:41:35 np0005465987 NetworkManager[44910]: <info>  [1759408895.8236] device (tapff63f6e3-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:41:35 np0005465987 systemd[1]: Started Virtual Machine qemu-69-instance-0000009a.
Oct  2 08:41:35 np0005465987 NetworkManager[44910]: <info>  [1759408895.8265] device (tapff63f6e3-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.848 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[680b47a2-6345-4449-977a-8f1e774cd3ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.854 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef7740d-4caf-4bb4-b09e-5b162d4444c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.884 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd66b9d-e629-4048-8d31-2d4f3c0f0d2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:35 np0005465987 nova_compute[230713]: 2025-10-02 12:41:35.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.906 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1f029c12-9976-4634-8cbd-3e811e4043b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00455285-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:8a:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691333, 'reachable_time': 38229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288366, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.923 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5b84b22a-9125-4bd6-8ba2-71dbdad4985e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00455285-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691343, 'tstamp': 691343}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288368, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00455285-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691345, 'tstamp': 691345}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288368, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.925 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00455285-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:41:35 np0005465987 nova_compute[230713]: 2025-10-02 12:41:35.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.928 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00455285-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.928 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:41:35 np0005465987 nova_compute[230713]: 2025-10-02 12:41:35.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.928 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00455285-90, col_values=(('external_ids', {'iface-id': '293fb87a-10df-4698-a69e-3023bca5a6a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:41:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:35.929 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:41:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:36.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.340 2 DEBUG nova.compute.manager [req-4b5f2098-7198-4ba9-b12c-37492c5cabfe req-871e972d-a4f8-4008-8c17-f8de53dd18d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.340 2 DEBUG oslo_concurrency.lockutils [req-4b5f2098-7198-4ba9-b12c-37492c5cabfe req-871e972d-a4f8-4008-8c17-f8de53dd18d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.341 2 DEBUG oslo_concurrency.lockutils [req-4b5f2098-7198-4ba9-b12c-37492c5cabfe req-871e972d-a4f8-4008-8c17-f8de53dd18d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.341 2 DEBUG oslo_concurrency.lockutils [req-4b5f2098-7198-4ba9-b12c-37492c5cabfe req-871e972d-a4f8-4008-8c17-f8de53dd18d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.341 2 DEBUG nova.compute.manager [req-4b5f2098-7198-4ba9-b12c-37492c5cabfe req-871e972d-a4f8-4008-8c17-f8de53dd18d9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Processing event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.382 2 DEBUG nova.network.neutron [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Successfully created port: e389cd1d-f50c-460c-b8ab-e0a384159932 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.532 2 DEBUG nova.network.neutron [req-bc1b6ca3-b765-4a21-9635-0a4129ce5a70 req-ba5c7c50-53f5-44b7-b080-b0b5d693d1bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updated VIF entry in instance network info cache for port ff63f6e3-cb88-42d6-98e3-4f78430c7896. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.533 2 DEBUG nova.network.neutron [req-bc1b6ca3-b765-4a21-9635-0a4129ce5a70 req-ba5c7c50-53f5-44b7-b080-b0b5d693d1bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updating instance_info_cache with network_info: [{"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.581 2 DEBUG oslo_concurrency.lockutils [req-bc1b6ca3-b765-4a21-9635-0a4129ce5a70 req-ba5c7c50-53f5-44b7-b080-b0b5d693d1bd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.713 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b205c32a62e786c93a2dbd49e2f9a3b4e1110882.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.813 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b205c32a62e786c93a2dbd49e2f9a3b4e1110882.part --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.815 2 DEBUG nova.virt.images [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] 881d7140-1aef-4d5d-b04d-76b7b19d6837 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.816 2 DEBUG nova.privsep.utils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  2 08:41:36 np0005465987 nova_compute[230713]: 2025-10-02 12:41:36.817 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b205c32a62e786c93a2dbd49e2f9a3b4e1110882.part /var/lib/nova/instances/_base/b205c32a62e786c93a2dbd49e2f9a3b4e1110882.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.024 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b205c32a62e786c93a2dbd49e2f9a3b4e1110882.part /var/lib/nova/instances/_base/b205c32a62e786c93a2dbd49e2f9a3b4e1110882.converted" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.033 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b205c32a62e786c93a2dbd49e2f9a3b4e1110882.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.095 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408897.0946052, f3566799-fdd0-46bf-8256-0294a227030a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.097 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] VM Started (Lifecycle Event)
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.103 2 DEBUG nova.compute.manager [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.108 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.111 2 INFO nova.virt.libvirt.driver [-] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Instance spawned successfully.
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.112 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.137 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b205c32a62e786c93a2dbd49e2f9a3b4e1110882.converted --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.139 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "b205c32a62e786c93a2dbd49e2f9a3b4e1110882" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.172 2 DEBUG nova.storage.rbd_utils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 70956832-8f1d-46df-8eeb-3f75ddf40e84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.176 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b205c32a62e786c93a2dbd49e2f9a3b4e1110882 70956832-8f1d-46df-8eeb-3f75ddf40e84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:37.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.830 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b205c32a62e786c93a2dbd49e2f9a3b4e1110882 70956832-8f1d-46df-8eeb-3f75ddf40e84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:37 np0005465987 nova_compute[230713]: 2025-10-02 12:41:37.925 2 DEBUG nova.storage.rbd_utils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] resizing rbd image 70956832-8f1d-46df-8eeb-3f75ddf40e84_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:41:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:38.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:38 np0005465987 nova_compute[230713]: 2025-10-02 12:41:38.363 2 DEBUG nova.objects.instance [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'migration_context' on Instance uuid 70956832-8f1d-46df-8eeb-3f75ddf40e84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:41:39 np0005465987 nova_compute[230713]: 2025-10-02 12:41:39.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:39.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:40.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.411 2 DEBUG nova.compute.manager [req-20270d94-e987-4131-8929-73a59e9ca2e5 req-d6a0ef6a-7cea-47b4-a9e8-261e1b95b427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.411 2 DEBUG oslo_concurrency.lockutils [req-20270d94-e987-4131-8929-73a59e9ca2e5 req-d6a0ef6a-7cea-47b4-a9e8-261e1b95b427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.412 2 DEBUG oslo_concurrency.lockutils [req-20270d94-e987-4131-8929-73a59e9ca2e5 req-d6a0ef6a-7cea-47b4-a9e8-261e1b95b427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.412 2 DEBUG oslo_concurrency.lockutils [req-20270d94-e987-4131-8929-73a59e9ca2e5 req-d6a0ef6a-7cea-47b4-a9e8-261e1b95b427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.412 2 DEBUG nova.compute.manager [req-20270d94-e987-4131-8929-73a59e9ca2e5 req-d6a0ef6a-7cea-47b4-a9e8-261e1b95b427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] No waiting events found dispatching network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.412 2 WARNING nova.compute.manager [req-20270d94-e987-4131-8929-73a59e9ca2e5 req-d6a0ef6a-7cea-47b4-a9e8-261e1b95b427 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received unexpected event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 for instance with vm_state building and task_state spawning.
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.441 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.445 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.446 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Ensure instance console log exists: /var/lib/nova/instances/70956832-8f1d-46df-8eeb-3f75ddf40e84/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.446 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.447 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.447 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.449 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.451 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.451 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.452 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.452 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.453 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.453 2 DEBUG nova.virt.libvirt.driver [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.490 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.490 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408897.0948677, f3566799-fdd0-46bf-8256-0294a227030a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.491 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] VM Paused (Lifecycle Event)
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.561 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.566 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408897.1066787, f3566799-fdd0-46bf-8256-0294a227030a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.566 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] VM Resumed (Lifecycle Event)
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.601 2 INFO nova.compute.manager [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Took 15.98 seconds to spawn the instance on the hypervisor.
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.602 2 DEBUG nova.compute.manager [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.604 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.616 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.734 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.771 2 INFO nova.compute.manager [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Took 20.32 seconds to build instance.
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.821 2 DEBUG oslo_concurrency.lockutils [None req-1454b403-25e7-438c-a2df-81b6e2e0b2e4 c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:40 np0005465987 nova_compute[230713]: 2025-10-02 12:41:40.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:41 np0005465987 nova_compute[230713]: 2025-10-02 12:41:41.025 2 DEBUG nova.network.neutron [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Successfully updated port: e389cd1d-f50c-460c-b8ab-e0a384159932 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:41:41 np0005465987 nova_compute[230713]: 2025-10-02 12:41:41.048 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:41:41 np0005465987 nova_compute[230713]: 2025-10-02 12:41:41.049 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquired lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:41:41 np0005465987 nova_compute[230713]: 2025-10-02 12:41:41.049 2 DEBUG nova.network.neutron [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:41:41 np0005465987 nova_compute[230713]: 2025-10-02 12:41:41.267 2 DEBUG nova.network.neutron [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:41:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:41.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:42.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:42 np0005465987 nova_compute[230713]: 2025-10-02 12:41:42.648 2 DEBUG nova.compute.manager [req-add38708-97b9-4708-be5d-e740d3377760 req-7e72aac3-4398-4279-8afc-8ee01e3edabd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:41:42 np0005465987 nova_compute[230713]: 2025-10-02 12:41:42.650 2 DEBUG nova.compute.manager [req-add38708-97b9-4708-be5d-e740d3377760 req-7e72aac3-4398-4279-8afc-8ee01e3edabd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing instance network info cache due to event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:41:42 np0005465987 nova_compute[230713]: 2025-10-02 12:41:42.651 2 DEBUG oslo_concurrency.lockutils [req-add38708-97b9-4708-be5d-e740d3377760 req-7e72aac3-4398-4279-8afc-8ee01e3edabd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:41:43 np0005465987 nova_compute[230713]: 2025-10-02 12:41:43.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:43.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.004 2 DEBUG nova.network.neutron [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updating instance_info_cache with network_info: [{"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.031 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Releasing lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.032 2 DEBUG nova.compute.manager [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Instance network_info: |[{"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.033 2 DEBUG oslo_concurrency.lockutils [req-add38708-97b9-4708-be5d-e740d3377760 req-7e72aac3-4398-4279-8afc-8ee01e3edabd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.034 2 DEBUG nova.network.neutron [req-add38708-97b9-4708-be5d-e740d3377760 req-7e72aac3-4398-4279-8afc-8ee01e3edabd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.038 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Start _get_guest_xml network_info=[{"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:41:20Z,direct_url=<?>,disk_format='qcow2',id=881d7140-1aef-4d5d-b04d-76b7b19d6837,min_disk=0,min_ram=0,name='tempest-scenario-img--1133442680',owner='2e53064cd4d645f09bd59bbca09b98e0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:41:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': '881d7140-1aef-4d5d-b04d-76b7b19d6837'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.045 2 WARNING nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.054 2 DEBUG nova.virt.libvirt.host [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.055 2 DEBUG nova.virt.libvirt.host [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.069 2 DEBUG nova.virt.libvirt.host [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.070 2 DEBUG nova.virt.libvirt.host [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.072 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.073 2 DEBUG nova.virt.hardware [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:41:20Z,direct_url=<?>,disk_format='qcow2',id=881d7140-1aef-4d5d-b04d-76b7b19d6837,min_disk=0,min_ram=0,name='tempest-scenario-img--1133442680',owner='2e53064cd4d645f09bd59bbca09b98e0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:41:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.074 2 DEBUG nova.virt.hardware [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.075 2 DEBUG nova.virt.hardware [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.076 2 DEBUG nova.virt.hardware [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.077 2 DEBUG nova.virt.hardware [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.077 2 DEBUG nova.virt.hardware [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.078 2 DEBUG nova.virt.hardware [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.078 2 DEBUG nova.virt.hardware [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.079 2 DEBUG nova.virt.hardware [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.080 2 DEBUG nova.virt.hardware [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.080 2 DEBUG nova.virt.hardware [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.085 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:44.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:41:44 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1658884160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.605 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.633 2 DEBUG nova.storage.rbd_utils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 70956832-8f1d-46df-8eeb-3f75ddf40e84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:41:44 np0005465987 nova_compute[230713]: 2025-10-02 12:41:44.637 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:41:45 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3628699595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.070 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.072 2 DEBUG nova.virt.libvirt.vif [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1443375856',display_name='tempest-TestMinimumBasicScenario-server-1443375856',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1443375856',id=156,image_ref='881d7140-1aef-4d5d-b04d-76b7b19d6837',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGzB0sJYXYPOfWvtpNcxNGXcGvdcBf7QwVJ9OKoFMmmKZSO6fZE5N4Z9luCuMJc0D1eqyZgLUzVemVKb7pJpNrlCg/Ws/Z2apKl6RmNDh6IMXB331e3RDriHaz2BQyxtg==',key_name='tempest-TestMinimumBasicScenario-222087753',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e53064cd4d645f09bd59bbca09b98e0',ramdisk_id='',reservation_id='r-0su4hzgy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='881d7140-1aef-4d5d-b04d-76b7b19d6837',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-999813940',owner_user_name='tempest-TestMinimumBasicScenario-999813940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:34Z,user_data=None,user_id='734ae44830d540d8ab51c2a3d75ecd80',uuid=70956832-8f1d-46df-8eeb-3f75ddf40e84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.073 2 DEBUG nova.network.os_vif_util [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converting VIF {"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.074 2 DEBUG nova.network.os_vif_util [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.075 2 DEBUG nova.objects.instance [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 70956832-8f1d-46df-8eeb-3f75ddf40e84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.097 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  <uuid>70956832-8f1d-46df-8eeb-3f75ddf40e84</uuid>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  <name>instance-0000009c</name>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestMinimumBasicScenario-server-1443375856</nova:name>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:41:44</nova:creationTime>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <nova:user uuid="734ae44830d540d8ab51c2a3d75ecd80">tempest-TestMinimumBasicScenario-999813940-project-member</nova:user>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <nova:project uuid="2e53064cd4d645f09bd59bbca09b98e0">tempest-TestMinimumBasicScenario-999813940</nova:project>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="881d7140-1aef-4d5d-b04d-76b7b19d6837"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <nova:port uuid="e389cd1d-f50c-460c-b8ab-e0a384159932">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <entry name="serial">70956832-8f1d-46df-8eeb-3f75ddf40e84</entry>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <entry name="uuid">70956832-8f1d-46df-8eeb-3f75ddf40e84</entry>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/70956832-8f1d-46df-8eeb-3f75ddf40e84_disk">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/70956832-8f1d-46df-8eeb-3f75ddf40e84_disk.config">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:0c:87:e1"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <target dev="tape389cd1d-f5"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/70956832-8f1d-46df-8eeb-3f75ddf40e84/console.log" append="off"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:41:45 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:41:45 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:41:45 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:41:45 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.105 2 DEBUG nova.compute.manager [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Preparing to wait for external event network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.106 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.106 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.106 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.107 2 DEBUG nova.virt.libvirt.vif [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1443375856',display_name='tempest-TestMinimumBasicScenario-server-1443375856',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1443375856',id=156,image_ref='881d7140-1aef-4d5d-b04d-76b7b19d6837',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGzB0sJYXYPOfWvtpNcxNGXcGvdcBf7QwVJ9OKoFMmmKZSO6fZE5N4Z9luCuMJc0D1eqyZgLUzVemVKb7pJpNrlCg/Ws/Z2apKl6RmNDh6IMXB331e3RDriHaz2BQyxtg==',key_name='tempest-TestMinimumBasicScenario-222087753',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e53064cd4d645f09bd59bbca09b98e0',ramdisk_id='',reservation_id='r-0su4hzgy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='881d7140-1aef-4d5d-b04d-76b7b19d6837',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-999813940',owner_user_name='tempest-TestMinimumBasicScenario-999813940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:34Z,user_data=None,user_id='734ae44830d540d8ab51c2a3d75ecd80',uuid=70956832-8f1d-46df-8eeb-3f75ddf40e84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.107 2 DEBUG nova.network.os_vif_util [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converting VIF {"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.108 2 DEBUG nova.network.os_vif_util [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.109 2 DEBUG os_vif [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.111 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.111 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape389cd1d-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.115 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape389cd1d-f5, col_values=(('external_ids', {'iface-id': 'e389cd1d-f50c-460c-b8ab-e0a384159932', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:87:e1', 'vm-uuid': '70956832-8f1d-46df-8eeb-3f75ddf40e84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:45 np0005465987 NetworkManager[44910]: <info>  [1759408905.1178] manager: (tape389cd1d-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.124 2 INFO os_vif [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5')#033[00m
Oct  2 08:41:45 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.230 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.231 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.231 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No VIF found with MAC fa:16:3e:0c:87:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.231 2 INFO nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Using config drive#033[00m
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.261 2 DEBUG nova.storage.rbd_utils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 70956832-8f1d-46df-8eeb-3f75ddf40e84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:45.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:45 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:45.999 2 INFO nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Creating config drive at /var/lib/nova/instances/70956832-8f1d-46df-8eeb-3f75ddf40e84/disk.config#033[00m
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.009 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/70956832-8f1d-46df-8eeb-3f75ddf40e84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4j8tlhks execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.147 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/70956832-8f1d-46df-8eeb-3f75ddf40e84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4j8tlhks" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.198 2 DEBUG nova.storage.rbd_utils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] rbd image 70956832-8f1d-46df-8eeb-3f75ddf40e84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.204 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/70956832-8f1d-46df-8eeb-3f75ddf40e84/disk.config 70956832-8f1d-46df-8eeb-3f75ddf40e84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:46.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.365 2 DEBUG oslo_concurrency.processutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/70956832-8f1d-46df-8eeb-3f75ddf40e84/disk.config 70956832-8f1d-46df-8eeb-3f75ddf40e84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.366 2 INFO nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Deleting local config drive /var/lib/nova/instances/70956832-8f1d-46df-8eeb-3f75ddf40e84/disk.config because it was imported into RBD.#033[00m
Oct  2 08:41:46 np0005465987 kernel: tape389cd1d-f5: entered promiscuous mode
Oct  2 08:41:46 np0005465987 NetworkManager[44910]: <info>  [1759408906.4109] manager: (tape389cd1d-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Oct  2 08:41:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:46Z|00581|binding|INFO|Claiming lport e389cd1d-f50c-460c-b8ab-e0a384159932 for this chassis.
Oct  2 08:41:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:46Z|00582|binding|INFO|e389cd1d-f50c-460c-b8ab-e0a384159932: Claiming fa:16:3e:0c:87:e1 10.100.0.12
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.423 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:87:e1 10.100.0.12'], port_security=['fa:16:3e:0c:87:e1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '70956832-8f1d-46df-8eeb-3f75ddf40e84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e53961-97c9-4d79-b2bc-ba336c204821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e53064cd4d645f09bd59bbca09b98e0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3789894-a1c1-47ad-b248-e3a6730a7778', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a249b7-28a6-4e82-83ff-74dd7c2a70f8, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=e389cd1d-f50c-460c-b8ab-e0a384159932) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:41:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:46Z|00583|binding|INFO|Setting lport e389cd1d-f50c-460c-b8ab-e0a384159932 ovn-installed in OVS
Oct  2 08:41:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:46Z|00584|binding|INFO|Setting lport e389cd1d-f50c-460c-b8ab-e0a384159932 up in Southbound
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.425 138325 INFO neutron.agent.ovn.metadata.agent [-] Port e389cd1d-f50c-460c-b8ab-e0a384159932 in datapath 99e53961-97c9-4d79-b2bc-ba336c204821 bound to our chassis#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.427 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99e53961-97c9-4d79-b2bc-ba336c204821#033[00m
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.439 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0e66af-f650-4d6a-a2c2-db27960993fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.440 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99e53961-91 in ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.442 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99e53961-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.442 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[03ef459e-a2c9-48e0-99c3-8d03510879cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.444 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7fad4d-6a24-4b37-99c2-f06a461e8675]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 systemd-machined[188335]: New machine qemu-70-instance-0000009c.
Oct  2 08:41:46 np0005465987 systemd-udevd[288673]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:41:46 np0005465987 systemd[1]: Started Virtual Machine qemu-70-instance-0000009c.
Oct  2 08:41:46 np0005465987 NetworkManager[44910]: <info>  [1759408906.4603] device (tape389cd1d-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:41:46 np0005465987 NetworkManager[44910]: <info>  [1759408906.4612] device (tape389cd1d-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.461 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[24df8d7e-063b-47ba-b81e-06dd8e585484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.491 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe7c2b4-e057-4e1c-87b7-6e73dc5d590f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.524 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[797aa6f7-add4-4ced-8dfc-eb2eb1f5e434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 NetworkManager[44910]: <info>  [1759408906.5305] manager: (tap99e53961-90): new Veth device (/org/freedesktop/NetworkManager/Devices/275)
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.529 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8882f9e7-31aa-43ec-80c8-6a8e4df977a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.575 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa75ab4-5f60-4022-9d69-dde9db1377d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.578 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3c298b-715c-493a-b94d-a8c334b8ad3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 NetworkManager[44910]: <info>  [1759408906.6163] device (tap99e53961-90): carrier: link connected
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.627 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[75f4a67b-3dfd-4df9-a471-48eba1fe0580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.648 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b8969748-fc68-4ce3-ba43-15add3e4039d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99e53961-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:15:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697519, 'reachable_time': 34711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288705, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.668 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2f580269-1497-4a68-bdb0-c99a79196654]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:1562'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697519, 'tstamp': 697519}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288706, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.687 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[39e64d2c-1ece-45e1-a848-3fd167923697]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99e53961-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:15:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697519, 'reachable_time': 34711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288707, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.723 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[441eaaf0-7df3-4809-b781-401e4fc981c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.799 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5e169cf3-979e-44fd-b5a1-41c62adaad74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.800 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e53961-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.801 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.801 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99e53961-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:46 np0005465987 NetworkManager[44910]: <info>  [1759408906.8039] manager: (tap99e53961-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Oct  2 08:41:46 np0005465987 kernel: tap99e53961-90: entered promiscuous mode
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.806 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99e53961-90, col_values=(('external_ids', {'iface-id': '15e9653f-d8e0-49a9-859e-c8a4f714df4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:46Z|00585|binding|INFO|Releasing lport 15e9653f-d8e0-49a9-859e-c8a4f714df4e from this chassis (sb_readonly=0)
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.822 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99e53961-97c9-4d79-b2bc-ba336c204821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99e53961-97c9-4d79-b2bc-ba336c204821.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.823 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[269b994e-6aed-47c8-90e2-e769b46b178c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.824 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-99e53961-97c9-4d79-b2bc-ba336c204821
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/99e53961-97c9-4d79-b2bc-ba336c204821.pid.haproxy
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 99e53961-97c9-4d79-b2bc-ba336c204821
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:41:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:46.825 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'env', 'PROCESS_TAG=haproxy-99e53961-97c9-4d79-b2bc-ba336c204821', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99e53961-97c9-4d79-b2bc-ba336c204821.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.913 2 DEBUG nova.network.neutron [req-add38708-97b9-4708-be5d-e740d3377760 req-7e72aac3-4398-4279-8afc-8ee01e3edabd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updated VIF entry in instance network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.913 2 DEBUG nova.network.neutron [req-add38708-97b9-4708-be5d-e740d3377760 req-7e72aac3-4398-4279-8afc-8ee01e3edabd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updating instance_info_cache with network_info: [{"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:41:46 np0005465987 nova_compute[230713]: 2025-10-02 12:41:46.953 2 DEBUG oslo_concurrency.lockutils [req-add38708-97b9-4708-be5d-e740d3377760 req-7e72aac3-4398-4279-8afc-8ee01e3edabd d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.067 2 DEBUG nova.compute.manager [req-1f572c26-c7c1-4881-95b2-57a1a8afcc9d req-db5e5c4c-8196-4f27-8fc9-79dd9bfcf335 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.067 2 DEBUG oslo_concurrency.lockutils [req-1f572c26-c7c1-4881-95b2-57a1a8afcc9d req-db5e5c4c-8196-4f27-8fc9-79dd9bfcf335 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.068 2 DEBUG oslo_concurrency.lockutils [req-1f572c26-c7c1-4881-95b2-57a1a8afcc9d req-db5e5c4c-8196-4f27-8fc9-79dd9bfcf335 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.068 2 DEBUG oslo_concurrency.lockutils [req-1f572c26-c7c1-4881-95b2-57a1a8afcc9d req-db5e5c4c-8196-4f27-8fc9-79dd9bfcf335 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.068 2 DEBUG nova.compute.manager [req-1f572c26-c7c1-4881-95b2-57a1a8afcc9d req-db5e5c4c-8196-4f27-8fc9-79dd9bfcf335 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Processing event network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:41:47 np0005465987 podman[288739]: 2025-10-02 12:41:47.274074617 +0000 UTC m=+0.074783076 container create 323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:41:47 np0005465987 systemd[1]: Started libpod-conmon-323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04.scope.
Oct  2 08:41:47 np0005465987 podman[288739]: 2025-10-02 12:41:47.234302614 +0000 UTC m=+0.035011183 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:41:47 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:41:47 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ccc671353d98e2673345b2790dafd542fc33d9f8ef6ae72c396debc5c0e8b4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:41:47 np0005465987 podman[288739]: 2025-10-02 12:41:47.356965226 +0000 UTC m=+0.157673695 container init 323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:41:47 np0005465987 podman[288739]: 2025-10-02 12:41:47.362752945 +0000 UTC m=+0.163461394 container start 323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:41:47 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[288788]: [NOTICE]   (288800) : New worker (288802) forked
Oct  2 08:41:47 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[288788]: [NOTICE]   (288800) : Loading success.
Oct  2 08:41:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:47.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.791 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408907.7914276, 70956832-8f1d-46df-8eeb-3f75ddf40e84 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.792 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] VM Started (Lifecycle Event)#033[00m
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.794 2 DEBUG nova.compute.manager [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.796 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.800 2 INFO nova.virt.libvirt.driver [-] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Instance spawned successfully.#033[00m
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.800 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.843 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.846 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.854 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.855 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.855 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.855 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.856 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.856 2 DEBUG nova.virt.libvirt.driver [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.912 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.912 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408907.792289, 70956832-8f1d-46df-8eeb-3f75ddf40e84 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.912 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] VM Paused (Lifecycle Event)
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.948 2 INFO nova.compute.manager [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Took 13.71 seconds to spawn the instance on the hypervisor.
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.949 2 DEBUG nova.compute.manager [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.962 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.965 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408907.7962663, 70956832-8f1d-46df-8eeb-3f75ddf40e84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:41:47 np0005465987 nova_compute[230713]: 2025-10-02 12:41:47.965 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] VM Resumed (Lifecycle Event)
Oct  2 08:41:48 np0005465987 nova_compute[230713]: 2025-10-02 12:41:48.000 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:41:48 np0005465987 nova_compute[230713]: 2025-10-02 12:41:48.003 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:41:48 np0005465987 nova_compute[230713]: 2025-10-02 12:41:48.014 2 INFO nova.compute.manager [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Took 16.33 seconds to build instance.
Oct  2 08:41:48 np0005465987 nova_compute[230713]: 2025-10-02 12:41:48.053 2 DEBUG oslo_concurrency.lockutils [None req-d019ad94-9a1c-4414-b1cc-9cd557ad15cd 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:48 np0005465987 nova_compute[230713]: 2025-10-02 12:41:48.207 2 DEBUG nova.compute.manager [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-changed-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:41:48 np0005465987 nova_compute[230713]: 2025-10-02 12:41:48.208 2 DEBUG nova.compute.manager [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Refreshing instance network info cache due to event network-changed-ff63f6e3-cb88-42d6-98e3-4f78430c7896. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:41:48 np0005465987 nova_compute[230713]: 2025-10-02 12:41:48.208 2 DEBUG oslo_concurrency.lockutils [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:41:48 np0005465987 nova_compute[230713]: 2025-10-02 12:41:48.208 2 DEBUG oslo_concurrency.lockutils [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:41:48 np0005465987 nova_compute[230713]: 2025-10-02 12:41:48.208 2 DEBUG nova.network.neutron [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Refreshing network info cache for port ff63f6e3-cb88-42d6-98e3-4f78430c7896 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:41:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:48.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:49 np0005465987 nova_compute[230713]: 2025-10-02 12:41:49.235 2 DEBUG nova.compute.manager [req-a6abf6aa-5d49-4a35-ba2c-6b20ffcdb931 req-99e625ca-f5be-490a-8b74-824da78c2189 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:41:49 np0005465987 nova_compute[230713]: 2025-10-02 12:41:49.235 2 DEBUG oslo_concurrency.lockutils [req-a6abf6aa-5d49-4a35-ba2c-6b20ffcdb931 req-99e625ca-f5be-490a-8b74-824da78c2189 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:49 np0005465987 nova_compute[230713]: 2025-10-02 12:41:49.235 2 DEBUG oslo_concurrency.lockutils [req-a6abf6aa-5d49-4a35-ba2c-6b20ffcdb931 req-99e625ca-f5be-490a-8b74-824da78c2189 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:49 np0005465987 nova_compute[230713]: 2025-10-02 12:41:49.236 2 DEBUG oslo_concurrency.lockutils [req-a6abf6aa-5d49-4a35-ba2c-6b20ffcdb931 req-99e625ca-f5be-490a-8b74-824da78c2189 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:49 np0005465987 nova_compute[230713]: 2025-10-02 12:41:49.236 2 DEBUG nova.compute.manager [req-a6abf6aa-5d49-4a35-ba2c-6b20ffcdb931 req-99e625ca-f5be-490a-8b74-824da78c2189 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] No waiting events found dispatching network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:41:49 np0005465987 nova_compute[230713]: 2025-10-02 12:41:49.236 2 WARNING nova.compute.manager [req-a6abf6aa-5d49-4a35-ba2c-6b20ffcdb931 req-99e625ca-f5be-490a-8b74-824da78c2189 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received unexpected event network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 for instance with vm_state active and task_state None.
Oct  2 08:41:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:49.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:50 np0005465987 nova_compute[230713]: 2025-10-02 12:41:50.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:50.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:50 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:50Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:c1:2f 10.100.0.14
Oct  2 08:41:50 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:50Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:c1:2f 10.100.0.14
Oct  2 08:41:50 np0005465987 nova_compute[230713]: 2025-10-02 12:41:50.910 2 DEBUG nova.network.neutron [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updated VIF entry in instance network info cache for port ff63f6e3-cb88-42d6-98e3-4f78430c7896. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:41:50 np0005465987 nova_compute[230713]: 2025-10-02 12:41:50.911 2 DEBUG nova.network.neutron [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updating instance_info_cache with network_info: [{"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:41:50 np0005465987 nova_compute[230713]: 2025-10-02 12:41:50.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:50 np0005465987 nova_compute[230713]: 2025-10-02 12:41:50.956 2 DEBUG oslo_concurrency.lockutils [req-dc1c5fce-5e58-4ada-97f9-2a261924d991 req-3c2ffcd7-7557-4b8e-b965-e4be007d4169 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:41:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:51.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:51 np0005465987 nova_compute[230713]: 2025-10-02 12:41:51.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:52.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:53 np0005465987 nova_compute[230713]: 2025-10-02 12:41:53.133 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:41:53 np0005465987 nova_compute[230713]: 2025-10-02 12:41:53.134 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquired lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:41:53 np0005465987 nova_compute[230713]: 2025-10-02 12:41:53.134 2 DEBUG nova.network.neutron [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:41:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:41:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:53.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:41:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:54.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:54 np0005465987 nova_compute[230713]: 2025-10-02 12:41:54.916 2 DEBUG oslo_concurrency.lockutils [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:54 np0005465987 nova_compute[230713]: 2025-10-02 12:41:54.917 2 DEBUG oslo_concurrency.lockutils [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:54 np0005465987 nova_compute[230713]: 2025-10-02 12:41:54.933 2 DEBUG nova.objects.instance [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'flavor' on Instance uuid 70956832-8f1d-46df-8eeb-3f75ddf40e84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:41:54 np0005465987 nova_compute[230713]: 2025-10-02 12:41:54.980 2 DEBUG oslo_concurrency.lockutils [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.260 2 DEBUG nova.network.neutron [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updating instance_info_cache with network_info: [{"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.279 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Releasing lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.354 2 DEBUG oslo_concurrency.lockutils [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.355 2 DEBUG oslo_concurrency.lockutils [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.356 2 INFO nova.compute.manager [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Attaching volume daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef to /dev/vdb
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.416 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.417 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Creating file /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/e8ddce5c76cd4dac8c8a17a4c1f7075c.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.417 2 DEBUG oslo_concurrency.processutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/e8ddce5c76cd4dac8c8a17a4c1f7075c.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.618 2 DEBUG os_brick.utils [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.620 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.631 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.632 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[f3003abe-1c06-44ea-8dff-19ae62084165]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.634 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.642 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.642 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[18bb7284-ea89-451b-a874-a77ce7fab911]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.644 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.652 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.653 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[59651eec-6605-4f14-bd2b-dc03a092b7ad]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.655 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[dbd05a5f-977b-456b-a044-428c8ec4eeb5]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.655 2 DEBUG oslo_concurrency.processutils [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.688 2 DEBUG oslo_concurrency.processutils [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.690 2 DEBUG os_brick.initiator.connectors.lightos [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.691 2 DEBUG os_brick.initiator.connectors.lightos [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.691 2 DEBUG os_brick.initiator.connectors.lightos [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.691 2 DEBUG os_brick.utils [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.692 2 DEBUG nova.virt.block_device [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updating existing volume attachment record: 5e3f76e3-514c-43ff-9e3a-a3c21da6cf10 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct  2 08:41:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:55.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.879 2 DEBUG oslo_concurrency.processutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/e8ddce5c76cd4dac8c8a17a4c1f7075c.tmp" returned: 1 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.880 2 DEBUG oslo_concurrency.processutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a/e8ddce5c76cd4dac8c8a17a4c1f7075c.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.880 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Creating directory /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.880 2 DEBUG oslo_concurrency.processutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:55 np0005465987 nova_compute[230713]: 2025-10-02 12:41:55.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:56 np0005465987 nova_compute[230713]: 2025-10-02 12:41:56.152 2 DEBUG oslo_concurrency.processutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/f3566799-fdd0-46bf-8256-0294a227030a" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:56 np0005465987 nova_compute[230713]: 2025-10-02 12:41:56.157 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:41:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:56.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:41:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3597104220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:41:56 np0005465987 nova_compute[230713]: 2025-10-02 12:41:56.535 2 DEBUG nova.objects.instance [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'flavor' on Instance uuid 70956832-8f1d-46df-8eeb-3f75ddf40e84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:41:56 np0005465987 nova_compute[230713]: 2025-10-02 12:41:56.563 2 DEBUG nova.virt.libvirt.driver [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Attempting to attach volume daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 08:41:56 np0005465987 nova_compute[230713]: 2025-10-02 12:41:56.565 2 DEBUG nova.virt.libvirt.guest [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:41:56 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:41:56 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef">
Oct  2 08:41:56 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:41:56 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:41:56 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:41:56 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:41:56 np0005465987 nova_compute[230713]:  <auth username="openstack">
Oct  2 08:41:56 np0005465987 nova_compute[230713]:    <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:41:56 np0005465987 nova_compute[230713]:  </auth>
Oct  2 08:41:56 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:41:56 np0005465987 nova_compute[230713]:  <serial>daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef</serial>
Oct  2 08:41:56 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:41:56 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:41:56 np0005465987 nova_compute[230713]: 2025-10-02 12:41:56.700 2 DEBUG nova.virt.libvirt.driver [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:41:56 np0005465987 nova_compute[230713]: 2025-10-02 12:41:56.701 2 DEBUG nova.virt.libvirt.driver [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:41:56 np0005465987 nova_compute[230713]: 2025-10-02 12:41:56.702 2 DEBUG nova.virt.libvirt.driver [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:41:56 np0005465987 nova_compute[230713]: 2025-10-02 12:41:56.702 2 DEBUG nova.virt.libvirt.driver [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] No VIF found with MAC fa:16:3e:0c:87:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:41:57 np0005465987 nova_compute[230713]: 2025-10-02 12:41:57.096 2 DEBUG oslo_concurrency.lockutils [None req-d91bf764-bc35-48fb-bd6f-0baa28299cef 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:57.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:58.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:58 np0005465987 kernel: tapff63f6e3-cb (unregistering): left promiscuous mode
Oct  2 08:41:58 np0005465987 NetworkManager[44910]: <info>  [1759408918.4353] device (tapff63f6e3-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:41:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:58Z|00586|binding|INFO|Releasing lport ff63f6e3-cb88-42d6-98e3-4f78430c7896 from this chassis (sb_readonly=0)
Oct  2 08:41:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:58Z|00587|binding|INFO|Setting lport ff63f6e3-cb88-42d6-98e3-4f78430c7896 down in Southbound
Oct  2 08:41:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:41:58Z|00588|binding|INFO|Removing iface tapff63f6e3-cb ovn-installed in OVS
Oct  2 08:41:58 np0005465987 nova_compute[230713]: 2025-10-02 12:41:58.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:58 np0005465987 nova_compute[230713]: 2025-10-02 12:41:58.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.461 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:c1:2f 10.100.0.14'], port_security=['fa:16:3e:1d:c1:2f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f3566799-fdd0-46bf-8256-0294a227030a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00455285-97a7-4fa2-ba83-e8060936877e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6822f02d5ca04c659329a75d487054cf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b290a23e-28e7-483f-bf5b-c42418308591', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.250'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f978d0a7-f86b-440f-a8b5-5432c3a4bc91, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ff63f6e3-cb88-42d6-98e3-4f78430c7896) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.462 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ff63f6e3-cb88-42d6-98e3-4f78430c7896 in datapath 00455285-97a7-4fa2-ba83-e8060936877e unbound from our chassis#033[00m
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.464 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00455285-97a7-4fa2-ba83-e8060936877e#033[00m
Oct  2 08:41:58 np0005465987 nova_compute[230713]: 2025-10-02 12:41:58.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.481 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4cbbd6ab-d719-4562-b945-5f50a0c40020]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.510 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[c35807ba-1001-4baa-b69a-68298daeb8e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:58 np0005465987 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Oct  2 08:41:58 np0005465987 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000009a.scope: Consumed 14.661s CPU time.
Oct  2 08:41:58 np0005465987 systemd-machined[188335]: Machine qemu-69-instance-0000009a terminated.
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.518 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4746f2-25e7-4c09-b9ce-9a1824f69c58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.548 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4b14ed-aecd-48a5-b464-4d6f0aa5f5dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.564 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[71714a16-c0c6-4edb-bba1-e8340c90f28b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00455285-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:8a:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 874, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 874, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691333, 'reachable_time': 38229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288854, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.580 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5abe57ca-9dc4-4ad7-97d9-2cfeb341cf73]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00455285-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691343, 'tstamp': 691343}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288855, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00455285-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691345, 'tstamp': 691345}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288855, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.582 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00455285-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:58 np0005465987 nova_compute[230713]: 2025-10-02 12:41:58.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:58 np0005465987 nova_compute[230713]: 2025-10-02 12:41:58.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.588 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00455285-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.589 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.589 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00455285-90, col_values=(('external_ids', {'iface-id': '293fb87a-10df-4698-a69e-3023bca5a6a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:41:58.590 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:41:58 np0005465987 nova_compute[230713]: 2025-10-02 12:41:58.816 2 DEBUG nova.compute.manager [req-4a45301b-bbfa-49c7-a148-f847fa1e7176 req-641987fe-fa55-4ec0-b47a-ff683b745ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-unplugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:58 np0005465987 nova_compute[230713]: 2025-10-02 12:41:58.817 2 DEBUG oslo_concurrency.lockutils [req-4a45301b-bbfa-49c7-a148-f847fa1e7176 req-641987fe-fa55-4ec0-b47a-ff683b745ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:58 np0005465987 nova_compute[230713]: 2025-10-02 12:41:58.817 2 DEBUG oslo_concurrency.lockutils [req-4a45301b-bbfa-49c7-a148-f847fa1e7176 req-641987fe-fa55-4ec0-b47a-ff683b745ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:58 np0005465987 nova_compute[230713]: 2025-10-02 12:41:58.817 2 DEBUG oslo_concurrency.lockutils [req-4a45301b-bbfa-49c7-a148-f847fa1e7176 req-641987fe-fa55-4ec0-b47a-ff683b745ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:58 np0005465987 nova_compute[230713]: 2025-10-02 12:41:58.818 2 DEBUG nova.compute.manager [req-4a45301b-bbfa-49c7-a148-f847fa1e7176 req-641987fe-fa55-4ec0-b47a-ff683b745ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] No waiting events found dispatching network-vif-unplugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:41:58 np0005465987 nova_compute[230713]: 2025-10-02 12:41:58.818 2 WARNING nova.compute.manager [req-4a45301b-bbfa-49c7-a148-f847fa1e7176 req-641987fe-fa55-4ec0-b47a-ff683b745ce3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received unexpected event network-vif-unplugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 for instance with vm_state active and task_state resize_migrating.#033[00m
Oct  2 08:41:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e343 e343: 3 total, 3 up, 3 in
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.172 2 INFO nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.181 2 INFO nova.virt.libvirt.driver [-] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Instance destroyed successfully.#033[00m
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.183 2 DEBUG nova.virt.libvirt.vif [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-73692290',display_name='tempest-ServerActionsTestOtherA-server-73692290',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-73692290',id=154,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMKHZAu99lFpnj9bTKPOhdWg5y6PnKM9AGAnFElgiPr53bQbl7DsEAEg0Hu4Ea2RYl8QhrjFhPMuXkYw2ubt4hnzcTRuj+jAHGGBwDWRc1fX16YtY5a2rZP1IKxVnq/Inw==',key_name='tempest-keypair-26130845',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-la5chqsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:41:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=f3566799-fdd0-46bf-8256-0294a227030a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1293599148-network", "vif_mac": "fa:16:3e:1d:c1:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.184 2 DEBUG nova.network.os_vif_util [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1293599148-network", "vif_mac": "fa:16:3e:1d:c1:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.186 2 DEBUG nova.network.os_vif_util [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.187 2 DEBUG os_vif [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.192 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff63f6e3-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.199 2 INFO os_vif [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb')#033[00m
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.206 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.207 2 DEBUG nova.virt.libvirt.driver [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:41:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:41:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:59 np0005465987 nova_compute[230713]: 2025-10-02 12:41:59.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:59 np0005465987 podman[288868]: 2025-10-02 12:41:59.873367941 +0000 UTC m=+0.075733923 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:42:00 np0005465987 nova_compute[230713]: 2025-10-02 12:42:00.254 2 DEBUG neutronclient.v2_0.client [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ff63f6e3-cb88-42d6-98e3-4f78430c7896 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:42:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:00.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:00 np0005465987 nova_compute[230713]: 2025-10-02 12:42:00.378 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:00 np0005465987 nova_compute[230713]: 2025-10-02 12:42:00.379 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:00 np0005465987 nova_compute[230713]: 2025-10-02 12:42:00.379 2 DEBUG oslo_concurrency.lockutils [None req-d590bf95-d28c-46eb-b7da-2be6d7faedff c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:00 np0005465987 nova_compute[230713]: 2025-10-02 12:42:00.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:00Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:87:e1 10.100.0.12
Oct  2 08:42:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:00Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:87:e1 10.100.0.12
Oct  2 08:42:01 np0005465987 nova_compute[230713]: 2025-10-02 12:42:01.265 2 DEBUG nova.compute.manager [req-2cda7f49-254e-4f61-9746-e32dca1cf2da req-2acbaad1-06ac-4ba5-8de3-1f83d48b8c9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:01 np0005465987 nova_compute[230713]: 2025-10-02 12:42:01.266 2 DEBUG oslo_concurrency.lockutils [req-2cda7f49-254e-4f61-9746-e32dca1cf2da req-2acbaad1-06ac-4ba5-8de3-1f83d48b8c9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:01 np0005465987 nova_compute[230713]: 2025-10-02 12:42:01.266 2 DEBUG oslo_concurrency.lockutils [req-2cda7f49-254e-4f61-9746-e32dca1cf2da req-2acbaad1-06ac-4ba5-8de3-1f83d48b8c9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:01 np0005465987 nova_compute[230713]: 2025-10-02 12:42:01.266 2 DEBUG oslo_concurrency.lockutils [req-2cda7f49-254e-4f61-9746-e32dca1cf2da req-2acbaad1-06ac-4ba5-8de3-1f83d48b8c9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:01 np0005465987 nova_compute[230713]: 2025-10-02 12:42:01.267 2 DEBUG nova.compute.manager [req-2cda7f49-254e-4f61-9746-e32dca1cf2da req-2acbaad1-06ac-4ba5-8de3-1f83d48b8c9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] No waiting events found dispatching network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:01 np0005465987 nova_compute[230713]: 2025-10-02 12:42:01.267 2 WARNING nova.compute.manager [req-2cda7f49-254e-4f61-9746-e32dca1cf2da req-2acbaad1-06ac-4ba5-8de3-1f83d48b8c9e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received unexpected event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:42:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:02.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:02 np0005465987 nova_compute[230713]: 2025-10-02 12:42:02.737 2 DEBUG nova.compute.manager [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-changed-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:02 np0005465987 nova_compute[230713]: 2025-10-02 12:42:02.738 2 DEBUG nova.compute.manager [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Refreshing instance network info cache due to event network-changed-ff63f6e3-cb88-42d6-98e3-4f78430c7896. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:42:02 np0005465987 nova_compute[230713]: 2025-10-02 12:42:02.738 2 DEBUG oslo_concurrency.lockutils [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:02 np0005465987 nova_compute[230713]: 2025-10-02 12:42:02.738 2 DEBUG oslo_concurrency.lockutils [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:02 np0005465987 nova_compute[230713]: 2025-10-02 12:42:02.738 2 DEBUG nova.network.neutron [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Refreshing network info cache for port ff63f6e3-cb88-42d6-98e3-4f78430c7896 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:42:02 np0005465987 nova_compute[230713]: 2025-10-02 12:42:02.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:04 np0005465987 nova_compute[230713]: 2025-10-02 12:42:04.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:04.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:04 np0005465987 podman[288890]: 2025-10-02 12:42:04.85249209 +0000 UTC m=+0.057282405 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 08:42:04 np0005465987 podman[288891]: 2025-10-02 12:42:04.855585295 +0000 UTC m=+0.057274945 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  2 08:42:04 np0005465987 podman[288889]: 2025-10-02 12:42:04.891650686 +0000 UTC m=+0.097639735 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 08:42:05 np0005465987 nova_compute[230713]: 2025-10-02 12:42:05.002 2 DEBUG nova.network.neutron [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updated VIF entry in instance network info cache for port ff63f6e3-cb88-42d6-98e3-4f78430c7896. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:42:05 np0005465987 nova_compute[230713]: 2025-10-02 12:42:05.003 2 DEBUG nova.network.neutron [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updating instance_info_cache with network_info: [{"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:05 np0005465987 nova_compute[230713]: 2025-10-02 12:42:05.057 2 DEBUG oslo_concurrency.lockutils [req-6b75bf9d-f636-4c54-95b0-488e57a0eef1 req-7bebc560-eb01-4219-b3e1-0708f66d4484 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:05 np0005465987 nova_compute[230713]: 2025-10-02 12:42:05.128 2 DEBUG nova.compute.manager [req-421d35f4-fb63-406e-86cc-0804a44146e6 req-07bad2e8-5881-45f3-851e-ec2b24c27950 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:05 np0005465987 nova_compute[230713]: 2025-10-02 12:42:05.128 2 DEBUG nova.compute.manager [req-421d35f4-fb63-406e-86cc-0804a44146e6 req-07bad2e8-5881-45f3-851e-ec2b24c27950 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing instance network info cache due to event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:42:05 np0005465987 nova_compute[230713]: 2025-10-02 12:42:05.128 2 DEBUG oslo_concurrency.lockutils [req-421d35f4-fb63-406e-86cc-0804a44146e6 req-07bad2e8-5881-45f3-851e-ec2b24c27950 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:05 np0005465987 nova_compute[230713]: 2025-10-02 12:42:05.129 2 DEBUG oslo_concurrency.lockutils [req-421d35f4-fb63-406e-86cc-0804a44146e6 req-07bad2e8-5881-45f3-851e-ec2b24c27950 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:05 np0005465987 nova_compute[230713]: 2025-10-02 12:42:05.129 2 DEBUG nova.network.neutron [req-421d35f4-fb63-406e-86cc-0804a44146e6 req-07bad2e8-5881-45f3-851e-ec2b24c27950 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:42:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:05.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:05 np0005465987 nova_compute[230713]: 2025-10-02 12:42:05.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:06.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:42:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:42:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:42:06 np0005465987 nova_compute[230713]: 2025-10-02 12:42:06.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:06 np0005465987 nova_compute[230713]: 2025-10-02 12:42:06.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:42:06 np0005465987 nova_compute[230713]: 2025-10-02 12:42:06.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:42:06 np0005465987 nova_compute[230713]: 2025-10-02 12:42:06.996 2 DEBUG nova.compute.manager [req-86220d7b-c79e-4379-8275-4c9f1700e774 req-743dced6-14fe-4493-90b5-d08f3b3de43c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:06 np0005465987 nova_compute[230713]: 2025-10-02 12:42:06.997 2 DEBUG nova.compute.manager [req-86220d7b-c79e-4379-8275-4c9f1700e774 req-743dced6-14fe-4493-90b5-d08f3b3de43c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing instance network info cache due to event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:42:06 np0005465987 nova_compute[230713]: 2025-10-02 12:42:06.997 2 DEBUG oslo_concurrency.lockutils [req-86220d7b-c79e-4379-8275-4c9f1700e774 req-743dced6-14fe-4493-90b5-d08f3b3de43c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:42:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2890916842' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:42:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:42:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2890916842' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:42:07 np0005465987 nova_compute[230713]: 2025-10-02 12:42:07.266 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:07 np0005465987 nova_compute[230713]: 2025-10-02 12:42:07.266 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:07 np0005465987 nova_compute[230713]: 2025-10-02 12:42:07.266 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:42:07 np0005465987 nova_compute[230713]: 2025-10-02 12:42:07.266 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:07 np0005465987 nova_compute[230713]: 2025-10-02 12:42:07.713 2 DEBUG nova.network.neutron [req-421d35f4-fb63-406e-86cc-0804a44146e6 req-07bad2e8-5881-45f3-851e-ec2b24c27950 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updated VIF entry in instance network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:42:07 np0005465987 nova_compute[230713]: 2025-10-02 12:42:07.713 2 DEBUG nova.network.neutron [req-421d35f4-fb63-406e-86cc-0804a44146e6 req-07bad2e8-5881-45f3-851e-ec2b24c27950 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updating instance_info_cache with network_info: [{"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:07.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:08 np0005465987 nova_compute[230713]: 2025-10-02 12:42:08.021 2 DEBUG oslo_concurrency.lockutils [req-421d35f4-fb63-406e-86cc-0804a44146e6 req-07bad2e8-5881-45f3-851e-ec2b24c27950 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:08 np0005465987 nova_compute[230713]: 2025-10-02 12:42:08.022 2 DEBUG oslo_concurrency.lockutils [req-86220d7b-c79e-4379-8275-4c9f1700e774 req-743dced6-14fe-4493-90b5-d08f3b3de43c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:08 np0005465987 nova_compute[230713]: 2025-10-02 12:42:08.022 2 DEBUG nova.network.neutron [req-86220d7b-c79e-4379-8275-4c9f1700e774 req-743dced6-14fe-4493-90b5-d08f3b3de43c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:42:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:08.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:08 np0005465987 nova_compute[230713]: 2025-10-02 12:42:08.957 2 DEBUG nova.compute.manager [req-7f7b3df7-42ed-4c90-a5ee-a172441e9a38 req-a27e413a-b559-417f-81f5-44093ba1d8c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:08 np0005465987 nova_compute[230713]: 2025-10-02 12:42:08.958 2 DEBUG oslo_concurrency.lockutils [req-7f7b3df7-42ed-4c90-a5ee-a172441e9a38 req-a27e413a-b559-417f-81f5-44093ba1d8c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:08 np0005465987 nova_compute[230713]: 2025-10-02 12:42:08.958 2 DEBUG oslo_concurrency.lockutils [req-7f7b3df7-42ed-4c90-a5ee-a172441e9a38 req-a27e413a-b559-417f-81f5-44093ba1d8c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:08 np0005465987 nova_compute[230713]: 2025-10-02 12:42:08.958 2 DEBUG oslo_concurrency.lockutils [req-7f7b3df7-42ed-4c90-a5ee-a172441e9a38 req-a27e413a-b559-417f-81f5-44093ba1d8c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:08 np0005465987 nova_compute[230713]: 2025-10-02 12:42:08.958 2 DEBUG nova.compute.manager [req-7f7b3df7-42ed-4c90-a5ee-a172441e9a38 req-a27e413a-b559-417f-81f5-44093ba1d8c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] No waiting events found dispatching network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:08 np0005465987 nova_compute[230713]: 2025-10-02 12:42:08.959 2 WARNING nova.compute.manager [req-7f7b3df7-42ed-4c90-a5ee-a172441e9a38 req-a27e413a-b559-417f-81f5-44093ba1d8c1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received unexpected event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 for instance with vm_state active and task_state resize_finish.#033[00m
Oct  2 08:42:09 np0005465987 nova_compute[230713]: 2025-10-02 12:42:09.144 2 DEBUG nova.compute.manager [req-f363dad6-f538-42d4-a42c-4442d8bcf272 req-376d2725-c7f8-46bd-83d0-ab6b6ba408f8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:09 np0005465987 nova_compute[230713]: 2025-10-02 12:42:09.145 2 DEBUG nova.compute.manager [req-f363dad6-f538-42d4-a42c-4442d8bcf272 req-376d2725-c7f8-46bd-83d0-ab6b6ba408f8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing instance network info cache due to event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:42:09 np0005465987 nova_compute[230713]: 2025-10-02 12:42:09.145 2 DEBUG oslo_concurrency.lockutils [req-f363dad6-f538-42d4-a42c-4442d8bcf272 req-376d2725-c7f8-46bd-83d0-ab6b6ba408f8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:09 np0005465987 nova_compute[230713]: 2025-10-02 12:42:09.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:09 np0005465987 nova_compute[230713]: 2025-10-02 12:42:09.205 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Updating instance_info_cache with network_info: [{"id": "933f98f8-c463-4299-8008-a38a3a76a0be", "address": "fa:16:3e:8f:19:7f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933f98f8-c4", "ovs_interfaceid": "933f98f8-c463-4299-8008-a38a3a76a0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:09 np0005465987 nova_compute[230713]: 2025-10-02 12:42:09.234 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:09 np0005465987 nova_compute[230713]: 2025-10-02 12:42:09.235 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:42:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:42:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:09.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:42:09 np0005465987 nova_compute[230713]: 2025-10-02 12:42:09.985 2 DEBUG nova.network.neutron [req-86220d7b-c79e-4379-8275-4c9f1700e774 req-743dced6-14fe-4493-90b5-d08f3b3de43c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updated VIF entry in instance network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:42:09 np0005465987 nova_compute[230713]: 2025-10-02 12:42:09.985 2 DEBUG nova.network.neutron [req-86220d7b-c79e-4379-8275-4c9f1700e774 req-743dced6-14fe-4493-90b5-d08f3b3de43c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updating instance_info_cache with network_info: [{"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:10 np0005465987 nova_compute[230713]: 2025-10-02 12:42:10.009 2 DEBUG oslo_concurrency.lockutils [req-86220d7b-c79e-4379-8275-4c9f1700e774 req-743dced6-14fe-4493-90b5-d08f3b3de43c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:10 np0005465987 nova_compute[230713]: 2025-10-02 12:42:10.010 2 DEBUG oslo_concurrency.lockutils [req-f363dad6-f538-42d4-a42c-4442d8bcf272 req-376d2725-c7f8-46bd-83d0-ab6b6ba408f8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:10 np0005465987 nova_compute[230713]: 2025-10-02 12:42:10.011 2 DEBUG nova.network.neutron [req-f363dad6-f538-42d4-a42c-4442d8bcf272 req-376d2725-c7f8-46bd-83d0-ab6b6ba408f8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:42:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:10.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:10 np0005465987 nova_compute[230713]: 2025-10-02 12:42:10.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.167 2 DEBUG nova.compute.manager [req-a3c6b7a3-0f7e-4888-b4dd-59d722a1d336 req-119b6972-0102-40d6-a0c0-e8510f242ba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.168 2 DEBUG oslo_concurrency.lockutils [req-a3c6b7a3-0f7e-4888-b4dd-59d722a1d336 req-119b6972-0102-40d6-a0c0-e8510f242ba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.168 2 DEBUG oslo_concurrency.lockutils [req-a3c6b7a3-0f7e-4888-b4dd-59d722a1d336 req-119b6972-0102-40d6-a0c0-e8510f242ba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.168 2 DEBUG oslo_concurrency.lockutils [req-a3c6b7a3-0f7e-4888-b4dd-59d722a1d336 req-119b6972-0102-40d6-a0c0-e8510f242ba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.168 2 DEBUG nova.compute.manager [req-a3c6b7a3-0f7e-4888-b4dd-59d722a1d336 req-119b6972-0102-40d6-a0c0-e8510f242ba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] No waiting events found dispatching network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.168 2 WARNING nova.compute.manager [req-a3c6b7a3-0f7e-4888-b4dd-59d722a1d336 req-119b6972-0102-40d6-a0c0-e8510f242ba5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Received unexpected event network-vif-plugged-ff63f6e3-cb88-42d6-98e3-4f78430c7896 for instance with vm_state resized and task_state None.#033[00m
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.357 2 DEBUG oslo_concurrency.lockutils [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "f3566799-fdd0-46bf-8256-0294a227030a" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.357 2 DEBUG oslo_concurrency.lockutils [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.358 2 DEBUG nova.compute.manager [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Going to confirm migration 20 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Oct  2 08:42:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:11.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.843 2 DEBUG neutronclient.v2_0.client [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ff63f6e3-cb88-42d6-98e3-4f78430c7896 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.844 2 DEBUG oslo_concurrency.lockutils [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.845 2 DEBUG oslo_concurrency.lockutils [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquired lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.845 2 DEBUG nova.network.neutron [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:42:11 np0005465987 nova_compute[230713]: 2025-10-02 12:42:11.845 2 DEBUG nova.objects.instance [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'info_cache' on Instance uuid f3566799-fdd0-46bf-8256-0294a227030a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:12 np0005465987 nova_compute[230713]: 2025-10-02 12:42:12.166 2 DEBUG nova.network.neutron [req-f363dad6-f538-42d4-a42c-4442d8bcf272 req-376d2725-c7f8-46bd-83d0-ab6b6ba408f8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updated VIF entry in instance network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:42:12 np0005465987 nova_compute[230713]: 2025-10-02 12:42:12.167 2 DEBUG nova.network.neutron [req-f363dad6-f538-42d4-a42c-4442d8bcf272 req-376d2725-c7f8-46bd-83d0-ab6b6ba408f8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updating instance_info_cache with network_info: [{"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:12 np0005465987 nova_compute[230713]: 2025-10-02 12:42:12.190 2 DEBUG oslo_concurrency.lockutils [req-f363dad6-f538-42d4-a42c-4442d8bcf272 req-376d2725-c7f8-46bd-83d0-ab6b6ba408f8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:12.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:42:13 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:42:13 np0005465987 nova_compute[230713]: 2025-10-02 12:42:13.694 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408918.6926951, f3566799-fdd0-46bf-8256-0294a227030a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:42:13 np0005465987 nova_compute[230713]: 2025-10-02 12:42:13.695 2 INFO nova.compute.manager [-] [instance: f3566799-fdd0-46bf-8256-0294a227030a] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:42:13 np0005465987 nova_compute[230713]: 2025-10-02 12:42:13.717 2 DEBUG nova.compute.manager [None req-3c2a58e1-338b-4408-81df-ec155e606ad9 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:13 np0005465987 nova_compute[230713]: 2025-10-02 12:42:13.720 2 DEBUG nova.compute.manager [None req-3c2a58e1-338b-4408-81df-ec155e606ad9 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:42:13 np0005465987 nova_compute[230713]: 2025-10-02 12:42:13.739 2 INFO nova.compute.manager [None req-3c2a58e1-338b-4408-81df-ec155e606ad9 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct  2 08:42:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:13.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:13 np0005465987 nova_compute[230713]: 2025-10-02 12:42:13.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:13 np0005465987 nova_compute[230713]: 2025-10-02 12:42:13.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:13 np0005465987 nova_compute[230713]: 2025-10-02 12:42:13.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.004 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.004 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.005 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.005 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.005 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:14.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:42:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/919380508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.464 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.589 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.590 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.590 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.593 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.593 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.596 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.596 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.758 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.759 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3910MB free_disk=20.8511962890625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.759 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.760 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.797 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Migration for instance f3566799-fdd0-46bf-8256-0294a227030a refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.839 2 INFO nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updating resource usage from migration 8a38cf1e-2f03-4cc6-90dc-90b3b654c95b#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.840 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Starting to track outgoing migration 8a38cf1e-2f03-4cc6-90dc-90b3b654c95b with flavor cef129e5-cce4-4465-9674-03d3559e8a14 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.882 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.882 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 70956832-8f1d-46df-8eeb-3f75ddf40e84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.882 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Migration 8a38cf1e-2f03-4cc6-90dc-90b3b654c95b is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.883 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.883 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:42:14 np0005465987 nova_compute[230713]: 2025-10-02 12:42:14.949 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:42:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4030536784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.403 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.410 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.425 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.431 2 DEBUG nova.network.neutron [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: f3566799-fdd0-46bf-8256-0294a227030a] Updating instance_info_cache with network_info: [{"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.447 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.447 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.454 2 DEBUG oslo_concurrency.lockutils [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Releasing lock "refresh_cache-f3566799-fdd0-46bf-8256-0294a227030a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.455 2 DEBUG nova.objects.instance [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'migration_context' on Instance uuid f3566799-fdd0-46bf-8256-0294a227030a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.497 2 DEBUG nova.storage.rbd_utils [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] rbd image f3566799-fdd0-46bf-8256-0294a227030a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.506 2 DEBUG nova.virt.libvirt.vif [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-73692290',display_name='tempest-ServerActionsTestOtherA-server-73692290',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-73692290',id=154,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMKHZAu99lFpnj9bTKPOhdWg5y6PnKM9AGAnFElgiPr53bQbl7DsEAEg0Hu4Ea2RYl8QhrjFhPMuXkYw2ubt4hnzcTRuj+jAHGGBwDWRc1fX16YtY5a2rZP1IKxVnq/Inw==',key_name='tempest-keypair-26130845',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:42:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-la5chqsu',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=f3566799-fdd0-46bf-8256-0294a227030a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", 
"type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.507 2 DEBUG nova.network.os_vif_util [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "address": "fa:16:3e:1d:c1:2f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff63f6e3-cb", "ovs_interfaceid": "ff63f6e3-cb88-42d6-98e3-4f78430c7896", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.508 2 DEBUG nova.network.os_vif_util [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.508 2 DEBUG os_vif [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff63f6e3-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.512 2 INFO os_vif [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:c1:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff63f6e3-cb88-42d6-98e3-4f78430c7896,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff63f6e3-cb')#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.513 2 DEBUG oslo_concurrency.lockutils [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.513 2 DEBUG oslo_concurrency.lockutils [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.599 2 DEBUG oslo_concurrency.processutils [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:15.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:15 np0005465987 nova_compute[230713]: 2025-10-02 12:42:15.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:42:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/558785531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.046 2 DEBUG oslo_concurrency.processutils [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.055 2 DEBUG nova.compute.provider_tree [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:42:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:16.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.345 2 DEBUG nova.scheduler.client.report [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.447 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.448 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.448 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.477 2 DEBUG oslo_concurrency.lockutils [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.613 2 INFO nova.scheduler.client.report [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Deleted allocation for migration 8a38cf1e-2f03-4cc6-90dc-90b3b654c95b#033[00m
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.787 2 DEBUG oslo_concurrency.lockutils [None req-413ddbe5-2f07-4d91-ad3d-a831cb430c3d c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "f3566799-fdd0-46bf-8256-0294a227030a" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.864 2 DEBUG nova.compute.manager [req-1997326f-9d33-4463-b64f-73d1430f7d02 req-a5c1943e-3da7-4241-a972-bb0f1cb764e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.865 2 DEBUG nova.compute.manager [req-1997326f-9d33-4463-b64f-73d1430f7d02 req-a5c1943e-3da7-4241-a972-bb0f1cb764e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing instance network info cache due to event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.865 2 DEBUG oslo_concurrency.lockutils [req-1997326f-9d33-4463-b64f-73d1430f7d02 req-a5c1943e-3da7-4241-a972-bb0f1cb764e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.865 2 DEBUG oslo_concurrency.lockutils [req-1997326f-9d33-4463-b64f-73d1430f7d02 req-a5c1943e-3da7-4241-a972-bb0f1cb764e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:16 np0005465987 nova_compute[230713]: 2025-10-02 12:42:16.866 2 DEBUG nova.network.neutron [req-1997326f-9d33-4463-b64f-73d1430f7d02 req-a5c1943e-3da7-4241-a972-bb0f1cb764e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:42:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:17.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:18.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:18 np0005465987 nova_compute[230713]: 2025-10-02 12:42:18.461 2 DEBUG nova.network.neutron [req-1997326f-9d33-4463-b64f-73d1430f7d02 req-a5c1943e-3da7-4241-a972-bb0f1cb764e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updated VIF entry in instance network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:42:18 np0005465987 nova_compute[230713]: 2025-10-02 12:42:18.462 2 DEBUG nova.network.neutron [req-1997326f-9d33-4463-b64f-73d1430f7d02 req-a5c1943e-3da7-4241-a972-bb0f1cb764e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updating instance_info_cache with network_info: [{"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:18 np0005465987 nova_compute[230713]: 2025-10-02 12:42:18.484 2 DEBUG oslo_concurrency.lockutils [req-1997326f-9d33-4463-b64f-73d1430f7d02 req-a5c1943e-3da7-4241-a972-bb0f1cb764e1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e344 e344: 3 total, 3 up, 3 in
Oct  2 08:42:19 np0005465987 nova_compute[230713]: 2025-10-02 12:42:19.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:19 np0005465987 nova_compute[230713]: 2025-10-02 12:42:19.734 2 DEBUG oslo_concurrency.lockutils [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:19 np0005465987 nova_compute[230713]: 2025-10-02 12:42:19.735 2 DEBUG oslo_concurrency.lockutils [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:19 np0005465987 nova_compute[230713]: 2025-10-02 12:42:19.735 2 INFO nova.compute.manager [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Rebooting instance#033[00m
Oct  2 08:42:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:19 np0005465987 nova_compute[230713]: 2025-10-02 12:42:19.763 2 DEBUG oslo_concurrency.lockutils [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:19 np0005465987 nova_compute[230713]: 2025-10-02 12:42:19.763 2 DEBUG oslo_concurrency.lockutils [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquired lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:19 np0005465987 nova_compute[230713]: 2025-10-02 12:42:19.764 2 DEBUG nova.network.neutron [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:42:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:19.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:20.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:20 np0005465987 nova_compute[230713]: 2025-10-02 12:42:20.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:21.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:22.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:22 np0005465987 nova_compute[230713]: 2025-10-02 12:42:22.768 2 DEBUG nova.network.neutron [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updating instance_info_cache with network_info: [{"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:22 np0005465987 nova_compute[230713]: 2025-10-02 12:42:22.807 2 DEBUG oslo_concurrency.lockutils [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Releasing lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:22 np0005465987 nova_compute[230713]: 2025-10-02 12:42:22.808 2 DEBUG nova.compute.manager [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:22 np0005465987 nova_compute[230713]: 2025-10-02 12:42:22.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:22 np0005465987 nova_compute[230713]: 2025-10-02 12:42:22.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:42:23 np0005465987 kernel: tape389cd1d-f5 (unregistering): left promiscuous mode
Oct  2 08:42:23 np0005465987 NetworkManager[44910]: <info>  [1759408943.4689] device (tape389cd1d-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:42:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:23Z|00589|binding|INFO|Releasing lport e389cd1d-f50c-460c-b8ab-e0a384159932 from this chassis (sb_readonly=0)
Oct  2 08:42:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:23Z|00590|binding|INFO|Setting lport e389cd1d-f50c-460c-b8ab-e0a384159932 down in Southbound
Oct  2 08:42:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:23Z|00591|binding|INFO|Removing iface tape389cd1d-f5 ovn-installed in OVS
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.488 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:87:e1 10.100.0.12'], port_security=['fa:16:3e:0c:87:e1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '70956832-8f1d-46df-8eeb-3f75ddf40e84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e53961-97c9-4d79-b2bc-ba336c204821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e53064cd4d645f09bd59bbca09b98e0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7ba41a0d-5d1a-413c-a336-411956633f9b e3789894-a1c1-47ad-b248-e3a6730a7778', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a249b7-28a6-4e82-83ff-74dd7c2a70f8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=e389cd1d-f50c-460c-b8ab-e0a384159932) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.490 138325 INFO neutron.agent.ovn.metadata.agent [-] Port e389cd1d-f50c-460c-b8ab-e0a384159932 in datapath 99e53961-97c9-4d79-b2bc-ba336c204821 unbound from our chassis#033[00m
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.492 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99e53961-97c9-4d79-b2bc-ba336c204821, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.494 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b945c4e9-27cd-4d96-84ab-c11dbf545332]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.494 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 namespace which is not needed anymore#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:23 np0005465987 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Oct  2 08:42:23 np0005465987 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000009c.scope: Consumed 15.558s CPU time.
Oct  2 08:42:23 np0005465987 systemd-machined[188335]: Machine qemu-70-instance-0000009c terminated.
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.604 2 INFO nova.virt.libvirt.driver [-] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Instance destroyed successfully.#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.605 2 DEBUG nova.objects.instance [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'resources' on Instance uuid 70956832-8f1d-46df-8eeb-3f75ddf40e84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.626 2 DEBUG nova.virt.libvirt.vif [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1443375856',display_name='tempest-TestMinimumBasicScenario-server-1443375856',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1443375856',id=156,image_ref='881d7140-1aef-4d5d-b04d-76b7b19d6837',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGzB0sJYXYPOfWvtpNcxNGXcGvdcBf7QwVJ9OKoFMmmKZSO6fZE5N4Z9luCuMJc0D1eqyZgLUzVemVKb7pJpNrlCg/Ws/Z2apKl6RmNDh6IMXB331e3RDriHaz2BQyxtg==',key_name='tempest-TestMinimumBasicScenario-222087753',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2e53064cd4d645f09bd59bbca09b98e0',ramdisk_id='',reservation_id='r-0su4hzgy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='881d7140-1aef-4d5d-b04d-76b7b19d6837',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-999813940',owner_user_name='tempest-TestMinimumBasicScenario-999813940-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:22Z,user_data=None,user_id='734ae44830d540d8ab51c2a3d75ecd80',uuid=70956832-8f1d-46df-8eeb-3f75ddf40e84,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.627 2 DEBUG nova.network.os_vif_util [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converting VIF {"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.628 2 DEBUG nova.network.os_vif_util [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.628 2 DEBUG os_vif [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.631 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape389cd1d-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:23 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[288788]: [NOTICE]   (288800) : haproxy version is 2.8.14-c23fe91
Oct  2 08:42:23 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[288788]: [NOTICE]   (288800) : path to executable is /usr/sbin/haproxy
Oct  2 08:42:23 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[288788]: [WARNING]  (288800) : Exiting Master process...
Oct  2 08:42:23 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[288788]: [WARNING]  (288800) : Exiting Master process...
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:23 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[288788]: [ALERT]    (288800) : Current worker (288802) exited with code 143 (Terminated)
Oct  2 08:42:23 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[288788]: [WARNING]  (288800) : All workers exited. Exiting... (0)
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.635 2 INFO os_vif [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5')#033[00m
Oct  2 08:42:23 np0005465987 systemd[1]: libpod-323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04.scope: Deactivated successfully.
Oct  2 08:42:23 np0005465987 conmon[288788]: conmon 323c697633a5347da6d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04.scope/container/memory.events
Oct  2 08:42:23 np0005465987 podman[289248]: 2025-10-02 12:42:23.644477524 +0000 UTC m=+0.050800298 container died 323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.647 2 DEBUG nova.virt.libvirt.driver [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Start _get_guest_xml network_info=[{"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=881d7140-1aef-4d5d-b04d-76b7b19d6837,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': '881d7140-1aef-4d5d-b04d-76b7b19d6837'}], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '5e3f76e3-514c-43ff-9e3a-a3c21da6cf10', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '70956832-8f1d-46df-8eeb-3f75ddf40e84', 'attached_at': '', 'detached_at': '', 'volume_id': 'daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef', 'serial': 'daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef'}, 'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'boot_index': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.651 2 WARNING nova.virt.libvirt.driver [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.656 2 DEBUG nova.virt.libvirt.host [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.657 2 DEBUG nova.virt.libvirt.host [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.662 2 DEBUG nova.virt.libvirt.host [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.662 2 DEBUG nova.virt.libvirt.host [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.663 2 DEBUG nova.virt.libvirt.driver [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.663 2 DEBUG nova.virt.hardware [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=881d7140-1aef-4d5d-b04d-76b7b19d6837,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.664 2 DEBUG nova.virt.hardware [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.664 2 DEBUG nova.virt.hardware [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.664 2 DEBUG nova.virt.hardware [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.664 2 DEBUG nova.virt.hardware [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.664 2 DEBUG nova.virt.hardware [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.665 2 DEBUG nova.virt.hardware [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.665 2 DEBUG nova.virt.hardware [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.665 2 DEBUG nova.virt.hardware [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.665 2 DEBUG nova.virt.hardware [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.665 2 DEBUG nova.virt.hardware [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.665 2 DEBUG nova.objects.instance [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 70956832-8f1d-46df-8eeb-3f75ddf40e84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:23 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04-userdata-shm.mount: Deactivated successfully.
Oct  2 08:42:23 np0005465987 systemd[1]: var-lib-containers-storage-overlay-41ccc671353d98e2673345b2790dafd542fc33d9f8ef6ae72c396debc5c0e8b4-merged.mount: Deactivated successfully.
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.683 2 DEBUG oslo_concurrency.processutils [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:23 np0005465987 podman[289248]: 2025-10-02 12:42:23.695019204 +0000 UTC m=+0.101341968 container cleanup 323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:42:23 np0005465987 systemd[1]: libpod-conmon-323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04.scope: Deactivated successfully.
Oct  2 08:42:23 np0005465987 podman[289286]: 2025-10-02 12:42:23.788484003 +0000 UTC m=+0.067939158 container remove 323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.795 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[108d9769-dee7-4fe9-ad3d-4b6f022c8079]: (4, ('Thu Oct  2 12:42:23 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 (323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04)\n323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04\nThu Oct  2 12:42:23 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 (323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04)\n323c697633a5347da6d954ff3a0871fb9818593cee90f87455b4b7d65b807b04\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:23.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.798 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[27e6dbbf-8f2e-4342-92f0-ec858385c136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.800 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e53961-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:23 np0005465987 kernel: tap99e53961-90: left promiscuous mode
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.806 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1f812a6b-f61e-47e9-81a9-eb43301171ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:23 np0005465987 nova_compute[230713]: 2025-10-02 12:42:23.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.838 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[53580b34-b685-40b6-a1eb-5dd3fbea19dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.839 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[42f07882-e493-40c4-9d16-0bece19d3543]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.854 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[31486b24-8ab8-4495-8cbd-ce287879b091]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697509, 'reachable_time': 22375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289320, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:23 np0005465987 systemd[1]: run-netns-ovnmeta\x2d99e53961\x2d97c9\x2d4d79\x2db2bc\x2dba336c204821.mount: Deactivated successfully.
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.857 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:42:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:23.857 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[00b6a2a7-0021-420e-b7a3-e04e98ddfcf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.126 2 DEBUG oslo_concurrency.processutils [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.157 2 DEBUG oslo_concurrency.processutils [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:24.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:42:24 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/247854997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.582 2 DEBUG oslo_concurrency.processutils [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.636 2 DEBUG nova.virt.libvirt.vif [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1443375856',display_name='tempest-TestMinimumBasicScenario-server-1443375856',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1443375856',id=156,image_ref='881d7140-1aef-4d5d-b04d-76b7b19d6837',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGzB0sJYXYPOfWvtpNcxNGXcGvdcBf7QwVJ9OKoFMmmKZSO6fZE5N4Z9luCuMJc0D1eqyZgLUzVemVKb7pJpNrlCg/Ws/Z2apKl6RmNDh6IMXB331e3RDriHaz2BQyxtg==',key_name='tempest-TestMinimumBasicScenario-222087753',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2e53064cd4d645f09bd59bbca09b98e0',ramdisk_id='',reservation_id='r-0su4hzgy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='881d7140-1aef-4d5d-b04d-76b7b19d6837',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-999813940',owner_user_name='tempest-TestMinimumBasicScenario-999813940-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:22Z,user_data=None,user_id='734ae44830d540d8ab51c2a3d75ecd80',uuid=70956832-8f1d-46df-8eeb-3f75ddf40e84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.636 2 DEBUG nova.network.os_vif_util [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converting VIF {"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.637 2 DEBUG nova.network.os_vif_util [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.638 2 DEBUG nova.objects.instance [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 70956832-8f1d-46df-8eeb-3f75ddf40e84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.651 2 DEBUG nova.virt.libvirt.driver [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  <uuid>70956832-8f1d-46df-8eeb-3f75ddf40e84</uuid>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  <name>instance-0000009c</name>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestMinimumBasicScenario-server-1443375856</nova:name>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:42:23</nova:creationTime>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <nova:user uuid="734ae44830d540d8ab51c2a3d75ecd80">tempest-TestMinimumBasicScenario-999813940-project-member</nova:user>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <nova:project uuid="2e53064cd4d645f09bd59bbca09b98e0">tempest-TestMinimumBasicScenario-999813940</nova:project>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="881d7140-1aef-4d5d-b04d-76b7b19d6837"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <nova:port uuid="e389cd1d-f50c-460c-b8ab-e0a384159932">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <entry name="serial">70956832-8f1d-46df-8eeb-3f75ddf40e84</entry>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <entry name="uuid">70956832-8f1d-46df-8eeb-3f75ddf40e84</entry>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/70956832-8f1d-46df-8eeb-3f75ddf40e84_disk">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/70956832-8f1d-46df-8eeb-3f75ddf40e84_disk.config">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="volumes/volume-daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <serial>daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef</serial>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:0c:87:e1"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <target dev="tape389cd1d-f5"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/70956832-8f1d-46df-8eeb-3f75ddf40e84/console.log" append="off"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <input type="keyboard" bus="usb"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:42:24 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:42:24 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:42:24 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:42:24 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.652 2 DEBUG nova.virt.libvirt.driver [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.652 2 DEBUG nova.virt.libvirt.driver [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.652 2 DEBUG nova.virt.libvirt.driver [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.652 2 DEBUG nova.virt.libvirt.vif [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1443375856',display_name='tempest-TestMinimumBasicScenario-server-1443375856',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1443375856',id=156,image_ref='881d7140-1aef-4d5d-b04d-76b7b19d6837',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGzB0sJYXYPOfWvtpNcxNGXcGvdcBf7QwVJ9OKoFMmmKZSO6fZE5N4Z9luCuMJc0D1eqyZgLUzVemVKb7pJpNrlCg/Ws/Z2apKl6RmNDh6IMXB331e3RDriHaz2BQyxtg==',key_name='tempest-TestMinimumBasicScenario-222087753',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='2e53064cd4d645f09bd59bbca09b98e0',ramdisk_id='',reservation_id='r-0su4hzgy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='881d7140-1aef-4d5d-b04d-76b7b19d6837',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-999813940',owner_user_name='tempest-TestMinimumBasicScenario-999813940-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:22Z,user_data=None,user_id='734ae44830d540d8ab51c2a3d75ecd80',uuid=70956832-8f1d-46df-8eeb-3f75ddf40e84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.653 2 DEBUG nova.network.os_vif_util [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converting VIF {"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.653 2 DEBUG nova.network.os_vif_util [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.653 2 DEBUG os_vif [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.654 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.655 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.658 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape389cd1d-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.658 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape389cd1d-f5, col_values=(('external_ids', {'iface-id': 'e389cd1d-f50c-460c-b8ab-e0a384159932', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:87:e1', 'vm-uuid': '70956832-8f1d-46df-8eeb-3f75ddf40e84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:24 np0005465987 NetworkManager[44910]: <info>  [1759408944.6607] manager: (tape389cd1d-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.666 2 INFO os_vif [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5')#033[00m
Oct  2 08:42:24 np0005465987 kernel: tape389cd1d-f5: entered promiscuous mode
Oct  2 08:42:24 np0005465987 systemd-udevd[289228]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:42:24 np0005465987 NetworkManager[44910]: <info>  [1759408944.7386] manager: (tape389cd1d-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/278)
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:24Z|00592|binding|INFO|Claiming lport e389cd1d-f50c-460c-b8ab-e0a384159932 for this chassis.
Oct  2 08:42:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:24Z|00593|binding|INFO|e389cd1d-f50c-460c-b8ab-e0a384159932: Claiming fa:16:3e:0c:87:e1 10.100.0.12
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.745 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:87:e1 10.100.0.12'], port_security=['fa:16:3e:0c:87:e1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '70956832-8f1d-46df-8eeb-3f75ddf40e84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e53961-97c9-4d79-b2bc-ba336c204821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e53064cd4d645f09bd59bbca09b98e0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7ba41a0d-5d1a-413c-a336-411956633f9b e3789894-a1c1-47ad-b248-e3a6730a7778', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a249b7-28a6-4e82-83ff-74dd7c2a70f8, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=e389cd1d-f50c-460c-b8ab-e0a384159932) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:42:24 np0005465987 NetworkManager[44910]: <info>  [1759408944.7484] device (tape389cd1d-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:42:24 np0005465987 NetworkManager[44910]: <info>  [1759408944.7501] device (tape389cd1d-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.752 138325 INFO neutron.agent.ovn.metadata.agent [-] Port e389cd1d-f50c-460c-b8ab-e0a384159932 in datapath 99e53961-97c9-4d79-b2bc-ba336c204821 bound to our chassis#033[00m
Oct  2 08:42:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:24Z|00594|binding|INFO|Setting lport e389cd1d-f50c-460c-b8ab-e0a384159932 ovn-installed in OVS
Oct  2 08:42:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:24Z|00595|binding|INFO|Setting lport e389cd1d-f50c-460c-b8ab-e0a384159932 up in Southbound
Oct  2 08:42:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.756 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99e53961-97c9-4d79-b2bc-ba336c204821#033[00m
Oct  2 08:42:24 np0005465987 nova_compute[230713]: 2025-10-02 12:42:24.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.766 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[35baaddf-5c10-4a47-8391-a8a233f17376]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.767 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99e53961-91 in ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.770 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99e53961-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.770 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8af80e02-debd-4c26-80c8-ef4efae02673]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.771 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fd744924-e76a-4b49-abae-68fc3dda996f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.783 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[306a0625-106e-46d0-86bb-55cf455edfaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 systemd-machined[188335]: New machine qemu-71-instance-0000009c.
Oct  2 08:42:24 np0005465987 systemd[1]: Started Virtual Machine qemu-71-instance-0000009c.
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.800 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[903efd34-f226-4de8-b6e9-dc1e95a79aca]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.832 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[54464dd0-3b93-4869-bd80-4dd6c6d75a38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.836 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ecee3efc-06e0-41cc-9af1-9c6a1d927b3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 NetworkManager[44910]: <info>  [1759408944.8382] manager: (tap99e53961-90): new Veth device (/org/freedesktop/NetworkManager/Devices/279)
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.868 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2a3bd5-4e84-47ef-89bd-bde79ec2ce15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.871 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e0ee91a6-29ad-4a2e-8c49-d0afdd22533a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 NetworkManager[44910]: <info>  [1759408944.8981] device (tap99e53961-90): carrier: link connected
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.904 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6993376e-9ca0-407b-8743-91e286bb854f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.924 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1c069d51-0b87-4de7-8622-d23ae416437e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99e53961-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:15:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701347, 'reachable_time': 35647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289408, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.938 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a2cfcabc-762f-4d4e-b073-ba7df1b92af1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:1562'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701347, 'tstamp': 701347}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289409, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.956 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bfab82c1-93b0-4f30-86c9-cd66558c278e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99e53961-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:15:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701347, 'reachable_time': 35647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289410, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:24.985 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7a73165d-5e4a-4de6-a985-e03dcab19604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:25.044 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[407aa3a5-85f5-4e79-82a9-27d5b1c4c02e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:25.046 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e53961-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:25.046 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:25.046 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99e53961-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:25 np0005465987 kernel: tap99e53961-90: entered promiscuous mode
Oct  2 08:42:25 np0005465987 NetworkManager[44910]: <info>  [1759408945.0498] manager: (tap99e53961-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:25.054 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99e53961-90, col_values=(('external_ids', {'iface-id': '15e9653f-d8e0-49a9-859e-c8a4f714df4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:25 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:25Z|00596|binding|INFO|Releasing lport 15e9653f-d8e0-49a9-859e-c8a4f714df4e from this chassis (sb_readonly=0)
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:25.058 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99e53961-97c9-4d79-b2bc-ba336c204821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99e53961-97c9-4d79-b2bc-ba336c204821.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:25.059 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8b47065e-e35e-4f13-b1fb-04a48812f0d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:25.060 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-99e53961-97c9-4d79-b2bc-ba336c204821
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/99e53961-97c9-4d79-b2bc-ba336c204821.pid.haproxy
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 99e53961-97c9-4d79-b2bc-ba336c204821
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:42:25 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:25.061 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'env', 'PROCESS_TAG=haproxy-99e53961-97c9-4d79-b2bc-ba336c204821', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99e53961-97c9-4d79-b2bc-ba336c204821.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.338 2 DEBUG nova.compute.manager [req-cc534257-bf99-4f51-9928-27d7e05f1421 req-a68f2aab-3d6a-40ab-8c4f-fce89a39b5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-vif-unplugged-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.338 2 DEBUG oslo_concurrency.lockutils [req-cc534257-bf99-4f51-9928-27d7e05f1421 req-a68f2aab-3d6a-40ab-8c4f-fce89a39b5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.339 2 DEBUG oslo_concurrency.lockutils [req-cc534257-bf99-4f51-9928-27d7e05f1421 req-a68f2aab-3d6a-40ab-8c4f-fce89a39b5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.339 2 DEBUG oslo_concurrency.lockutils [req-cc534257-bf99-4f51-9928-27d7e05f1421 req-a68f2aab-3d6a-40ab-8c4f-fce89a39b5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.340 2 DEBUG nova.compute.manager [req-cc534257-bf99-4f51-9928-27d7e05f1421 req-a68f2aab-3d6a-40ab-8c4f-fce89a39b5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] No waiting events found dispatching network-vif-unplugged-e389cd1d-f50c-460c-b8ab-e0a384159932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.340 2 WARNING nova.compute.manager [req-cc534257-bf99-4f51-9928-27d7e05f1421 req-a68f2aab-3d6a-40ab-8c4f-fce89a39b5a8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received unexpected event network-vif-unplugged-e389cd1d-f50c-460c-b8ab-e0a384159932 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  2 08:42:25 np0005465987 podman[289501]: 2025-10-02 12:42:25.448054403 +0000 UTC m=+0.055722453 container create eec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:42:25 np0005465987 systemd[1]: Started libpod-conmon-eec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6.scope.
Oct  2 08:42:25 np0005465987 podman[289501]: 2025-10-02 12:42:25.422242474 +0000 UTC m=+0.029910574 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:42:25 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:42:25 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fef047c5389d39989256e48f02ae0a72c22295d2b40b45001ed14edbce5fc35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:42:25 np0005465987 podman[289501]: 2025-10-02 12:42:25.555277731 +0000 UTC m=+0.162945841 container init eec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:42:25 np0005465987 podman[289501]: 2025-10-02 12:42:25.561554103 +0000 UTC m=+0.169222163 container start eec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 08:42:25 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[289516]: [NOTICE]   (289520) : New worker (289522) forked
Oct  2 08:42:25 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[289516]: [NOTICE]   (289520) : Loading success.
Oct  2 08:42:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:25.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.814 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for 70956832-8f1d-46df-8eeb-3f75ddf40e84 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.815 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408945.813883, 70956832-8f1d-46df-8eeb-3f75ddf40e84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.815 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.816 2 DEBUG nova.compute.manager [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.819 2 INFO nova.virt.libvirt.driver [-] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Instance rebooted successfully.#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.820 2 DEBUG nova.compute.manager [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.834 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.838 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.870 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.870 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759408945.814037, 70956832-8f1d-46df-8eeb-3f75ddf40e84 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.870 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] VM Started (Lifecycle Event)#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.875 2 DEBUG oslo_concurrency.lockutils [None req-2a41390c-065c-40d3-811b-f06ad4fc4254 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.891 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.895 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:42:25 np0005465987 nova_compute[230713]: 2025-10-02 12:42:25.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:26.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.522 2 DEBUG nova.compute.manager [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.523 2 DEBUG oslo_concurrency.lockutils [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.523 2 DEBUG oslo_concurrency.lockutils [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.523 2 DEBUG oslo_concurrency.lockutils [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.524 2 DEBUG nova.compute.manager [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] No waiting events found dispatching network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.524 2 WARNING nova.compute.manager [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received unexpected event network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.524 2 DEBUG nova.compute.manager [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.525 2 DEBUG oslo_concurrency.lockutils [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.525 2 DEBUG oslo_concurrency.lockutils [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.525 2 DEBUG oslo_concurrency.lockutils [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.525 2 DEBUG nova.compute.manager [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] No waiting events found dispatching network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.526 2 WARNING nova.compute.manager [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received unexpected event network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.526 2 DEBUG nova.compute.manager [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.526 2 DEBUG oslo_concurrency.lockutils [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.527 2 DEBUG oslo_concurrency.lockutils [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.527 2 DEBUG oslo_concurrency.lockutils [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.527 2 DEBUG nova.compute.manager [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] No waiting events found dispatching network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:27 np0005465987 nova_compute[230713]: 2025-10-02 12:42:27.527 2 WARNING nova.compute.manager [req-dc3a6ed4-ae9c-44b1-bac0-0ac010eaff8a req-4864d998-8d04-4a20-8675-24f0ad7e65ad d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received unexpected event network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:42:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:42:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:27.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:42:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:27.967 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:27.968 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:27.968 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:28.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e345 e345: 3 total, 3 up, 3 in
Oct  2 08:42:29 np0005465987 nova_compute[230713]: 2025-10-02 12:42:29.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:29.599 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:42:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:29.601 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:42:29 np0005465987 nova_compute[230713]: 2025-10-02 12:42:29.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:29.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:30.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:30 np0005465987 podman[289531]: 2025-10-02 12:42:30.849549256 +0000 UTC m=+0.055995101 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:42:30 np0005465987 nova_compute[230713]: 2025-10-02 12:42:30.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:31 np0005465987 nova_compute[230713]: 2025-10-02 12:42:31.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:31.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:32.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:33.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:34.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:34 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:34.603 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:34Z|00597|memory|INFO|peak resident set size grew 50% in last 3535.0 seconds, from 16384 kB to 24632 kB
Oct  2 08:42:34 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:34Z|00598|memory|INFO|idl-cells-OVN_Southbound:11936 idl-cells-Open_vSwitch:984 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:411 lflow-cache-entries-cache-matches:299 lflow-cache-size-KB:1602 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:697 ofctrl_installed_flow_usage-KB:510 ofctrl_sb_flow_ref_usage-KB:260
Oct  2 08:42:34 np0005465987 nova_compute[230713]: 2025-10-02 12:42:34.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:35.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:35 np0005465987 podman[289552]: 2025-10-02 12:42:35.867538647 +0000 UTC m=+0.072973297 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:42:35 np0005465987 podman[289553]: 2025-10-02 12:42:35.894437255 +0000 UTC m=+0.094108567 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Oct  2 08:42:35 np0005465987 podman[289551]: 2025-10-02 12:42:35.89461669 +0000 UTC m=+0.103013432 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:42:36 np0005465987 nova_compute[230713]: 2025-10-02 12:42:36.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:36.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:37.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:38.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:39 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:39Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:87:e1 10.100.0.12
Oct  2 08:42:39 np0005465987 nova_compute[230713]: 2025-10-02 12:42:39.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:42:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:39.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:42:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:40.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:42:40 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/619613514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:42:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:42:40 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/619613514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:42:41 np0005465987 nova_compute[230713]: 2025-10-02 12:42:41.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:41.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:41 np0005465987 nova_compute[230713]: 2025-10-02 12:42:41.945 2 DEBUG oslo_concurrency.lockutils [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:41 np0005465987 nova_compute[230713]: 2025-10-02 12:42:41.946 2 DEBUG oslo_concurrency.lockutils [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:41 np0005465987 nova_compute[230713]: 2025-10-02 12:42:41.946 2 DEBUG oslo_concurrency.lockutils [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:41 np0005465987 nova_compute[230713]: 2025-10-02 12:42:41.947 2 DEBUG oslo_concurrency.lockutils [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:41 np0005465987 nova_compute[230713]: 2025-10-02 12:42:41.947 2 DEBUG oslo_concurrency.lockutils [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:41 np0005465987 nova_compute[230713]: 2025-10-02 12:42:41.949 2 INFO nova.compute.manager [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Terminating instance#033[00m
Oct  2 08:42:41 np0005465987 nova_compute[230713]: 2025-10-02 12:42:41.951 2 DEBUG nova.compute.manager [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:42:42 np0005465987 kernel: tap933f98f8-c4 (unregistering): left promiscuous mode
Oct  2 08:42:42 np0005465987 NetworkManager[44910]: <info>  [1759408962.0271] device (tap933f98f8-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:42:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:42Z|00599|binding|INFO|Releasing lport 933f98f8-c463-4299-8008-a38a3a76a0be from this chassis (sb_readonly=0)
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:42Z|00600|binding|INFO|Setting lport 933f98f8-c463-4299-8008-a38a3a76a0be down in Southbound
Oct  2 08:42:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:42:42Z|00601|binding|INFO|Removing iface tap933f98f8-c4 ovn-installed in OVS
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.045 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:19:7f 10.100.0.11'], port_security=['fa:16:3e:8f:19:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00455285-97a7-4fa2-ba83-e8060936877e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6822f02d5ca04c659329a75d487054cf', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f978d0a7-f86b-440f-a8b5-5432c3a4bc91, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=933f98f8-c463-4299-8008-a38a3a76a0be) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.047 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 933f98f8-c463-4299-8008-a38a3a76a0be in datapath 00455285-97a7-4fa2-ba83-e8060936877e unbound from our chassis#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.050 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00455285-97a7-4fa2-ba83-e8060936877e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.052 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b7824f54-989a-48d1-9441-825cceac8943]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.053 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e namespace which is not needed anymore#033[00m
Oct  2 08:42:42 np0005465987 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000097.scope: Deactivated successfully.
Oct  2 08:42:42 np0005465987 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000097.scope: Consumed 18.590s CPU time.
Oct  2 08:42:42 np0005465987 systemd-machined[188335]: Machine qemu-68-instance-00000097 terminated.
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.196 2 INFO nova.virt.libvirt.driver [-] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Instance destroyed successfully.#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.196 2 DEBUG nova.objects.instance [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lazy-loading 'resources' on Instance uuid 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:42 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[287834]: [NOTICE]   (287838) : haproxy version is 2.8.14-c23fe91
Oct  2 08:42:42 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[287834]: [NOTICE]   (287838) : path to executable is /usr/sbin/haproxy
Oct  2 08:42:42 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[287834]: [WARNING]  (287838) : Exiting Master process...
Oct  2 08:42:42 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[287834]: [ALERT]    (287838) : Current worker (287840) exited with code 143 (Terminated)
Oct  2 08:42:42 np0005465987 neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e[287834]: [WARNING]  (287838) : All workers exited. Exiting... (0)
Oct  2 08:42:42 np0005465987 systemd[1]: libpod-ef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca.scope: Deactivated successfully.
Oct  2 08:42:42 np0005465987 podman[289642]: 2025-10-02 12:42:42.209992155 +0000 UTC m=+0.055341502 container died ef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:42:42 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca-userdata-shm.mount: Deactivated successfully.
Oct  2 08:42:42 np0005465987 systemd[1]: var-lib-containers-storage-overlay-928277ab5a6c5269999ced4fe1fb97a02e59512abc35780b75e45ad3618d762c-merged.mount: Deactivated successfully.
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.243 2 DEBUG nova.virt.libvirt.vif [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:40:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1365792511',display_name='tempest-ServerActionsTestOtherA-server-1365792511',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1365792511',id=151,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:40:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6822f02d5ca04c659329a75d487054cf',ramdisk_id='',reservation_id='r-p6gd7ic0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1680083910',owner_user_name='tempest-ServerActionsTestOtherA-1680083910-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:40:46Z,user_data=None,user_id='c02d1dcc10ea4e57bbc6b7a3c100dc7b',uuid=31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "933f98f8-c463-4299-8008-a38a3a76a0be", "address": "fa:16:3e:8f:19:7f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933f98f8-c4", "ovs_interfaceid": "933f98f8-c463-4299-8008-a38a3a76a0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.243 2 DEBUG nova.network.os_vif_util [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converting VIF {"id": "933f98f8-c463-4299-8008-a38a3a76a0be", "address": "fa:16:3e:8f:19:7f", "network": {"id": "00455285-97a7-4fa2-ba83-e8060936877e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1293599148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6822f02d5ca04c659329a75d487054cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933f98f8-c4", "ovs_interfaceid": "933f98f8-c463-4299-8008-a38a3a76a0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.244 2 DEBUG nova.network.os_vif_util [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:19:7f,bridge_name='br-int',has_traffic_filtering=True,id=933f98f8-c463-4299-8008-a38a3a76a0be,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933f98f8-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.245 2 DEBUG os_vif [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:19:7f,bridge_name='br-int',has_traffic_filtering=True,id=933f98f8-c463-4299-8008-a38a3a76a0be,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933f98f8-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap933f98f8-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.252 2 INFO os_vif [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:19:7f,bridge_name='br-int',has_traffic_filtering=True,id=933f98f8-c463-4299-8008-a38a3a76a0be,network=Network(00455285-97a7-4fa2-ba83-e8060936877e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933f98f8-c4')#033[00m
Oct  2 08:42:42 np0005465987 podman[289642]: 2025-10-02 12:42:42.253903692 +0000 UTC m=+0.099253049 container cleanup ef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:42:42 np0005465987 systemd[1]: libpod-conmon-ef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca.scope: Deactivated successfully.
Oct  2 08:42:42 np0005465987 podman[289694]: 2025-10-02 12:42:42.327949018 +0000 UTC m=+0.045367318 container remove ef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:42:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.333 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[64a27d7c-3555-43e1-9371-fa31cdf6b55f]: (4, ('Thu Oct  2 12:42:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e (ef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca)\nef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca\nThu Oct  2 12:42:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e (ef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca)\nef93bbbf32d5aedd20bc8eee470bd5aa51f51f559cf72741f97a8997426f61ca\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:42.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.335 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa1a61a-2a4f-490d-ac5b-90067abc043e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.336 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00455285-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:42 np0005465987 kernel: tap00455285-90: left promiscuous mode
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.354 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ea484391-7933-45de-88c9-3a23e7b6b727]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.391 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[061c02de-80ec-4e36-9d2e-3e1f3e0e82a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.392 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fb225bfb-514c-453d-8eb0-495a6a16e909]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.407 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c25a1078-693b-473e-b09a-5c2210a05a05]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691327, 'reachable_time': 17131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289715, 'error': None, 'target': 'ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:42 np0005465987 systemd[1]: run-netns-ovnmeta\x2d00455285\x2d97a7\x2d4fa2\x2dba83\x2de8060936877e.mount: Deactivated successfully.
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.410 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00455285-97a7-4fa2-ba83-e8060936877e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:42:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:42:42.410 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[d79bf88e-2b1c-40bd-a646-926d0950d028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.621 2 DEBUG nova.compute.manager [req-23c8ebe5-c78b-4c80-9800-6692fd6692c1 req-92af5891-2308-416c-aa19-51b88d9e6ff4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Received event network-vif-unplugged-933f98f8-c463-4299-8008-a38a3a76a0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.621 2 DEBUG oslo_concurrency.lockutils [req-23c8ebe5-c78b-4c80-9800-6692fd6692c1 req-92af5891-2308-416c-aa19-51b88d9e6ff4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.621 2 DEBUG oslo_concurrency.lockutils [req-23c8ebe5-c78b-4c80-9800-6692fd6692c1 req-92af5891-2308-416c-aa19-51b88d9e6ff4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.622 2 DEBUG oslo_concurrency.lockutils [req-23c8ebe5-c78b-4c80-9800-6692fd6692c1 req-92af5891-2308-416c-aa19-51b88d9e6ff4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.622 2 DEBUG nova.compute.manager [req-23c8ebe5-c78b-4c80-9800-6692fd6692c1 req-92af5891-2308-416c-aa19-51b88d9e6ff4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] No waiting events found dispatching network-vif-unplugged-933f98f8-c463-4299-8008-a38a3a76a0be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:42 np0005465987 nova_compute[230713]: 2025-10-02 12:42:42.622 2 DEBUG nova.compute.manager [req-23c8ebe5-c78b-4c80-9800-6692fd6692c1 req-92af5891-2308-416c-aa19-51b88d9e6ff4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Received event network-vif-unplugged-933f98f8-c463-4299-8008-a38a3a76a0be for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:42:43 np0005465987 nova_compute[230713]: 2025-10-02 12:42:43.662 2 INFO nova.virt.libvirt.driver [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Deleting instance files /var/lib/nova/instances/31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_del#033[00m
Oct  2 08:42:43 np0005465987 nova_compute[230713]: 2025-10-02 12:42:43.664 2 INFO nova.virt.libvirt.driver [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Deletion of /var/lib/nova/instances/31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc_del complete#033[00m
Oct  2 08:42:43 np0005465987 nova_compute[230713]: 2025-10-02 12:42:43.756 2 INFO nova.compute.manager [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Took 1.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:42:43 np0005465987 nova_compute[230713]: 2025-10-02 12:42:43.757 2 DEBUG oslo.service.loopingcall [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:42:43 np0005465987 nova_compute[230713]: 2025-10-02 12:42:43.757 2 DEBUG nova.compute.manager [-] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:42:43 np0005465987 nova_compute[230713]: 2025-10-02 12:42:43.758 2 DEBUG nova.network.neutron [-] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:42:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:43.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:44.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:44 np0005465987 nova_compute[230713]: 2025-10-02 12:42:44.746 2 DEBUG nova.compute.manager [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Received event network-vif-plugged-933f98f8-c463-4299-8008-a38a3a76a0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:44 np0005465987 nova_compute[230713]: 2025-10-02 12:42:44.747 2 DEBUG oslo_concurrency.lockutils [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:44 np0005465987 nova_compute[230713]: 2025-10-02 12:42:44.747 2 DEBUG oslo_concurrency.lockutils [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:44 np0005465987 nova_compute[230713]: 2025-10-02 12:42:44.747 2 DEBUG oslo_concurrency.lockutils [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:44 np0005465987 nova_compute[230713]: 2025-10-02 12:42:44.747 2 DEBUG nova.compute.manager [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] No waiting events found dispatching network-vif-plugged-933f98f8-c463-4299-8008-a38a3a76a0be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:44 np0005465987 nova_compute[230713]: 2025-10-02 12:42:44.747 2 WARNING nova.compute.manager [req-c7bdb62e-3e7a-4c2f-aee2-c5b878a2c999 req-5560a61a-7bbb-4382-bfd2-97977e7f22fb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Received unexpected event network-vif-plugged-933f98f8-c463-4299-8008-a38a3a76a0be for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:42:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:45 np0005465987 nova_compute[230713]: 2025-10-02 12:42:45.255 2 DEBUG nova.compute.manager [req-49b59e3e-c05e-4b19-a145-df63dc75c4c5 req-316c1a2b-c4a1-462e-b2eb-65619adefe69 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Received event network-vif-deleted-933f98f8-c463-4299-8008-a38a3a76a0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:45 np0005465987 nova_compute[230713]: 2025-10-02 12:42:45.256 2 INFO nova.compute.manager [req-49b59e3e-c05e-4b19-a145-df63dc75c4c5 req-316c1a2b-c4a1-462e-b2eb-65619adefe69 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Neutron deleted interface 933f98f8-c463-4299-8008-a38a3a76a0be; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:42:45 np0005465987 nova_compute[230713]: 2025-10-02 12:42:45.257 2 DEBUG nova.network.neutron [req-49b59e3e-c05e-4b19-a145-df63dc75c4c5 req-316c1a2b-c4a1-462e-b2eb-65619adefe69 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:45 np0005465987 nova_compute[230713]: 2025-10-02 12:42:45.320 2 DEBUG nova.network.neutron [-] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:45 np0005465987 nova_compute[230713]: 2025-10-02 12:42:45.740 2 INFO nova.compute.manager [-] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Took 1.98 seconds to deallocate network for instance.#033[00m
Oct  2 08:42:45 np0005465987 nova_compute[230713]: 2025-10-02 12:42:45.749 2 DEBUG nova.compute.manager [req-49b59e3e-c05e-4b19-a145-df63dc75c4c5 req-316c1a2b-c4a1-462e-b2eb-65619adefe69 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Detach interface failed, port_id=933f98f8-c463-4299-8008-a38a3a76a0be, reason: Instance 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:42:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:45.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:45 np0005465987 nova_compute[230713]: 2025-10-02 12:42:45.916 2 DEBUG oslo_concurrency.lockutils [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:45 np0005465987 nova_compute[230713]: 2025-10-02 12:42:45.916 2 DEBUG oslo_concurrency.lockutils [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:46 np0005465987 nova_compute[230713]: 2025-10-02 12:42:46.014 2 DEBUG oslo_concurrency.processutils [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:46 np0005465987 nova_compute[230713]: 2025-10-02 12:42:46.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:46.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:42:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/159983714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:42:46 np0005465987 nova_compute[230713]: 2025-10-02 12:42:46.472 2 DEBUG oslo_concurrency.processutils [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:46 np0005465987 nova_compute[230713]: 2025-10-02 12:42:46.479 2 DEBUG nova.compute.provider_tree [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:42:46 np0005465987 nova_compute[230713]: 2025-10-02 12:42:46.495 2 DEBUG nova.scheduler.client.report [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:42:46 np0005465987 nova_compute[230713]: 2025-10-02 12:42:46.513 2 DEBUG oslo_concurrency.lockutils [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:46 np0005465987 nova_compute[230713]: 2025-10-02 12:42:46.541 2 INFO nova.scheduler.client.report [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Deleted allocations for instance 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc#033[00m
Oct  2 08:42:46 np0005465987 nova_compute[230713]: 2025-10-02 12:42:46.617 2 DEBUG oslo_concurrency.lockutils [None req-a59f7712-2de4-40df-8cf8-0d734e7e20aa c02d1dcc10ea4e57bbc6b7a3c100dc7b 6822f02d5ca04c659329a75d487054cf - - default default] Lock "31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:47 np0005465987 nova_compute[230713]: 2025-10-02 12:42:47.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:47.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:48.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:49.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:50.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:51 np0005465987 nova_compute[230713]: 2025-10-02 12:42:51.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e346 e346: 3 total, 3 up, 3 in
Oct  2 08:42:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:51.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:52 np0005465987 nova_compute[230713]: 2025-10-02 12:42:52.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:52.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:42:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:42:52 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1267667080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:42:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:53.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:54.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:42:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 44K writes, 171K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s#012Cumulative WAL: 44K writes, 16K syncs, 2.78 writes per sync, written: 0.16 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9226 writes, 32K keys, 9226 commit groups, 1.0 writes per commit group, ingest: 37.21 MB, 0.06 MB/s#012Interval WAL: 9226 writes, 3786 syncs, 2.44 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:42:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:42:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3548777140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:42:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:42:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3548777140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:42:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:55.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e347 e347: 3 total, 3 up, 3 in
Oct  2 08:42:56 np0005465987 nova_compute[230713]: 2025-10-02 12:42:56.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:56.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e348 e348: 3 total, 3 up, 3 in
Oct  2 08:42:57 np0005465987 nova_compute[230713]: 2025-10-02 12:42:57.194 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408962.1937768, 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:42:57 np0005465987 nova_compute[230713]: 2025-10-02 12:42:57.195 2 INFO nova.compute.manager [-] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:42:57 np0005465987 nova_compute[230713]: 2025-10-02 12:42:57.270 2 DEBUG nova.compute.manager [None req-0a8c938c-4677-4a62-899e-a8760e952204 - - - - - -] [instance: 31335bf1-4282-40ec-bdc7-0c5cb3ce5fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:57 np0005465987 nova_compute[230713]: 2025-10-02 12:42:57.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:57.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:58.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:42:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:42:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:59.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:43:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:00.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:00 np0005465987 nova_compute[230713]: 2025-10-02 12:43:00.659 2 DEBUG nova.compute.manager [req-8e5059a6-141f-429a-b99d-a95c09357bac req-0e86b06a-d5eb-4b54-b64d-a3308c481870 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:00 np0005465987 nova_compute[230713]: 2025-10-02 12:43:00.659 2 DEBUG nova.compute.manager [req-8e5059a6-141f-429a-b99d-a95c09357bac req-0e86b06a-d5eb-4b54-b64d-a3308c481870 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing instance network info cache due to event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:43:00 np0005465987 nova_compute[230713]: 2025-10-02 12:43:00.660 2 DEBUG oslo_concurrency.lockutils [req-8e5059a6-141f-429a-b99d-a95c09357bac req-0e86b06a-d5eb-4b54-b64d-a3308c481870 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:00 np0005465987 nova_compute[230713]: 2025-10-02 12:43:00.660 2 DEBUG oslo_concurrency.lockutils [req-8e5059a6-141f-429a-b99d-a95c09357bac req-0e86b06a-d5eb-4b54-b64d-a3308c481870 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:00 np0005465987 nova_compute[230713]: 2025-10-02 12:43:00.660 2 DEBUG nova.network.neutron [req-8e5059a6-141f-429a-b99d-a95c09357bac req-0e86b06a-d5eb-4b54-b64d-a3308c481870 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:43:01 np0005465987 nova_compute[230713]: 2025-10-02 12:43:01.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:01.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:01 np0005465987 podman[289741]: 2025-10-02 12:43:01.85532717 +0000 UTC m=+0.062281373 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:43:02 np0005465987 ovn_controller[129172]: 2025-10-02T12:43:02Z|00602|binding|INFO|Releasing lport 15e9653f-d8e0-49a9-859e-c8a4f714df4e from this chassis (sb_readonly=0)
Oct  2 08:43:02 np0005465987 nova_compute[230713]: 2025-10-02 12:43:02.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:02 np0005465987 nova_compute[230713]: 2025-10-02 12:43:02.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:02.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:02 np0005465987 nova_compute[230713]: 2025-10-02 12:43:02.565 2 DEBUG nova.network.neutron [req-8e5059a6-141f-429a-b99d-a95c09357bac req-0e86b06a-d5eb-4b54-b64d-a3308c481870 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updated VIF entry in instance network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:43:02 np0005465987 nova_compute[230713]: 2025-10-02 12:43:02.566 2 DEBUG nova.network.neutron [req-8e5059a6-141f-429a-b99d-a95c09357bac req-0e86b06a-d5eb-4b54-b64d-a3308c481870 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updating instance_info_cache with network_info: [{"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:02 np0005465987 ovn_controller[129172]: 2025-10-02T12:43:02Z|00603|binding|INFO|Releasing lport 15e9653f-d8e0-49a9-859e-c8a4f714df4e from this chassis (sb_readonly=0)
Oct  2 08:43:02 np0005465987 nova_compute[230713]: 2025-10-02 12:43:02.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:02 np0005465987 nova_compute[230713]: 2025-10-02 12:43:02.844 2 DEBUG oslo_concurrency.lockutils [req-8e5059a6-141f-429a-b99d-a95c09357bac req-0e86b06a-d5eb-4b54-b64d-a3308c481870 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e349 e349: 3 total, 3 up, 3 in
Oct  2 08:43:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:03.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:43:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:04.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:43:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:04 np0005465987 nova_compute[230713]: 2025-10-02 12:43:04.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:43:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:05.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:43:06 np0005465987 nova_compute[230713]: 2025-10-02 12:43:06.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:06.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:06 np0005465987 podman[289763]: 2025-10-02 12:43:06.861907877 +0000 UTC m=+0.071240679 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:43:06 np0005465987 podman[289762]: 2025-10-02 12:43:06.862080862 +0000 UTC m=+0.074788657 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:43:06 np0005465987 podman[289761]: 2025-10-02 12:43:06.869902136 +0000 UTC m=+0.091129075 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:43:06 np0005465987 nova_compute[230713]: 2025-10-02 12:43:06.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:06 np0005465987 nova_compute[230713]: 2025-10-02 12:43:06.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:43:07 np0005465987 nova_compute[230713]: 2025-10-02 12:43:07.182 2 DEBUG nova.compute.manager [req-2d31d970-a98e-4d77-8db0-bcb71da52ec0 req-813a4ca5-d5bd-4d76-980b-dc1c79cbf4ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:07 np0005465987 nova_compute[230713]: 2025-10-02 12:43:07.182 2 DEBUG nova.compute.manager [req-2d31d970-a98e-4d77-8db0-bcb71da52ec0 req-813a4ca5-d5bd-4d76-980b-dc1c79cbf4ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing instance network info cache due to event network-changed-e389cd1d-f50c-460c-b8ab-e0a384159932. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:43:07 np0005465987 nova_compute[230713]: 2025-10-02 12:43:07.182 2 DEBUG oslo_concurrency.lockutils [req-2d31d970-a98e-4d77-8db0-bcb71da52ec0 req-813a4ca5-d5bd-4d76-980b-dc1c79cbf4ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:07 np0005465987 nova_compute[230713]: 2025-10-02 12:43:07.182 2 DEBUG oslo_concurrency.lockutils [req-2d31d970-a98e-4d77-8db0-bcb71da52ec0 req-813a4ca5-d5bd-4d76-980b-dc1c79cbf4ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:07 np0005465987 nova_compute[230713]: 2025-10-02 12:43:07.183 2 DEBUG nova.network.neutron [req-2d31d970-a98e-4d77-8db0-bcb71da52ec0 req-813a4ca5-d5bd-4d76-980b-dc1c79cbf4ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Refreshing network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:43:07 np0005465987 nova_compute[230713]: 2025-10-02 12:43:07.284 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:07 np0005465987 nova_compute[230713]: 2025-10-02 12:43:07.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:07.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:08.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.172 2 DEBUG oslo_concurrency.lockutils [None req-7f4e18e4-f1e0-4948-a19e-1b4bbcba091a 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.173 2 DEBUG oslo_concurrency.lockutils [None req-7f4e18e4-f1e0-4948-a19e-1b4bbcba091a 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.233 2 INFO nova.compute.manager [None req-7f4e18e4-f1e0-4948-a19e-1b4bbcba091a 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Detaching volume daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.236 2 DEBUG nova.network.neutron [req-2d31d970-a98e-4d77-8db0-bcb71da52ec0 req-813a4ca5-d5bd-4d76-980b-dc1c79cbf4ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updated VIF entry in instance network info cache for port e389cd1d-f50c-460c-b8ab-e0a384159932. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.237 2 DEBUG nova.network.neutron [req-2d31d970-a98e-4d77-8db0-bcb71da52ec0 req-813a4ca5-d5bd-4d76-980b-dc1c79cbf4ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updating instance_info_cache with network_info: [{"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.291 2 DEBUG oslo_concurrency.lockutils [req-2d31d970-a98e-4d77-8db0-bcb71da52ec0 req-813a4ca5-d5bd-4d76-980b-dc1c79cbf4ae d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.292 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.292 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.433 2 INFO nova.virt.block_device [None req-7f4e18e4-f1e0-4948-a19e-1b4bbcba091a 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Attempting to driver detach volume daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef from mountpoint /dev/vdb
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.445 2 DEBUG nova.virt.libvirt.driver [None req-7f4e18e4-f1e0-4948-a19e-1b4bbcba091a 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Attempting to detach device vdb from instance 70956832-8f1d-46df-8eeb-3f75ddf40e84 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.447 2 DEBUG nova.virt.libvirt.guest [None req-7f4e18e4-f1e0-4948-a19e-1b4bbcba091a 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:43:09 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef">
Oct  2 08:43:09 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:  <serial>daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef</serial>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct  2 08:43:09 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:43:09 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.462 2 INFO nova.virt.libvirt.driver [None req-7f4e18e4-f1e0-4948-a19e-1b4bbcba091a 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Successfully detached device vdb from instance 70956832-8f1d-46df-8eeb-3f75ddf40e84 from the persistent domain config.
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.462 2 DEBUG nova.virt.libvirt.driver [None req-7f4e18e4-f1e0-4948-a19e-1b4bbcba091a 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 70956832-8f1d-46df-8eeb-3f75ddf40e84 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.463 2 DEBUG nova.virt.libvirt.guest [None req-7f4e18e4-f1e0-4948-a19e-1b4bbcba091a 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:43:09 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef">
Oct  2 08:43:09 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:  <serial>daaee0e4-d0d6-43f1-84a2-7bb4d00c48ef</serial>
Oct  2 08:43:09 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct  2 08:43:09 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:43:09 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.598 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Received event <DeviceRemovedEvent: 1759408989.5978334, 70956832-8f1d-46df-8eeb-3f75ddf40e84 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.600 2 DEBUG nova.virt.libvirt.driver [None req-7f4e18e4-f1e0-4948-a19e-1b4bbcba091a 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 70956832-8f1d-46df-8eeb-3f75ddf40e84 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.602 2 INFO nova.virt.libvirt.driver [None req-7f4e18e4-f1e0-4948-a19e-1b4bbcba091a 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Successfully detached device vdb from instance 70956832-8f1d-46df-8eeb-3f75ddf40e84 from the live domain config.
Oct  2 08:43:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:09 np0005465987 nova_compute[230713]: 2025-10-02 12:43:09.862 2 DEBUG nova.objects.instance [None req-7f4e18e4-f1e0-4948-a19e-1b4bbcba091a 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'flavor' on Instance uuid 70956832-8f1d-46df-8eeb-3f75ddf40e84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:43:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:09.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:10 np0005465987 nova_compute[230713]: 2025-10-02 12:43:10.058 2 DEBUG oslo_concurrency.lockutils [None req-7f4e18e4-f1e0-4948-a19e-1b4bbcba091a 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:43:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:10.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:11 np0005465987 nova_compute[230713]: 2025-10-02 12:43:11.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:11 np0005465987 nova_compute[230713]: 2025-10-02 12:43:11.569 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updating instance_info_cache with network_info: [{"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:11 np0005465987 nova_compute[230713]: 2025-10-02 12:43:11.600 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-70956832-8f1d-46df-8eeb-3f75ddf40e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:43:11 np0005465987 nova_compute[230713]: 2025-10-02 12:43:11.601 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 08:43:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:43:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:11.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.216 2 DEBUG oslo_concurrency.lockutils [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.216 2 DEBUG oslo_concurrency.lockutils [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.217 2 DEBUG oslo_concurrency.lockutils [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.218 2 DEBUG oslo_concurrency.lockutils [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.218 2 DEBUG oslo_concurrency.lockutils [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.219 2 INFO nova.compute.manager [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Terminating instance
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.220 2 DEBUG nova.compute.manager [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:12.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:12 np0005465987 kernel: tape389cd1d-f5 (unregistering): left promiscuous mode
Oct  2 08:43:12 np0005465987 NetworkManager[44910]: <info>  [1759408992.4023] device (tape389cd1d-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:43:12 np0005465987 ovn_controller[129172]: 2025-10-02T12:43:12Z|00604|binding|INFO|Releasing lport e389cd1d-f50c-460c-b8ab-e0a384159932 from this chassis (sb_readonly=0)
Oct  2 08:43:12 np0005465987 ovn_controller[129172]: 2025-10-02T12:43:12Z|00605|binding|INFO|Setting lport e389cd1d-f50c-460c-b8ab-e0a384159932 down in Southbound
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:12 np0005465987 ovn_controller[129172]: 2025-10-02T12:43:12Z|00606|binding|INFO|Removing iface tape389cd1d-f5 ovn-installed in OVS
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:12.419 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:87:e1 10.100.0.12'], port_security=['fa:16:3e:0c:87:e1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '70956832-8f1d-46df-8eeb-3f75ddf40e84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e53961-97c9-4d79-b2bc-ba336c204821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e53064cd4d645f09bd59bbca09b98e0', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e3789894-a1c1-47ad-b248-e3a6730a7778', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a249b7-28a6-4e82-83ff-74dd7c2a70f8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=e389cd1d-f50c-460c-b8ab-e0a384159932) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:43:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:12.421 138325 INFO neutron.agent.ovn.metadata.agent [-] Port e389cd1d-f50c-460c-b8ab-e0a384159932 in datapath 99e53961-97c9-4d79-b2bc-ba336c204821 unbound from our chassis
Oct  2 08:43:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:12.423 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99e53961-97c9-4d79-b2bc-ba336c204821, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 08:43:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:12.424 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[70dc15fe-acf9-4b51-bb17-cf0ef71ba692]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:43:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:12.425 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 namespace which is not needed anymore
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:12 np0005465987 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Oct  2 08:43:12 np0005465987 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000009c.scope: Consumed 15.429s CPU time.
Oct  2 08:43:12 np0005465987 systemd-machined[188335]: Machine qemu-71-instance-0000009c terminated.
Oct  2 08:43:12 np0005465987 kernel: tape389cd1d-f5: entered promiscuous mode
Oct  2 08:43:12 np0005465987 kernel: tape389cd1d-f5 (unregistering): left promiscuous mode
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.663 2 INFO nova.virt.libvirt.driver [-] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Instance destroyed successfully.
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.664 2 DEBUG nova.objects.instance [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lazy-loading 'resources' on Instance uuid 70956832-8f1d-46df-8eeb-3f75ddf40e84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:43:12 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[289516]: [NOTICE]   (289520) : haproxy version is 2.8.14-c23fe91
Oct  2 08:43:12 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[289516]: [NOTICE]   (289520) : path to executable is /usr/sbin/haproxy
Oct  2 08:43:12 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[289516]: [WARNING]  (289520) : Exiting Master process...
Oct  2 08:43:12 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[289516]: [WARNING]  (289520) : Exiting Master process...
Oct  2 08:43:12 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[289516]: [ALERT]    (289520) : Current worker (289522) exited with code 143 (Terminated)
Oct  2 08:43:12 np0005465987 neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821[289516]: [WARNING]  (289520) : All workers exited. Exiting... (0)
Oct  2 08:43:12 np0005465987 systemd[1]: libpod-eec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6.scope: Deactivated successfully.
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.683 2 DEBUG nova.virt.libvirt.vif [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1443375856',display_name='tempest-TestMinimumBasicScenario-server-1443375856',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1443375856',id=156,image_ref='881d7140-1aef-4d5d-b04d-76b7b19d6837',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGzB0sJYXYPOfWvtpNcxNGXcGvdcBf7QwVJ9OKoFMmmKZSO6fZE5N4Z9luCuMJc0D1eqyZgLUzVemVKb7pJpNrlCg/Ws/Z2apKl6RmNDh6IMXB331e3RDriHaz2BQyxtg==',key_name='tempest-TestMinimumBasicScenario-222087753',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2e53064cd4d645f09bd59bbca09b98e0',ramdisk_id='',reservation_id='r-0su4hzgy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='881d7140-1aef-4d5d-b04d-76b7b19d6837',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-999813940',owner_user_name='tempest-TestMinimumBasicScenario-999813940-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:25Z,user_data=None,user_id='734ae44830d540d8ab51c2a3d75ecd80',uuid=70956832-8f1d-46df-8eeb-3f75ddf40e84,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.683 2 DEBUG nova.network.os_vif_util [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converting VIF {"id": "e389cd1d-f50c-460c-b8ab-e0a384159932", "address": "fa:16:3e:0c:87:e1", "network": {"id": "99e53961-97c9-4d79-b2bc-ba336c204821", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-587404365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e53064cd4d645f09bd59bbca09b98e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389cd1d-f5", "ovs_interfaceid": "e389cd1d-f50c-460c-b8ab-e0a384159932", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.684 2 DEBUG nova.network.os_vif_util [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.684 2 DEBUG os_vif [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.686 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape389cd1d-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:43:12 np0005465987 podman[289852]: 2025-10-02 12:43:12.690103989 +0000 UTC m=+0.177108690 container died eec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:43:12 np0005465987 nova_compute[230713]: 2025-10-02 12:43:12.691 2 INFO os_vif [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:87:e1,bridge_name='br-int',has_traffic_filtering=True,id=e389cd1d-f50c-460c-b8ab-e0a384159932,network=Network(99e53961-97c9-4d79-b2bc-ba336c204821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389cd1d-f5')#033[00m
Oct  2 08:43:12 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6-userdata-shm.mount: Deactivated successfully.
Oct  2 08:43:12 np0005465987 systemd[1]: var-lib-containers-storage-overlay-9fef047c5389d39989256e48f02ae0a72c22295d2b40b45001ed14edbce5fc35-merged.mount: Deactivated successfully.
Oct  2 08:43:13 np0005465987 podman[289852]: 2025-10-02 12:43:13.22891546 +0000 UTC m=+0.715920161 container cleanup eec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.242 2 DEBUG nova.compute.manager [req-84b47628-563b-43cd-8fc9-847af706c822 req-a56a8b03-6a40-4918-879d-2ca914fdeb7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-vif-unplugged-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.248 2 DEBUG oslo_concurrency.lockutils [req-84b47628-563b-43cd-8fc9-847af706c822 req-a56a8b03-6a40-4918-879d-2ca914fdeb7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.249 2 DEBUG oslo_concurrency.lockutils [req-84b47628-563b-43cd-8fc9-847af706c822 req-a56a8b03-6a40-4918-879d-2ca914fdeb7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.249 2 DEBUG oslo_concurrency.lockutils [req-84b47628-563b-43cd-8fc9-847af706c822 req-a56a8b03-6a40-4918-879d-2ca914fdeb7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.250 2 DEBUG nova.compute.manager [req-84b47628-563b-43cd-8fc9-847af706c822 req-a56a8b03-6a40-4918-879d-2ca914fdeb7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] No waiting events found dispatching network-vif-unplugged-e389cd1d-f50c-460c-b8ab-e0a384159932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.250 2 DEBUG nova.compute.manager [req-84b47628-563b-43cd-8fc9-847af706c822 req-a56a8b03-6a40-4918-879d-2ca914fdeb7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-vif-unplugged-e389cd1d-f50c-460c-b8ab-e0a384159932 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.250 2 DEBUG nova.compute.manager [req-84b47628-563b-43cd-8fc9-847af706c822 req-a56a8b03-6a40-4918-879d-2ca914fdeb7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.251 2 DEBUG oslo_concurrency.lockutils [req-84b47628-563b-43cd-8fc9-847af706c822 req-a56a8b03-6a40-4918-879d-2ca914fdeb7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.251 2 DEBUG oslo_concurrency.lockutils [req-84b47628-563b-43cd-8fc9-847af706c822 req-a56a8b03-6a40-4918-879d-2ca914fdeb7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.251 2 DEBUG oslo_concurrency.lockutils [req-84b47628-563b-43cd-8fc9-847af706c822 req-a56a8b03-6a40-4918-879d-2ca914fdeb7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.252 2 DEBUG nova.compute.manager [req-84b47628-563b-43cd-8fc9-847af706c822 req-a56a8b03-6a40-4918-879d-2ca914fdeb7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] No waiting events found dispatching network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.252 2 WARNING nova.compute.manager [req-84b47628-563b-43cd-8fc9-847af706c822 req-a56a8b03-6a40-4918-879d-2ca914fdeb7a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received unexpected event network-vif-plugged-e389cd1d-f50c-460c-b8ab-e0a384159932 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:43:13 np0005465987 systemd[1]: libpod-conmon-eec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6.scope: Deactivated successfully.
Oct  2 08:43:13 np0005465987 podman[290003]: 2025-10-02 12:43:13.444015353 +0000 UTC m=+0.188935954 container remove eec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:43:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:13.453 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ce28b57c-4c06-4618-85d6-1b446f4387d9]: (4, ('Thu Oct  2 12:43:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 (eec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6)\neec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6\nThu Oct  2 12:43:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 (eec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6)\neec88cac0080457e68fe60bb8ad6d752d2199a252feb307c7faa40fe2c5b2ee6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:13.455 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[15dafdc6-e7f2-4e4a-891c-0e98e61e44b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:13.456 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e53961-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:13 np0005465987 kernel: tap99e53961-90: left promiscuous mode
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:13.464 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e57054c6-0ddb-40c6-a8cc-7fd8f032e9f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:13 np0005465987 nova_compute[230713]: 2025-10-02 12:43:13.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:13.502 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[84aeffd3-fc0e-446d-968b-bf7b3a8d93bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:13.503 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e7d4f2-1517-4e2b-9d7e-d39568df6742]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:13.524 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7b065bb5-3421-47bd-b659-cb4e2ddc4bf8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701339, 'reachable_time': 36369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290035, 'error': None, 'target': 'ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:13 np0005465987 systemd[1]: run-netns-ovnmeta\x2d99e53961\x2d97c9\x2d4d79\x2db2bc\x2dba336c204821.mount: Deactivated successfully.
Oct  2 08:43:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:13.529 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99e53961-97c9-4d79-b2bc-ba336c204821 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:43:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:13.529 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c834a7-fe24-45ae-bbd2-b97dde17be74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:13.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:14.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:14 np0005465987 nova_compute[230713]: 2025-10-02 12:43:14.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:14 np0005465987 nova_compute[230713]: 2025-10-02 12:43:14.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:14 np0005465987 nova_compute[230713]: 2025-10-02 12:43:14.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:14 np0005465987 nova_compute[230713]: 2025-10-02 12:43:14.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:14 np0005465987 nova_compute[230713]: 2025-10-02 12:43:14.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:14 np0005465987 nova_compute[230713]: 2025-10-02 12:43:14.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:14.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:14.996 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:14.996 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:43:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:43:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:43:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:43:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2558787742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:15.428 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:15.536 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:15.537 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:15.713 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:15.714 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4373MB free_disk=20.94219970703125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:15.714 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:15.715 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:15.783 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 70956832-8f1d-46df-8eeb-3f75ddf40e84 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:15.783 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:15.784 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:43:15 np0005465987 nova_compute[230713]: 2025-10-02 12:43:15.830 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:43:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:15.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:43:16 np0005465987 nova_compute[230713]: 2025-10-02 12:43:16.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:43:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3929962429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:43:16 np0005465987 nova_compute[230713]: 2025-10-02 12:43:16.267 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:16 np0005465987 nova_compute[230713]: 2025-10-02 12:43:16.273 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:43:16 np0005465987 nova_compute[230713]: 2025-10-02 12:43:16.304 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:43:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:16.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:16 np0005465987 nova_compute[230713]: 2025-10-02 12:43:16.450 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:43:16 np0005465987 nova_compute[230713]: 2025-10-02 12:43:16.450 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:17 np0005465987 nova_compute[230713]: 2025-10-02 12:43:17.450 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:17 np0005465987 nova_compute[230713]: 2025-10-02 12:43:17.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:43:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:17.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:43:17 np0005465987 nova_compute[230713]: 2025-10-02 12:43:17.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:18.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:43:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:19.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:43:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:20.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:21 np0005465987 nova_compute[230713]: 2025-10-02 12:43:21.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:21.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:22 np0005465987 nova_compute[230713]: 2025-10-02 12:43:22.125 2 INFO nova.virt.libvirt.driver [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Deleting instance files /var/lib/nova/instances/70956832-8f1d-46df-8eeb-3f75ddf40e84_del#033[00m
Oct  2 08:43:22 np0005465987 nova_compute[230713]: 2025-10-02 12:43:22.126 2 INFO nova.virt.libvirt.driver [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Deletion of /var/lib/nova/instances/70956832-8f1d-46df-8eeb-3f75ddf40e84_del complete#033[00m
Oct  2 08:43:22 np0005465987 nova_compute[230713]: 2025-10-02 12:43:22.285 2 INFO nova.compute.manager [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Took 10.06 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:43:22 np0005465987 nova_compute[230713]: 2025-10-02 12:43:22.286 2 DEBUG oslo.service.loopingcall [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:43:22 np0005465987 nova_compute[230713]: 2025-10-02 12:43:22.286 2 DEBUG nova.compute.manager [-] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:43:22 np0005465987 nova_compute[230713]: 2025-10-02 12:43:22.286 2 DEBUG nova.network.neutron [-] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:43:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:22.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:22 np0005465987 nova_compute[230713]: 2025-10-02 12:43:22.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:23.606493) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409003606548, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1685, "num_deletes": 257, "total_data_size": 3584321, "memory_usage": 3628176, "flush_reason": "Manual Compaction"}
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409003704784, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 2362261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58040, "largest_seqno": 59720, "table_properties": {"data_size": 2355162, "index_size": 4042, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15631, "raw_average_key_size": 20, "raw_value_size": 2340659, "raw_average_value_size": 3043, "num_data_blocks": 177, "num_entries": 769, "num_filter_entries": 769, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408874, "oldest_key_time": 1759408874, "file_creation_time": 1759409003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 98336 microseconds, and 7125 cpu microseconds.
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:23.704828) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 2362261 bytes OK
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:23.704846) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:23.823106) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:23.823155) EVENT_LOG_v1 {"time_micros": 1759409003823146, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:23.823178) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 3576553, prev total WAL file size 3576834, number of live WAL files 2.
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:23.855195) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303036' seq:72057594037927935, type:22 .. '6C6F676D0032323537' seq:0, type:0; will stop at (end)
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(2306KB)], [114(10MB)]
Oct  2 08:43:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409003855254, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 12898864, "oldest_snapshot_seqno": -1}
Oct  2 08:43:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:23.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:23 np0005465987 nova_compute[230713]: 2025-10-02 12:43:23.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:23 np0005465987 nova_compute[230713]: 2025-10-02 12:43:23.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 8437 keys, 12766517 bytes, temperature: kUnknown
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409004244698, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 12766517, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12709752, "index_size": 34573, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21125, "raw_key_size": 218856, "raw_average_key_size": 25, "raw_value_size": 12559371, "raw_average_value_size": 1488, "num_data_blocks": 1355, "num_entries": 8437, "num_filter_entries": 8437, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759409003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:24.245142) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 12766517 bytes
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:24.279398) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.1 rd, 32.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.0 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(10.9) write-amplify(5.4) OK, records in: 8968, records dropped: 531 output_compression: NoCompression
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:24.279438) EVENT_LOG_v1 {"time_micros": 1759409004279423, "job": 72, "event": "compaction_finished", "compaction_time_micros": 389712, "compaction_time_cpu_micros": 30355, "output_level": 6, "num_output_files": 1, "total_output_size": 12766517, "num_input_records": 8968, "num_output_records": 8437, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409004280950, "job": 72, "event": "table_file_deletion", "file_number": 116}
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409004283821, "job": 72, "event": "table_file_deletion", "file_number": 114}
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:23.855050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:24.284069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:24.284077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:24.284080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:24.284083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:43:24.284086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:24.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:24 np0005465987 nova_compute[230713]: 2025-10-02 12:43:24.651 2 DEBUG nova.network.neutron [-] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:24 np0005465987 nova_compute[230713]: 2025-10-02 12:43:24.731 2 DEBUG nova.compute.manager [req-f8d3a649-61ea-4dd9-b02d-074735feb386 req-a175d9e1-abf8-4acf-8c53-69be19193d47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Received event network-vif-deleted-e389cd1d-f50c-460c-b8ab-e0a384159932 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:24 np0005465987 nova_compute[230713]: 2025-10-02 12:43:24.731 2 INFO nova.compute.manager [req-f8d3a649-61ea-4dd9-b02d-074735feb386 req-a175d9e1-abf8-4acf-8c53-69be19193d47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Neutron deleted interface e389cd1d-f50c-460c-b8ab-e0a384159932; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:43:24 np0005465987 nova_compute[230713]: 2025-10-02 12:43:24.731 2 DEBUG nova.network.neutron [req-f8d3a649-61ea-4dd9-b02d-074735feb386 req-a175d9e1-abf8-4acf-8c53-69be19193d47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:24 np0005465987 nova_compute[230713]: 2025-10-02 12:43:24.751 2 INFO nova.compute.manager [-] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Took 2.46 seconds to deallocate network for instance.#033[00m
Oct  2 08:43:24 np0005465987 nova_compute[230713]: 2025-10-02 12:43:24.874 2 DEBUG nova.compute.manager [req-f8d3a649-61ea-4dd9-b02d-074735feb386 req-a175d9e1-abf8-4acf-8c53-69be19193d47 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Detach interface failed, port_id=e389cd1d-f50c-460c-b8ab-e0a384159932, reason: Instance 70956832-8f1d-46df-8eeb-3f75ddf40e84 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:43:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:25 np0005465987 nova_compute[230713]: 2025-10-02 12:43:25.172 2 DEBUG oslo_concurrency.lockutils [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:25 np0005465987 nova_compute[230713]: 2025-10-02 12:43:25.173 2 DEBUG oslo_concurrency.lockutils [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:43:25 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2177798080' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:43:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:43:25 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2177798080' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:43:25 np0005465987 nova_compute[230713]: 2025-10-02 12:43:25.230 2 DEBUG oslo_concurrency.processutils [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:43:25 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1609610355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:43:25 np0005465987 nova_compute[230713]: 2025-10-02 12:43:25.654 2 DEBUG oslo_concurrency.processutils [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:25 np0005465987 nova_compute[230713]: 2025-10-02 12:43:25.661 2 DEBUG nova.compute.provider_tree [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:43:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:25.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:26 np0005465987 nova_compute[230713]: 2025-10-02 12:43:26.028 2 DEBUG nova.scheduler.client.report [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:43:26 np0005465987 nova_compute[230713]: 2025-10-02 12:43:26.101 2 DEBUG oslo_concurrency.lockutils [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:26 np0005465987 nova_compute[230713]: 2025-10-02 12:43:26.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e350 e350: 3 total, 3 up, 3 in
Oct  2 08:43:26 np0005465987 nova_compute[230713]: 2025-10-02 12:43:26.201 2 INFO nova.scheduler.client.report [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Deleted allocations for instance 70956832-8f1d-46df-8eeb-3f75ddf40e84#033[00m
Oct  2 08:43:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:26.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:26 np0005465987 nova_compute[230713]: 2025-10-02 12:43:26.434 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Acquiring lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:26 np0005465987 nova_compute[230713]: 2025-10-02 12:43:26.434 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:26 np0005465987 nova_compute[230713]: 2025-10-02 12:43:26.527 2 DEBUG oslo_concurrency.lockutils [None req-7c3aa663-7a4e-4404-b083-9d8526da3eac 734ae44830d540d8ab51c2a3d75ecd80 2e53064cd4d645f09bd59bbca09b98e0 - - default default] Lock "70956832-8f1d-46df-8eeb-3f75ddf40e84" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:26 np0005465987 nova_compute[230713]: 2025-10-02 12:43:26.584 2 DEBUG nova.compute.manager [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:43:26 np0005465987 nova_compute[230713]: 2025-10-02 12:43:26.784 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:26 np0005465987 nova_compute[230713]: 2025-10-02 12:43:26.784 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:26 np0005465987 nova_compute[230713]: 2025-10-02 12:43:26.790 2 DEBUG nova.virt.hardware [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:43:26 np0005465987 nova_compute[230713]: 2025-10-02 12:43:26.790 2 INFO nova.compute.claims [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:43:27 np0005465987 nova_compute[230713]: 2025-10-02 12:43:27.013 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:43:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:43:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:43:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/424163148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:43:27 np0005465987 nova_compute[230713]: 2025-10-02 12:43:27.500 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:27 np0005465987 nova_compute[230713]: 2025-10-02 12:43:27.506 2 DEBUG nova.compute.provider_tree [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:43:27 np0005465987 nova_compute[230713]: 2025-10-02 12:43:27.569 2 DEBUG nova.scheduler.client.report [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:43:27 np0005465987 nova_compute[230713]: 2025-10-02 12:43:27.662 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408992.6608098, 70956832-8f1d-46df-8eeb-3f75ddf40e84 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:27 np0005465987 nova_compute[230713]: 2025-10-02 12:43:27.662 2 INFO nova.compute.manager [-] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:43:27 np0005465987 nova_compute[230713]: 2025-10-02 12:43:27.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:27 np0005465987 nova_compute[230713]: 2025-10-02 12:43:27.736 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:27 np0005465987 nova_compute[230713]: 2025-10-02 12:43:27.736 2 DEBUG nova.compute.manager [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:43:27 np0005465987 nova_compute[230713]: 2025-10-02 12:43:27.754 2 DEBUG nova.compute.manager [None req-9a772464-68be-4149-9383-78055643bc19 - - - - - -] [instance: 70956832-8f1d-46df-8eeb-3f75ddf40e84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:27.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:27.968 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:27.968 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:27.968 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.034 2 DEBUG nova.compute.manager [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.034 2 DEBUG nova.network.neutron [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.127 2 INFO nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.223 2 DEBUG nova.compute.manager [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.311 2 DEBUG nova.policy [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b52b66b6f76243b28f1ac13ea171269c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '71902e570e234b5fa6914a7b1657bf63', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.396 2 DEBUG nova.compute.manager [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.399 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.399 2 INFO nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Creating image(s)#033[00m
Oct  2 08:43:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:43:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.440 2 DEBUG nova.storage.rbd_utils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] rbd image 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.524 2 DEBUG nova.storage.rbd_utils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] rbd image 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.792 2 DEBUG nova.storage.rbd_utils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] rbd image 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.796 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e351 e351: 3 total, 3 up, 3 in
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.902 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.905 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.906 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.907 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.946 2 DEBUG nova.storage.rbd_utils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] rbd image 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:28 np0005465987 nova_compute[230713]: 2025-10-02 12:43:28.950 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:29 np0005465987 nova_compute[230713]: 2025-10-02 12:43:29.248 2 DEBUG nova.network.neutron [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Successfully created port: 0a9e8992-8631-4d4d-b2ce-92b02026387a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:43:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:29.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:30.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:30 np0005465987 nova_compute[230713]: 2025-10-02 12:43:30.617 2 DEBUG nova.network.neutron [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Successfully updated port: 0a9e8992-8631-4d4d-b2ce-92b02026387a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:43:30 np0005465987 nova_compute[230713]: 2025-10-02 12:43:30.757 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Acquiring lock "refresh_cache-0057ecea-247f-44ea-a5d7-c65a1c9fb3cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:30 np0005465987 nova_compute[230713]: 2025-10-02 12:43:30.758 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Acquired lock "refresh_cache-0057ecea-247f-44ea-a5d7-c65a1c9fb3cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:30 np0005465987 nova_compute[230713]: 2025-10-02 12:43:30.758 2 DEBUG nova.network.neutron [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:43:30 np0005465987 nova_compute[230713]: 2025-10-02 12:43:30.773 2 DEBUG nova.compute.manager [req-8588ec47-ff62-4146-a0e9-18e703291d40 req-63a4540e-11d7-4b59-b90a-1d562a63bc28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Received event network-changed-0a9e8992-8631-4d4d-b2ce-92b02026387a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:30 np0005465987 nova_compute[230713]: 2025-10-02 12:43:30.774 2 DEBUG nova.compute.manager [req-8588ec47-ff62-4146-a0e9-18e703291d40 req-63a4540e-11d7-4b59-b90a-1d562a63bc28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Refreshing instance network info cache due to event network-changed-0a9e8992-8631-4d4d-b2ce-92b02026387a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:43:30 np0005465987 nova_compute[230713]: 2025-10-02 12:43:30.774 2 DEBUG oslo_concurrency.lockutils [req-8588ec47-ff62-4146-a0e9-18e703291d40 req-63a4540e-11d7-4b59-b90a-1d562a63bc28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0057ecea-247f-44ea-a5d7-c65a1c9fb3cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:31 np0005465987 nova_compute[230713]: 2025-10-02 12:43:31.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:43:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.0 total, 600.0 interval
Cumulative writes: 11K writes, 59K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1582 writes, 7870 keys, 1582 commit groups, 1.0 writes per commit group, ingest: 15.83 MB, 0.03 MB/s
Interval WAL: 1582 writes, 1582 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     53.3      1.36              0.22        36    0.038       0      0       0.0       0.0
  L6      1/0   12.18 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6    106.6     90.4      3.72              0.99        35    0.106    224K    19K       0.0       0.0
 Sum      1/0   12.18 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6     78.1     80.5      5.08              1.22        71    0.072    224K    19K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0     83.6     86.8      0.91              0.23        12    0.076     51K   3093       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    106.6     90.4      3.72              0.99        35    0.106    224K    19K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     53.4      1.36              0.22        35    0.039       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4200.0 total, 600.0 interval
Flush(GB): cumulative 0.071, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.40 GB write, 0.10 MB/s write, 0.39 GB read, 0.09 MB/s read, 5.1 seconds
Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.9 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f9644871f0#2 capacity: 304.00 MB usage: 43.56 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000364 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2532,41.88 MB,13.7765%) FilterBlock(71,634.17 KB,0.20372%) IndexBlock(71,1.06 MB,0.350014%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  2 08:43:31 np0005465987 nova_compute[230713]: 2025-10-02 12:43:31.472 2 DEBUG nova.network.neutron [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:43:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:43:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:31.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:43:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:32.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:32 np0005465987 nova_compute[230713]: 2025-10-02 12:43:32.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:32 np0005465987 nova_compute[230713]: 2025-10-02 12:43:32.782 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.832s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:32 np0005465987 podman[290287]: 2025-10-02 12:43:32.842339065 +0000 UTC m=+0.057749369 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 08:43:32 np0005465987 nova_compute[230713]: 2025-10-02 12:43:32.870 2 DEBUG nova.storage.rbd_utils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] resizing rbd image 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.147 2 DEBUG nova.network.neutron [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Updating instance_info_cache with network_info: [{"id": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "address": "fa:16:3e:b1:9d:3f", "network": {"id": "8a417644-7c63-42cb-ab23-31a1d1f2a049", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-343414812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71902e570e234b5fa6914a7b1657bf63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a9e8992-86", "ovs_interfaceid": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.338 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Releasing lock "refresh_cache-0057ecea-247f-44ea-a5d7-c65a1c9fb3cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.338 2 DEBUG nova.compute.manager [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Instance network_info: |[{"id": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "address": "fa:16:3e:b1:9d:3f", "network": {"id": "8a417644-7c63-42cb-ab23-31a1d1f2a049", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-343414812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71902e570e234b5fa6914a7b1657bf63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a9e8992-86", "ovs_interfaceid": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.339 2 DEBUG oslo_concurrency.lockutils [req-8588ec47-ff62-4146-a0e9-18e703291d40 req-63a4540e-11d7-4b59-b90a-1d562a63bc28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0057ecea-247f-44ea-a5d7-c65a1c9fb3cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.339 2 DEBUG nova.network.neutron [req-8588ec47-ff62-4146-a0e9-18e703291d40 req-63a4540e-11d7-4b59-b90a-1d562a63bc28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Refreshing network info cache for port 0a9e8992-8631-4d4d-b2ce-92b02026387a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:43:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e352 e352: 3 total, 3 up, 3 in
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.719 2 DEBUG nova.objects.instance [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lazy-loading 'migration_context' on Instance uuid 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.815 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.815 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Ensure instance console log exists: /var/lib/nova/instances/0057ecea-247f-44ea-a5d7-c65a1c9fb3cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.816 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.816 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.816 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.819 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Start _get_guest_xml network_info=[{"id": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "address": "fa:16:3e:b1:9d:3f", "network": {"id": "8a417644-7c63-42cb-ab23-31a1d1f2a049", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-343414812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71902e570e234b5fa6914a7b1657bf63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a9e8992-86", "ovs_interfaceid": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.824 2 WARNING nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.830 2 DEBUG nova.virt.libvirt.host [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.831 2 DEBUG nova.virt.libvirt.host [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.834 2 DEBUG nova.virt.libvirt.host [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.835 2 DEBUG nova.virt.libvirt.host [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.836 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.837 2 DEBUG nova.virt.hardware [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.837 2 DEBUG nova.virt.hardware [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.837 2 DEBUG nova.virt.hardware [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.838 2 DEBUG nova.virt.hardware [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.838 2 DEBUG nova.virt.hardware [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.838 2 DEBUG nova.virt.hardware [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.839 2 DEBUG nova.virt.hardware [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.839 2 DEBUG nova.virt.hardware [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.839 2 DEBUG nova.virt.hardware [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.840 2 DEBUG nova.virt.hardware [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.840 2 DEBUG nova.virt.hardware [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:43:33 np0005465987 nova_compute[230713]: 2025-10-02 12:43:33.843 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:33.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:34 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2805326133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:34 np0005465987 nova_compute[230713]: 2025-10-02 12:43:34.318 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:34 np0005465987 nova_compute[230713]: 2025-10-02 12:43:34.360 2 DEBUG nova.storage.rbd_utils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] rbd image 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:34 np0005465987 nova_compute[230713]: 2025-10-02 12:43:34.364 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:34.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:34 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3377471684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:34 np0005465987 nova_compute[230713]: 2025-10-02 12:43:34.841 2 DEBUG nova.network.neutron [req-8588ec47-ff62-4146-a0e9-18e703291d40 req-63a4540e-11d7-4b59-b90a-1d562a63bc28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Updated VIF entry in instance network info cache for port 0a9e8992-8631-4d4d-b2ce-92b02026387a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:43:34 np0005465987 nova_compute[230713]: 2025-10-02 12:43:34.842 2 DEBUG nova.network.neutron [req-8588ec47-ff62-4146-a0e9-18e703291d40 req-63a4540e-11d7-4b59-b90a-1d562a63bc28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Updating instance_info_cache with network_info: [{"id": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "address": "fa:16:3e:b1:9d:3f", "network": {"id": "8a417644-7c63-42cb-ab23-31a1d1f2a049", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-343414812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71902e570e234b5fa6914a7b1657bf63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a9e8992-86", "ovs_interfaceid": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:34 np0005465987 nova_compute[230713]: 2025-10-02 12:43:34.910 2 DEBUG oslo_concurrency.lockutils [req-8588ec47-ff62-4146-a0e9-18e703291d40 req-63a4540e-11d7-4b59-b90a-1d562a63bc28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0057ecea-247f-44ea-a5d7-c65a1c9fb3cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:34 np0005465987 nova_compute[230713]: 2025-10-02 12:43:34.985 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:34 np0005465987 nova_compute[230713]: 2025-10-02 12:43:34.987 2 DEBUG nova.virt.libvirt.vif [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-2092028784',display_name='tempest-ServerAddressesNegativeTestJSON-server-2092028784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-2092028784',id=158,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71902e570e234b5fa6914a7b1657bf63',ramdisk_id='',reservation_id='r-uqqv5y4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1380649246',owner_us
er_name='tempest-ServerAddressesNegativeTestJSON-1380649246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:28Z,user_data=None,user_id='b52b66b6f76243b28f1ac13ea171269c',uuid=0057ecea-247f-44ea-a5d7-c65a1c9fb3cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "address": "fa:16:3e:b1:9d:3f", "network": {"id": "8a417644-7c63-42cb-ab23-31a1d1f2a049", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-343414812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71902e570e234b5fa6914a7b1657bf63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a9e8992-86", "ovs_interfaceid": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:43:34 np0005465987 nova_compute[230713]: 2025-10-02 12:43:34.987 2 DEBUG nova.network.os_vif_util [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Converting VIF {"id": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "address": "fa:16:3e:b1:9d:3f", "network": {"id": "8a417644-7c63-42cb-ab23-31a1d1f2a049", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-343414812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71902e570e234b5fa6914a7b1657bf63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a9e8992-86", "ovs_interfaceid": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:43:34 np0005465987 nova_compute[230713]: 2025-10-02 12:43:34.988 2 DEBUG nova.network.os_vif_util [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:9d:3f,bridge_name='br-int',has_traffic_filtering=True,id=0a9e8992-8631-4d4d-b2ce-92b02026387a,network=Network(8a417644-7c63-42cb-ab23-31a1d1f2a049),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a9e8992-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:43:34 np0005465987 nova_compute[230713]: 2025-10-02 12:43:34.990 2 DEBUG nova.objects.instance [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.039 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  <uuid>0057ecea-247f-44ea-a5d7-c65a1c9fb3cd</uuid>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  <name>instance-0000009e</name>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerAddressesNegativeTestJSON-server-2092028784</nova:name>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:43:33</nova:creationTime>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <nova:user uuid="b52b66b6f76243b28f1ac13ea171269c">tempest-ServerAddressesNegativeTestJSON-1380649246-project-member</nova:user>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <nova:project uuid="71902e570e234b5fa6914a7b1657bf63">tempest-ServerAddressesNegativeTestJSON-1380649246</nova:project>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <nova:port uuid="0a9e8992-8631-4d4d-b2ce-92b02026387a">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <entry name="serial">0057ecea-247f-44ea-a5d7-c65a1c9fb3cd</entry>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <entry name="uuid">0057ecea-247f-44ea-a5d7-c65a1c9fb3cd</entry>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk.config">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:b1:9d:3f"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <target dev="tap0a9e8992-86"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/0057ecea-247f-44ea-a5d7-c65a1c9fb3cd/console.log" append="off"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:43:35 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:43:35 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:43:35 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:43:35 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.041 2 DEBUG nova.compute.manager [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Preparing to wait for external event network-vif-plugged-0a9e8992-8631-4d4d-b2ce-92b02026387a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.042 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Acquiring lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.042 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.042 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.043 2 DEBUG nova.virt.libvirt.vif [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-2092028784',display_name='tempest-ServerAddressesNegativeTestJSON-server-2092028784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-2092028784',id=158,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71902e570e234b5fa6914a7b1657bf63',ramdisk_id='',reservation_id='r-uqqv5y4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1380649246
',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1380649246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:28Z,user_data=None,user_id='b52b66b6f76243b28f1ac13ea171269c',uuid=0057ecea-247f-44ea-a5d7-c65a1c9fb3cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "address": "fa:16:3e:b1:9d:3f", "network": {"id": "8a417644-7c63-42cb-ab23-31a1d1f2a049", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-343414812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71902e570e234b5fa6914a7b1657bf63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a9e8992-86", "ovs_interfaceid": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.043 2 DEBUG nova.network.os_vif_util [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Converting VIF {"id": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "address": "fa:16:3e:b1:9d:3f", "network": {"id": "8a417644-7c63-42cb-ab23-31a1d1f2a049", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-343414812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71902e570e234b5fa6914a7b1657bf63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a9e8992-86", "ovs_interfaceid": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.044 2 DEBUG nova.network.os_vif_util [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:9d:3f,bridge_name='br-int',has_traffic_filtering=True,id=0a9e8992-8631-4d4d-b2ce-92b02026387a,network=Network(8a417644-7c63-42cb-ab23-31a1d1f2a049),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a9e8992-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.044 2 DEBUG os_vif [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:9d:3f,bridge_name='br-int',has_traffic_filtering=True,id=0a9e8992-8631-4d4d-b2ce-92b02026387a,network=Network(8a417644-7c63-42cb-ab23-31a1d1f2a049),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a9e8992-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a9e8992-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.053 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0a9e8992-86, col_values=(('external_ids', {'iface-id': '0a9e8992-8631-4d4d-b2ce-92b02026387a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:9d:3f', 'vm-uuid': '0057ecea-247f-44ea-a5d7-c65a1c9fb3cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:35 np0005465987 NetworkManager[44910]: <info>  [1759409015.0550] manager: (tap0a9e8992-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.061 2 INFO os_vif [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:9d:3f,bridge_name='br-int',has_traffic_filtering=True,id=0a9e8992-8631-4d4d-b2ce-92b02026387a,network=Network(8a417644-7c63-42cb-ab23-31a1d1f2a049),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a9e8992-86')#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.285 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.286 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.286 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] No VIF found with MAC fa:16:3e:b1:9d:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.286 2 INFO nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Using config drive#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.308 2 DEBUG nova.storage.rbd_utils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] rbd image 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.772 2 INFO nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Creating config drive at /var/lib/nova/instances/0057ecea-247f-44ea-a5d7-c65a1c9fb3cd/disk.config#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.781 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0057ecea-247f-44ea-a5d7-c65a1c9fb3cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdhnf2wxe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.915 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0057ecea-247f-44ea-a5d7-c65a1c9fb3cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdhnf2wxe" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:35.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.957 2 DEBUG nova.storage.rbd_utils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] rbd image 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:35 np0005465987 nova_compute[230713]: 2025-10-02 12:43:35.962 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0057ecea-247f-44ea-a5d7-c65a1c9fb3cd/disk.config 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:36 np0005465987 nova_compute[230713]: 2025-10-02 12:43:36.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:36.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:37 np0005465987 nova_compute[230713]: 2025-10-02 12:43:37.466 2 DEBUG oslo_concurrency.processutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0057ecea-247f-44ea-a5d7-c65a1c9fb3cd/disk.config 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:37 np0005465987 nova_compute[230713]: 2025-10-02 12:43:37.466 2 INFO nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Deleting local config drive /var/lib/nova/instances/0057ecea-247f-44ea-a5d7-c65a1c9fb3cd/disk.config because it was imported into RBD.#033[00m
Oct  2 08:43:37 np0005465987 kernel: tap0a9e8992-86: entered promiscuous mode
Oct  2 08:43:37 np0005465987 nova_compute[230713]: 2025-10-02 12:43:37.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:37 np0005465987 NetworkManager[44910]: <info>  [1759409017.5313] manager: (tap0a9e8992-86): new Tun device (/org/freedesktop/NetworkManager/Devices/282)
Oct  2 08:43:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:43:37Z|00607|binding|INFO|Claiming lport 0a9e8992-8631-4d4d-b2ce-92b02026387a for this chassis.
Oct  2 08:43:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:43:37Z|00608|binding|INFO|0a9e8992-8631-4d4d-b2ce-92b02026387a: Claiming fa:16:3e:b1:9d:3f 10.100.0.12
Oct  2 08:43:37 np0005465987 nova_compute[230713]: 2025-10-02 12:43:37.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e353 e353: 3 total, 3 up, 3 in
Oct  2 08:43:37 np0005465987 systemd[1]: Started Virtual Machine qemu-72-instance-0000009e.
Oct  2 08:43:37 np0005465987 systemd-machined[188335]: New machine qemu-72-instance-0000009e.
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.588 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:9d:3f 10.100.0.12'], port_security=['fa:16:3e:b1:9d:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0057ecea-247f-44ea-a5d7-c65a1c9fb3cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a417644-7c63-42cb-ab23-31a1d1f2a049', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71902e570e234b5fa6914a7b1657bf63', 'neutron:revision_number': '2', 'neutron:security_group_ids': '148bd323-f3df-4b83-9d34-5a5be46331de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6d9fd1a2-7395-4427-a134-e15def871506, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0a9e8992-8631-4d4d-b2ce-92b02026387a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.589 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0a9e8992-8631-4d4d-b2ce-92b02026387a in datapath 8a417644-7c63-42cb-ab23-31a1d1f2a049 bound to our chassis#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.591 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a417644-7c63-42cb-ab23-31a1d1f2a049#033[00m
Oct  2 08:43:37 np0005465987 systemd-udevd[290543]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.603 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6d4baf-4480-44b9-bfad-e5e33b973beb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.603 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8a417644-71 in ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.605 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8a417644-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.605 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1e333218-fe3e-4cf3-b0d0-82279b0ad409]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.606 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[98547360-8378-4eb0-9def-dd99f59c79ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 NetworkManager[44910]: <info>  [1759409017.6143] device (tap0a9e8992-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:43:37 np0005465987 NetworkManager[44910]: <info>  [1759409017.6154] device (tap0a9e8992-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.622 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[dbef534e-d8fb-469a-a085-1c8ee9dd7b38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 nova_compute[230713]: 2025-10-02 12:43:37.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:43:37Z|00609|binding|INFO|Setting lport 0a9e8992-8631-4d4d-b2ce-92b02026387a ovn-installed in OVS
Oct  2 08:43:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:43:37Z|00610|binding|INFO|Setting lport 0a9e8992-8631-4d4d-b2ce-92b02026387a up in Southbound
Oct  2 08:43:37 np0005465987 nova_compute[230713]: 2025-10-02 12:43:37.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.636 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8c394135-4e23-46fd-9dd3-2f2e2130040e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 podman[290513]: 2025-10-02 12:43:37.651575058 +0000 UTC m=+0.086697065 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.668 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc22c03-c8ed-4e0d-bd37-054302259108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 NetworkManager[44910]: <info>  [1759409017.6738] manager: (tap8a417644-70): new Veth device (/org/freedesktop/NetworkManager/Devices/283)
Oct  2 08:43:37 np0005465987 systemd-udevd[290558]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:43:37 np0005465987 podman[290512]: 2025-10-02 12:43:37.680432411 +0000 UTC m=+0.115605189 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:43:37 np0005465987 podman[290510]: 2025-10-02 12:43:37.680602305 +0000 UTC m=+0.115689961 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.676 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[db32c0d2-d486-4f59-b438-7fec5eb38766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.713 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[691ee5e0-c380-434c-8f7a-eb900a768a95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.715 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9d79acc5-a822-4c61-8ffa-e15cc20f7d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 NetworkManager[44910]: <info>  [1759409017.7353] device (tap8a417644-70): carrier: link connected
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.740 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[21a648fd-dc9c-46b5-bef6-134ee49b45c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.754 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc0f50a-1be0-4dda-8408-e2230fe52841]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a417644-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a6:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 182], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708631, 'reachable_time': 42840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290604, 'error': None, 'target': 'ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.768 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[801db358-40e4-4319-bee1-28431e222f4d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:a65e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 708631, 'tstamp': 708631}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290605, 'error': None, 'target': 'ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.787 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[38fb9dbe-7afe-44bc-8839-e797bf5fe4e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a417644-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a6:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 182], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708631, 'reachable_time': 42840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290606, 'error': None, 'target': 'ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.814 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[392242bd-c71d-4588-8ec8-3d1a5dcd4bc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.864 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e914125a-9ce8-4b59-b804-ab4424758a96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.866 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a417644-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.866 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.866 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a417644-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:37 np0005465987 NetworkManager[44910]: <info>  [1759409017.8690] manager: (tap8a417644-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Oct  2 08:43:37 np0005465987 kernel: tap8a417644-70: entered promiscuous mode
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.871 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a417644-70, col_values=(('external_ids', {'iface-id': '6432d730-232b-4961-b564-8e43b630f009'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:43:37Z|00611|binding|INFO|Releasing lport 6432d730-232b-4961-b564-8e43b630f009 from this chassis (sb_readonly=0)
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.874 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8a417644-7c63-42cb-ab23-31a1d1f2a049.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8a417644-7c63-42cb-ab23-31a1d1f2a049.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:43:37 np0005465987 nova_compute[230713]: 2025-10-02 12:43:37.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.875 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[923e0c34-d6fe-4339-9281-01d0a309bead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.875 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-8a417644-7c63-42cb-ab23-31a1d1f2a049
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/8a417644-7c63-42cb-ab23-31a1d1f2a049.pid.haproxy
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 8a417644-7c63-42cb-ab23-31a1d1f2a049
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:43:37 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:37.876 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049', 'env', 'PROCESS_TAG=haproxy-8a417644-7c63-42cb-ab23-31a1d1f2a049', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8a417644-7c63-42cb-ab23-31a1d1f2a049.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:43:37 np0005465987 nova_compute[230713]: 2025-10-02 12:43:37.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:37.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.174 2 DEBUG nova.compute.manager [req-7c29631d-1589-42b5-b5e6-eaaf0f65dc70 req-44f9fb9e-d159-4a49-a086-a97afff7745c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Received event network-vif-plugged-0a9e8992-8631-4d4d-b2ce-92b02026387a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.175 2 DEBUG oslo_concurrency.lockutils [req-7c29631d-1589-42b5-b5e6-eaaf0f65dc70 req-44f9fb9e-d159-4a49-a086-a97afff7745c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.176 2 DEBUG oslo_concurrency.lockutils [req-7c29631d-1589-42b5-b5e6-eaaf0f65dc70 req-44f9fb9e-d159-4a49-a086-a97afff7745c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.176 2 DEBUG oslo_concurrency.lockutils [req-7c29631d-1589-42b5-b5e6-eaaf0f65dc70 req-44f9fb9e-d159-4a49-a086-a97afff7745c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.177 2 DEBUG nova.compute.manager [req-7c29631d-1589-42b5-b5e6-eaaf0f65dc70 req-44f9fb9e-d159-4a49-a086-a97afff7745c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Processing event network-vif-plugged-0a9e8992-8631-4d4d-b2ce-92b02026387a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:43:38 np0005465987 podman[290639]: 2025-10-02 12:43:38.189304168 +0000 UTC m=+0.021730547 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:43:38 np0005465987 podman[290639]: 2025-10-02 12:43:38.337242165 +0000 UTC m=+0.169668524 container create 684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:43:38 np0005465987 systemd[1]: Started libpod-conmon-684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94.scope.
Oct  2 08:43:38 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:43:38 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bed9ee0affb5ad1d920eb97ad2e23617fa0b4e2bab530e8fa3bf5924ebd2b486/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:43:38 np0005465987 podman[290639]: 2025-10-02 12:43:38.417357358 +0000 UTC m=+0.249783737 container init 684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 08:43:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:38.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:38 np0005465987 podman[290639]: 2025-10-02 12:43:38.422956052 +0000 UTC m=+0.255382411 container start 684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 08:43:38 np0005465987 neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049[290695]: [NOTICE]   (290700) : New worker (290702) forked
Oct  2 08:43:38 np0005465987 neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049[290695]: [NOTICE]   (290700) : Loading success.
Oct  2 08:43:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e354 e354: 3 total, 3 up, 3 in
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.833 2 DEBUG nova.compute.manager [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.835 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409018.8343914, 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.835 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] VM Started (Lifecycle Event)#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.840 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.843 2 INFO nova.virt.libvirt.driver [-] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Instance spawned successfully.#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.843 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:43:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:38.890 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:38.891 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.921 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.927 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.927 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.928 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.929 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.930 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.930 2 DEBUG nova.virt.libvirt.driver [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:38 np0005465987 nova_compute[230713]: 2025-10-02 12:43:38.936 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:43:39 np0005465987 nova_compute[230713]: 2025-10-02 12:43:39.177 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:43:39 np0005465987 nova_compute[230713]: 2025-10-02 12:43:39.180 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409018.8344953, 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:39 np0005465987 nova_compute[230713]: 2025-10-02 12:43:39.181 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:43:39 np0005465987 nova_compute[230713]: 2025-10-02 12:43:39.301 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:39 np0005465987 nova_compute[230713]: 2025-10-02 12:43:39.307 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409018.8396268, 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:39 np0005465987 nova_compute[230713]: 2025-10-02 12:43:39.307 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:43:39 np0005465987 nova_compute[230713]: 2025-10-02 12:43:39.318 2 INFO nova.compute.manager [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Took 10.92 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:43:39 np0005465987 nova_compute[230713]: 2025-10-02 12:43:39.319 2 DEBUG nova.compute.manager [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:39 np0005465987 nova_compute[230713]: 2025-10-02 12:43:39.340 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:39 np0005465987 nova_compute[230713]: 2025-10-02 12:43:39.345 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:43:39 np0005465987 nova_compute[230713]: 2025-10-02 12:43:39.392 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:43:39 np0005465987 nova_compute[230713]: 2025-10-02 12:43:39.425 2 INFO nova.compute.manager [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Took 12.66 seconds to build instance.#033[00m
Oct  2 08:43:39 np0005465987 nova_compute[230713]: 2025-10-02 12:43:39.547 2 DEBUG oslo_concurrency.lockutils [None req-3f12aa2e-9115-4281-9062-80283f8e5ca6 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:39.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:40 np0005465987 nova_compute[230713]: 2025-10-02 12:43:40.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:40.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:41 np0005465987 nova_compute[230713]: 2025-10-02 12:43:41.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:43:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:41.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:43:42 np0005465987 nova_compute[230713]: 2025-10-02 12:43:42.066 2 DEBUG nova.compute.manager [req-bcab01d0-b8c3-4757-ae7f-8fc59060be49 req-d74ad562-be14-4dfd-8a6a-fd228b2f748f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Received event network-vif-plugged-0a9e8992-8631-4d4d-b2ce-92b02026387a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:42 np0005465987 nova_compute[230713]: 2025-10-02 12:43:42.066 2 DEBUG oslo_concurrency.lockutils [req-bcab01d0-b8c3-4757-ae7f-8fc59060be49 req-d74ad562-be14-4dfd-8a6a-fd228b2f748f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:42 np0005465987 nova_compute[230713]: 2025-10-02 12:43:42.067 2 DEBUG oslo_concurrency.lockutils [req-bcab01d0-b8c3-4757-ae7f-8fc59060be49 req-d74ad562-be14-4dfd-8a6a-fd228b2f748f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:42 np0005465987 nova_compute[230713]: 2025-10-02 12:43:42.067 2 DEBUG oslo_concurrency.lockutils [req-bcab01d0-b8c3-4757-ae7f-8fc59060be49 req-d74ad562-be14-4dfd-8a6a-fd228b2f748f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:42 np0005465987 nova_compute[230713]: 2025-10-02 12:43:42.067 2 DEBUG nova.compute.manager [req-bcab01d0-b8c3-4757-ae7f-8fc59060be49 req-d74ad562-be14-4dfd-8a6a-fd228b2f748f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] No waiting events found dispatching network-vif-plugged-0a9e8992-8631-4d4d-b2ce-92b02026387a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:43:42 np0005465987 nova_compute[230713]: 2025-10-02 12:43:42.067 2 WARNING nova.compute.manager [req-bcab01d0-b8c3-4757-ae7f-8fc59060be49 req-d74ad562-be14-4dfd-8a6a-fd228b2f748f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Received unexpected event network-vif-plugged-0a9e8992-8631-4d4d-b2ce-92b02026387a for instance with vm_state active and task_state None.#033[00m
Oct  2 08:43:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:42.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.647 2 DEBUG oslo_concurrency.lockutils [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Acquiring lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.647 2 DEBUG oslo_concurrency.lockutils [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.648 2 DEBUG oslo_concurrency.lockutils [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Acquiring lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.648 2 DEBUG oslo_concurrency.lockutils [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.648 2 DEBUG oslo_concurrency.lockutils [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.649 2 INFO nova.compute.manager [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Terminating instance#033[00m
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.650 2 DEBUG nova.compute.manager [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:43:43 np0005465987 kernel: tap0a9e8992-86 (unregistering): left promiscuous mode
Oct  2 08:43:43 np0005465987 NetworkManager[44910]: <info>  [1759409023.7173] device (tap0a9e8992-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:43:43Z|00612|binding|INFO|Releasing lport 0a9e8992-8631-4d4d-b2ce-92b02026387a from this chassis (sb_readonly=0)
Oct  2 08:43:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:43:43Z|00613|binding|INFO|Setting lport 0a9e8992-8631-4d4d-b2ce-92b02026387a down in Southbound
Oct  2 08:43:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:43:43Z|00614|binding|INFO|Removing iface tap0a9e8992-86 ovn-installed in OVS
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:43 np0005465987 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009e.scope: Deactivated successfully.
Oct  2 08:43:43 np0005465987 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009e.scope: Consumed 5.834s CPU time.
Oct  2 08:43:43 np0005465987 systemd-machined[188335]: Machine qemu-72-instance-0000009e terminated.
Oct  2 08:43:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:43.810 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:9d:3f 10.100.0.12'], port_security=['fa:16:3e:b1:9d:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0057ecea-247f-44ea-a5d7-c65a1c9fb3cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a417644-7c63-42cb-ab23-31a1d1f2a049', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71902e570e234b5fa6914a7b1657bf63', 'neutron:revision_number': '4', 'neutron:security_group_ids': '148bd323-f3df-4b83-9d34-5a5be46331de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6d9fd1a2-7395-4427-a134-e15def871506, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0a9e8992-8631-4d4d-b2ce-92b02026387a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:43:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:43.812 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0a9e8992-8631-4d4d-b2ce-92b02026387a in datapath 8a417644-7c63-42cb-ab23-31a1d1f2a049 unbound from our chassis#033[00m
Oct  2 08:43:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:43.814 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8a417644-7c63-42cb-ab23-31a1d1f2a049, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:43:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:43.815 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1db215c3-e54e-46a1-93fb-31ec2b254123]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:43.815 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049 namespace which is not needed anymore#033[00m
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.884 2 INFO nova.virt.libvirt.driver [-] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Instance destroyed successfully.#033[00m
Oct  2 08:43:43 np0005465987 nova_compute[230713]: 2025-10-02 12:43:43.886 2 DEBUG nova.objects.instance [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lazy-loading 'resources' on Instance uuid 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:43.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:43 np0005465987 neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049[290695]: [NOTICE]   (290700) : haproxy version is 2.8.14-c23fe91
Oct  2 08:43:43 np0005465987 neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049[290695]: [NOTICE]   (290700) : path to executable is /usr/sbin/haproxy
Oct  2 08:43:43 np0005465987 neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049[290695]: [WARNING]  (290700) : Exiting Master process...
Oct  2 08:43:43 np0005465987 neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049[290695]: [ALERT]    (290700) : Current worker (290702) exited with code 143 (Terminated)
Oct  2 08:43:43 np0005465987 neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049[290695]: [WARNING]  (290700) : All workers exited. Exiting... (0)
Oct  2 08:43:43 np0005465987 systemd[1]: libpod-684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94.scope: Deactivated successfully.
Oct  2 08:43:43 np0005465987 podman[290746]: 2025-10-02 12:43:43.978215892 +0000 UTC m=+0.049634266 container died 684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:43:44 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94-userdata-shm.mount: Deactivated successfully.
Oct  2 08:43:44 np0005465987 systemd[1]: var-lib-containers-storage-overlay-bed9ee0affb5ad1d920eb97ad2e23617fa0b4e2bab530e8fa3bf5924ebd2b486-merged.mount: Deactivated successfully.
Oct  2 08:43:44 np0005465987 nova_compute[230713]: 2025-10-02 12:43:44.052 2 DEBUG nova.virt.libvirt.vif [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-2092028784',display_name='tempest-ServerAddressesNegativeTestJSON-server-2092028784',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-2092028784',id=158,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:43:39Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71902e570e234b5fa6914a7b1657bf63',ramdisk_id='',reservation_id='r-uqqv5y4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1380649246',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1380649246-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:43:39Z,user_data=None,user_id='b52b66b6f76243b28f1ac13ea171269c',uuid=0057ecea-247f-44ea-a5d7-c65a1c9fb3cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "address": "fa:16:3e:b1:9d:3f", "network": {"id": "8a417644-7c63-42cb-ab23-31a1d1f2a049", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-343414812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71902e570e234b5fa6914a7b1657bf63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a9e8992-86", "ovs_interfaceid": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:43:44 np0005465987 nova_compute[230713]: 2025-10-02 12:43:44.052 2 DEBUG nova.network.os_vif_util [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Converting VIF {"id": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "address": "fa:16:3e:b1:9d:3f", "network": {"id": "8a417644-7c63-42cb-ab23-31a1d1f2a049", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-343414812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71902e570e234b5fa6914a7b1657bf63", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a9e8992-86", "ovs_interfaceid": "0a9e8992-8631-4d4d-b2ce-92b02026387a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:43:44 np0005465987 nova_compute[230713]: 2025-10-02 12:43:44.053 2 DEBUG nova.network.os_vif_util [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:9d:3f,bridge_name='br-int',has_traffic_filtering=True,id=0a9e8992-8631-4d4d-b2ce-92b02026387a,network=Network(8a417644-7c63-42cb-ab23-31a1d1f2a049),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a9e8992-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:43:44 np0005465987 nova_compute[230713]: 2025-10-02 12:43:44.053 2 DEBUG os_vif [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:9d:3f,bridge_name='br-int',has_traffic_filtering=True,id=0a9e8992-8631-4d4d-b2ce-92b02026387a,network=Network(8a417644-7c63-42cb-ab23-31a1d1f2a049),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a9e8992-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:43:44 np0005465987 nova_compute[230713]: 2025-10-02 12:43:44.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:44 np0005465987 nova_compute[230713]: 2025-10-02 12:43:44.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a9e8992-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:44 np0005465987 nova_compute[230713]: 2025-10-02 12:43:44.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:44 np0005465987 nova_compute[230713]: 2025-10-02 12:43:44.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:44 np0005465987 nova_compute[230713]: 2025-10-02 12:43:44.059 2 INFO os_vif [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:9d:3f,bridge_name='br-int',has_traffic_filtering=True,id=0a9e8992-8631-4d4d-b2ce-92b02026387a,network=Network(8a417644-7c63-42cb-ab23-31a1d1f2a049),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a9e8992-86')#033[00m
Oct  2 08:43:44 np0005465987 podman[290746]: 2025-10-02 12:43:44.062749435 +0000 UTC m=+0.134167839 container cleanup 684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:43:44 np0005465987 systemd[1]: libpod-conmon-684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94.scope: Deactivated successfully.
Oct  2 08:43:44 np0005465987 podman[290786]: 2025-10-02 12:43:44.186408114 +0000 UTC m=+0.098160429 container remove 684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:43:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:44.191 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e94883-439b-4ad0-8dc5-1b2e74506b7d]: (4, ('Thu Oct  2 12:43:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049 (684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94)\n684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94\nThu Oct  2 12:43:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049 (684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94)\n684899dc7470c143ca7bdc857e76f5077adab30f356819215926e2c2a378cb94\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:44.193 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[89af1364-5f97-4556-b36d-d5de81dd3366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:44.195 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a417644-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:44 np0005465987 kernel: tap8a417644-70: left promiscuous mode
Oct  2 08:43:44 np0005465987 nova_compute[230713]: 2025-10-02 12:43:44.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:44 np0005465987 nova_compute[230713]: 2025-10-02 12:43:44.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:44.214 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0a473150-cf77-4fba-8fec-0629095c935f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:44.239 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2cfdf960-09dc-421a-9383-622772fe5027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:44.240 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0813f696-f134-424a-8307-4f1b8ba1c68c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:44.256 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe3820e-8c6d-4341-9172-3ea42c24afc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708623, 'reachable_time': 41155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290807, 'error': None, 'target': 'ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:44 np0005465987 systemd[1]: run-netns-ovnmeta\x2d8a417644\x2d7c63\x2d42cb\x2dab23\x2d31a1d1f2a049.mount: Deactivated successfully.
Oct  2 08:43:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:44.261 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8a417644-7c63-42cb-ab23-31a1d1f2a049 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:43:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:44.261 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[18aeb5f5-a28c-469b-bf1c-7e519259996e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:44.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:45 np0005465987 nova_compute[230713]: 2025-10-02 12:43:45.415 2 DEBUG nova.compute.manager [req-d4beadfc-a887-4701-a2cc-2f4aa4307f2d req-351726ae-810f-416b-bb64-5af7631ebb91 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Received event network-vif-unplugged-0a9e8992-8631-4d4d-b2ce-92b02026387a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:45 np0005465987 nova_compute[230713]: 2025-10-02 12:43:45.415 2 DEBUG oslo_concurrency.lockutils [req-d4beadfc-a887-4701-a2cc-2f4aa4307f2d req-351726ae-810f-416b-bb64-5af7631ebb91 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:45 np0005465987 nova_compute[230713]: 2025-10-02 12:43:45.416 2 DEBUG oslo_concurrency.lockutils [req-d4beadfc-a887-4701-a2cc-2f4aa4307f2d req-351726ae-810f-416b-bb64-5af7631ebb91 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:45 np0005465987 nova_compute[230713]: 2025-10-02 12:43:45.416 2 DEBUG oslo_concurrency.lockutils [req-d4beadfc-a887-4701-a2cc-2f4aa4307f2d req-351726ae-810f-416b-bb64-5af7631ebb91 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:45 np0005465987 nova_compute[230713]: 2025-10-02 12:43:45.416 2 DEBUG nova.compute.manager [req-d4beadfc-a887-4701-a2cc-2f4aa4307f2d req-351726ae-810f-416b-bb64-5af7631ebb91 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] No waiting events found dispatching network-vif-unplugged-0a9e8992-8631-4d4d-b2ce-92b02026387a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:43:45 np0005465987 nova_compute[230713]: 2025-10-02 12:43:45.416 2 DEBUG nova.compute.manager [req-d4beadfc-a887-4701-a2cc-2f4aa4307f2d req-351726ae-810f-416b-bb64-5af7631ebb91 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Received event network-vif-unplugged-0a9e8992-8631-4d4d-b2ce-92b02026387a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:43:45 np0005465987 nova_compute[230713]: 2025-10-02 12:43:45.443 2 INFO nova.virt.libvirt.driver [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Deleting instance files /var/lib/nova/instances/0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_del#033[00m
Oct  2 08:43:45 np0005465987 nova_compute[230713]: 2025-10-02 12:43:45.444 2 INFO nova.virt.libvirt.driver [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Deletion of /var/lib/nova/instances/0057ecea-247f-44ea-a5d7-c65a1c9fb3cd_del complete#033[00m
Oct  2 08:43:45 np0005465987 nova_compute[230713]: 2025-10-02 12:43:45.744 2 INFO nova.compute.manager [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Took 2.09 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:43:45 np0005465987 nova_compute[230713]: 2025-10-02 12:43:45.745 2 DEBUG oslo.service.loopingcall [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:43:45 np0005465987 nova_compute[230713]: 2025-10-02 12:43:45.745 2 DEBUG nova.compute.manager [-] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:43:45 np0005465987 nova_compute[230713]: 2025-10-02 12:43:45.745 2 DEBUG nova.network.neutron [-] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:43:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:45.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:46 np0005465987 nova_compute[230713]: 2025-10-02 12:43:46.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:46.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:47 np0005465987 nova_compute[230713]: 2025-10-02 12:43:47.583 2 DEBUG nova.compute.manager [req-0a501a96-0156-4a32-a586-9756a2c577a4 req-0196e9e9-1129-4608-ab42-2262848c4181 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Received event network-vif-plugged-0a9e8992-8631-4d4d-b2ce-92b02026387a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:47 np0005465987 nova_compute[230713]: 2025-10-02 12:43:47.583 2 DEBUG oslo_concurrency.lockutils [req-0a501a96-0156-4a32-a586-9756a2c577a4 req-0196e9e9-1129-4608-ab42-2262848c4181 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:47 np0005465987 nova_compute[230713]: 2025-10-02 12:43:47.584 2 DEBUG oslo_concurrency.lockutils [req-0a501a96-0156-4a32-a586-9756a2c577a4 req-0196e9e9-1129-4608-ab42-2262848c4181 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:47 np0005465987 nova_compute[230713]: 2025-10-02 12:43:47.584 2 DEBUG oslo_concurrency.lockutils [req-0a501a96-0156-4a32-a586-9756a2c577a4 req-0196e9e9-1129-4608-ab42-2262848c4181 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:47 np0005465987 nova_compute[230713]: 2025-10-02 12:43:47.584 2 DEBUG nova.compute.manager [req-0a501a96-0156-4a32-a586-9756a2c577a4 req-0196e9e9-1129-4608-ab42-2262848c4181 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] No waiting events found dispatching network-vif-plugged-0a9e8992-8631-4d4d-b2ce-92b02026387a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:43:47 np0005465987 nova_compute[230713]: 2025-10-02 12:43:47.585 2 WARNING nova.compute.manager [req-0a501a96-0156-4a32-a586-9756a2c577a4 req-0196e9e9-1129-4608-ab42-2262848c4181 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Received unexpected event network-vif-plugged-0a9e8992-8631-4d4d-b2ce-92b02026387a for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:43:47 np0005465987 nova_compute[230713]: 2025-10-02 12:43:47.641 2 DEBUG nova.compute.manager [req-729bffd5-2c89-4fa0-83ae-290a83fa8ac1 req-3c6789c1-ad37-428f-96c6-7fe883095025 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Received event network-vif-deleted-0a9e8992-8631-4d4d-b2ce-92b02026387a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:47 np0005465987 nova_compute[230713]: 2025-10-02 12:43:47.641 2 INFO nova.compute.manager [req-729bffd5-2c89-4fa0-83ae-290a83fa8ac1 req-3c6789c1-ad37-428f-96c6-7fe883095025 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Neutron deleted interface 0a9e8992-8631-4d4d-b2ce-92b02026387a; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:43:47 np0005465987 nova_compute[230713]: 2025-10-02 12:43:47.642 2 DEBUG nova.network.neutron [req-729bffd5-2c89-4fa0-83ae-290a83fa8ac1 req-3c6789c1-ad37-428f-96c6-7fe883095025 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:47 np0005465987 nova_compute[230713]: 2025-10-02 12:43:47.643 2 DEBUG nova.network.neutron [-] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:43:47.893 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:47 np0005465987 nova_compute[230713]: 2025-10-02 12:43:47.906 2 DEBUG nova.compute.manager [req-729bffd5-2c89-4fa0-83ae-290a83fa8ac1 req-3c6789c1-ad37-428f-96c6-7fe883095025 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Detach interface failed, port_id=0a9e8992-8631-4d4d-b2ce-92b02026387a, reason: Instance 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:43:47 np0005465987 nova_compute[230713]: 2025-10-02 12:43:47.918 2 INFO nova.compute.manager [-] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Took 2.17 seconds to deallocate network for instance.#033[00m
Oct  2 08:43:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:43:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:47.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:43:48 np0005465987 nova_compute[230713]: 2025-10-02 12:43:48.200 2 DEBUG oslo_concurrency.lockutils [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:48 np0005465987 nova_compute[230713]: 2025-10-02 12:43:48.200 2 DEBUG oslo_concurrency.lockutils [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:48 np0005465987 nova_compute[230713]: 2025-10-02 12:43:48.258 2 DEBUG oslo_concurrency.processutils [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:48.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:43:48 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/62800550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:43:48 np0005465987 nova_compute[230713]: 2025-10-02 12:43:48.763 2 DEBUG oslo_concurrency.processutils [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:48 np0005465987 nova_compute[230713]: 2025-10-02 12:43:48.768 2 DEBUG nova.compute.provider_tree [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:43:48 np0005465987 nova_compute[230713]: 2025-10-02 12:43:48.826 2 DEBUG nova.scheduler.client.report [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:43:48 np0005465987 nova_compute[230713]: 2025-10-02 12:43:48.996 2 DEBUG oslo_concurrency.lockutils [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:49 np0005465987 nova_compute[230713]: 2025-10-02 12:43:49.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:49 np0005465987 nova_compute[230713]: 2025-10-02 12:43:49.087 2 INFO nova.scheduler.client.report [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Deleted allocations for instance 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd#033[00m
Oct  2 08:43:49 np0005465987 nova_compute[230713]: 2025-10-02 12:43:49.276 2 DEBUG oslo_concurrency.lockutils [None req-0b22cd0f-065f-452d-9381-3008ebaba6b7 b52b66b6f76243b28f1ac13ea171269c 71902e570e234b5fa6914a7b1657bf63 - - default default] Lock "0057ecea-247f-44ea-a5d7-c65a1c9fb3cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:49.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:50.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:51 np0005465987 nova_compute[230713]: 2025-10-02 12:43:51.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:51.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:52.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:52 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3505284374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:53.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:54 np0005465987 nova_compute[230713]: 2025-10-02 12:43:54.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:54.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:54 np0005465987 nova_compute[230713]: 2025-10-02 12:43:54.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:43:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3145299276' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:43:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:43:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3145299276' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:43:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:43:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:55.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:43:56 np0005465987 nova_compute[230713]: 2025-10-02 12:43:56.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:56.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:57.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:58.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:58 np0005465987 nova_compute[230713]: 2025-10-02 12:43:58.881 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409023.880261, 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:58 np0005465987 nova_compute[230713]: 2025-10-02 12:43:58.882 2 INFO nova.compute.manager [-] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:43:59 np0005465987 nova_compute[230713]: 2025-10-02 12:43:59.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:59 np0005465987 nova_compute[230713]: 2025-10-02 12:43:59.096 2 DEBUG nova.compute.manager [None req-b3d9aa36-fd0f-4819-a94e-1a9d981b53c9 - - - - - -] [instance: 0057ecea-247f-44ea-a5d7-c65a1c9fb3cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:43:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:59.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:44:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:00.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:44:01 np0005465987 nova_compute[230713]: 2025-10-02 12:44:01.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:01.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:02.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:03 np0005465987 podman[290832]: 2025-10-02 12:44:03.8286094 +0000 UTC m=+0.047142427 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:44:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:03.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:04 np0005465987 nova_compute[230713]: 2025-10-02 12:44:04.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:04.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:04 np0005465987 nova_compute[230713]: 2025-10-02 12:44:04.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:05 np0005465987 nova_compute[230713]: 2025-10-02 12:44:05.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:05 np0005465987 nova_compute[230713]: 2025-10-02 12:44:05.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:44:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:05.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:06 np0005465987 nova_compute[230713]: 2025-10-02 12:44:06.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:44:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:06.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:44:07 np0005465987 podman[290857]: 2025-10-02 12:44:07.84549144 +0000 UTC m=+0.061686497 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 08:44:07 np0005465987 podman[290856]: 2025-10-02 12:44:07.856492742 +0000 UTC m=+0.076198626 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  2 08:44:07 np0005465987 podman[290855]: 2025-10-02 12:44:07.8607698 +0000 UTC m=+0.084431792 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Oct  2 08:44:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:44:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:07.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:44:07 np0005465987 nova_compute[230713]: 2025-10-02 12:44:07.995 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:07 np0005465987 nova_compute[230713]: 2025-10-02 12:44:07.995 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:44:07 np0005465987 nova_compute[230713]: 2025-10-02 12:44:07.995 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:44:08 np0005465987 nova_compute[230713]: 2025-10-02 12:44:08.015 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:44:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:44:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:08.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:44:09 np0005465987 nova_compute[230713]: 2025-10-02 12:44:09.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:09.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:10.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:44:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2528956362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:44:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:44:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2528956362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:44:11 np0005465987 nova_compute[230713]: 2025-10-02 12:44:11.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:44:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:11.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:44:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:12.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:44:12 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3067201084' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:44:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:44:12 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3067201084' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:44:13 np0005465987 nova_compute[230713]: 2025-10-02 12:44:13.979 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:44:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:13.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:44:14 np0005465987 nova_compute[230713]: 2025-10-02 12:44:14.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:14.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:14 np0005465987 nova_compute[230713]: 2025-10-02 12:44:14.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:15 np0005465987 nova_compute[230713]: 2025-10-02 12:44:15.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:15 np0005465987 nova_compute[230713]: 2025-10-02 12:44:15.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:44:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:16.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.072 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.072 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.072 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.073 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.073 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:16.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:44:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3504724591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.508 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.663 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.664 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4391MB free_disk=20.94293975830078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.664 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.665 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.926 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.926 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:44:16 np0005465987 nova_compute[230713]: 2025-10-02 12:44:16.950 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:44:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/764864428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:44:17 np0005465987 nova_compute[230713]: 2025-10-02 12:44:17.383 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:17 np0005465987 nova_compute[230713]: 2025-10-02 12:44:17.391 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:44:17 np0005465987 nova_compute[230713]: 2025-10-02 12:44:17.428 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:44:17 np0005465987 nova_compute[230713]: 2025-10-02 12:44:17.523 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:44:17 np0005465987 nova_compute[230713]: 2025-10-02 12:44:17.523 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:17 np0005465987 nova_compute[230713]: 2025-10-02 12:44:17.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:44:17.769 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:44:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:44:17.769 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:44:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:18.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:44:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:18.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:44:18 np0005465987 nova_compute[230713]: 2025-10-02 12:44:18.523 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:18 np0005465987 nova_compute[230713]: 2025-10-02 12:44:18.524 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:19 np0005465987 nova_compute[230713]: 2025-10-02 12:44:19.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:44:19.771 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e355 e355: 3 total, 3 up, 3 in
Oct  2 08:44:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:20.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:20.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:21 np0005465987 nova_compute[230713]: 2025-10-02 12:44:21.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:22.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:22.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:22 np0005465987 nova_compute[230713]: 2025-10-02 12:44:22.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:24.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:24 np0005465987 nova_compute[230713]: 2025-10-02 12:44:24.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:24 np0005465987 nova_compute[230713]: 2025-10-02 12:44:24.150 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:24 np0005465987 nova_compute[230713]: 2025-10-02 12:44:24.151 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:44:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:24.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:44:25 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3968223177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:44:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:26.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:26 np0005465987 nova_compute[230713]: 2025-10-02 12:44:26.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:26.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:27 np0005465987 nova_compute[230713]: 2025-10-02 12:44:27.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:27 np0005465987 nova_compute[230713]: 2025-10-02 12:44:27.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:44:27.969 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:44:27.970 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:44:27.970 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:44:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:28.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:44:28 np0005465987 nova_compute[230713]: 2025-10-02 12:44:28.246 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:44:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:28.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 e356: 3 total, 3 up, 3 in
Oct  2 08:44:29 np0005465987 nova_compute[230713]: 2025-10-02 12:44:29.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:44:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:30.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:44:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:30.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:30.987359) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409070987390, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1056, "num_deletes": 253, "total_data_size": 2052405, "memory_usage": 2086256, "flush_reason": "Manual Compaction"}
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409070998073, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 1352364, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59726, "largest_seqno": 60776, "table_properties": {"data_size": 1347517, "index_size": 2371, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11219, "raw_average_key_size": 20, "raw_value_size": 1337629, "raw_average_value_size": 2440, "num_data_blocks": 102, "num_entries": 548, "num_filter_entries": 548, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409003, "oldest_key_time": 1759409003, "file_creation_time": 1759409070, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 10815 microseconds, and 3888 cpu microseconds.
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:44:30 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:30.998165) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 1352364 bytes OK
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:30.998195) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:30.999830) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:30.999850) EVENT_LOG_v1 {"time_micros": 1759409070999844, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:30.999874) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2047148, prev total WAL file size 2047148, number of live WAL files 2.
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:31.001232) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(1320KB)], [117(12MB)]
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409071001276, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 14118881, "oldest_snapshot_seqno": -1}
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 8462 keys, 12095804 bytes, temperature: kUnknown
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409071062485, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 12095804, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12039413, "index_size": 34135, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 220280, "raw_average_key_size": 26, "raw_value_size": 11888999, "raw_average_value_size": 1404, "num_data_blocks": 1331, "num_entries": 8462, "num_filter_entries": 8462, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759409070, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:31.062752) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12095804 bytes
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:31.064358) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 230.4 rd, 197.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.2 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(19.4) write-amplify(8.9) OK, records in: 8985, records dropped: 523 output_compression: NoCompression
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:31.064377) EVENT_LOG_v1 {"time_micros": 1759409071064368, "job": 74, "event": "compaction_finished", "compaction_time_micros": 61271, "compaction_time_cpu_micros": 36735, "output_level": 6, "num_output_files": 1, "total_output_size": 12095804, "num_input_records": 8985, "num_output_records": 8462, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409071064732, "job": 74, "event": "table_file_deletion", "file_number": 119}
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409071066943, "job": 74, "event": "table_file_deletion", "file_number": 117}
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:31.001138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:31.067070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:31.067076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:31.067077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:31.067079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:31 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:44:31.067080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:31 np0005465987 nova_compute[230713]: 2025-10-02 12:44:31.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:31 np0005465987 nova_compute[230713]: 2025-10-02 12:44:31.239 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:44:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:32.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:44:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:32.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:34.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:34 np0005465987 nova_compute[230713]: 2025-10-02 12:44:34.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:34.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:34 np0005465987 podman[291095]: 2025-10-02 12:44:34.856864296 +0000 UTC m=+0.070266623 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:44:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:36.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:36 np0005465987 nova_compute[230713]: 2025-10-02 12:44:36.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:44:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:38.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:44:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:44:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:38.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:44:38 np0005465987 podman[291116]: 2025-10-02 12:44:38.838322312 +0000 UTC m=+0.053833452 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:44:38 np0005465987 podman[291117]: 2025-10-02 12:44:38.853609443 +0000 UTC m=+0.066187872 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:44:38 np0005465987 podman[291115]: 2025-10-02 12:44:38.872360587 +0000 UTC m=+0.087207979 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 08:44:39 np0005465987 nova_compute[230713]: 2025-10-02 12:44:39.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:40 np0005465987 nova_compute[230713]: 2025-10-02 12:44:40.029 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:40.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:40.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:40 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct  2 08:44:41 np0005465987 nova_compute[230713]: 2025-10-02 12:44:41.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:42.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:44:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:42.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:44:43 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:44:43 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:44:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:44.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:44 np0005465987 nova_compute[230713]: 2025-10-02 12:44:44.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000055s ======
Oct  2 08:44:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:44.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Oct  2 08:44:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:46.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:46 np0005465987 nova_compute[230713]: 2025-10-02 12:44:46.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:46.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:48.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:44:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:48.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:44:49 np0005465987 nova_compute[230713]: 2025-10-02 12:44:49.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:50.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:50.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:51 np0005465987 nova_compute[230713]: 2025-10-02 12:44:51.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:52.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:52.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:54.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:54 np0005465987 nova_compute[230713]: 2025-10-02 12:44:54.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:54.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:56.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:56 np0005465987 nova_compute[230713]: 2025-10-02 12:44:56.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:56.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:58.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:44:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:58.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:59 np0005465987 nova_compute[230713]: 2025-10-02 12:44:59.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:00.017 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:00.018 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:45:00 np0005465987 nova_compute[230713]: 2025-10-02 12:45:00.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:00.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:00.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:01 np0005465987 nova_compute[230713]: 2025-10-02 12:45:01.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:45:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:02.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:45:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:02.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:04.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:04 np0005465987 nova_compute[230713]: 2025-10-02 12:45:04.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:04.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:05.019 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:05 np0005465987 nova_compute[230713]: 2025-10-02 12:45:05.043 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:05 np0005465987 podman[291227]: 2025-10-02 12:45:05.873640415 +0000 UTC m=+0.077375021 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:45:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:06.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:06 np0005465987 nova_compute[230713]: 2025-10-02 12:45:06.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:06.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:07 np0005465987 nova_compute[230713]: 2025-10-02 12:45:07.118 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "7bd653a3-96de-43be-a073-1384b3d6aae1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:07 np0005465987 nova_compute[230713]: 2025-10-02 12:45:07.118 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:07 np0005465987 nova_compute[230713]: 2025-10-02 12:45:07.134 2 DEBUG nova.compute.manager [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:45:07 np0005465987 nova_compute[230713]: 2025-10-02 12:45:07.204 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:07 np0005465987 nova_compute[230713]: 2025-10-02 12:45:07.204 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:07 np0005465987 nova_compute[230713]: 2025-10-02 12:45:07.228 2 DEBUG nova.virt.hardware [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:45:07 np0005465987 nova_compute[230713]: 2025-10-02 12:45:07.229 2 INFO nova.compute.claims [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:45:07 np0005465987 nova_compute[230713]: 2025-10-02 12:45:07.479 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1111525070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:07 np0005465987 nova_compute[230713]: 2025-10-02 12:45:07.912 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:07 np0005465987 nova_compute[230713]: 2025-10-02 12:45:07.918 2 DEBUG nova.compute.provider_tree [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:45:07 np0005465987 nova_compute[230713]: 2025-10-02 12:45:07.937 2 DEBUG nova.scheduler.client.report [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:45:07 np0005465987 nova_compute[230713]: 2025-10-02 12:45:07.976 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:07 np0005465987 nova_compute[230713]: 2025-10-02 12:45:07.977 2 DEBUG nova.compute.manager [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.040 2 DEBUG nova.compute.manager [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.040 2 DEBUG nova.network.neutron [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.063 2 INFO nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:45:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:08.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.083 2 DEBUG nova.compute.manager [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.168 2 DEBUG nova.compute.manager [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.170 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.170 2 INFO nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Creating image(s)#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.202 2 DEBUG nova.storage.rbd_utils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 7bd653a3-96de-43be-a073-1384b3d6aae1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.232 2 DEBUG nova.storage.rbd_utils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 7bd653a3-96de-43be-a073-1384b3d6aae1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.255 2 DEBUG nova.storage.rbd_utils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 7bd653a3-96de-43be-a073-1384b3d6aae1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.258 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.320 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.321 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.321 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.321 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.346 2 DEBUG nova.storage.rbd_utils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 7bd653a3-96de-43be-a073-1384b3d6aae1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.350 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 7bd653a3-96de-43be-a073-1384b3d6aae1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:08.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.584 2 DEBUG nova.policy [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16730f38111542e58a05fb4deb2b3914', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5ade962c517a483dbfe4bb13386f0006', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.789 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 7bd653a3-96de-43be-a073-1384b3d6aae1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.855 2 DEBUG nova.storage.rbd_utils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] resizing rbd image 7bd653a3-96de-43be-a073-1384b3d6aae1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.936 2 DEBUG nova.objects.instance [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'migration_context' on Instance uuid 7bd653a3-96de-43be-a073-1384b3d6aae1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.965 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.966 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Ensure instance console log exists: /var/lib/nova/instances/7bd653a3-96de-43be-a073-1384b3d6aae1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.966 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.967 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:08 np0005465987 nova_compute[230713]: 2025-10-02 12:45:08.967 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:09 np0005465987 nova_compute[230713]: 2025-10-02 12:45:09.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:09 np0005465987 nova_compute[230713]: 2025-10-02 12:45:09.668 2 DEBUG nova.network.neutron [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Successfully created port: 23a78e40-bfa3-4120-abee-48eb4fec828b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:45:09 np0005465987 podman[291436]: 2025-10-02 12:45:09.833257399 +0000 UTC m=+0.049908003 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:45:09 np0005465987 podman[291435]: 2025-10-02 12:45:09.834565455 +0000 UTC m=+0.053145110 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:45:09 np0005465987 podman[291434]: 2025-10-02 12:45:09.85638035 +0000 UTC m=+0.078339256 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:45:09 np0005465987 nova_compute[230713]: 2025-10-02 12:45:09.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:09 np0005465987 nova_compute[230713]: 2025-10-02 12:45:09.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:45:09 np0005465987 nova_compute[230713]: 2025-10-02 12:45:09.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:45:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:10 np0005465987 nova_compute[230713]: 2025-10-02 12:45:10.000 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:45:10 np0005465987 nova_compute[230713]: 2025-10-02 12:45:10.001 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:45:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:10.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:10 np0005465987 nova_compute[230713]: 2025-10-02 12:45:10.514 2 DEBUG nova.network.neutron [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Successfully updated port: 23a78e40-bfa3-4120-abee-48eb4fec828b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:45:10 np0005465987 nova_compute[230713]: 2025-10-02 12:45:10.528 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:45:10 np0005465987 nova_compute[230713]: 2025-10-02 12:45:10.529 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquired lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:45:10 np0005465987 nova_compute[230713]: 2025-10-02 12:45:10.529 2 DEBUG nova.network.neutron [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:45:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:45:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:10.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:45:10 np0005465987 nova_compute[230713]: 2025-10-02 12:45:10.623 2 DEBUG nova.compute.manager [req-b2bf4a0d-ef95-41c3-a1b6-fbd8db070957 req-03dfdb7f-9cf1-4c0e-bbf5-6445c8b8de4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Received event network-changed-23a78e40-bfa3-4120-abee-48eb4fec828b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:10 np0005465987 nova_compute[230713]: 2025-10-02 12:45:10.624 2 DEBUG nova.compute.manager [req-b2bf4a0d-ef95-41c3-a1b6-fbd8db070957 req-03dfdb7f-9cf1-4c0e-bbf5-6445c8b8de4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Refreshing instance network info cache due to event network-changed-23a78e40-bfa3-4120-abee-48eb4fec828b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:45:10 np0005465987 nova_compute[230713]: 2025-10-02 12:45:10.624 2 DEBUG oslo_concurrency.lockutils [req-b2bf4a0d-ef95-41c3-a1b6-fbd8db070957 req-03dfdb7f-9cf1-4c0e-bbf5-6445c8b8de4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:45:10 np0005465987 nova_compute[230713]: 2025-10-02 12:45:10.699 2 DEBUG nova.network.neutron [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.712 2 DEBUG nova.network.neutron [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Updating instance_info_cache with network_info: [{"id": "23a78e40-bfa3-4120-abee-48eb4fec828b", "address": "fa:16:3e:c6:1a:2b", "network": {"id": "2614e2b3-f8b2-48c2-924a-36fbf2e644a4", "bridge": "br-int", "label": "tempest-network-smoke--361256794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23a78e40-bf", "ovs_interfaceid": "23a78e40-bfa3-4120-abee-48eb4fec828b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.735 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Releasing lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.735 2 DEBUG nova.compute.manager [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Instance network_info: |[{"id": "23a78e40-bfa3-4120-abee-48eb4fec828b", "address": "fa:16:3e:c6:1a:2b", "network": {"id": "2614e2b3-f8b2-48c2-924a-36fbf2e644a4", "bridge": "br-int", "label": "tempest-network-smoke--361256794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23a78e40-bf", "ovs_interfaceid": "23a78e40-bfa3-4120-abee-48eb4fec828b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.735 2 DEBUG oslo_concurrency.lockutils [req-b2bf4a0d-ef95-41c3-a1b6-fbd8db070957 req-03dfdb7f-9cf1-4c0e-bbf5-6445c8b8de4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.736 2 DEBUG nova.network.neutron [req-b2bf4a0d-ef95-41c3-a1b6-fbd8db070957 req-03dfdb7f-9cf1-4c0e-bbf5-6445c8b8de4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Refreshing network info cache for port 23a78e40-bfa3-4120-abee-48eb4fec828b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.738 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Start _get_guest_xml network_info=[{"id": "23a78e40-bfa3-4120-abee-48eb4fec828b", "address": "fa:16:3e:c6:1a:2b", "network": {"id": "2614e2b3-f8b2-48c2-924a-36fbf2e644a4", "bridge": "br-int", "label": "tempest-network-smoke--361256794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23a78e40-bf", "ovs_interfaceid": "23a78e40-bfa3-4120-abee-48eb4fec828b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.742 2 WARNING nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.747 2 DEBUG nova.virt.libvirt.host [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.747 2 DEBUG nova.virt.libvirt.host [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.874 2 DEBUG nova.virt.libvirt.host [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.875 2 DEBUG nova.virt.libvirt.host [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.876 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.876 2 DEBUG nova.virt.hardware [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.876 2 DEBUG nova.virt.hardware [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.876 2 DEBUG nova.virt.hardware [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.877 2 DEBUG nova.virt.hardware [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.877 2 DEBUG nova.virt.hardware [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.877 2 DEBUG nova.virt.hardware [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.877 2 DEBUG nova.virt.hardware [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.878 2 DEBUG nova.virt.hardware [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.878 2 DEBUG nova.virt.hardware [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.878 2 DEBUG nova.virt.hardware [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.878 2 DEBUG nova.virt.hardware [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:45:11 np0005465987 nova_compute[230713]: 2025-10-02 12:45:11.880 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:45:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:12.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:45:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:45:12 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2092898847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.319 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.357 2 DEBUG nova.storage.rbd_utils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 7bd653a3-96de-43be-a073-1384b3d6aae1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.362 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:45:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:12.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:45:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:45:12 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2407577364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.853 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.855 2 DEBUG nova.virt.libvirt.vif [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:45:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-450707529',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-450707529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ac',id=162,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE/ZjcNE6wdFzzzXeuoKTkNX3KO3BqOWz1wYhko1YgNYgKucwC17t7W3cYk44y91IyPGFz/HxMZMdcg++d8EWlGOo4YwOU/6rhgUrqATsjpINMPOK6TF4JadZwKYwGDxAw==',key_name='tempest-TestSecurityGroupsBasicOps-1124539397',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-aleft2y6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:08Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=7bd653a3-96de-43be-a073-1384b3d6aae1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23a78e40-bfa3-4120-abee-48eb4fec828b", "address": "fa:16:3e:c6:1a:2b", "network": {"id": "2614e2b3-f8b2-48c2-924a-36fbf2e644a4", "bridge": "br-int", "label": "tempest-network-smoke--361256794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23a78e40-bf", "ovs_interfaceid": "23a78e40-bfa3-4120-abee-48eb4fec828b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.856 2 DEBUG nova.network.os_vif_util [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "23a78e40-bfa3-4120-abee-48eb4fec828b", "address": "fa:16:3e:c6:1a:2b", "network": {"id": "2614e2b3-f8b2-48c2-924a-36fbf2e644a4", "bridge": "br-int", "label": "tempest-network-smoke--361256794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23a78e40-bf", "ovs_interfaceid": "23a78e40-bfa3-4120-abee-48eb4fec828b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.856 2 DEBUG nova.network.os_vif_util [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:1a:2b,bridge_name='br-int',has_traffic_filtering=True,id=23a78e40-bfa3-4120-abee-48eb4fec828b,network=Network(2614e2b3-f8b2-48c2-924a-36fbf2e644a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23a78e40-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.858 2 DEBUG nova.objects.instance [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7bd653a3-96de-43be-a073-1384b3d6aae1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.881 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  <uuid>7bd653a3-96de-43be-a073-1384b3d6aae1</uuid>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  <name>instance-000000a2</name>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-450707529</nova:name>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:45:11</nova:creationTime>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <nova:user uuid="16730f38111542e58a05fb4deb2b3914">tempest-TestSecurityGroupsBasicOps-1031871880-project-member</nova:user>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <nova:project uuid="5ade962c517a483dbfe4bb13386f0006">tempest-TestSecurityGroupsBasicOps-1031871880</nova:project>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <nova:port uuid="23a78e40-bfa3-4120-abee-48eb4fec828b">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <entry name="serial">7bd653a3-96de-43be-a073-1384b3d6aae1</entry>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <entry name="uuid">7bd653a3-96de-43be-a073-1384b3d6aae1</entry>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/7bd653a3-96de-43be-a073-1384b3d6aae1_disk">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/7bd653a3-96de-43be-a073-1384b3d6aae1_disk.config">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:c6:1a:2b"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <target dev="tap23a78e40-bf"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/7bd653a3-96de-43be-a073-1384b3d6aae1/console.log" append="off"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:45:12 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:45:12 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:45:12 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:45:12 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.883 2 DEBUG nova.compute.manager [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Preparing to wait for external event network-vif-plugged-23a78e40-bfa3-4120-abee-48eb4fec828b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.883 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.884 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.884 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.885 2 DEBUG nova.virt.libvirt.vif [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:45:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-450707529',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-450707529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ac',id=162,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE/ZjcNE6wdFzzzXeuoKTkNX3KO3BqOWz1wYhko1YgNYgKucwC17t7W3cYk44y91IyPGFz/HxMZMdcg++d8EWlGOo4YwOU/6rhgUrqATsjpINMPOK6TF4JadZwKYwGDxAw==',key_name='tempest-TestSecurityGroupsBasicOps-1124539397',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-aleft2y6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:08Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=7bd653a3-96de-43be-a073-1384b3d6aae1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23a78e40-bfa3-4120-abee-48eb4fec828b", "address": "fa:16:3e:c6:1a:2b", "network": {"id": "2614e2b3-f8b2-48c2-924a-36fbf2e644a4", "bridge": "br-int", "label": "tempest-network-smoke--361256794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23a78e40-bf", "ovs_interfaceid": "23a78e40-bfa3-4120-abee-48eb4fec828b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.885 2 DEBUG nova.network.os_vif_util [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "23a78e40-bfa3-4120-abee-48eb4fec828b", "address": "fa:16:3e:c6:1a:2b", "network": {"id": "2614e2b3-f8b2-48c2-924a-36fbf2e644a4", "bridge": "br-int", "label": "tempest-network-smoke--361256794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23a78e40-bf", "ovs_interfaceid": "23a78e40-bfa3-4120-abee-48eb4fec828b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.886 2 DEBUG nova.network.os_vif_util [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:1a:2b,bridge_name='br-int',has_traffic_filtering=True,id=23a78e40-bfa3-4120-abee-48eb4fec828b,network=Network(2614e2b3-f8b2-48c2-924a-36fbf2e644a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23a78e40-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.886 2 DEBUG os_vif [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:1a:2b,bridge_name='br-int',has_traffic_filtering=True,id=23a78e40-bfa3-4120-abee-48eb4fec828b,network=Network(2614e2b3-f8b2-48c2-924a-36fbf2e644a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23a78e40-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23a78e40-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.892 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap23a78e40-bf, col_values=(('external_ids', {'iface-id': '23a78e40-bfa3-4120-abee-48eb4fec828b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:1a:2b', 'vm-uuid': '7bd653a3-96de-43be-a073-1384b3d6aae1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:12 np0005465987 NetworkManager[44910]: <info>  [1759409112.8941] manager: (tap23a78e40-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.902 2 INFO os_vif [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:1a:2b,bridge_name='br-int',has_traffic_filtering=True,id=23a78e40-bfa3-4120-abee-48eb4fec828b,network=Network(2614e2b3-f8b2-48c2-924a-36fbf2e644a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23a78e40-bf')#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.945 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.946 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.946 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No VIF found with MAC fa:16:3e:c6:1a:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.947 2 INFO nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Using config drive#033[00m
Oct  2 08:45:12 np0005465987 nova_compute[230713]: 2025-10-02 12:45:12.969 2 DEBUG nova.storage.rbd_utils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 7bd653a3-96de-43be-a073-1384b3d6aae1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:45:13Z|00615|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct  2 08:45:13 np0005465987 nova_compute[230713]: 2025-10-02 12:45:13.649 2 INFO nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Creating config drive at /var/lib/nova/instances/7bd653a3-96de-43be-a073-1384b3d6aae1/disk.config#033[00m
Oct  2 08:45:13 np0005465987 nova_compute[230713]: 2025-10-02 12:45:13.654 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7bd653a3-96de-43be-a073-1384b3d6aae1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxwkbq4rs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:45:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/670527462' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:45:13 np0005465987 nova_compute[230713]: 2025-10-02 12:45:13.791 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7bd653a3-96de-43be-a073-1384b3d6aae1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxwkbq4rs" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:13 np0005465987 nova_compute[230713]: 2025-10-02 12:45:13.819 2 DEBUG nova.storage.rbd_utils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 7bd653a3-96de-43be-a073-1384b3d6aae1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:13 np0005465987 nova_compute[230713]: 2025-10-02 12:45:13.822 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7bd653a3-96de-43be-a073-1384b3d6aae1/disk.config 7bd653a3-96de-43be-a073-1384b3d6aae1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:13 np0005465987 nova_compute[230713]: 2025-10-02 12:45:13.849 2 DEBUG nova.network.neutron [req-b2bf4a0d-ef95-41c3-a1b6-fbd8db070957 req-03dfdb7f-9cf1-4c0e-bbf5-6445c8b8de4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Updated VIF entry in instance network info cache for port 23a78e40-bfa3-4120-abee-48eb4fec828b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:45:13 np0005465987 nova_compute[230713]: 2025-10-02 12:45:13.850 2 DEBUG nova.network.neutron [req-b2bf4a0d-ef95-41c3-a1b6-fbd8db070957 req-03dfdb7f-9cf1-4c0e-bbf5-6445c8b8de4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Updating instance_info_cache with network_info: [{"id": "23a78e40-bfa3-4120-abee-48eb4fec828b", "address": "fa:16:3e:c6:1a:2b", "network": {"id": "2614e2b3-f8b2-48c2-924a-36fbf2e644a4", "bridge": "br-int", "label": "tempest-network-smoke--361256794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23a78e40-bf", "ovs_interfaceid": "23a78e40-bfa3-4120-abee-48eb4fec828b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:45:13 np0005465987 nova_compute[230713]: 2025-10-02 12:45:13.883 2 DEBUG oslo_concurrency.lockutils [req-b2bf4a0d-ef95-41c3-a1b6-fbd8db070957 req-03dfdb7f-9cf1-4c0e-bbf5-6445c8b8de4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:45:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:45:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:14.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.229 2 DEBUG oslo_concurrency.processutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7bd653a3-96de-43be-a073-1384b3d6aae1/disk.config 7bd653a3-96de-43be-a073-1384b3d6aae1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.229 2 INFO nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Deleting local config drive /var/lib/nova/instances/7bd653a3-96de-43be-a073-1384b3d6aae1/disk.config because it was imported into RBD.#033[00m
Oct  2 08:45:14 np0005465987 kernel: tap23a78e40-bf: entered promiscuous mode
Oct  2 08:45:14 np0005465987 NetworkManager[44910]: <info>  [1759409114.2864] manager: (tap23a78e40-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/286)
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:45:14Z|00616|binding|INFO|Claiming lport 23a78e40-bfa3-4120-abee-48eb4fec828b for this chassis.
Oct  2 08:45:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:45:14Z|00617|binding|INFO|23a78e40-bfa3-4120-abee-48eb4fec828b: Claiming fa:16:3e:c6:1a:2b 10.100.0.10
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.305 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:1a:2b 10.100.0.10'], port_security=['fa:16:3e:c6:1a:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7bd653a3-96de-43be-a073-1384b3d6aae1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2614e2b3-f8b2-48c2-924a-36fbf2e644a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ade962c517a483dbfe4bb13386f0006', 'neutron:revision_number': '2', 'neutron:security_group_ids': '188b9a08-a903-4af6-b546-f7bd08b938fa 51d5e11a-b13a-4915-933f-c2d9aa135276', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9bd2d3c-1353-4216-ae02-9a1a5671ea57, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=23a78e40-bfa3-4120-abee-48eb4fec828b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.306 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 23a78e40-bfa3-4120-abee-48eb4fec828b in datapath 2614e2b3-f8b2-48c2-924a-36fbf2e644a4 bound to our chassis#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.307 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2614e2b3-f8b2-48c2-924a-36fbf2e644a4#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.320 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[971eef6a-edc1-422f-88b1-707fda3ca83a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.322 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2614e2b3-f1 in ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:45:14 np0005465987 systemd-machined[188335]: New machine qemu-73-instance-000000a2.
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.324 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2614e2b3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.324 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6aaeb1a9-ad04-44e6-8703-abdb534bd52f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.326 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[85a47fc8-ede4-479b-a987-e2a6c726c18d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.336 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e386b0-ae72-43e6-963d-715d1b773693]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 systemd[1]: Started Virtual Machine qemu-73-instance-000000a2.
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.361 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f3412d-1c62-4d81-a9ea-d70c381493ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:45:14Z|00618|binding|INFO|Setting lport 23a78e40-bfa3-4120-abee-48eb4fec828b ovn-installed in OVS
Oct  2 08:45:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:45:14Z|00619|binding|INFO|Setting lport 23a78e40-bfa3-4120-abee-48eb4fec828b up in Southbound
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:14 np0005465987 systemd-udevd[291639]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:45:14 np0005465987 NetworkManager[44910]: <info>  [1759409114.3950] device (tap23a78e40-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.394 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6a42d38f-8fce-4bba-bfc9-f40c66e6efc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 NetworkManager[44910]: <info>  [1759409114.3963] device (tap23a78e40-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.402 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[68f90629-377b-42db-ac64-b5675df42027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 NetworkManager[44910]: <info>  [1759409114.4032] manager: (tap2614e2b3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/287)
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.439 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[fdaee80f-272d-42a6-960f-b2f8c341237c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.442 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd3edc7-9414-4902-9e5a-e5f401ada8ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 NetworkManager[44910]: <info>  [1759409114.4675] device (tap2614e2b3-f0): carrier: link connected
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.473 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[372ce9a6-5e75-464e-9732-3071e3eaa207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.500 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[90c82851-d36f-46c6-926d-58bffa4858e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2614e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a3:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718304, 'reachable_time': 32105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291668, 'error': None, 'target': 'ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.516 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[306da6e1-8bc0-4173-8cd8-e6b00edd6380]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:a36a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718304, 'tstamp': 718304}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291669, 'error': None, 'target': 'ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.539 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[540ada40-4578-4713-9066-11cfa761a867]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2614e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a3:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718304, 'reachable_time': 32105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291670, 'error': None, 'target': 'ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:14.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.577 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[332b891e-68a7-4478-8c5b-4930cdd42554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.638 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0c0832-b9d2-4647-98db-37097a2281f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.639 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2614e2b3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.640 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.640 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2614e2b3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:14 np0005465987 NetworkManager[44910]: <info>  [1759409114.6431] manager: (tap2614e2b3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Oct  2 08:45:14 np0005465987 kernel: tap2614e2b3-f0: entered promiscuous mode
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.647 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2614e2b3-f0, col_values=(('external_ids', {'iface-id': '5a8624b9-e774-4b29-b8e7-972fe270a8c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:45:14Z|00620|binding|INFO|Releasing lport 5a8624b9-e774-4b29-b8e7-972fe270a8c7 from this chassis (sb_readonly=0)
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.663 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2614e2b3-f8b2-48c2-924a-36fbf2e644a4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2614e2b3-f8b2-48c2-924a-36fbf2e644a4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.664 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[31e6e5a4-2c65-4649-80b7-13637f2834b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.665 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-2614e2b3-f8b2-48c2-924a-36fbf2e644a4
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/2614e2b3-f8b2-48c2-924a-36fbf2e644a4.pid.haproxy
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 2614e2b3-f8b2-48c2-924a-36fbf2e644a4
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:45:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:14.666 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4', 'env', 'PROCESS_TAG=haproxy-2614e2b3-f8b2-48c2-924a-36fbf2e644a4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2614e2b3-f8b2-48c2-924a-36fbf2e644a4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.787 2 DEBUG nova.compute.manager [req-c565f076-1827-46b8-9ab2-aa43b7727589 req-0910afdd-ef38-4921-873b-d575e09d77e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Received event network-vif-plugged-23a78e40-bfa3-4120-abee-48eb4fec828b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.787 2 DEBUG oslo_concurrency.lockutils [req-c565f076-1827-46b8-9ab2-aa43b7727589 req-0910afdd-ef38-4921-873b-d575e09d77e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.787 2 DEBUG oslo_concurrency.lockutils [req-c565f076-1827-46b8-9ab2-aa43b7727589 req-0910afdd-ef38-4921-873b-d575e09d77e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.788 2 DEBUG oslo_concurrency.lockutils [req-c565f076-1827-46b8-9ab2-aa43b7727589 req-0910afdd-ef38-4921-873b-d575e09d77e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.788 2 DEBUG nova.compute.manager [req-c565f076-1827-46b8-9ab2-aa43b7727589 req-0910afdd-ef38-4921-873b-d575e09d77e7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Processing event network-vif-plugged-23a78e40-bfa3-4120-abee-48eb4fec828b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:14 np0005465987 nova_compute[230713]: 2025-10-02 12:45:14.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:15 np0005465987 podman[291700]: 2025-10-02 12:45:15.081128699 +0000 UTC m=+0.062703747 container create 36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:45:15 np0005465987 systemd[1]: Started libpod-conmon-36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3.scope.
Oct  2 08:45:15 np0005465987 podman[291700]: 2025-10-02 12:45:15.052858278 +0000 UTC m=+0.034433316 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:45:15 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:45:15 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0cd8c1d702e9fd97740cb9b23809470a7685dfe77a1c3a96909d3d0d16f9ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:45:15 np0005465987 podman[291700]: 2025-10-02 12:45:15.183031598 +0000 UTC m=+0.164606646 container init 36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 08:45:15 np0005465987 podman[291700]: 2025-10-02 12:45:15.189590714 +0000 UTC m=+0.171165732 container start 36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:45:15 np0005465987 neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4[291715]: [NOTICE]   (291719) : New worker (291721) forked
Oct  2 08:45:15 np0005465987 neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4[291715]: [NOTICE]   (291719) : Loading success.
Oct  2 08:45:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:45:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:16.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.195 2 DEBUG nova.compute.manager [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.196 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409116.1943395, 7bd653a3-96de-43be-a073-1384b3d6aae1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.196 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] VM Started (Lifecycle Event)#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.199 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.203 2 INFO nova.virt.libvirt.driver [-] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Instance spawned successfully.#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.204 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.228 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.231 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.237 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.238 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.238 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.238 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.239 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.239 2 DEBUG nova.virt.libvirt.driver [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.466 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.466 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409116.1954188, 7bd653a3-96de-43be-a073-1384b3d6aae1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.467 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:45:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:16.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.571 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.575 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409116.1977005, 7bd653a3-96de-43be-a073-1384b3d6aae1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.575 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.744 2 INFO nova.compute.manager [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Took 8.58 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.746 2 DEBUG nova.compute.manager [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.823 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.825 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:16 np0005465987 nova_compute[230713]: 2025-10-02 12:45:16.978 2 INFO nova.compute.manager [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Took 9.80 seconds to build instance.#033[00m
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.003 2 DEBUG nova.compute.manager [req-df0b77d5-38a3-48e8-bfb6-c1b9e5d61d67 req-8a07732e-5d3c-40d0-9d3c-9f7c17acef45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Received event network-vif-plugged-23a78e40-bfa3-4120-abee-48eb4fec828b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.004 2 DEBUG oslo_concurrency.lockutils [req-df0b77d5-38a3-48e8-bfb6-c1b9e5d61d67 req-8a07732e-5d3c-40d0-9d3c-9f7c17acef45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.005 2 DEBUG oslo_concurrency.lockutils [req-df0b77d5-38a3-48e8-bfb6-c1b9e5d61d67 req-8a07732e-5d3c-40d0-9d3c-9f7c17acef45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.005 2 DEBUG oslo_concurrency.lockutils [req-df0b77d5-38a3-48e8-bfb6-c1b9e5d61d67 req-8a07732e-5d3c-40d0-9d3c-9f7c17acef45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.005 2 DEBUG nova.compute.manager [req-df0b77d5-38a3-48e8-bfb6-c1b9e5d61d67 req-8a07732e-5d3c-40d0-9d3c-9f7c17acef45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] No waiting events found dispatching network-vif-plugged-23a78e40-bfa3-4120-abee-48eb4fec828b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.005 2 WARNING nova.compute.manager [req-df0b77d5-38a3-48e8-bfb6-c1b9e5d61d67 req-8a07732e-5d3c-40d0-9d3c-9f7c17acef45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Received unexpected event network-vif-plugged-23a78e40-bfa3-4120-abee-48eb4fec828b for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.116 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.116 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.116 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.117 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.117 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.153 2 DEBUG oslo_concurrency.lockutils [None req-c54213a1-b020-4f16-9f69-1199c226e41b 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2295967782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.567 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:17 np0005465987 nova_compute[230713]: 2025-10-02 12:45:17.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.012 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.013 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:45:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:18.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.153 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.154 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4258MB free_disk=20.911514282226562GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.155 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.155 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.549 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 7bd653a3-96de-43be-a073-1384b3d6aae1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.549 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.549 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:45:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:45:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:18.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.626 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.644 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.645 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.660 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.708 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:45:18 np0005465987 nova_compute[230713]: 2025-10-02 12:45:18.736 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1450786427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:19 np0005465987 nova_compute[230713]: 2025-10-02 12:45:19.151 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:19 np0005465987 nova_compute[230713]: 2025-10-02 12:45:19.156 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:45:19 np0005465987 nova_compute[230713]: 2025-10-02 12:45:19.195 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:45:19 np0005465987 nova_compute[230713]: 2025-10-02 12:45:19.249 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:45:19 np0005465987 nova_compute[230713]: 2025-10-02 12:45:19.249 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:20.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:20 np0005465987 nova_compute[230713]: 2025-10-02 12:45:20.249 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:20 np0005465987 nova_compute[230713]: 2025-10-02 12:45:20.250 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:20.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:21 np0005465987 nova_compute[230713]: 2025-10-02 12:45:21.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:45:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:22.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:45:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:45:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:22.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:45:22 np0005465987 nova_compute[230713]: 2025-10-02 12:45:22.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:23 np0005465987 NetworkManager[44910]: <info>  [1759409123.7749] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Oct  2 08:45:23 np0005465987 nova_compute[230713]: 2025-10-02 12:45:23.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:23 np0005465987 NetworkManager[44910]: <info>  [1759409123.7760] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Oct  2 08:45:24 np0005465987 nova_compute[230713]: 2025-10-02 12:45:24.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:45:24Z|00621|binding|INFO|Releasing lport 5a8624b9-e774-4b29-b8e7-972fe270a8c7 from this chassis (sb_readonly=0)
Oct  2 08:45:24 np0005465987 nova_compute[230713]: 2025-10-02 12:45:24.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:24.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:45:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:24.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:45:24 np0005465987 nova_compute[230713]: 2025-10-02 12:45:24.633 2 DEBUG nova.compute.manager [req-be3d5414-192b-4f85-8717-67a4c13747f9 req-ae61a415-c17e-44b9-a730-277634403500 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Received event network-changed-23a78e40-bfa3-4120-abee-48eb4fec828b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:24 np0005465987 nova_compute[230713]: 2025-10-02 12:45:24.633 2 DEBUG nova.compute.manager [req-be3d5414-192b-4f85-8717-67a4c13747f9 req-ae61a415-c17e-44b9-a730-277634403500 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Refreshing instance network info cache due to event network-changed-23a78e40-bfa3-4120-abee-48eb4fec828b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:45:24 np0005465987 nova_compute[230713]: 2025-10-02 12:45:24.634 2 DEBUG oslo_concurrency.lockutils [req-be3d5414-192b-4f85-8717-67a4c13747f9 req-ae61a415-c17e-44b9-a730-277634403500 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:45:24 np0005465987 nova_compute[230713]: 2025-10-02 12:45:24.634 2 DEBUG oslo_concurrency.lockutils [req-be3d5414-192b-4f85-8717-67a4c13747f9 req-ae61a415-c17e-44b9-a730-277634403500 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:45:24 np0005465987 nova_compute[230713]: 2025-10-02 12:45:24.634 2 DEBUG nova.network.neutron [req-be3d5414-192b-4f85-8717-67a4c13747f9 req-ae61a415-c17e-44b9-a730-277634403500 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Refreshing network info cache for port 23a78e40-bfa3-4120-abee-48eb4fec828b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:45:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:25 np0005465987 nova_compute[230713]: 2025-10-02 12:45:25.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:25 np0005465987 nova_compute[230713]: 2025-10-02 12:45:25.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:45:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:26.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:26 np0005465987 nova_compute[230713]: 2025-10-02 12:45:26.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:45:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:26.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:45:27 np0005465987 nova_compute[230713]: 2025-10-02 12:45:27.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:27.970 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:27.971 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:27.972 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:28.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:45:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:28.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:45:28 np0005465987 nova_compute[230713]: 2025-10-02 12:45:28.603 2 DEBUG nova.network.neutron [req-be3d5414-192b-4f85-8717-67a4c13747f9 req-ae61a415-c17e-44b9-a730-277634403500 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Updated VIF entry in instance network info cache for port 23a78e40-bfa3-4120-abee-48eb4fec828b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:45:28 np0005465987 nova_compute[230713]: 2025-10-02 12:45:28.603 2 DEBUG nova.network.neutron [req-be3d5414-192b-4f85-8717-67a4c13747f9 req-ae61a415-c17e-44b9-a730-277634403500 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Updating instance_info_cache with network_info: [{"id": "23a78e40-bfa3-4120-abee-48eb4fec828b", "address": "fa:16:3e:c6:1a:2b", "network": {"id": "2614e2b3-f8b2-48c2-924a-36fbf2e644a4", "bridge": "br-int", "label": "tempest-network-smoke--361256794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23a78e40-bf", "ovs_interfaceid": "23a78e40-bfa3-4120-abee-48eb4fec828b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:45:28 np0005465987 nova_compute[230713]: 2025-10-02 12:45:28.669 2 DEBUG oslo_concurrency.lockutils [req-be3d5414-192b-4f85-8717-67a4c13747f9 req-ae61a415-c17e-44b9-a730-277634403500 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:45:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:45:29Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:1a:2b 10.100.0.10
Oct  2 08:45:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:45:29Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:1a:2b 10.100.0.10
Oct  2 08:45:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:30.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:30.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:31 np0005465987 nova_compute[230713]: 2025-10-02 12:45:31.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:45:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:32.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:45:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:32.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:32 np0005465987 nova_compute[230713]: 2025-10-02 12:45:32.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:34.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e357 e357: 3 total, 3 up, 3 in
Oct  2 08:45:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:34.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:36.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:36 np0005465987 nova_compute[230713]: 2025-10-02 12:45:36.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:36.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:36 np0005465987 podman[291818]: 2025-10-02 12:45:36.645018534 +0000 UTC m=+0.092879888 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 08:45:37 np0005465987 nova_compute[230713]: 2025-10-02 12:45:37.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:45:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:38.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:45:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:45:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:38.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:45:38 np0005465987 ovn_controller[129172]: 2025-10-02T12:45:38Z|00622|binding|INFO|Releasing lport 5a8624b9-e774-4b29-b8e7-972fe270a8c7 from this chassis (sb_readonly=0)
Oct  2 08:45:38 np0005465987 nova_compute[230713]: 2025-10-02 12:45:38.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:40.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:40.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:40 np0005465987 podman[291839]: 2025-10-02 12:45:40.839111682 +0000 UTC m=+0.056142661 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 08:45:40 np0005465987 podman[291840]: 2025-10-02 12:45:40.857910718 +0000 UTC m=+0.068699059 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:45:40 np0005465987 podman[291838]: 2025-10-02 12:45:40.870865826 +0000 UTC m=+0.093188367 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:45:41 np0005465987 ovn_controller[129172]: 2025-10-02T12:45:41Z|00623|binding|INFO|Releasing lport 5a8624b9-e774-4b29-b8e7-972fe270a8c7 from this chassis (sb_readonly=0)
Oct  2 08:45:41 np0005465987 nova_compute[230713]: 2025-10-02 12:45:41.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:41 np0005465987 nova_compute[230713]: 2025-10-02 12:45:41.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:42.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:42.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:42 np0005465987 nova_compute[230713]: 2025-10-02 12:45:42.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 e358: 3 total, 3 up, 3 in
Oct  2 08:45:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:45:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:44.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:45:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:44.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:45:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:45:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:45:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:45.031 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:45 np0005465987 nova_compute[230713]: 2025-10-02 12:45:45.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:45.032 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:45:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:46.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:46 np0005465987 nova_compute[230713]: 2025-10-02 12:45:46.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:46.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:47 np0005465987 nova_compute[230713]: 2025-10-02 12:45:47.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:48 np0005465987 nova_compute[230713]: 2025-10-02 12:45:48.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:45:48Z|00624|binding|INFO|Releasing lport 5a8624b9-e774-4b29-b8e7-972fe270a8c7 from this chassis (sb_readonly=0)
Oct  2 08:45:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:45:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:48.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:45:48 np0005465987 nova_compute[230713]: 2025-10-02 12:45:48.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:48.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:50.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:50 np0005465987 nova_compute[230713]: 2025-10-02 12:45:50.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:50.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:45:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:45:51 np0005465987 nova_compute[230713]: 2025-10-02 12:45:51.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:52.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:45:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:52.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:45:53 np0005465987 nova_compute[230713]: 2025-10-02 12:45:53.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:45:54.034 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:54.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:54.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:54 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:45:54 np0005465987 nova_compute[230713]: 2025-10-02 12:45:54.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:54 np0005465987 nova_compute[230713]: 2025-10-02 12:45:54.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:45:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:56.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:45:56 np0005465987 nova_compute[230713]: 2025-10-02 12:45:56.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:56.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:58 np0005465987 nova_compute[230713]: 2025-10-02 12:45:58.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:58.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:45:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:45:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:58.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:45:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:00.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:00.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:01 np0005465987 nova_compute[230713]: 2025-10-02 12:46:01.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:02.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:02.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:02 np0005465987 nova_compute[230713]: 2025-10-02 12:46:02.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:03 np0005465987 nova_compute[230713]: 2025-10-02 12:46:03.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005465987 nova_compute[230713]: 2025-10-02 12:46:04.058 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Acquiring lock "5a538012-6544-42f5-8262-da298e929634" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:04 np0005465987 nova_compute[230713]: 2025-10-02 12:46:04.058 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "5a538012-6544-42f5-8262-da298e929634" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:04 np0005465987 nova_compute[230713]: 2025-10-02 12:46:04.098 2 DEBUG nova.compute.manager [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:46:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:04.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:04 np0005465987 nova_compute[230713]: 2025-10-02 12:46:04.252 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:04 np0005465987 nova_compute[230713]: 2025-10-02 12:46:04.253 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:04 np0005465987 nova_compute[230713]: 2025-10-02 12:46:04.263 2 DEBUG nova.virt.hardware [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:46:04 np0005465987 nova_compute[230713]: 2025-10-02 12:46:04.264 2 INFO nova.compute.claims [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:46:04 np0005465987 nova_compute[230713]: 2025-10-02 12:46:04.543 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:46:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:04.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:46:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:46:04 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1004384928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:46:04 np0005465987 nova_compute[230713]: 2025-10-02 12:46:04.979 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:04 np0005465987 nova_compute[230713]: 2025-10-02 12:46:04.986 2 DEBUG nova.compute.provider_tree [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:46:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.024 2 DEBUG nova.scheduler.client.report [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.058 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.059 2 DEBUG nova.compute.manager [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.129 2 DEBUG nova.compute.manager [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.129 2 DEBUG nova.network.neutron [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.156 2 INFO nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.183 2 DEBUG nova.compute.manager [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.319 2 DEBUG nova.compute.manager [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.320 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.320 2 INFO nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Creating image(s)#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.342 2 DEBUG nova.storage.rbd_utils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] rbd image 5a538012-6544-42f5-8262-da298e929634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.363 2 DEBUG nova.storage.rbd_utils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] rbd image 5a538012-6544-42f5-8262-da298e929634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.387 2 DEBUG nova.storage.rbd_utils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] rbd image 5a538012-6544-42f5-8262-da298e929634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.391 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.444 2 DEBUG nova.policy [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e22bfab495d045efbf09d770e1516f29', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9bebc19e62a04c99b7982761086c6bb7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.486 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.487 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.488 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.488 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.513 2 DEBUG nova.storage.rbd_utils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] rbd image 5a538012-6544-42f5-8262-da298e929634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.517 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 5a538012-6544-42f5-8262-da298e929634_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.826 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 5a538012-6544-42f5-8262-da298e929634_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:05 np0005465987 nova_compute[230713]: 2025-10-02 12:46:05.902 2 DEBUG nova.storage.rbd_utils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] resizing rbd image 5a538012-6544-42f5-8262-da298e929634_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:46:06 np0005465987 nova_compute[230713]: 2025-10-02 12:46:06.021 2 DEBUG nova.objects.instance [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lazy-loading 'migration_context' on Instance uuid 5a538012-6544-42f5-8262-da298e929634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:06 np0005465987 nova_compute[230713]: 2025-10-02 12:46:06.054 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:46:06 np0005465987 nova_compute[230713]: 2025-10-02 12:46:06.055 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Ensure instance console log exists: /var/lib/nova/instances/5a538012-6544-42f5-8262-da298e929634/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:46:06 np0005465987 nova_compute[230713]: 2025-10-02 12:46:06.056 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:06 np0005465987 nova_compute[230713]: 2025-10-02 12:46:06.056 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:06 np0005465987 nova_compute[230713]: 2025-10-02 12:46:06.057 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:46:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:06.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:46:06 np0005465987 nova_compute[230713]: 2025-10-02 12:46:06.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:06.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:06 np0005465987 nova_compute[230713]: 2025-10-02 12:46:06.824 2 DEBUG nova.network.neutron [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Successfully created port: f5dc00f3-0982-4cbc-8727-599d5a3d7d49 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:46:06 np0005465987 podman[292273]: 2025-10-02 12:46:06.84168781 +0000 UTC m=+0.059917161 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 08:46:06 np0005465987 nova_compute[230713]: 2025-10-02 12:46:06.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:08 np0005465987 nova_compute[230713]: 2025-10-02 12:46:08.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:08.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:08.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:08 np0005465987 nova_compute[230713]: 2025-10-02 12:46:08.983 2 DEBUG nova.network.neutron [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Successfully updated port: f5dc00f3-0982-4cbc-8727-599d5a3d7d49 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:46:09 np0005465987 nova_compute[230713]: 2025-10-02 12:46:09.014 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Acquiring lock "refresh_cache-5a538012-6544-42f5-8262-da298e929634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:09 np0005465987 nova_compute[230713]: 2025-10-02 12:46:09.014 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Acquired lock "refresh_cache-5a538012-6544-42f5-8262-da298e929634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:09 np0005465987 nova_compute[230713]: 2025-10-02 12:46:09.015 2 DEBUG nova.network.neutron [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:46:09 np0005465987 nova_compute[230713]: 2025-10-02 12:46:09.392 2 DEBUG nova.compute.manager [req-2e8369fb-200e-4f56-bcdc-a7d563aa19f1 req-7aec047e-1fb4-4f15-a069-ba43c9989f63 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Received event network-changed-f5dc00f3-0982-4cbc-8727-599d5a3d7d49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:09 np0005465987 nova_compute[230713]: 2025-10-02 12:46:09.392 2 DEBUG nova.compute.manager [req-2e8369fb-200e-4f56-bcdc-a7d563aa19f1 req-7aec047e-1fb4-4f15-a069-ba43c9989f63 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Refreshing instance network info cache due to event network-changed-f5dc00f3-0982-4cbc-8727-599d5a3d7d49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:46:09 np0005465987 nova_compute[230713]: 2025-10-02 12:46:09.392 2 DEBUG oslo_concurrency.lockutils [req-2e8369fb-200e-4f56-bcdc-a7d563aa19f1 req-7aec047e-1fb4-4f15-a069-ba43c9989f63 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-5a538012-6544-42f5-8262-da298e929634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:09 np0005465987 nova_compute[230713]: 2025-10-02 12:46:09.676 2 DEBUG nova.network.neutron [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:46:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:10.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:10.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:10 np0005465987 nova_compute[230713]: 2025-10-02 12:46:10.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:10 np0005465987 nova_compute[230713]: 2025-10-02 12:46:10.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:46:10 np0005465987 nova_compute[230713]: 2025-10-02 12:46:10.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.001 2 DEBUG nova.network.neutron [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Updating instance_info_cache with network_info: [{"id": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "address": "fa:16:3e:95:48:b9", "network": {"id": "bd444c4e-0c48-4f54-bd04-5fe58ec22b3f", "bridge": "br-int", "label": "tempest-network-smoke--8936800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bebc19e62a04c99b7982761086c6bb7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5dc00f3-09", "ovs_interfaceid": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.080 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 5a538012-6544-42f5-8262-da298e929634] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.218 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Releasing lock "refresh_cache-5a538012-6544-42f5-8262-da298e929634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.219 2 DEBUG nova.compute.manager [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Instance network_info: |[{"id": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "address": "fa:16:3e:95:48:b9", "network": {"id": "bd444c4e-0c48-4f54-bd04-5fe58ec22b3f", "bridge": "br-int", "label": "tempest-network-smoke--8936800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bebc19e62a04c99b7982761086c6bb7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5dc00f3-09", "ovs_interfaceid": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.220 2 DEBUG oslo_concurrency.lockutils [req-2e8369fb-200e-4f56-bcdc-a7d563aa19f1 req-7aec047e-1fb4-4f15-a069-ba43c9989f63 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-5a538012-6544-42f5-8262-da298e929634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.221 2 DEBUG nova.network.neutron [req-2e8369fb-200e-4f56-bcdc-a7d563aa19f1 req-7aec047e-1fb4-4f15-a069-ba43c9989f63 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Refreshing network info cache for port f5dc00f3-0982-4cbc-8727-599d5a3d7d49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.226 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Start _get_guest_xml network_info=[{"id": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "address": "fa:16:3e:95:48:b9", "network": {"id": "bd444c4e-0c48-4f54-bd04-5fe58ec22b3f", "bridge": "br-int", "label": "tempest-network-smoke--8936800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bebc19e62a04c99b7982761086c6bb7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5dc00f3-09", "ovs_interfaceid": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.231 2 WARNING nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.238 2 DEBUG nova.virt.libvirt.host [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.239 2 DEBUG nova.virt.libvirt.host [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.242 2 DEBUG nova.virt.libvirt.host [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.243 2 DEBUG nova.virt.libvirt.host [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.245 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.246 2 DEBUG nova.virt.hardware [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.247 2 DEBUG nova.virt.hardware [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.247 2 DEBUG nova.virt.hardware [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.248 2 DEBUG nova.virt.hardware [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.248 2 DEBUG nova.virt.hardware [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.248 2 DEBUG nova.virt.hardware [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.249 2 DEBUG nova.virt.hardware [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.249 2 DEBUG nova.virt.hardware [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.250 2 DEBUG nova.virt.hardware [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.250 2 DEBUG nova.virt.hardware [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.251 2 DEBUG nova.virt.hardware [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.255 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:46:11 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3989483832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.684 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.716 2 DEBUG nova.storage.rbd_utils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] rbd image 5a538012-6544-42f5-8262-da298e929634_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.721 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.751 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.751 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.751 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:46:11 np0005465987 nova_compute[230713]: 2025-10-02 12:46:11.752 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7bd653a3-96de-43be-a073-1384b3d6aae1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:11 np0005465987 podman[292334]: 2025-10-02 12:46:11.848882708 +0000 UTC m=+0.062688527 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible)
Oct  2 08:46:11 np0005465987 podman[292335]: 2025-10-02 12:46:11.857788417 +0000 UTC m=+0.069522910 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:46:11 np0005465987 podman[292333]: 2025-10-02 12:46:11.868194687 +0000 UTC m=+0.088630484 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:46:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:12.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:46:12 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3642926876' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.238 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.241 2 DEBUG nova.virt.libvirt.vif [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:46:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-867195034-access_point-528169257',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-867195034-access_point-528169257',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-867195034-acc',id=165,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKK59RxXF0zkM+DQKKfqt3lKEBHXJSgzX9X97HRVbxX02BUom6Hybllx+GRgopHp07LFNIQbeDVO3XCkNvHzK6J7znSS+By1rD5nR72V8lGj4HdPOdi+riwvq2Gz2cZ24g==',key_name='tempest-TestSecurityGroupsBasicOps-1478546965',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bebc19e62a04c99b7982761086c6bb7',ramdisk_id='',reservation_id='r-ddtaal0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-867195034',owner_user_name='tempest-TestSecurityGroupsBasicOps-867195034-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:46:05Z,user_data=None,user_id='e22bfab495d045efbf09d770e1516f29',uuid=5a538012-6544-42f5-8262-da298e929634,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "address": "fa:16:3e:95:48:b9", "network": {"id": "bd444c4e-0c48-4f54-bd04-5fe58ec22b3f", "bridge": "br-int", "label": "tempest-network-smoke--8936800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "9bebc19e62a04c99b7982761086c6bb7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5dc00f3-09", "ovs_interfaceid": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.241 2 DEBUG nova.network.os_vif_util [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Converting VIF {"id": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "address": "fa:16:3e:95:48:b9", "network": {"id": "bd444c4e-0c48-4f54-bd04-5fe58ec22b3f", "bridge": "br-int", "label": "tempest-network-smoke--8936800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bebc19e62a04c99b7982761086c6bb7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5dc00f3-09", "ovs_interfaceid": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.243 2 DEBUG nova.network.os_vif_util [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:48:b9,bridge_name='br-int',has_traffic_filtering=True,id=f5dc00f3-0982-4cbc-8727-599d5a3d7d49,network=Network(bd444c4e-0c48-4f54-bd04-5fe58ec22b3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5dc00f3-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.245 2 DEBUG nova.objects.instance [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a538012-6544-42f5-8262-da298e929634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.321 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  <uuid>5a538012-6544-42f5-8262-da298e929634</uuid>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  <name>instance-000000a5</name>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-867195034-access_point-528169257</nova:name>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:46:11</nova:creationTime>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <nova:user uuid="e22bfab495d045efbf09d770e1516f29">tempest-TestSecurityGroupsBasicOps-867195034-project-member</nova:user>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <nova:project uuid="9bebc19e62a04c99b7982761086c6bb7">tempest-TestSecurityGroupsBasicOps-867195034</nova:project>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <nova:port uuid="f5dc00f3-0982-4cbc-8727-599d5a3d7d49">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <entry name="serial">5a538012-6544-42f5-8262-da298e929634</entry>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <entry name="uuid">5a538012-6544-42f5-8262-da298e929634</entry>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/5a538012-6544-42f5-8262-da298e929634_disk">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/5a538012-6544-42f5-8262-da298e929634_disk.config">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:95:48:b9"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <target dev="tapf5dc00f3-09"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/5a538012-6544-42f5-8262-da298e929634/console.log" append="off"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:46:12 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:46:12 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:46:12 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:46:12 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.322 2 DEBUG nova.compute.manager [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Preparing to wait for external event network-vif-plugged-f5dc00f3-0982-4cbc-8727-599d5a3d7d49 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.323 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Acquiring lock "5a538012-6544-42f5-8262-da298e929634-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.323 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "5a538012-6544-42f5-8262-da298e929634-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.323 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "5a538012-6544-42f5-8262-da298e929634-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.324 2 DEBUG nova.virt.libvirt.vif [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:46:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-867195034-access_point-528169257',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-867195034-access_point-528169257',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-867195034-acc',id=165,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKK59RxXF0zkM+DQKKfqt3lKEBHXJSgzX9X97HRVbxX02BUom6Hybllx+GRgopHp07LFNIQbeDVO3XCkNvHzK6J7znSS+By1rD5nR72V8lGj4HdPOdi+riwvq2Gz2cZ24g==',key_name='tempest-TestSecurityGroupsBasicOps-1478546965',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bebc19e62a04c99b7982761086c6bb7',ramdisk_id='',reservation_id='r-ddtaal0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-867195034',owner_user_name='tempest-TestSecurityGroupsBasicOps-867195034-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:46:05Z,user_data=None,user_id='e22bfab495d045efbf09d770e1516f29',uuid=5a538012-6544-42f5-8262-da298e929634,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "address": "fa:16:3e:95:48:b9", "network": {"id": "bd444c4e-0c48-4f54-bd04-5fe58ec22b3f", "bridge": "br-int", "label": "tempest-network-smoke--8936800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "9bebc19e62a04c99b7982761086c6bb7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5dc00f3-09", "ovs_interfaceid": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.324 2 DEBUG nova.network.os_vif_util [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Converting VIF {"id": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "address": "fa:16:3e:95:48:b9", "network": {"id": "bd444c4e-0c48-4f54-bd04-5fe58ec22b3f", "bridge": "br-int", "label": "tempest-network-smoke--8936800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bebc19e62a04c99b7982761086c6bb7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5dc00f3-09", "ovs_interfaceid": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.325 2 DEBUG nova.network.os_vif_util [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:48:b9,bridge_name='br-int',has_traffic_filtering=True,id=f5dc00f3-0982-4cbc-8727-599d5a3d7d49,network=Network(bd444c4e-0c48-4f54-bd04-5fe58ec22b3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5dc00f3-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.325 2 DEBUG os_vif [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:48:b9,bridge_name='br-int',has_traffic_filtering=True,id=f5dc00f3-0982-4cbc-8727-599d5a3d7d49,network=Network(bd444c4e-0c48-4f54-bd04-5fe58ec22b3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5dc00f3-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.327 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.327 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5dc00f3-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf5dc00f3-09, col_values=(('external_ids', {'iface-id': 'f5dc00f3-0982-4cbc-8727-599d5a3d7d49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:48:b9', 'vm-uuid': '5a538012-6544-42f5-8262-da298e929634'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:12 np0005465987 NetworkManager[44910]: <info>  [1759409172.3353] manager: (tapf5dc00f3-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.344 2 INFO os_vif [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:48:b9,bridge_name='br-int',has_traffic_filtering=True,id=f5dc00f3-0982-4cbc-8727-599d5a3d7d49,network=Network(bd444c4e-0c48-4f54-bd04-5fe58ec22b3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5dc00f3-09')#033[00m
Oct  2 08:46:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:12.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.656 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.656 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.656 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] No VIF found with MAC fa:16:3e:95:48:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.657 2 INFO nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Using config drive#033[00m
Oct  2 08:46:12 np0005465987 nova_compute[230713]: 2025-10-02 12:46:12.683 2 DEBUG nova.storage.rbd_utils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] rbd image 5a538012-6544-42f5-8262-da298e929634_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:13 np0005465987 nova_compute[230713]: 2025-10-02 12:46:13.347 2 INFO nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Creating config drive at /var/lib/nova/instances/5a538012-6544-42f5-8262-da298e929634/disk.config#033[00m
Oct  2 08:46:13 np0005465987 nova_compute[230713]: 2025-10-02 12:46:13.355 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a538012-6544-42f5-8262-da298e929634/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsgy69c9n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:13 np0005465987 nova_compute[230713]: 2025-10-02 12:46:13.495 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a538012-6544-42f5-8262-da298e929634/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsgy69c9n" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:13 np0005465987 nova_compute[230713]: 2025-10-02 12:46:13.522 2 DEBUG nova.storage.rbd_utils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] rbd image 5a538012-6544-42f5-8262-da298e929634_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:13 np0005465987 nova_compute[230713]: 2025-10-02 12:46:13.525 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a538012-6544-42f5-8262-da298e929634/disk.config 5a538012-6544-42f5-8262-da298e929634_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:13 np0005465987 nova_compute[230713]: 2025-10-02 12:46:13.738 2 DEBUG oslo_concurrency.processutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a538012-6544-42f5-8262-da298e929634/disk.config 5a538012-6544-42f5-8262-da298e929634_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:13 np0005465987 nova_compute[230713]: 2025-10-02 12:46:13.739 2 INFO nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Deleting local config drive /var/lib/nova/instances/5a538012-6544-42f5-8262-da298e929634/disk.config because it was imported into RBD.#033[00m
Oct  2 08:46:13 np0005465987 kernel: tapf5dc00f3-09: entered promiscuous mode
Oct  2 08:46:13 np0005465987 NetworkManager[44910]: <info>  [1759409173.7885] manager: (tapf5dc00f3-09): new Tun device (/org/freedesktop/NetworkManager/Devices/292)
Oct  2 08:46:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:13Z|00625|binding|INFO|Claiming lport f5dc00f3-0982-4cbc-8727-599d5a3d7d49 for this chassis.
Oct  2 08:46:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:13Z|00626|binding|INFO|f5dc00f3-0982-4cbc-8727-599d5a3d7d49: Claiming fa:16:3e:95:48:b9 10.100.0.5
Oct  2 08:46:13 np0005465987 nova_compute[230713]: 2025-10-02 12:46:13.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:13Z|00627|binding|INFO|Releasing lport 5a8624b9-e774-4b29-b8e7-972fe270a8c7 from this chassis (sb_readonly=0)
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.805 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:48:b9 10.100.0.5'], port_security=['fa:16:3e:95:48:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5a538012-6544-42f5-8262-da298e929634', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bebc19e62a04c99b7982761086c6bb7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07de347b-eb8b-4454-8d79-471b92641858 733fca7b-666c-470e-952b-f63bb0a2c897', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de89a42e-0fe3-4a39-9de6-48b3ef97b42e, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=f5dc00f3-0982-4cbc-8727-599d5a3d7d49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.806 138325 INFO neutron.agent.ovn.metadata.agent [-] Port f5dc00f3-0982-4cbc-8727-599d5a3d7d49 in datapath bd444c4e-0c48-4f54-bd04-5fe58ec22b3f bound to our chassis#033[00m
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.808 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd444c4e-0c48-4f54-bd04-5fe58ec22b3f#033[00m
Oct  2 08:46:13 np0005465987 nova_compute[230713]: 2025-10-02 12:46:13.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:13 np0005465987 nova_compute[230713]: 2025-10-02 12:46:13.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.819 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f475077e-fbb5-4847-896e-993775ba71ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.821 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbd444c4e-01 in ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:46:13 np0005465987 systemd-udevd[292490]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.823 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbd444c4e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.823 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[77a7fec4-daf3-416f-9e45-18fb1e737e98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.826 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e008aae3-34da-4f7f-9101-69450b57f32d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:13 np0005465987 systemd-machined[188335]: New machine qemu-74-instance-000000a5.
Oct  2 08:46:13 np0005465987 NetworkManager[44910]: <info>  [1759409173.8386] device (tapf5dc00f3-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.837 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[1207a848-5a7b-4e08-a815-dba5a7348adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:13 np0005465987 NetworkManager[44910]: <info>  [1759409173.8395] device (tapf5dc00f3-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.851 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3b1d81b6-6c2c-4699-af3e-f113fd3451fc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:13Z|00628|binding|INFO|Setting lport f5dc00f3-0982-4cbc-8727-599d5a3d7d49 ovn-installed in OVS
Oct  2 08:46:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:13Z|00629|binding|INFO|Setting lport f5dc00f3-0982-4cbc-8727-599d5a3d7d49 up in Southbound
Oct  2 08:46:13 np0005465987 nova_compute[230713]: 2025-10-02 12:46:13.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:13 np0005465987 systemd[1]: Started Virtual Machine qemu-74-instance-000000a5.
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.882 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[14282258-21ca-4352-beb8-12a445348d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:13 np0005465987 NetworkManager[44910]: <info>  [1759409173.8903] manager: (tapbd444c4e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/293)
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.889 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3effdc37-1165-4d34-ab5b-9bfb2d920d34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.920 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ec69ab8f-0229-4172-ade0-ee939947eef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.923 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[7a08a114-d4f9-423a-ae12-643fe936dee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:13 np0005465987 NetworkManager[44910]: <info>  [1759409173.9421] device (tapbd444c4e-00): carrier: link connected
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.945 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c46993-1a1f-4fb1-ac0c-3a86a8eb36a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.959 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5c27a8-6f83-41f2-959d-61379e25bdf2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd444c4e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:de:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724251, 'reachable_time': 35232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292523, 'error': None, 'target': 'ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.975 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d10d8534-563e-4be9-a8eb-c27b0c3a760b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:de8b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724251, 'tstamp': 724251}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292524, 'error': None, 'target': 'ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:13.990 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0d20c3a0-b395-4b49-b3df-5d6dc7c7daf9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd444c4e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:de:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724251, 'reachable_time': 35232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292525, 'error': None, 'target': 'ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.020 2 DEBUG nova.network.neutron [req-2e8369fb-200e-4f56-bcdc-a7d563aa19f1 req-7aec047e-1fb4-4f15-a069-ba43c9989f63 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Updated VIF entry in instance network info cache for port f5dc00f3-0982-4cbc-8727-599d5a3d7d49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.020 2 DEBUG nova.network.neutron [req-2e8369fb-200e-4f56-bcdc-a7d563aa19f1 req-7aec047e-1fb4-4f15-a069-ba43c9989f63 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Updating instance_info_cache with network_info: [{"id": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "address": "fa:16:3e:95:48:b9", "network": {"id": "bd444c4e-0c48-4f54-bd04-5fe58ec22b3f", "bridge": "br-int", "label": "tempest-network-smoke--8936800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bebc19e62a04c99b7982761086c6bb7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5dc00f3-09", "ovs_interfaceid": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:14.018 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c897303f-b076-452f-bdf9-44c3f828201c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.061 2 DEBUG oslo_concurrency.lockutils [req-2e8369fb-200e-4f56-bcdc-a7d563aa19f1 req-7aec047e-1fb4-4f15-a069-ba43c9989f63 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-5a538012-6544-42f5-8262-da298e929634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:14.079 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f11dd039-7535-4d09-9f86-be8e9be77690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:14.080 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd444c4e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:14.081 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:14.082 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd444c4e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:14 np0005465987 NetworkManager[44910]: <info>  [1759409174.0848] manager: (tapbd444c4e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Oct  2 08:46:14 np0005465987 kernel: tapbd444c4e-00: entered promiscuous mode
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:14.094 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd444c4e-00, col_values=(('external_ids', {'iface-id': '2f25d291-6f05-489d-98f2-ae807b9d9461'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:14 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:14Z|00630|binding|INFO|Releasing lport 2f25d291-6f05-489d-98f2-ae807b9d9461 from this chassis (sb_readonly=0)
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:14.101 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bd444c4e-0c48-4f54-bd04-5fe58ec22b3f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bd444c4e-0c48-4f54-bd04-5fe58ec22b3f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:14.102 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8d774b-8ca0-483f-83c9-921e12e305ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:14.103 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/bd444c4e-0c48-4f54-bd04-5fe58ec22b3f.pid.haproxy
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID bd444c4e-0c48-4f54-bd04-5fe58ec22b3f
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:46:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:14.106 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f', 'env', 'PROCESS_TAG=haproxy-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bd444c4e-0c48-4f54-bd04-5fe58ec22b3f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:14.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:14 np0005465987 podman[292599]: 2025-10-02 12:46:14.46712017 +0000 UTC m=+0.045654518 container create a8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:46:14 np0005465987 systemd[1]: Started libpod-conmon-a8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca.scope.
Oct  2 08:46:14 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:46:14 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086da188f3e849f59c0656de54f76ffbc334354d5a3486f9aedd92588d1d80bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:46:14 np0005465987 podman[292599]: 2025-10-02 12:46:14.44292142 +0000 UTC m=+0.021455798 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:46:14 np0005465987 podman[292599]: 2025-10-02 12:46:14.545970091 +0000 UTC m=+0.124504479 container init a8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:46:14 np0005465987 podman[292599]: 2025-10-02 12:46:14.551786157 +0000 UTC m=+0.130320515 container start a8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 08:46:14 np0005465987 neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f[292614]: [NOTICE]   (292618) : New worker (292620) forked
Oct  2 08:46:14 np0005465987 neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f[292614]: [NOTICE]   (292618) : Loading success.
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.645 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409174.6450713, 5a538012-6544-42f5-8262-da298e929634 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.646 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 5a538012-6544-42f5-8262-da298e929634] VM Started (Lifecycle Event)#033[00m
Oct  2 08:46:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:14.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.773 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 5a538012-6544-42f5-8262-da298e929634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.777 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409174.6452787, 5a538012-6544-42f5-8262-da298e929634 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.778 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 5a538012-6544-42f5-8262-da298e929634] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.806 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 5a538012-6544-42f5-8262-da298e929634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.809 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 5a538012-6544-42f5-8262-da298e929634] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:46:14 np0005465987 nova_compute[230713]: 2025-10-02 12:46:14.862 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 5a538012-6544-42f5-8262-da298e929634] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:46:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.003 2 DEBUG nova.compute.manager [req-cbc978ef-c1cd-4084-8aba-8ccf2851225a req-e23427b7-0dab-456b-8a01-97e226db11a3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Received event network-vif-plugged-f5dc00f3-0982-4cbc-8727-599d5a3d7d49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.004 2 DEBUG oslo_concurrency.lockutils [req-cbc978ef-c1cd-4084-8aba-8ccf2851225a req-e23427b7-0dab-456b-8a01-97e226db11a3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "5a538012-6544-42f5-8262-da298e929634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.004 2 DEBUG oslo_concurrency.lockutils [req-cbc978ef-c1cd-4084-8aba-8ccf2851225a req-e23427b7-0dab-456b-8a01-97e226db11a3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "5a538012-6544-42f5-8262-da298e929634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.004 2 DEBUG oslo_concurrency.lockutils [req-cbc978ef-c1cd-4084-8aba-8ccf2851225a req-e23427b7-0dab-456b-8a01-97e226db11a3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "5a538012-6544-42f5-8262-da298e929634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.005 2 DEBUG nova.compute.manager [req-cbc978ef-c1cd-4084-8aba-8ccf2851225a req-e23427b7-0dab-456b-8a01-97e226db11a3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Processing event network-vif-plugged-f5dc00f3-0982-4cbc-8727-599d5a3d7d49 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.005 2 DEBUG nova.compute.manager [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.008 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409175.008732, 5a538012-6544-42f5-8262-da298e929634 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.009 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 5a538012-6544-42f5-8262-da298e929634] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.010 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.013 2 INFO nova.virt.libvirt.driver [-] [instance: 5a538012-6544-42f5-8262-da298e929634] Instance spawned successfully.#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.013 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.096 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 5a538012-6544-42f5-8262-da298e929634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.101 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 5a538012-6544-42f5-8262-da298e929634] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.104 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.104 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.105 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.105 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.106 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.106 2 DEBUG nova.virt.libvirt.driver [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.157 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 5a538012-6544-42f5-8262-da298e929634] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.224 2 INFO nova.compute.manager [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Took 9.90 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.224 2 DEBUG nova.compute.manager [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.322 2 INFO nova.compute.manager [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Took 11.11 seconds to build instance.#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.342 2 DEBUG oslo_concurrency.lockutils [None req-1d337fc0-8cac-4d25-a8eb-6f2ea352261a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "5a538012-6544-42f5-8262-da298e929634" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.849 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Updating instance_info_cache with network_info: [{"id": "23a78e40-bfa3-4120-abee-48eb4fec828b", "address": "fa:16:3e:c6:1a:2b", "network": {"id": "2614e2b3-f8b2-48c2-924a-36fbf2e644a4", "bridge": "br-int", "label": "tempest-network-smoke--361256794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23a78e40-bf", "ovs_interfaceid": "23a78e40-bfa3-4120-abee-48eb4fec828b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.868 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:15 np0005465987 nova_compute[230713]: 2025-10-02 12:46:15.869 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:46:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:16.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:16 np0005465987 nova_compute[230713]: 2025-10-02 12:46:16.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:16.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:16 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:16Z|00631|binding|INFO|Releasing lport 2f25d291-6f05-489d-98f2-ae807b9d9461 from this chassis (sb_readonly=0)
Oct  2 08:46:16 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:16Z|00632|binding|INFO|Releasing lport 5a8624b9-e774-4b29-b8e7-972fe270a8c7 from this chassis (sb_readonly=0)
Oct  2 08:46:16 np0005465987 nova_compute[230713]: 2025-10-02 12:46:16.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:16 np0005465987 nova_compute[230713]: 2025-10-02 12:46:16.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:16 np0005465987 nova_compute[230713]: 2025-10-02 12:46:16.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:16 np0005465987 nova_compute[230713]: 2025-10-02 12:46:16.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.009 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.009 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.010 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.010 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.010 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.174 2 DEBUG nova.compute.manager [req-651329e4-d434-4014-855a-b6cc72d695e7 req-6d431625-622d-44dd-b15b-f35219c97b06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Received event network-vif-plugged-f5dc00f3-0982-4cbc-8727-599d5a3d7d49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.175 2 DEBUG oslo_concurrency.lockutils [req-651329e4-d434-4014-855a-b6cc72d695e7 req-6d431625-622d-44dd-b15b-f35219c97b06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "5a538012-6544-42f5-8262-da298e929634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.176 2 DEBUG oslo_concurrency.lockutils [req-651329e4-d434-4014-855a-b6cc72d695e7 req-6d431625-622d-44dd-b15b-f35219c97b06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "5a538012-6544-42f5-8262-da298e929634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.176 2 DEBUG oslo_concurrency.lockutils [req-651329e4-d434-4014-855a-b6cc72d695e7 req-6d431625-622d-44dd-b15b-f35219c97b06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "5a538012-6544-42f5-8262-da298e929634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.177 2 DEBUG nova.compute.manager [req-651329e4-d434-4014-855a-b6cc72d695e7 req-6d431625-622d-44dd-b15b-f35219c97b06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] No waiting events found dispatching network-vif-plugged-f5dc00f3-0982-4cbc-8727-599d5a3d7d49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.177 2 WARNING nova.compute.manager [req-651329e4-d434-4014-855a-b6cc72d695e7 req-6d431625-622d-44dd-b15b-f35219c97b06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Received unexpected event network-vif-plugged-f5dc00f3-0982-4cbc-8727-599d5a3d7d49 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:46:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1529374331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.559 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.721 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.721 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.724 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.725 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.884 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.885 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4038MB free_disk=20.901084899902344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.885 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.886 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.994 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 7bd653a3-96de-43be-a073-1384b3d6aae1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.995 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 5a538012-6544-42f5-8262-da298e929634 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.995 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:46:17 np0005465987 nova_compute[230713]: 2025-10-02 12:46:17.995 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:46:18 np0005465987 nova_compute[230713]: 2025-10-02 12:46:18.089 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:18.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:46:18 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1007076917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:46:18 np0005465987 nova_compute[230713]: 2025-10-02 12:46:18.540 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:18 np0005465987 nova_compute[230713]: 2025-10-02 12:46:18.546 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:46:18 np0005465987 nova_compute[230713]: 2025-10-02 12:46:18.599 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:46:18 np0005465987 nova_compute[230713]: 2025-10-02 12:46:18.640 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:46:18 np0005465987 nova_compute[230713]: 2025-10-02 12:46:18.641 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:46:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:18.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:46:19 np0005465987 nova_compute[230713]: 2025-10-02 12:46:19.641 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:19 np0005465987 nova_compute[230713]: 2025-10-02 12:46:19.643 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:19 np0005465987 nova_compute[230713]: 2025-10-02 12:46:19.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:20.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:20.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:21 np0005465987 nova_compute[230713]: 2025-10-02 12:46:21.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:22.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:22 np0005465987 nova_compute[230713]: 2025-10-02 12:46:22.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:22.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:22 np0005465987 nova_compute[230713]: 2025-10-02 12:46:22.741 2 DEBUG nova.compute.manager [req-511b78d7-1653-486d-9cca-2130dd6473d4 req-a9711be0-e2c9-434c-a53e-c4de90595a95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Received event network-changed-f5dc00f3-0982-4cbc-8727-599d5a3d7d49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:22 np0005465987 nova_compute[230713]: 2025-10-02 12:46:22.742 2 DEBUG nova.compute.manager [req-511b78d7-1653-486d-9cca-2130dd6473d4 req-a9711be0-e2c9-434c-a53e-c4de90595a95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Refreshing instance network info cache due to event network-changed-f5dc00f3-0982-4cbc-8727-599d5a3d7d49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:46:22 np0005465987 nova_compute[230713]: 2025-10-02 12:46:22.742 2 DEBUG oslo_concurrency.lockutils [req-511b78d7-1653-486d-9cca-2130dd6473d4 req-a9711be0-e2c9-434c-a53e-c4de90595a95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-5a538012-6544-42f5-8262-da298e929634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:22 np0005465987 nova_compute[230713]: 2025-10-02 12:46:22.743 2 DEBUG oslo_concurrency.lockutils [req-511b78d7-1653-486d-9cca-2130dd6473d4 req-a9711be0-e2c9-434c-a53e-c4de90595a95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-5a538012-6544-42f5-8262-da298e929634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:22 np0005465987 nova_compute[230713]: 2025-10-02 12:46:22.743 2 DEBUG nova.network.neutron [req-511b78d7-1653-486d-9cca-2130dd6473d4 req-a9711be0-e2c9-434c-a53e-c4de90595a95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Refreshing network info cache for port f5dc00f3-0982-4cbc-8727-599d5a3d7d49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:46:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:24.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:46:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:24.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:46:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:25 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:25Z|00633|binding|INFO|Releasing lport 2f25d291-6f05-489d-98f2-ae807b9d9461 from this chassis (sb_readonly=0)
Oct  2 08:46:25 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:25Z|00634|binding|INFO|Releasing lport 5a8624b9-e774-4b29-b8e7-972fe270a8c7 from this chassis (sb_readonly=0)
Oct  2 08:46:25 np0005465987 nova_compute[230713]: 2025-10-02 12:46:25.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:25 np0005465987 nova_compute[230713]: 2025-10-02 12:46:25.673 2 DEBUG nova.network.neutron [req-511b78d7-1653-486d-9cca-2130dd6473d4 req-a9711be0-e2c9-434c-a53e-c4de90595a95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Updated VIF entry in instance network info cache for port f5dc00f3-0982-4cbc-8727-599d5a3d7d49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:46:25 np0005465987 nova_compute[230713]: 2025-10-02 12:46:25.674 2 DEBUG nova.network.neutron [req-511b78d7-1653-486d-9cca-2130dd6473d4 req-a9711be0-e2c9-434c-a53e-c4de90595a95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Updating instance_info_cache with network_info: [{"id": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "address": "fa:16:3e:95:48:b9", "network": {"id": "bd444c4e-0c48-4f54-bd04-5fe58ec22b3f", "bridge": "br-int", "label": "tempest-network-smoke--8936800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bebc19e62a04c99b7982761086c6bb7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5dc00f3-09", "ovs_interfaceid": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:25 np0005465987 nova_compute[230713]: 2025-10-02 12:46:25.704 2 DEBUG oslo_concurrency.lockutils [req-511b78d7-1653-486d-9cca-2130dd6473d4 req-a9711be0-e2c9-434c-a53e-c4de90595a95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-5a538012-6544-42f5-8262-da298e929634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:26 np0005465987 nova_compute[230713]: 2025-10-02 12:46:26.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:26.125 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:46:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:26.126 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:46:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:26.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:26 np0005465987 nova_compute[230713]: 2025-10-02 12:46:26.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:26.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:27 np0005465987 nova_compute[230713]: 2025-10-02 12:46:27.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:27 np0005465987 nova_compute[230713]: 2025-10-02 12:46:27.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:27 np0005465987 nova_compute[230713]: 2025-10-02 12:46:27.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:46:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:27.971 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:27.971 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:27.972 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:28.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:28.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:29Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:48:b9 10.100.0.5
Oct  2 08:46:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:29Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:48:b9 10.100.0.5
Oct  2 08:46:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:46:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:30.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:46:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:30.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:30 np0005465987 nova_compute[230713]: 2025-10-02 12:46:30.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:31.129 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:31 np0005465987 nova_compute[230713]: 2025-10-02 12:46:31.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:46:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:32.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:46:32 np0005465987 nova_compute[230713]: 2025-10-02 12:46:32.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:46:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:32.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:46:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:34.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:46:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:34.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:46:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:36.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:36 np0005465987 nova_compute[230713]: 2025-10-02 12:46:36.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:36.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:37 np0005465987 nova_compute[230713]: 2025-10-02 12:46:37.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:37 np0005465987 podman[292675]: 2025-10-02 12:46:37.845847619 +0000 UTC m=+0.055323879 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:46:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:38.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:38.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:40.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:40.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:41 np0005465987 nova_compute[230713]: 2025-10-02 12:46:41.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:41 np0005465987 nova_compute[230713]: 2025-10-02 12:46:41.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:42.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:42 np0005465987 nova_compute[230713]: 2025-10-02 12:46:42.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:42.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:42 np0005465987 podman[292696]: 2025-10-02 12:46:42.883761751 +0000 UTC m=+0.080456205 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:46:42 np0005465987 podman[292697]: 2025-10-02 12:46:42.892970828 +0000 UTC m=+0.088888370 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 08:46:42 np0005465987 podman[292695]: 2025-10-02 12:46:42.925599526 +0000 UTC m=+0.129847972 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:46:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:46:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:44.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:46:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:44.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:44 np0005465987 nova_compute[230713]: 2025-10-02 12:46:44.852 2 DEBUG nova.compute.manager [req-13fcd8f8-d466-4d8b-998c-1bfa09b2547a req-34262e3c-275c-4752-afdb-797c2db7fbce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Received event network-changed-f5dc00f3-0982-4cbc-8727-599d5a3d7d49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:44 np0005465987 nova_compute[230713]: 2025-10-02 12:46:44.852 2 DEBUG nova.compute.manager [req-13fcd8f8-d466-4d8b-998c-1bfa09b2547a req-34262e3c-275c-4752-afdb-797c2db7fbce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Refreshing instance network info cache due to event network-changed-f5dc00f3-0982-4cbc-8727-599d5a3d7d49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:46:44 np0005465987 nova_compute[230713]: 2025-10-02 12:46:44.852 2 DEBUG oslo_concurrency.lockutils [req-13fcd8f8-d466-4d8b-998c-1bfa09b2547a req-34262e3c-275c-4752-afdb-797c2db7fbce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-5a538012-6544-42f5-8262-da298e929634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:44 np0005465987 nova_compute[230713]: 2025-10-02 12:46:44.852 2 DEBUG oslo_concurrency.lockutils [req-13fcd8f8-d466-4d8b-998c-1bfa09b2547a req-34262e3c-275c-4752-afdb-797c2db7fbce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-5a538012-6544-42f5-8262-da298e929634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:44 np0005465987 nova_compute[230713]: 2025-10-02 12:46:44.853 2 DEBUG nova.network.neutron [req-13fcd8f8-d466-4d8b-998c-1bfa09b2547a req-34262e3c-275c-4752-afdb-797c2db7fbce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Refreshing network info cache for port f5dc00f3-0982-4cbc-8727-599d5a3d7d49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:46:44 np0005465987 nova_compute[230713]: 2025-10-02 12:46:44.909 2 DEBUG oslo_concurrency.lockutils [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Acquiring lock "5a538012-6544-42f5-8262-da298e929634" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:44 np0005465987 nova_compute[230713]: 2025-10-02 12:46:44.909 2 DEBUG oslo_concurrency.lockutils [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "5a538012-6544-42f5-8262-da298e929634" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:44 np0005465987 nova_compute[230713]: 2025-10-02 12:46:44.909 2 DEBUG oslo_concurrency.lockutils [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Acquiring lock "5a538012-6544-42f5-8262-da298e929634-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:44 np0005465987 nova_compute[230713]: 2025-10-02 12:46:44.909 2 DEBUG oslo_concurrency.lockutils [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "5a538012-6544-42f5-8262-da298e929634-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:44 np0005465987 nova_compute[230713]: 2025-10-02 12:46:44.909 2 DEBUG oslo_concurrency.lockutils [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "5a538012-6544-42f5-8262-da298e929634-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:44 np0005465987 nova_compute[230713]: 2025-10-02 12:46:44.910 2 INFO nova.compute.manager [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Terminating instance#033[00m
Oct  2 08:46:44 np0005465987 nova_compute[230713]: 2025-10-02 12:46:44.911 2 DEBUG nova.compute.manager [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:46:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:45 np0005465987 kernel: tapf5dc00f3-09 (unregistering): left promiscuous mode
Oct  2 08:46:45 np0005465987 NetworkManager[44910]: <info>  [1759409205.0358] device (tapf5dc00f3-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:46:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:45Z|00635|binding|INFO|Releasing lport f5dc00f3-0982-4cbc-8727-599d5a3d7d49 from this chassis (sb_readonly=0)
Oct  2 08:46:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:45Z|00636|binding|INFO|Setting lport f5dc00f3-0982-4cbc-8727-599d5a3d7d49 down in Southbound
Oct  2 08:46:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:45Z|00637|binding|INFO|Removing iface tapf5dc00f3-09 ovn-installed in OVS
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.051 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:48:b9 10.100.0.5'], port_security=['fa:16:3e:95:48:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5a538012-6544-42f5-8262-da298e929634', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bebc19e62a04c99b7982761086c6bb7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07de347b-eb8b-4454-8d79-471b92641858 733fca7b-666c-470e-952b-f63bb0a2c897', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de89a42e-0fe3-4a39-9de6-48b3ef97b42e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=f5dc00f3-0982-4cbc-8727-599d5a3d7d49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.053 138325 INFO neutron.agent.ovn.metadata.agent [-] Port f5dc00f3-0982-4cbc-8727-599d5a3d7d49 in datapath bd444c4e-0c48-4f54-bd04-5fe58ec22b3f unbound from our chassis#033[00m
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.054 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd444c4e-0c48-4f54-bd04-5fe58ec22b3f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.055 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[28531b0a-ab7a-4ae9-9815-789331d1911f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.055 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f namespace which is not needed anymore#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:45 np0005465987 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Oct  2 08:46:45 np0005465987 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a5.scope: Consumed 14.402s CPU time.
Oct  2 08:46:45 np0005465987 systemd-machined[188335]: Machine qemu-74-instance-000000a5 terminated.
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.148 2 INFO nova.virt.libvirt.driver [-] [instance: 5a538012-6544-42f5-8262-da298e929634] Instance destroyed successfully.#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.149 2 DEBUG nova.objects.instance [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lazy-loading 'resources' on Instance uuid 5a538012-6544-42f5-8262-da298e929634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:45 np0005465987 neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f[292614]: [NOTICE]   (292618) : haproxy version is 2.8.14-c23fe91
Oct  2 08:46:45 np0005465987 neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f[292614]: [NOTICE]   (292618) : path to executable is /usr/sbin/haproxy
Oct  2 08:46:45 np0005465987 neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f[292614]: [WARNING]  (292618) : Exiting Master process...
Oct  2 08:46:45 np0005465987 neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f[292614]: [WARNING]  (292618) : Exiting Master process...
Oct  2 08:46:45 np0005465987 neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f[292614]: [ALERT]    (292618) : Current worker (292620) exited with code 143 (Terminated)
Oct  2 08:46:45 np0005465987 neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f[292614]: [WARNING]  (292618) : All workers exited. Exiting... (0)
Oct  2 08:46:45 np0005465987 systemd[1]: libpod-a8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca.scope: Deactivated successfully.
Oct  2 08:46:45 np0005465987 podman[292782]: 2025-10-02 12:46:45.186270704 +0000 UTC m=+0.052960965 container died a8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 08:46:45 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca-userdata-shm.mount: Deactivated successfully.
Oct  2 08:46:45 np0005465987 systemd[1]: var-lib-containers-storage-overlay-086da188f3e849f59c0656de54f76ffbc334354d5a3486f9aedd92588d1d80bf-merged.mount: Deactivated successfully.
Oct  2 08:46:45 np0005465987 podman[292782]: 2025-10-02 12:46:45.221770259 +0000 UTC m=+0.088460520 container cleanup a8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:46:45 np0005465987 systemd[1]: libpod-conmon-a8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca.scope: Deactivated successfully.
Oct  2 08:46:45 np0005465987 podman[292823]: 2025-10-02 12:46:45.284916186 +0000 UTC m=+0.041698182 container remove a8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.290 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fa843588-8bd3-409e-9757-fc4ea899eece]: (4, ('Thu Oct  2 12:46:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f (a8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca)\na8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca\nThu Oct  2 12:46:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f (a8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca)\na8ec01f6508b0181e103ac61328285eacfcc4b75c0e8bba290e4fbf8eaa2d7ca\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.292 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0de71d-fd48-498a-b111-5478d19c171c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.293 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd444c4e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:45 np0005465987 kernel: tapbd444c4e-00: left promiscuous mode
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.312 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[59ceabeb-4f6e-4bcc-be62-e0d4e250882a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.338 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b3b081-0b74-4438-b89f-4698c7a51840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.339 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dcdb5974-679e-44e9-8631-62bb9175c5e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.356 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a7cfc986-3db5-4dc0-b4d3-e2c47e9d0ce3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724245, 'reachable_time': 35832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292841, 'error': None, 'target': 'ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.359 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bd444c4e-0c48-4f54-bd04-5fe58ec22b3f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:46:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:45.359 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[59ef0229-2d7e-40ac-b1b1-ce311553cf90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:45 np0005465987 systemd[1]: run-netns-ovnmeta\x2dbd444c4e\x2d0c48\x2d4f54\x2dbd04\x2d5fe58ec22b3f.mount: Deactivated successfully.
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.367 2 DEBUG nova.virt.libvirt.vif [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:46:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-867195034-access_point-528169257',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-867195034-access_point-528169257',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-867195034-acc',id=165,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKK59RxXF0zkM+DQKKfqt3lKEBHXJSgzX9X97HRVbxX02BUom6Hybllx+GRgopHp07LFNIQbeDVO3XCkNvHzK6J7znSS+By1rD5nR72V8lGj4HdPOdi+riwvq2Gz2cZ24g==',key_name='tempest-TestSecurityGroupsBasicOps-1478546965',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:46:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bebc19e62a04c99b7982761086c6bb7',ramdisk_id='',reservation_id='r-ddtaal0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-867195034',owner_user_name='tempest-TestSecurityGroupsBasicOps-867195034-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:46:15Z,user_data=None,user_id='e22bfab495d045efbf09d770e1516f29',uuid=5a538012-6544-42f5-8262-da298e929634,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "address": "fa:16:3e:95:48:b9", "network": {"id": "bd444c4e-0c48-4f54-bd04-5fe58ec22b3f", "bridge": "br-int", "label": "tempest-network-smoke--8936800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bebc19e62a04c99b7982761086c6bb7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5dc00f3-09", "ovs_interfaceid": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.367 2 DEBUG nova.network.os_vif_util [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Converting VIF {"id": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "address": "fa:16:3e:95:48:b9", "network": {"id": "bd444c4e-0c48-4f54-bd04-5fe58ec22b3f", "bridge": "br-int", "label": "tempest-network-smoke--8936800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bebc19e62a04c99b7982761086c6bb7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5dc00f3-09", "ovs_interfaceid": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.369 2 DEBUG nova.network.os_vif_util [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:95:48:b9,bridge_name='br-int',has_traffic_filtering=True,id=f5dc00f3-0982-4cbc-8727-599d5a3d7d49,network=Network(bd444c4e-0c48-4f54-bd04-5fe58ec22b3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5dc00f3-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.369 2 DEBUG os_vif [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:48:b9,bridge_name='br-int',has_traffic_filtering=True,id=f5dc00f3-0982-4cbc-8727-599d5a3d7d49,network=Network(bd444c4e-0c48-4f54-bd04-5fe58ec22b3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5dc00f3-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5dc00f3-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.381 2 INFO os_vif [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:48:b9,bridge_name='br-int',has_traffic_filtering=True,id=f5dc00f3-0982-4cbc-8727-599d5a3d7d49,network=Network(bd444c4e-0c48-4f54-bd04-5fe58ec22b3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5dc00f3-09')#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.601 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.601 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.625 2 DEBUG nova.compute.manager [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.726 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.727 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.740 2 DEBUG nova.virt.hardware [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.740 2 INFO nova.compute.claims [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:46:45 np0005465987 nova_compute[230713]: 2025-10-02 12:46:45.977 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:46.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:46:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/845394669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.395 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.404 2 INFO nova.virt.libvirt.driver [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Deleting instance files /var/lib/nova/instances/5a538012-6544-42f5-8262-da298e929634_del#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.404 2 INFO nova.virt.libvirt.driver [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Deletion of /var/lib/nova/instances/5a538012-6544-42f5-8262-da298e929634_del complete#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.408 2 DEBUG nova.compute.provider_tree [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.452 2 DEBUG nova.scheduler.client.report [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.463 2 INFO nova.compute.manager [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Took 1.55 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.464 2 DEBUG oslo.service.loopingcall [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.464 2 DEBUG nova.compute.manager [-] [instance: 5a538012-6544-42f5-8262-da298e929634] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.464 2 DEBUG nova.network.neutron [-] [instance: 5a538012-6544-42f5-8262-da298e929634] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.482 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.482 2 DEBUG nova.compute.manager [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.531 2 DEBUG nova.compute.manager [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.532 2 DEBUG nova.network.neutron [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.562 2 INFO nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.591 2 DEBUG nova.compute.manager [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.686 2 DEBUG nova.compute.manager [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.687 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.687 2 INFO nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Creating image(s)#033[00m
Oct  2 08:46:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:46.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.712 2 DEBUG nova.storage.rbd_utils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.739 2 DEBUG nova.storage.rbd_utils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.763 2 DEBUG nova.storage.rbd_utils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.767 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.799 2 DEBUG nova.policy [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9c1a967b21e4d05a1e9cb54949a7527', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '508fca18f76a46cba8f3b8b8d8169ef1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.836 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.837 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.838 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.838 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.909 2 DEBUG nova.storage.rbd_utils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.913 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:46 np0005465987 nova_compute[230713]: 2025-10-02 12:46:46.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.108 2 DEBUG nova.network.neutron [req-13fcd8f8-d466-4d8b-998c-1bfa09b2547a req-34262e3c-275c-4752-afdb-797c2db7fbce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Updated VIF entry in instance network info cache for port f5dc00f3-0982-4cbc-8727-599d5a3d7d49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.109 2 DEBUG nova.network.neutron [req-13fcd8f8-d466-4d8b-998c-1bfa09b2547a req-34262e3c-275c-4752-afdb-797c2db7fbce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Updating instance_info_cache with network_info: [{"id": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "address": "fa:16:3e:95:48:b9", "network": {"id": "bd444c4e-0c48-4f54-bd04-5fe58ec22b3f", "bridge": "br-int", "label": "tempest-network-smoke--8936800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bebc19e62a04c99b7982761086c6bb7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5dc00f3-09", "ovs_interfaceid": "f5dc00f3-0982-4cbc-8727-599d5a3d7d49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.146 2 DEBUG oslo_concurrency.lockutils [req-13fcd8f8-d466-4d8b-998c-1bfa09b2547a req-34262e3c-275c-4752-afdb-797c2db7fbce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-5a538012-6544-42f5-8262-da298e929634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.594 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.668 2 DEBUG nova.storage.rbd_utils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] resizing rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.848 2 DEBUG nova.network.neutron [-] [instance: 5a538012-6544-42f5-8262-da298e929634] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.857 2 DEBUG nova.objects.instance [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lazy-loading 'migration_context' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.900 2 INFO nova.compute.manager [-] [instance: 5a538012-6544-42f5-8262-da298e929634] Took 1.44 seconds to deallocate network for instance.#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.911 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.912 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Ensure instance console log exists: /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.912 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.912 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.913 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.941 2 DEBUG nova.compute.manager [req-adc33f66-e633-4525-a23f-7056d1c43b97 req-b1f84806-59e2-45d2-bfea-7e8acde53139 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 5a538012-6544-42f5-8262-da298e929634] Received event network-vif-deleted-f5dc00f3-0982-4cbc-8727-599d5a3d7d49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.979 2 DEBUG oslo_concurrency.lockutils [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.979 2 DEBUG oslo_concurrency.lockutils [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:47 np0005465987 nova_compute[230713]: 2025-10-02 12:46:47.980 2 DEBUG nova.network.neutron [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Successfully created port: 2b948b4e-6553-4f5f-bf05-4bc8467c9898 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:46:48 np0005465987 nova_compute[230713]: 2025-10-02 12:46:48.115 2 DEBUG oslo_concurrency.processutils [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:48.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:46:48 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4030086630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:46:48 np0005465987 nova_compute[230713]: 2025-10-02 12:46:48.621 2 DEBUG oslo_concurrency.processutils [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:48 np0005465987 nova_compute[230713]: 2025-10-02 12:46:48.629 2 DEBUG nova.compute.provider_tree [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:46:48 np0005465987 nova_compute[230713]: 2025-10-02 12:46:48.652 2 DEBUG nova.scheduler.client.report [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:46:48 np0005465987 nova_compute[230713]: 2025-10-02 12:46:48.680 2 DEBUG oslo_concurrency.lockutils [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:48.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:48 np0005465987 nova_compute[230713]: 2025-10-02 12:46:48.732 2 INFO nova.scheduler.client.report [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Deleted allocations for instance 5a538012-6544-42f5-8262-da298e929634#033[00m
Oct  2 08:46:48 np0005465987 nova_compute[230713]: 2025-10-02 12:46:48.817 2 DEBUG oslo_concurrency.lockutils [None req-0d248120-8f5b-42a4-8211-f8c901ada49a e22bfab495d045efbf09d770e1516f29 9bebc19e62a04c99b7982761086c6bb7 - - default default] Lock "5a538012-6544-42f5-8262-da298e929634" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:49 np0005465987 nova_compute[230713]: 2025-10-02 12:46:49.138 2 DEBUG nova.network.neutron [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Successfully updated port: 2b948b4e-6553-4f5f-bf05-4bc8467c9898 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:46:49 np0005465987 nova_compute[230713]: 2025-10-02 12:46:49.161 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquiring lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:49 np0005465987 nova_compute[230713]: 2025-10-02 12:46:49.161 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquired lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:49 np0005465987 nova_compute[230713]: 2025-10-02 12:46:49.161 2 DEBUG nova.network.neutron [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:46:49 np0005465987 nova_compute[230713]: 2025-10-02 12:46:49.344 2 DEBUG nova.compute.manager [req-ff3fc058-3c37-437c-a050-51013d35611d req-0ec7c170-180a-47ef-b978-7a4e977250f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-changed-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:49 np0005465987 nova_compute[230713]: 2025-10-02 12:46:49.344 2 DEBUG nova.compute.manager [req-ff3fc058-3c37-437c-a050-51013d35611d req-0ec7c170-180a-47ef-b978-7a4e977250f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Refreshing instance network info cache due to event network-changed-2b948b4e-6553-4f5f-bf05-4bc8467c9898. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:46:49 np0005465987 nova_compute[230713]: 2025-10-02 12:46:49.345 2 DEBUG oslo_concurrency.lockutils [req-ff3fc058-3c37-437c-a050-51013d35611d req-0ec7c170-180a-47ef-b978-7a4e977250f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:49 np0005465987 nova_compute[230713]: 2025-10-02 12:46:49.453 2 DEBUG nova.network.neutron [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:46:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:50.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.569 2 DEBUG nova.network.neutron [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Updating instance_info_cache with network_info: [{"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.598 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Releasing lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.598 2 DEBUG nova.compute.manager [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Instance network_info: |[{"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.598 2 DEBUG oslo_concurrency.lockutils [req-ff3fc058-3c37-437c-a050-51013d35611d req-0ec7c170-180a-47ef-b978-7a4e977250f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.599 2 DEBUG nova.network.neutron [req-ff3fc058-3c37-437c-a050-51013d35611d req-0ec7c170-180a-47ef-b978-7a4e977250f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Refreshing network info cache for port 2b948b4e-6553-4f5f-bf05-4bc8467c9898 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.601 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Start _get_guest_xml network_info=[{"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.605 2 WARNING nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.610 2 DEBUG nova.virt.libvirt.host [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.610 2 DEBUG nova.virt.libvirt.host [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.613 2 DEBUG nova.virt.libvirt.host [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.614 2 DEBUG nova.virt.libvirt.host [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.615 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.615 2 DEBUG nova.virt.hardware [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.615 2 DEBUG nova.virt.hardware [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.616 2 DEBUG nova.virt.hardware [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.616 2 DEBUG nova.virt.hardware [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.616 2 DEBUG nova.virt.hardware [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.616 2 DEBUG nova.virt.hardware [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.616 2 DEBUG nova.virt.hardware [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.617 2 DEBUG nova.virt.hardware [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.617 2 DEBUG nova.virt.hardware [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.617 2 DEBUG nova.virt.hardware [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.617 2 DEBUG nova.virt.hardware [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:46:50 np0005465987 nova_compute[230713]: 2025-10-02 12:46:50.620 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:50.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:46:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3232759905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.057 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.087 2 DEBUG nova.storage.rbd_utils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.093 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:46:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1732234033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.555 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.558 2 DEBUG nova.virt.libvirt.vif [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:46:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1488280213',display_name='tempest-ServerRescueTestJSON-server-1488280213',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1488280213',id=167,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='508fca18f76a46cba8f3b8b8d8169ef1',ramdisk_id='',reservation_id='r-leqzacb5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1772269056',owner_user_name='tempest-ServerRescueTestJSON-1772269056-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:46:46Z,user_data=None,user_id='f9c1a967b21e4d05a1e9cb54949a7527',uuid=781be3e3-efcf-46f6-aa85-dff8146f22ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.558 2 DEBUG nova.network.os_vif_util [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Converting VIF {"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.559 2 DEBUG nova.network.os_vif_util [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=2b948b4e-6553-4f5f-bf05-4bc8467c9898,network=Network(54eb1883-31f7-40fa-864f-3516c27c1276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b948b4e-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.560 2 DEBUG nova.objects.instance [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.586 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  <uuid>781be3e3-efcf-46f6-aa85-dff8146f22ae</uuid>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  <name>instance-000000a7</name>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerRescueTestJSON-server-1488280213</nova:name>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:46:50</nova:creationTime>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <nova:user uuid="f9c1a967b21e4d05a1e9cb54949a7527">tempest-ServerRescueTestJSON-1772269056-project-member</nova:user>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <nova:project uuid="508fca18f76a46cba8f3b8b8d8169ef1">tempest-ServerRescueTestJSON-1772269056</nova:project>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <nova:port uuid="2b948b4e-6553-4f5f-bf05-4bc8467c9898">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <entry name="serial">781be3e3-efcf-46f6-aa85-dff8146f22ae</entry>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <entry name="uuid">781be3e3-efcf-46f6-aa85-dff8146f22ae</entry>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/781be3e3-efcf-46f6-aa85-dff8146f22ae_disk">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.config">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:ee:67:ca"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <target dev="tap2b948b4e-65"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/console.log" append="off"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:46:51 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:46:51 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:46:51 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:46:51 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.587 2 DEBUG nova.compute.manager [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Preparing to wait for external event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.587 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.587 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.588 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.588 2 DEBUG nova.virt.libvirt.vif [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:46:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1488280213',display_name='tempest-ServerRescueTestJSON-server-1488280213',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1488280213',id=167,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='508fca18f76a46cba8f3b8b8d8169ef1',ramdisk_id='',reservation_id='r-leqzacb5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1772269056',owner_user_name='tempest-ServerRescueTestJSON-1772269056-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:46:46Z,user_data=None,user_id='f9c1a967b21e4d05a1e9cb54949a7527',uuid=781be3e3-efcf-46f6-aa85-dff8146f22ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.588 2 DEBUG nova.network.os_vif_util [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Converting VIF {"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.589 2 DEBUG nova.network.os_vif_util [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=2b948b4e-6553-4f5f-bf05-4bc8467c9898,network=Network(54eb1883-31f7-40fa-864f-3516c27c1276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b948b4e-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.589 2 DEBUG os_vif [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=2b948b4e-6553-4f5f-bf05-4bc8467c9898,network=Network(54eb1883-31f7-40fa-864f-3516c27c1276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b948b4e-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.590 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.590 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.593 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b948b4e-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.593 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b948b4e-65, col_values=(('external_ids', {'iface-id': '2b948b4e-6553-4f5f-bf05-4bc8467c9898', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:67:ca', 'vm-uuid': '781be3e3-efcf-46f6-aa85-dff8146f22ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:51 np0005465987 NetworkManager[44910]: <info>  [1759409211.5960] manager: (tap2b948b4e-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/295)
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.602 2 INFO os_vif [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=2b948b4e-6553-4f5f-bf05-4bc8467c9898,network=Network(54eb1883-31f7-40fa-864f-3516c27c1276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b948b4e-65')#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.734 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.734 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.735 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] No VIF found with MAC fa:16:3e:ee:67:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.735 2 INFO nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Using config drive#033[00m
Oct  2 08:46:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:46:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:46:51 np0005465987 nova_compute[230713]: 2025-10-02 12:46:51.769 2 DEBUG nova.storage.rbd_utils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:52.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:52 np0005465987 nova_compute[230713]: 2025-10-02 12:46:52.376 2 INFO nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Creating config drive at /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/disk.config#033[00m
Oct  2 08:46:52 np0005465987 nova_compute[230713]: 2025-10-02 12:46:52.381 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpikd3fp4q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:52 np0005465987 nova_compute[230713]: 2025-10-02 12:46:52.514 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpikd3fp4q" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:52 np0005465987 nova_compute[230713]: 2025-10-02 12:46:52.544 2 DEBUG nova.storage.rbd_utils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:52 np0005465987 nova_compute[230713]: 2025-10-02 12:46:52.547 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/disk.config 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:52.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:46:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:46:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:46:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:46:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:46:52 np0005465987 nova_compute[230713]: 2025-10-02 12:46:52.826 2 DEBUG nova.network.neutron [req-ff3fc058-3c37-437c-a050-51013d35611d req-0ec7c170-180a-47ef-b978-7a4e977250f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Updated VIF entry in instance network info cache for port 2b948b4e-6553-4f5f-bf05-4bc8467c9898. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:46:52 np0005465987 nova_compute[230713]: 2025-10-02 12:46:52.827 2 DEBUG nova.network.neutron [req-ff3fc058-3c37-437c-a050-51013d35611d req-0ec7c170-180a-47ef-b978-7a4e977250f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Updating instance_info_cache with network_info: [{"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:52 np0005465987 nova_compute[230713]: 2025-10-02 12:46:52.844 2 DEBUG oslo_concurrency.processutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/disk.config 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:52 np0005465987 nova_compute[230713]: 2025-10-02 12:46:52.844 2 INFO nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Deleting local config drive /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/disk.config because it was imported into RBD.#033[00m
Oct  2 08:46:52 np0005465987 nova_compute[230713]: 2025-10-02 12:46:52.857 2 DEBUG oslo_concurrency.lockutils [req-ff3fc058-3c37-437c-a050-51013d35611d req-0ec7c170-180a-47ef-b978-7a4e977250f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:52 np0005465987 kernel: tap2b948b4e-65: entered promiscuous mode
Oct  2 08:46:52 np0005465987 NetworkManager[44910]: <info>  [1759409212.8928] manager: (tap2b948b4e-65): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Oct  2 08:46:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:52Z|00638|binding|INFO|Claiming lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 for this chassis.
Oct  2 08:46:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:52Z|00639|binding|INFO|2b948b4e-6553-4f5f-bf05-4bc8467c9898: Claiming fa:16:3e:ee:67:ca 10.100.0.3
Oct  2 08:46:52 np0005465987 nova_compute[230713]: 2025-10-02 12:46:52.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:52Z|00640|binding|INFO|Setting lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 ovn-installed in OVS
Oct  2 08:46:52 np0005465987 nova_compute[230713]: 2025-10-02 12:46:52.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:52 np0005465987 nova_compute[230713]: 2025-10-02 12:46:52.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:52 np0005465987 systemd-udevd[293459]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:46:52 np0005465987 systemd-machined[188335]: New machine qemu-75-instance-000000a7.
Oct  2 08:46:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:52Z|00641|binding|INFO|Setting lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 up in Southbound
Oct  2 08:46:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:52.930 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:67:ca 10.100.0.3'], port_security=['fa:16:3e:ee:67:ca 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '781be3e3-efcf-46f6-aa85-dff8146f22ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54eb1883-31f7-40fa-864f-3516c27c1276', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '508fca18f76a46cba8f3b8b8d8169ef1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1004833a-e427-4539-9aa9-ed5f03a58603', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b082b214-cf6d-4c88-9481-dd3bf0098234, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=2b948b4e-6553-4f5f-bf05-4bc8467c9898) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:46:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:52.931 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 2b948b4e-6553-4f5f-bf05-4bc8467c9898 in datapath 54eb1883-31f7-40fa-864f-3516c27c1276 bound to our chassis#033[00m
Oct  2 08:46:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:52.932 138325 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 54eb1883-31f7-40fa-864f-3516c27c1276 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 08:46:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:52.933 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[05d53b9a-8906-4f4e-b7b1-6ad35f70c012]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:52 np0005465987 NetworkManager[44910]: <info>  [1759409212.9341] device (tap2b948b4e-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:46:52 np0005465987 NetworkManager[44910]: <info>  [1759409212.9349] device (tap2b948b4e-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:46:52 np0005465987 systemd[1]: Started Virtual Machine qemu-75-instance-000000a7.
Oct  2 08:46:53 np0005465987 nova_compute[230713]: 2025-10-02 12:46:53.781 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409213.7814429, 781be3e3-efcf-46f6-aa85-dff8146f22ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:53 np0005465987 nova_compute[230713]: 2025-10-02 12:46:53.783 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] VM Started (Lifecycle Event)#033[00m
Oct  2 08:46:53 np0005465987 nova_compute[230713]: 2025-10-02 12:46:53.817 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:53 np0005465987 nova_compute[230713]: 2025-10-02 12:46:53.821 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409213.7815886, 781be3e3-efcf-46f6-aa85-dff8146f22ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:53 np0005465987 nova_compute[230713]: 2025-10-02 12:46:53.821 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:46:53 np0005465987 nova_compute[230713]: 2025-10-02 12:46:53.885 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:53 np0005465987 nova_compute[230713]: 2025-10-02 12:46:53.889 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:46:53 np0005465987 nova_compute[230713]: 2025-10-02 12:46:53.955 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.157 2 DEBUG nova.compute.manager [req-6fac54e7-12d4-4d32-809d-4107e28b3577 req-d64fbf30-5fba-4c6e-8e4b-c0e0c188c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.157 2 DEBUG oslo_concurrency.lockutils [req-6fac54e7-12d4-4d32-809d-4107e28b3577 req-d64fbf30-5fba-4c6e-8e4b-c0e0c188c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.158 2 DEBUG oslo_concurrency.lockutils [req-6fac54e7-12d4-4d32-809d-4107e28b3577 req-d64fbf30-5fba-4c6e-8e4b-c0e0c188c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.158 2 DEBUG oslo_concurrency.lockutils [req-6fac54e7-12d4-4d32-809d-4107e28b3577 req-d64fbf30-5fba-4c6e-8e4b-c0e0c188c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.158 2 DEBUG nova.compute.manager [req-6fac54e7-12d4-4d32-809d-4107e28b3577 req-d64fbf30-5fba-4c6e-8e4b-c0e0c188c287 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Processing event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.158 2 DEBUG nova.compute.manager [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.164 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409214.1638732, 781be3e3-efcf-46f6-aa85-dff8146f22ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.164 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.166 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.169 2 INFO nova.virt.libvirt.driver [-] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Instance spawned successfully.#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.169 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000053s ======
Oct  2 08:46:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:54.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.293 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.294 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.294 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.295 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.295 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.295 2 DEBUG nova.virt.libvirt.driver [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.299 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.301 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.367 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.454 2 INFO nova.compute.manager [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Took 7.77 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.456 2 DEBUG nova.compute.manager [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.630 2 INFO nova.compute.manager [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Took 8.94 seconds to build instance.#033[00m
Oct  2 08:46:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:54.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.714 2 DEBUG oslo_concurrency.lockutils [None req-26ae5f1a-a3a9-4e74-abac-6c3c927a3393 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:54 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:54Z|00642|binding|INFO|Releasing lport 5a8624b9-e774-4b29-b8e7-972fe270a8c7 from this chassis (sb_readonly=0)
Oct  2 08:46:54 np0005465987 nova_compute[230713]: 2025-10-02 12:46:54.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:46:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3160753360' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:46:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:46:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3160753360' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:46:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:56.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:56 np0005465987 nova_compute[230713]: 2025-10-02 12:46:56.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:56 np0005465987 nova_compute[230713]: 2025-10-02 12:46:56.320 2 DEBUG nova.compute.manager [req-63b74fa8-a1a3-4b37-a195-adfa6aa78938 req-fe2d2f52-20c8-4c3b-a43d-a111277692a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:56 np0005465987 nova_compute[230713]: 2025-10-02 12:46:56.321 2 DEBUG oslo_concurrency.lockutils [req-63b74fa8-a1a3-4b37-a195-adfa6aa78938 req-fe2d2f52-20c8-4c3b-a43d-a111277692a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:56 np0005465987 nova_compute[230713]: 2025-10-02 12:46:56.321 2 DEBUG oslo_concurrency.lockutils [req-63b74fa8-a1a3-4b37-a195-adfa6aa78938 req-fe2d2f52-20c8-4c3b-a43d-a111277692a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:56 np0005465987 nova_compute[230713]: 2025-10-02 12:46:56.321 2 DEBUG oslo_concurrency.lockutils [req-63b74fa8-a1a3-4b37-a195-adfa6aa78938 req-fe2d2f52-20c8-4c3b-a43d-a111277692a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:56 np0005465987 nova_compute[230713]: 2025-10-02 12:46:56.321 2 DEBUG nova.compute.manager [req-63b74fa8-a1a3-4b37-a195-adfa6aa78938 req-fe2d2f52-20c8-4c3b-a43d-a111277692a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] No waiting events found dispatching network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:46:56 np0005465987 nova_compute[230713]: 2025-10-02 12:46:56.321 2 WARNING nova.compute.manager [req-63b74fa8-a1a3-4b37-a195-adfa6aa78938 req-fe2d2f52-20c8-4c3b-a43d-a111277692a0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received unexpected event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:46:56 np0005465987 nova_compute[230713]: 2025-10-02 12:46:56.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:56.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:57 np0005465987 nova_compute[230713]: 2025-10-02 12:46:57.133 2 INFO nova.compute.manager [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Rescuing#033[00m
Oct  2 08:46:57 np0005465987 nova_compute[230713]: 2025-10-02 12:46:57.133 2 DEBUG oslo_concurrency.lockutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquiring lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:57 np0005465987 nova_compute[230713]: 2025-10-02 12:46:57.134 2 DEBUG oslo_concurrency.lockutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquired lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:57 np0005465987 nova_compute[230713]: 2025-10-02 12:46:57.134 2 DEBUG nova.network.neutron [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:46:57 np0005465987 nova_compute[230713]: 2025-10-02 12:46:57.826 2 DEBUG nova.compute.manager [req-8cfdd7c6-48b9-447f-b7d8-542287d928dd req-c5225060-ae57-4fa0-908c-62bc63fa8513 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Received event network-changed-23a78e40-bfa3-4120-abee-48eb4fec828b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:57 np0005465987 nova_compute[230713]: 2025-10-02 12:46:57.826 2 DEBUG nova.compute.manager [req-8cfdd7c6-48b9-447f-b7d8-542287d928dd req-c5225060-ae57-4fa0-908c-62bc63fa8513 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Refreshing instance network info cache due to event network-changed-23a78e40-bfa3-4120-abee-48eb4fec828b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:46:57 np0005465987 nova_compute[230713]: 2025-10-02 12:46:57.827 2 DEBUG oslo_concurrency.lockutils [req-8cfdd7c6-48b9-447f-b7d8-542287d928dd req-c5225060-ae57-4fa0-908c-62bc63fa8513 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:57 np0005465987 nova_compute[230713]: 2025-10-02 12:46:57.827 2 DEBUG oslo_concurrency.lockutils [req-8cfdd7c6-48b9-447f-b7d8-542287d928dd req-c5225060-ae57-4fa0-908c-62bc63fa8513 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:57 np0005465987 nova_compute[230713]: 2025-10-02 12:46:57.827 2 DEBUG nova.network.neutron [req-8cfdd7c6-48b9-447f-b7d8-542287d928dd req-c5225060-ae57-4fa0-908c-62bc63fa8513 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Refreshing network info cache for port 23a78e40-bfa3-4120-abee-48eb4fec828b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:46:57 np0005465987 nova_compute[230713]: 2025-10-02 12:46:57.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.011 2 DEBUG oslo_concurrency.lockutils [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "7bd653a3-96de-43be-a073-1384b3d6aae1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.011 2 DEBUG oslo_concurrency.lockutils [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.012 2 DEBUG oslo_concurrency.lockutils [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.012 2 DEBUG oslo_concurrency.lockutils [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.012 2 DEBUG oslo_concurrency.lockutils [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.013 2 INFO nova.compute.manager [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Terminating instance#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.014 2 DEBUG nova.compute.manager [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:46:58 np0005465987 kernel: tap23a78e40-bf (unregistering): left promiscuous mode
Oct  2 08:46:58 np0005465987 NetworkManager[44910]: <info>  [1759409218.0728] device (tap23a78e40-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:46:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:58Z|00643|binding|INFO|Releasing lport 23a78e40-bfa3-4120-abee-48eb4fec828b from this chassis (sb_readonly=0)
Oct  2 08:46:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:58Z|00644|binding|INFO|Setting lport 23a78e40-bfa3-4120-abee-48eb4fec828b down in Southbound
Oct  2 08:46:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:46:58Z|00645|binding|INFO|Removing iface tap23a78e40-bf ovn-installed in OVS
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.086 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:1a:2b 10.100.0.10'], port_security=['fa:16:3e:c6:1a:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7bd653a3-96de-43be-a073-1384b3d6aae1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2614e2b3-f8b2-48c2-924a-36fbf2e644a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ade962c517a483dbfe4bb13386f0006', 'neutron:revision_number': '4', 'neutron:security_group_ids': '188b9a08-a903-4af6-b546-f7bd08b938fa 51d5e11a-b13a-4915-933f-c2d9aa135276', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9bd2d3c-1353-4216-ae02-9a1a5671ea57, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=23a78e40-bfa3-4120-abee-48eb4fec828b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.087 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 23a78e40-bfa3-4120-abee-48eb4fec828b in datapath 2614e2b3-f8b2-48c2-924a-36fbf2e644a4 unbound from our chassis#033[00m
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.088 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2614e2b3-f8b2-48c2-924a-36fbf2e644a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.089 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ac546b46-80ba-4d53-a26e-8376f3500844]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.090 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4 namespace which is not needed anymore#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005465987 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d000000a2.scope: Deactivated successfully.
Oct  2 08:46:58 np0005465987 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d000000a2.scope: Consumed 19.102s CPU time.
Oct  2 08:46:58 np0005465987 systemd-machined[188335]: Machine qemu-73-instance-000000a2 terminated.
Oct  2 08:46:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:58.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:58 np0005465987 neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4[291715]: [NOTICE]   (291719) : haproxy version is 2.8.14-c23fe91
Oct  2 08:46:58 np0005465987 neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4[291715]: [NOTICE]   (291719) : path to executable is /usr/sbin/haproxy
Oct  2 08:46:58 np0005465987 neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4[291715]: [WARNING]  (291719) : Exiting Master process...
Oct  2 08:46:58 np0005465987 neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4[291715]: [ALERT]    (291719) : Current worker (291721) exited with code 143 (Terminated)
Oct  2 08:46:58 np0005465987 neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4[291715]: [WARNING]  (291719) : All workers exited. Exiting... (0)
Oct  2 08:46:58 np0005465987 systemd[1]: libpod-36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3.scope: Deactivated successfully.
Oct  2 08:46:58 np0005465987 podman[293533]: 2025-10-02 12:46:58.2383954 +0000 UTC m=+0.056175591 container died 36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.247 2 INFO nova.virt.libvirt.driver [-] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Instance destroyed successfully.#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.248 2 DEBUG nova.objects.instance [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'resources' on Instance uuid 7bd653a3-96de-43be-a073-1384b3d6aae1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.285 2 DEBUG nova.virt.libvirt.vif [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-450707529',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-450707529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ac',id=162,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE/ZjcNE6wdFzzzXeuoKTkNX3KO3BqOWz1wYhko1YgNYgKucwC17t7W3cYk44y91IyPGFz/HxMZMdcg++d8EWlGOo4YwOU/6rhgUrqATsjpINMPOK6TF4JadZwKYwGDxAw==',key_name='tempest-TestSecurityGroupsBasicOps-1124539397',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:45:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-aleft2y6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:45:16Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=7bd653a3-96de-43be-a073-1384b3d6aae1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23a78e40-bfa3-4120-abee-48eb4fec828b", "address": "fa:16:3e:c6:1a:2b", "network": {"id": "2614e2b3-f8b2-48c2-924a-36fbf2e644a4", "bridge": "br-int", "label": "tempest-network-smoke--361256794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23a78e40-bf", "ovs_interfaceid": "23a78e40-bfa3-4120-abee-48eb4fec828b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.286 2 DEBUG nova.network.os_vif_util [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "23a78e40-bfa3-4120-abee-48eb4fec828b", "address": "fa:16:3e:c6:1a:2b", "network": {"id": "2614e2b3-f8b2-48c2-924a-36fbf2e644a4", "bridge": "br-int", "label": "tempest-network-smoke--361256794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23a78e40-bf", "ovs_interfaceid": "23a78e40-bfa3-4120-abee-48eb4fec828b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.286 2 DEBUG nova.network.os_vif_util [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:1a:2b,bridge_name='br-int',has_traffic_filtering=True,id=23a78e40-bfa3-4120-abee-48eb4fec828b,network=Network(2614e2b3-f8b2-48c2-924a-36fbf2e644a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23a78e40-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.287 2 DEBUG os_vif [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:1a:2b,bridge_name='br-int',has_traffic_filtering=True,id=23a78e40-bfa3-4120-abee-48eb4fec828b,network=Network(2614e2b3-f8b2-48c2-924a-36fbf2e644a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23a78e40-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.289 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23a78e40-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:58 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3-userdata-shm.mount: Deactivated successfully.
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005465987 systemd[1]: var-lib-containers-storage-overlay-9c0cd8c1d702e9fd97740cb9b23809470a7685dfe77a1c3a96909d3d0d16f9ec-merged.mount: Deactivated successfully.
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.337 2 INFO os_vif [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:1a:2b,bridge_name='br-int',has_traffic_filtering=True,id=23a78e40-bfa3-4120-abee-48eb4fec828b,network=Network(2614e2b3-f8b2-48c2-924a-36fbf2e644a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23a78e40-bf')#033[00m
Oct  2 08:46:58 np0005465987 podman[293533]: 2025-10-02 12:46:58.370083591 +0000 UTC m=+0.187863782 container cleanup 36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:46:58 np0005465987 systemd[1]: libpod-conmon-36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3.scope: Deactivated successfully.
Oct  2 08:46:58 np0005465987 podman[293639]: 2025-10-02 12:46:58.514655898 +0000 UTC m=+0.115574178 container remove 36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.521 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[908f1cac-b2fc-4cb6-b8eb-4e1486f2ee89]: (4, ('Thu Oct  2 12:46:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4 (36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3)\n36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3\nThu Oct  2 12:46:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4 (36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3)\n36226690cf6e54ef3d72cdade27c5649d4873c5343ba00f719c6acde813e30c3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.523 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4318d1-9174-4166-8275-d2ef31e916af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.524 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2614e2b3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:58 np0005465987 kernel: tap2614e2b3-f0: left promiscuous mode
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.532 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[92f8fb56-b296-4eae-a71a-7323d952d9af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.578 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[78bff028-07b4-4fb2-9f4f-7b9d9e378f0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.579 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fa623b39-9986-442a-8be1-0d3cbf337737]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.594 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bd658838-2bd0-45e5-8252-ac39fc454057]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718296, 'reachable_time': 24038, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293657, 'error': None, 'target': 'ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005465987 systemd[1]: run-netns-ovnmeta\x2d2614e2b3\x2df8b2\x2d48c2\x2d924a\x2d36fbf2e644a4.mount: Deactivated successfully.
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.596 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2614e2b3-f8b2-48c2-924a-36fbf2e644a4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:46:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:46:58.597 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[1755b36a-1445-4111-8b5a-b0ecb2c8001f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.609 2 DEBUG nova.compute.manager [req-be4df8c5-8cf8-4fcc-8c9c-f303bfaf9cab req-4f775bc2-8f0f-485d-85e4-b29acda76ab1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Received event network-vif-unplugged-23a78e40-bfa3-4120-abee-48eb4fec828b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.609 2 DEBUG oslo_concurrency.lockutils [req-be4df8c5-8cf8-4fcc-8c9c-f303bfaf9cab req-4f775bc2-8f0f-485d-85e4-b29acda76ab1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.610 2 DEBUG oslo_concurrency.lockutils [req-be4df8c5-8cf8-4fcc-8c9c-f303bfaf9cab req-4f775bc2-8f0f-485d-85e4-b29acda76ab1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.610 2 DEBUG oslo_concurrency.lockutils [req-be4df8c5-8cf8-4fcc-8c9c-f303bfaf9cab req-4f775bc2-8f0f-485d-85e4-b29acda76ab1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.610 2 DEBUG nova.compute.manager [req-be4df8c5-8cf8-4fcc-8c9c-f303bfaf9cab req-4f775bc2-8f0f-485d-85e4-b29acda76ab1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] No waiting events found dispatching network-vif-unplugged-23a78e40-bfa3-4120-abee-48eb4fec828b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:46:58 np0005465987 nova_compute[230713]: 2025-10-02 12:46:58.610 2 DEBUG nova.compute.manager [req-be4df8c5-8cf8-4fcc-8c9c-f303bfaf9cab req-4f775bc2-8f0f-485d-85e4-b29acda76ab1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Received event network-vif-unplugged-23a78e40-bfa3-4120-abee-48eb4fec828b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:46:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:46:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:46:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:58.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:46:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:46:59 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:46:59 np0005465987 nova_compute[230713]: 2025-10-02 12:46:59.117 2 INFO nova.virt.libvirt.driver [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Deleting instance files /var/lib/nova/instances/7bd653a3-96de-43be-a073-1384b3d6aae1_del#033[00m
Oct  2 08:46:59 np0005465987 nova_compute[230713]: 2025-10-02 12:46:59.118 2 INFO nova.virt.libvirt.driver [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Deletion of /var/lib/nova/instances/7bd653a3-96de-43be-a073-1384b3d6aae1_del complete#033[00m
Oct  2 08:46:59 np0005465987 nova_compute[230713]: 2025-10-02 12:46:59.248 2 INFO nova.compute.manager [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Took 1.23 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:46:59 np0005465987 nova_compute[230713]: 2025-10-02 12:46:59.248 2 DEBUG oslo.service.loopingcall [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:46:59 np0005465987 nova_compute[230713]: 2025-10-02 12:46:59.249 2 DEBUG nova.compute.manager [-] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:46:59 np0005465987 nova_compute[230713]: 2025-10-02 12:46:59.249 2 DEBUG nova.network.neutron [-] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:46:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.147 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409205.1458805, 5a538012-6544-42f5-8262-da298e929634 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.147 2 INFO nova.compute.manager [-] [instance: 5a538012-6544-42f5-8262-da298e929634] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.219 2 DEBUG nova.compute.manager [None req-085d37a7-20d0-471c-b7e9-000ad60d48f0 - - - - - -] [instance: 5a538012-6544-42f5-8262-da298e929634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:00.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.372 2 DEBUG nova.network.neutron [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Updating instance_info_cache with network_info: [{"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.406 2 DEBUG nova.network.neutron [-] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.447 2 DEBUG oslo_concurrency.lockutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Releasing lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.453 2 INFO nova.compute.manager [-] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Took 1.20 seconds to deallocate network for instance.#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.493 2 DEBUG nova.compute.manager [req-b926580b-2294-4ae0-9c84-244626889cf6 req-561a42e5-9bc1-4867-8508-0a43d3dac9d3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Received event network-vif-deleted-23a78e40-bfa3-4120-abee-48eb4fec828b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.543 2 DEBUG oslo_concurrency.lockutils [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.543 2 DEBUG oslo_concurrency.lockutils [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.679 2 DEBUG oslo_concurrency.processutils [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:00.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.796 2 DEBUG nova.compute.manager [req-ccde8613-6fcc-4ed2-bbfb-6f077feab775 req-53df3e11-5973-4135-9e60-34bc8ed3de7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Received event network-vif-plugged-23a78e40-bfa3-4120-abee-48eb4fec828b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.797 2 DEBUG oslo_concurrency.lockutils [req-ccde8613-6fcc-4ed2-bbfb-6f077feab775 req-53df3e11-5973-4135-9e60-34bc8ed3de7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.797 2 DEBUG oslo_concurrency.lockutils [req-ccde8613-6fcc-4ed2-bbfb-6f077feab775 req-53df3e11-5973-4135-9e60-34bc8ed3de7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.798 2 DEBUG oslo_concurrency.lockutils [req-ccde8613-6fcc-4ed2-bbfb-6f077feab775 req-53df3e11-5973-4135-9e60-34bc8ed3de7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.798 2 DEBUG nova.compute.manager [req-ccde8613-6fcc-4ed2-bbfb-6f077feab775 req-53df3e11-5973-4135-9e60-34bc8ed3de7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] No waiting events found dispatching network-vif-plugged-23a78e40-bfa3-4120-abee-48eb4fec828b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.798 2 WARNING nova.compute.manager [req-ccde8613-6fcc-4ed2-bbfb-6f077feab775 req-53df3e11-5973-4135-9e60-34bc8ed3de7c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Received unexpected event network-vif-plugged-23a78e40-bfa3-4120-abee-48eb4fec828b for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.841 2 DEBUG nova.network.neutron [req-8cfdd7c6-48b9-447f-b7d8-542287d928dd req-c5225060-ae57-4fa0-908c-62bc63fa8513 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Updated VIF entry in instance network info cache for port 23a78e40-bfa3-4120-abee-48eb4fec828b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.841 2 DEBUG nova.network.neutron [req-8cfdd7c6-48b9-447f-b7d8-542287d928dd req-c5225060-ae57-4fa0-908c-62bc63fa8513 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Updating instance_info_cache with network_info: [{"id": "23a78e40-bfa3-4120-abee-48eb4fec828b", "address": "fa:16:3e:c6:1a:2b", "network": {"id": "2614e2b3-f8b2-48c2-924a-36fbf2e644a4", "bridge": "br-int", "label": "tempest-network-smoke--361256794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23a78e40-bf", "ovs_interfaceid": "23a78e40-bfa3-4120-abee-48eb4fec828b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:00 np0005465987 nova_compute[230713]: 2025-10-02 12:47:00.881 2 DEBUG oslo_concurrency.lockutils [req-8cfdd7c6-48b9-447f-b7d8-542287d928dd req-c5225060-ae57-4fa0-908c-62bc63fa8513 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-7bd653a3-96de-43be-a073-1384b3d6aae1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:01 np0005465987 nova_compute[230713]: 2025-10-02 12:47:01.022 2 DEBUG nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:47:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:01 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3388750140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:01 np0005465987 nova_compute[230713]: 2025-10-02 12:47:01.104 2 DEBUG oslo_concurrency.processutils [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:01 np0005465987 nova_compute[230713]: 2025-10-02 12:47:01.110 2 DEBUG nova.compute.provider_tree [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:47:01 np0005465987 nova_compute[230713]: 2025-10-02 12:47:01.130 2 DEBUG nova.scheduler.client.report [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:47:01 np0005465987 nova_compute[230713]: 2025-10-02 12:47:01.166 2 DEBUG oslo_concurrency.lockutils [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:01 np0005465987 nova_compute[230713]: 2025-10-02 12:47:01.208 2 INFO nova.scheduler.client.report [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Deleted allocations for instance 7bd653a3-96de-43be-a073-1384b3d6aae1#033[00m
Oct  2 08:47:01 np0005465987 nova_compute[230713]: 2025-10-02 12:47:01.308 2 DEBUG oslo_concurrency.lockutils [None req-94ff766e-cb5b-487e-98ee-4da959f9646f 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "7bd653a3-96de-43be-a073-1384b3d6aae1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:01 np0005465987 nova_compute[230713]: 2025-10-02 12:47:01.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:47:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:02.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:47:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:47:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:02.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:47:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:03.200 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:47:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:03.200 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:47:03 np0005465987 nova_compute[230713]: 2025-10-02 12:47:03.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:03 np0005465987 nova_compute[230713]: 2025-10-02 12:47:03.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:47:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:04.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:47:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:47:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:04.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:47:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:06.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:06 np0005465987 nova_compute[230713]: 2025-10-02 12:47:06.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:06.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:06 np0005465987 nova_compute[230713]: 2025-10-02 12:47:06.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:07 np0005465987 nova_compute[230713]: 2025-10-02 12:47:07.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:07.202 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:07 np0005465987 nova_compute[230713]: 2025-10-02 12:47:07.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:08.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:08 np0005465987 nova_compute[230713]: 2025-10-02 12:47:08.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:47:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:08.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:47:08 np0005465987 podman[293683]: 2025-10-02 12:47:08.836790149 +0000 UTC m=+0.053281074 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:47:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:47:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:10.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:47:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:10.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:10 np0005465987 nova_compute[230713]: 2025-10-02 12:47:10.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:10 np0005465987 nova_compute[230713]: 2025-10-02 12:47:10.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:47:10 np0005465987 nova_compute[230713]: 2025-10-02 12:47:10.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:47:11 np0005465987 nova_compute[230713]: 2025-10-02 12:47:11.010 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:11 np0005465987 nova_compute[230713]: 2025-10-02 12:47:11.011 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:11 np0005465987 nova_compute[230713]: 2025-10-02 12:47:11.011 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:47:11 np0005465987 nova_compute[230713]: 2025-10-02 12:47:11.011 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:11 np0005465987 nova_compute[230713]: 2025-10-02 12:47:11.068 2 DEBUG nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:47:11 np0005465987 nova_compute[230713]: 2025-10-02 12:47:11.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:12.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:12.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:13 np0005465987 nova_compute[230713]: 2025-10-02 12:47:13.246 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409218.24515, 7bd653a3-96de-43be-a073-1384b3d6aae1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:13 np0005465987 nova_compute[230713]: 2025-10-02 12:47:13.247 2 INFO nova.compute.manager [-] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:47:13 np0005465987 nova_compute[230713]: 2025-10-02 12:47:13.254 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Updating instance_info_cache with network_info: [{"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:13 np0005465987 kernel: tap2b948b4e-65 (unregistering): left promiscuous mode
Oct  2 08:47:13 np0005465987 NetworkManager[44910]: <info>  [1759409233.3102] device (tap2b948b4e-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:47:13 np0005465987 nova_compute[230713]: 2025-10-02 12:47:13.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:13Z|00646|binding|INFO|Releasing lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 from this chassis (sb_readonly=0)
Oct  2 08:47:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:13Z|00647|binding|INFO|Setting lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 down in Southbound
Oct  2 08:47:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:13Z|00648|binding|INFO|Removing iface tap2b948b4e-65 ovn-installed in OVS
Oct  2 08:47:13 np0005465987 nova_compute[230713]: 2025-10-02 12:47:13.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:13 np0005465987 nova_compute[230713]: 2025-10-02 12:47:13.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:13 np0005465987 nova_compute[230713]: 2025-10-02 12:47:13.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:13 np0005465987 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a7.scope: Deactivated successfully.
Oct  2 08:47:13 np0005465987 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a7.scope: Consumed 13.653s CPU time.
Oct  2 08:47:13 np0005465987 systemd-machined[188335]: Machine qemu-75-instance-000000a7 terminated.
Oct  2 08:47:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:13.389 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:67:ca 10.100.0.3'], port_security=['fa:16:3e:ee:67:ca 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '781be3e3-efcf-46f6-aa85-dff8146f22ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54eb1883-31f7-40fa-864f-3516c27c1276', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '508fca18f76a46cba8f3b8b8d8169ef1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1004833a-e427-4539-9aa9-ed5f03a58603', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b082b214-cf6d-4c88-9481-dd3bf0098234, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=2b948b4e-6553-4f5f-bf05-4bc8467c9898) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:47:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:13.390 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 2b948b4e-6553-4f5f-bf05-4bc8467c9898 in datapath 54eb1883-31f7-40fa-864f-3516c27c1276 unbound from our chassis#033[00m
Oct  2 08:47:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:13.391 138325 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 54eb1883-31f7-40fa-864f-3516c27c1276 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 08:47:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:13.392 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd8bfc0-0f38-4c2c-9760-ec8d67af7c87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:13 np0005465987 podman[293705]: 2025-10-02 12:47:13.393991529 +0000 UTC m=+0.058378171 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:47:13 np0005465987 nova_compute[230713]: 2025-10-02 12:47:13.399 2 DEBUG nova.compute.manager [None req-1371f799-997d-45a6-9430-8adca56033a7 - - - - - -] [instance: 7bd653a3-96de-43be-a073-1384b3d6aae1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:13 np0005465987 nova_compute[230713]: 2025-10-02 12:47:13.401 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:13 np0005465987 nova_compute[230713]: 2025-10-02 12:47:13.401 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:47:13 np0005465987 podman[293706]: 2025-10-02 12:47:13.418348304 +0000 UTC m=+0.075877241 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:47:13 np0005465987 podman[293702]: 2025-10-02 12:47:13.429533245 +0000 UTC m=+0.097163124 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
Oct  2 08:47:13 np0005465987 nova_compute[230713]: 2025-10-02 12:47:13.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:13 np0005465987 nova_compute[230713]: 2025-10-02 12:47:13.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.079 2 INFO nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.084 2 INFO nova.virt.libvirt.driver [-] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Instance destroyed successfully.#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.084 2 DEBUG nova.objects.instance [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.114 2 INFO nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Attempting rescue#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.115 2 DEBUG nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.120 2 DEBUG nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.121 2 INFO nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Creating image(s)#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.150 2 DEBUG nova.storage.rbd_utils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.154 2 DEBUG nova.objects.instance [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.198 2 DEBUG nova.storage.rbd_utils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.229 2 DEBUG nova.storage.rbd_utils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.232 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:14.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.319 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.320 2 DEBUG oslo_concurrency.lockutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.320 2 DEBUG oslo_concurrency.lockutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.321 2 DEBUG oslo_concurrency.lockutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.347 2 DEBUG nova.storage.rbd_utils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.351 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.671 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.672 2 DEBUG nova.objects.instance [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lazy-loading 'migration_context' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.690 2 DEBUG nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.691 2 DEBUG nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Start _get_guest_xml network_info=[{"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-717258684-network", "vif_mac": "fa:16:3e:ee:67:ca"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.691 2 DEBUG nova.objects.instance [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lazy-loading 'resources' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.711 2 WARNING nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.716 2 DEBUG nova.virt.libvirt.host [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.717 2 DEBUG nova.virt.libvirt.host [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.720 2 DEBUG nova.virt.libvirt.host [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.721 2 DEBUG nova.virt.libvirt.host [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.722 2 DEBUG nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.722 2 DEBUG nova.virt.hardware [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.723 2 DEBUG nova.virt.hardware [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.723 2 DEBUG nova.virt.hardware [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.723 2 DEBUG nova.virt.hardware [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.724 2 DEBUG nova.virt.hardware [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.724 2 DEBUG nova.virt.hardware [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.724 2 DEBUG nova.virt.hardware [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.725 2 DEBUG nova.virt.hardware [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.725 2 DEBUG nova.virt.hardware [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.725 2 DEBUG nova.virt.hardware [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.725 2 DEBUG nova.virt.hardware [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.726 2 DEBUG nova.objects.instance [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:47:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:14.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:47:14 np0005465987 nova_compute[230713]: 2025-10-02 12:47:14.743 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:47:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1062975263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:47:15 np0005465987 nova_compute[230713]: 2025-10-02 12:47:15.194 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:15 np0005465987 nova_compute[230713]: 2025-10-02 12:47:15.196 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:47:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/990030479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:47:15 np0005465987 nova_compute[230713]: 2025-10-02 12:47:15.616 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:15 np0005465987 nova_compute[230713]: 2025-10-02 12:47:15.618 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.044 2 DEBUG nova.compute.manager [req-9d88d5c0-594c-4030-b308-11ecd197d943 req-5c5097cb-7cb9-4c81-a925-dfdaf8831a05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-unplugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.045 2 DEBUG oslo_concurrency.lockutils [req-9d88d5c0-594c-4030-b308-11ecd197d943 req-5c5097cb-7cb9-4c81-a925-dfdaf8831a05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.046 2 DEBUG oslo_concurrency.lockutils [req-9d88d5c0-594c-4030-b308-11ecd197d943 req-5c5097cb-7cb9-4c81-a925-dfdaf8831a05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.046 2 DEBUG oslo_concurrency.lockutils [req-9d88d5c0-594c-4030-b308-11ecd197d943 req-5c5097cb-7cb9-4c81-a925-dfdaf8831a05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.047 2 DEBUG nova.compute.manager [req-9d88d5c0-594c-4030-b308-11ecd197d943 req-5c5097cb-7cb9-4c81-a925-dfdaf8831a05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] No waiting events found dispatching network-vif-unplugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.047 2 WARNING nova.compute.manager [req-9d88d5c0-594c-4030-b308-11ecd197d943 req-5c5097cb-7cb9-4c81-a925-dfdaf8831a05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received unexpected event network-vif-unplugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:47:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.047 2 DEBUG nova.compute.manager [req-9d88d5c0-594c-4030-b308-11ecd197d943 req-5c5097cb-7cb9-4c81-a925-dfdaf8831a05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.048 2 DEBUG oslo_concurrency.lockutils [req-9d88d5c0-594c-4030-b308-11ecd197d943 req-5c5097cb-7cb9-4c81-a925-dfdaf8831a05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2341942147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.048 2 DEBUG oslo_concurrency.lockutils [req-9d88d5c0-594c-4030-b308-11ecd197d943 req-5c5097cb-7cb9-4c81-a925-dfdaf8831a05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.048 2 DEBUG oslo_concurrency.lockutils [req-9d88d5c0-594c-4030-b308-11ecd197d943 req-5c5097cb-7cb9-4c81-a925-dfdaf8831a05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.049 2 DEBUG nova.compute.manager [req-9d88d5c0-594c-4030-b308-11ecd197d943 req-5c5097cb-7cb9-4c81-a925-dfdaf8831a05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] No waiting events found dispatching network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.049 2 WARNING nova.compute.manager [req-9d88d5c0-594c-4030-b308-11ecd197d943 req-5c5097cb-7cb9-4c81-a925-dfdaf8831a05 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received unexpected event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.066 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.068 2 DEBUG nova.virt.libvirt.vif [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:46:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1488280213',display_name='tempest-ServerRescueTestJSON-server-1488280213',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1488280213',id=167,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:46:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='508fca18f76a46cba8f3b8b8d8169ef1',ramdisk_id='',reservation_id='r-leqzacb5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1772269056',owner_user_name='tempest-ServerRescueTestJSON-1772269056-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:46:54Z,user_data=None,user_id='f9c1a967b21e4d05a1e9cb54949a7527',uuid=781be3e3-efcf-46f6-aa85-dff8146f22ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-717258684-network", "vif_mac": "fa:16:3e:ee:67:ca"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.068 2 DEBUG nova.network.os_vif_util [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Converting VIF {"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-717258684-network", "vif_mac": "fa:16:3e:ee:67:ca"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.069 2 DEBUG nova.network.os_vif_util [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=2b948b4e-6553-4f5f-bf05-4bc8467c9898,network=Network(54eb1883-31f7-40fa-864f-3516c27c1276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b948b4e-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.070 2 DEBUG nova.objects.instance [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.089 2 DEBUG nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  <uuid>781be3e3-efcf-46f6-aa85-dff8146f22ae</uuid>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  <name>instance-000000a7</name>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerRescueTestJSON-server-1488280213</nova:name>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:47:14</nova:creationTime>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <nova:user uuid="f9c1a967b21e4d05a1e9cb54949a7527">tempest-ServerRescueTestJSON-1772269056-project-member</nova:user>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <nova:project uuid="508fca18f76a46cba8f3b8b8d8169ef1">tempest-ServerRescueTestJSON-1772269056</nova:project>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <nova:port uuid="2b948b4e-6553-4f5f-bf05-4bc8467c9898">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <entry name="serial">781be3e3-efcf-46f6-aa85-dff8146f22ae</entry>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <entry name="uuid">781be3e3-efcf-46f6-aa85-dff8146f22ae</entry>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.rescue">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/781be3e3-efcf-46f6-aa85-dff8146f22ae_disk">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.config.rescue">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:ee:67:ca"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <target dev="tap2b948b4e-65"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/console.log" append="off"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:47:16 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:47:16 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:47:16 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:47:16 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.097 2 INFO nova.virt.libvirt.driver [-] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Instance destroyed successfully.#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.145 2 DEBUG nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.146 2 DEBUG nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.146 2 DEBUG nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.146 2 DEBUG nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] No VIF found with MAC fa:16:3e:ee:67:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.147 2 INFO nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Using config drive#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.169 2 DEBUG nova.storage.rbd_utils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.190 2 DEBUG nova.objects.instance [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.227 2 DEBUG nova.objects.instance [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lazy-loading 'keypairs' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:47:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:16.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.704 2 INFO nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Creating config drive at /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/disk.config.rescue#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.709 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpupa0lqvh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:16.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.842 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpupa0lqvh" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.873 2 DEBUG nova.storage.rbd_utils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] rbd image 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.877 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/disk.config.rescue 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:16 np0005465987 nova_compute[230713]: 2025-10-02 12:47:16.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.024 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.025 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.025 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.026 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.026 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.064 2 DEBUG oslo_concurrency.processutils [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/disk.config.rescue 781be3e3-efcf-46f6-aa85-dff8146f22ae_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.065 2 INFO nova.virt.libvirt.driver [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Deleting local config drive /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae/disk.config.rescue because it was imported into RBD.#033[00m
Oct  2 08:47:17 np0005465987 kernel: tap2b948b4e-65: entered promiscuous mode
Oct  2 08:47:17 np0005465987 NetworkManager[44910]: <info>  [1759409237.1199] manager: (tap2b948b4e-65): new Tun device (/org/freedesktop/NetworkManager/Devices/297)
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:17Z|00649|binding|INFO|Claiming lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 for this chassis.
Oct  2 08:47:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:17Z|00650|binding|INFO|2b948b4e-6553-4f5f-bf05-4bc8467c9898: Claiming fa:16:3e:ee:67:ca 10.100.0.3
Oct  2 08:47:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:17Z|00651|binding|INFO|Setting lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 ovn-installed in OVS
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:17 np0005465987 systemd-udevd[294021]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:17 np0005465987 systemd-machined[188335]: New machine qemu-76-instance-000000a7.
Oct  2 08:47:17 np0005465987 NetworkManager[44910]: <info>  [1759409237.1701] device (tap2b948b4e-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:47:17 np0005465987 NetworkManager[44910]: <info>  [1759409237.1713] device (tap2b948b4e-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:47:17 np0005465987 systemd[1]: Started Virtual Machine qemu-76-instance-000000a7.
Oct  2 08:47:17 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:17Z|00652|binding|INFO|Setting lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 up in Southbound
Oct  2 08:47:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:17.196 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:67:ca 10.100.0.3'], port_security=['fa:16:3e:ee:67:ca 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '781be3e3-efcf-46f6-aa85-dff8146f22ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54eb1883-31f7-40fa-864f-3516c27c1276', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '508fca18f76a46cba8f3b8b8d8169ef1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1004833a-e427-4539-9aa9-ed5f03a58603', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b082b214-cf6d-4c88-9481-dd3bf0098234, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=2b948b4e-6553-4f5f-bf05-4bc8467c9898) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:47:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:17.197 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 2b948b4e-6553-4f5f-bf05-4bc8467c9898 in datapath 54eb1883-31f7-40fa-864f-3516c27c1276 bound to our chassis#033[00m
Oct  2 08:47:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:17.197 138325 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 54eb1883-31f7-40fa-864f-3516c27c1276 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 08:47:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:17.198 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ea02f0ac-119c-4964-a7d2-23dfbf067327]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1377427477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.474 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.582 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.582 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.582 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.736 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.737 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4277MB free_disk=20.797191619873047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.738 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.738 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.952 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 781be3e3-efcf-46f6-aa85-dff8146f22ae actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.953 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.953 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:47:17 np0005465987 nova_compute[230713]: 2025-10-02 12:47:17.994 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.085 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for 781be3e3-efcf-46f6-aa85-dff8146f22ae due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.085 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409238.0839174, 781be3e3-efcf-46f6-aa85-dff8146f22ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.086 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.091 2 DEBUG nova.compute.manager [None req-b927707e-2e53-466c-b341-70bdacae4ad7 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.193 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.198 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:47:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:18.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:18 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2180663085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.428 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409238.0923584, 781be3e3-efcf-46f6-aa85-dff8146f22ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.429 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] VM Started (Lifecycle Event)#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.432 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.436 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.511 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.516 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.518 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.680 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:47:18 np0005465987 nova_compute[230713]: 2025-10-02 12:47:18.681 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:47:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:18.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:47:19 np0005465987 nova_compute[230713]: 2025-10-02 12:47:19.680 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:19 np0005465987 nova_compute[230713]: 2025-10-02 12:47:19.681 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:20.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:47:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:20.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:47:20 np0005465987 nova_compute[230713]: 2025-10-02 12:47:20.967 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:21 np0005465987 nova_compute[230713]: 2025-10-02 12:47:21.239 2 INFO nova.compute.manager [None req-af003745-9e5c-42c9-b06b-06434cc45499 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Unrescuing#033[00m
Oct  2 08:47:21 np0005465987 nova_compute[230713]: 2025-10-02 12:47:21.240 2 DEBUG oslo_concurrency.lockutils [None req-af003745-9e5c-42c9-b06b-06434cc45499 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquiring lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:21 np0005465987 nova_compute[230713]: 2025-10-02 12:47:21.241 2 DEBUG oslo_concurrency.lockutils [None req-af003745-9e5c-42c9-b06b-06434cc45499 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquired lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:21 np0005465987 nova_compute[230713]: 2025-10-02 12:47:21.241 2 DEBUG nova.network.neutron [None req-af003745-9e5c-42c9-b06b-06434cc45499 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:47:21 np0005465987 nova_compute[230713]: 2025-10-02 12:47:21.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:21 np0005465987 nova_compute[230713]: 2025-10-02 12:47:21.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:22.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:47:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:22.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:47:23 np0005465987 nova_compute[230713]: 2025-10-02 12:47:23.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:24.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:24.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:25 np0005465987 nova_compute[230713]: 2025-10-02 12:47:25.679 2 DEBUG nova.network.neutron [None req-af003745-9e5c-42c9-b06b-06434cc45499 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Updating instance_info_cache with network_info: [{"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:25 np0005465987 nova_compute[230713]: 2025-10-02 12:47:25.929 2 DEBUG oslo_concurrency.lockutils [None req-af003745-9e5c-42c9-b06b-06434cc45499 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Releasing lock "refresh_cache-781be3e3-efcf-46f6-aa85-dff8146f22ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:25 np0005465987 nova_compute[230713]: 2025-10-02 12:47:25.930 2 DEBUG nova.objects.instance [None req-af003745-9e5c-42c9-b06b-06434cc45499 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lazy-loading 'flavor' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:26 np0005465987 kernel: tap2b948b4e-65 (unregistering): left promiscuous mode
Oct  2 08:47:26 np0005465987 NetworkManager[44910]: <info>  [1759409246.0850] device (tap2b948b4e-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:47:26 np0005465987 nova_compute[230713]: 2025-10-02 12:47:26.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:26 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:26Z|00653|binding|INFO|Releasing lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 from this chassis (sb_readonly=0)
Oct  2 08:47:26 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:26Z|00654|binding|INFO|Setting lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 down in Southbound
Oct  2 08:47:26 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:26Z|00655|binding|INFO|Removing iface tap2b948b4e-65 ovn-installed in OVS
Oct  2 08:47:26 np0005465987 nova_compute[230713]: 2025-10-02 12:47:26.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:26 np0005465987 nova_compute[230713]: 2025-10-02 12:47:26.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:26.155 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:67:ca 10.100.0.3'], port_security=['fa:16:3e:ee:67:ca 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '781be3e3-efcf-46f6-aa85-dff8146f22ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54eb1883-31f7-40fa-864f-3516c27c1276', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '508fca18f76a46cba8f3b8b8d8169ef1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1004833a-e427-4539-9aa9-ed5f03a58603', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b082b214-cf6d-4c88-9481-dd3bf0098234, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=2b948b4e-6553-4f5f-bf05-4bc8467c9898) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:47:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:26.156 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 2b948b4e-6553-4f5f-bf05-4bc8467c9898 in datapath 54eb1883-31f7-40fa-864f-3516c27c1276 unbound from our chassis#033[00m
Oct  2 08:47:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:26.157 138325 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 54eb1883-31f7-40fa-864f-3516c27c1276 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 08:47:26 np0005465987 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a7.scope: Deactivated successfully.
Oct  2 08:47:26 np0005465987 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a7.scope: Consumed 8.950s CPU time.
Oct  2 08:47:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:26.157 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[99b8b97e-93ef-4f7a-937b-e62e9cfe2197]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:26 np0005465987 systemd-machined[188335]: Machine qemu-76-instance-000000a7 terminated.
Oct  2 08:47:26 np0005465987 nova_compute[230713]: 2025-10-02 12:47:26.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:26 np0005465987 nova_compute[230713]: 2025-10-02 12:47:26.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:26 np0005465987 nova_compute[230713]: 2025-10-02 12:47:26.264 2 INFO nova.virt.libvirt.driver [-] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Instance destroyed successfully.#033[00m
Oct  2 08:47:26 np0005465987 nova_compute[230713]: 2025-10-02 12:47:26.265 2 DEBUG nova.objects.instance [None req-af003745-9e5c-42c9-b06b-06434cc45499 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:26.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:26 np0005465987 nova_compute[230713]: 2025-10-02 12:47:26.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:26 np0005465987 kernel: tap2b948b4e-65: entered promiscuous mode
Oct  2 08:47:26 np0005465987 systemd-udevd[294131]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:47:26 np0005465987 NetworkManager[44910]: <info>  [1759409246.5733] manager: (tap2b948b4e-65): new Tun device (/org/freedesktop/NetworkManager/Devices/298)
Oct  2 08:47:26 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:26Z|00656|binding|INFO|Claiming lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 for this chassis.
Oct  2 08:47:26 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:26Z|00657|binding|INFO|2b948b4e-6553-4f5f-bf05-4bc8467c9898: Claiming fa:16:3e:ee:67:ca 10.100.0.3
Oct  2 08:47:26 np0005465987 nova_compute[230713]: 2025-10-02 12:47:26.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:26 np0005465987 NetworkManager[44910]: <info>  [1759409246.5846] device (tap2b948b4e-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:47:26 np0005465987 NetworkManager[44910]: <info>  [1759409246.5855] device (tap2b948b4e-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:47:26 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:26Z|00658|binding|INFO|Setting lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 ovn-installed in OVS
Oct  2 08:47:26 np0005465987 nova_compute[230713]: 2025-10-02 12:47:26.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:26 np0005465987 nova_compute[230713]: 2025-10-02 12:47:26.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:26 np0005465987 systemd-machined[188335]: New machine qemu-77-instance-000000a7.
Oct  2 08:47:26 np0005465987 systemd[1]: Started Virtual Machine qemu-77-instance-000000a7.
Oct  2 08:47:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:47:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:26.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:47:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:27Z|00659|binding|INFO|Setting lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 up in Southbound
Oct  2 08:47:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:27.167 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:67:ca 10.100.0.3'], port_security=['fa:16:3e:ee:67:ca 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '781be3e3-efcf-46f6-aa85-dff8146f22ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54eb1883-31f7-40fa-864f-3516c27c1276', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '508fca18f76a46cba8f3b8b8d8169ef1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1004833a-e427-4539-9aa9-ed5f03a58603', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b082b214-cf6d-4c88-9481-dd3bf0098234, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=2b948b4e-6553-4f5f-bf05-4bc8467c9898) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:47:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:27.169 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 2b948b4e-6553-4f5f-bf05-4bc8467c9898 in datapath 54eb1883-31f7-40fa-864f-3516c27c1276 bound to our chassis#033[00m
Oct  2 08:47:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:27.171 138325 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 54eb1883-31f7-40fa-864f-3516c27c1276 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 08:47:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:27.171 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6eb402-66a1-4b84-b251-c95a7c5964e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.442 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for 781be3e3-efcf-46f6-aa85-dff8146f22ae due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.443 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409247.4425297, 781be3e3-efcf-46f6-aa85-dff8146f22ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.444 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.783 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.785 2 DEBUG nova.compute.manager [req-b91c91ed-6ecd-4a43-9644-c3934fafb9e0 req-4b2b8209-7bc3-4550-b5bb-75546186df4a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.785 2 DEBUG oslo_concurrency.lockutils [req-b91c91ed-6ecd-4a43-9644-c3934fafb9e0 req-4b2b8209-7bc3-4550-b5bb-75546186df4a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.786 2 DEBUG oslo_concurrency.lockutils [req-b91c91ed-6ecd-4a43-9644-c3934fafb9e0 req-4b2b8209-7bc3-4550-b5bb-75546186df4a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.786 2 DEBUG oslo_concurrency.lockutils [req-b91c91ed-6ecd-4a43-9644-c3934fafb9e0 req-4b2b8209-7bc3-4550-b5bb-75546186df4a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.786 2 DEBUG nova.compute.manager [req-b91c91ed-6ecd-4a43-9644-c3934fafb9e0 req-4b2b8209-7bc3-4550-b5bb-75546186df4a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] No waiting events found dispatching network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.787 2 WARNING nova.compute.manager [req-b91c91ed-6ecd-4a43-9644-c3934fafb9e0 req-4b2b8209-7bc3-4550-b5bb-75546186df4a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received unexpected event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.790 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.868 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.868 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409247.4452827, 781be3e3-efcf-46f6-aa85-dff8146f22ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.869 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] VM Started (Lifecycle Event)#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.958 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:27 np0005465987 nova_compute[230713]: 2025-10-02 12:47:27.962 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:47:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:27.971 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:27.972 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:27.972 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:28 np0005465987 nova_compute[230713]: 2025-10-02 12:47:28.048 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:47:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:28.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:28 np0005465987 nova_compute[230713]: 2025-10-02 12:47:28.282 2 DEBUG nova.compute.manager [None req-af003745-9e5c-42c9-b06b-06434cc45499 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:28 np0005465987 nova_compute[230713]: 2025-10-02 12:47:28.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:28.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.859 2 DEBUG nova.compute.manager [req-e1e9c7c3-3a35-48da-9f0e-0c67c2cf3d38 req-3e11299c-1a59-4842-82b4-26640300ac6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.860 2 DEBUG oslo_concurrency.lockutils [req-e1e9c7c3-3a35-48da-9f0e-0c67c2cf3d38 req-3e11299c-1a59-4842-82b4-26640300ac6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.861 2 DEBUG oslo_concurrency.lockutils [req-e1e9c7c3-3a35-48da-9f0e-0c67c2cf3d38 req-3e11299c-1a59-4842-82b4-26640300ac6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.861 2 DEBUG oslo_concurrency.lockutils [req-e1e9c7c3-3a35-48da-9f0e-0c67c2cf3d38 req-3e11299c-1a59-4842-82b4-26640300ac6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.861 2 DEBUG nova.compute.manager [req-e1e9c7c3-3a35-48da-9f0e-0c67c2cf3d38 req-3e11299c-1a59-4842-82b4-26640300ac6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] No waiting events found dispatching network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.862 2 WARNING nova.compute.manager [req-e1e9c7c3-3a35-48da-9f0e-0c67c2cf3d38 req-3e11299c-1a59-4842-82b4-26640300ac6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received unexpected event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.862 2 DEBUG nova.compute.manager [req-e1e9c7c3-3a35-48da-9f0e-0c67c2cf3d38 req-3e11299c-1a59-4842-82b4-26640300ac6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-unplugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.862 2 DEBUG oslo_concurrency.lockutils [req-e1e9c7c3-3a35-48da-9f0e-0c67c2cf3d38 req-3e11299c-1a59-4842-82b4-26640300ac6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.863 2 DEBUG oslo_concurrency.lockutils [req-e1e9c7c3-3a35-48da-9f0e-0c67c2cf3d38 req-3e11299c-1a59-4842-82b4-26640300ac6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.863 2 DEBUG oslo_concurrency.lockutils [req-e1e9c7c3-3a35-48da-9f0e-0c67c2cf3d38 req-3e11299c-1a59-4842-82b4-26640300ac6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.863 2 DEBUG nova.compute.manager [req-e1e9c7c3-3a35-48da-9f0e-0c67c2cf3d38 req-3e11299c-1a59-4842-82b4-26640300ac6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] No waiting events found dispatching network-vif-unplugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.863 2 WARNING nova.compute.manager [req-e1e9c7c3-3a35-48da-9f0e-0c67c2cf3d38 req-3e11299c-1a59-4842-82b4-26640300ac6c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received unexpected event network-vif-unplugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:29 np0005465987 nova_compute[230713]: 2025-10-02 12:47:29.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:47:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.242 2 DEBUG oslo_concurrency.lockutils [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.242 2 DEBUG oslo_concurrency.lockutils [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.243 2 DEBUG oslo_concurrency.lockutils [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.243 2 DEBUG oslo_concurrency.lockutils [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.244 2 DEBUG oslo_concurrency.lockutils [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.246 2 INFO nova.compute.manager [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Terminating instance#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.248 2 DEBUG nova.compute.manager [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:47:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:30.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:30 np0005465987 kernel: tap2b948b4e-65 (unregistering): left promiscuous mode
Oct  2 08:47:30 np0005465987 NetworkManager[44910]: <info>  [1759409250.3183] device (tap2b948b4e-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:30Z|00660|binding|INFO|Releasing lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 from this chassis (sb_readonly=0)
Oct  2 08:47:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:30Z|00661|binding|INFO|Setting lport 2b948b4e-6553-4f5f-bf05-4bc8467c9898 down in Southbound
Oct  2 08:47:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:47:30Z|00662|binding|INFO|Removing iface tap2b948b4e-65 ovn-installed in OVS
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:30.381 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:67:ca 10.100.0.3'], port_security=['fa:16:3e:ee:67:ca 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '781be3e3-efcf-46f6-aa85-dff8146f22ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54eb1883-31f7-40fa-864f-3516c27c1276', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '508fca18f76a46cba8f3b8b8d8169ef1', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1004833a-e427-4539-9aa9-ed5f03a58603', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b082b214-cf6d-4c88-9481-dd3bf0098234, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=2b948b4e-6553-4f5f-bf05-4bc8467c9898) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:47:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:30.382 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 2b948b4e-6553-4f5f-bf05-4bc8467c9898 in datapath 54eb1883-31f7-40fa-864f-3516c27c1276 unbound from our chassis#033[00m
Oct  2 08:47:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:30.383 138325 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 54eb1883-31f7-40fa-864f-3516c27c1276 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 08:47:30 np0005465987 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a7.scope: Deactivated successfully.
Oct  2 08:47:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:30.384 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd1c5c1-78fc-490c-85f5-878d402e1cf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:30 np0005465987 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a7.scope: Consumed 3.672s CPU time.
Oct  2 08:47:30 np0005465987 systemd-machined[188335]: Machine qemu-77-instance-000000a7 terminated.
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.481 2 INFO nova.virt.libvirt.driver [-] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Instance destroyed successfully.#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.482 2 DEBUG nova.objects.instance [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lazy-loading 'resources' on Instance uuid 781be3e3-efcf-46f6-aa85-dff8146f22ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.535 2 DEBUG nova.virt.libvirt.vif [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:46:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1488280213',display_name='tempest-ServerRescueTestJSON-server-1488280213',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1488280213',id=167,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:47:18Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='508fca18f76a46cba8f3b8b8d8169ef1',ramdisk_id='',reservation_id='r-leqzacb5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_dis
k='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1772269056',owner_user_name='tempest-ServerRescueTestJSON-1772269056-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:47:28Z,user_data=None,user_id='f9c1a967b21e4d05a1e9cb54949a7527',uuid=781be3e3-efcf-46f6-aa85-dff8146f22ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.536 2 DEBUG nova.network.os_vif_util [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Converting VIF {"id": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "address": "fa:16:3e:ee:67:ca", "network": {"id": "54eb1883-31f7-40fa-864f-3516c27c1276", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-717258684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "508fca18f76a46cba8f3b8b8d8169ef1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b948b4e-65", "ovs_interfaceid": "2b948b4e-6553-4f5f-bf05-4bc8467c9898", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.536 2 DEBUG nova.network.os_vif_util [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=2b948b4e-6553-4f5f-bf05-4bc8467c9898,network=Network(54eb1883-31f7-40fa-864f-3516c27c1276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b948b4e-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.537 2 DEBUG os_vif [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=2b948b4e-6553-4f5f-bf05-4bc8467c9898,network=Network(54eb1883-31f7-40fa-864f-3516c27c1276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b948b4e-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.539 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b948b4e-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:47:30 np0005465987 nova_compute[230713]: 2025-10-02 12:47:30.583 2 INFO os_vif [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=2b948b4e-6553-4f5f-bf05-4bc8467c9898,network=Network(54eb1883-31f7-40fa-864f-3516c27c1276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b948b4e-65')#033[00m
Oct  2 08:47:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:47:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:30.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:47:31 np0005465987 nova_compute[230713]: 2025-10-02 12:47:31.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.007 2 DEBUG nova.compute.manager [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.007 2 DEBUG oslo_concurrency.lockutils [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.008 2 DEBUG oslo_concurrency.lockutils [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.008 2 DEBUG oslo_concurrency.lockutils [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.008 2 DEBUG nova.compute.manager [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] No waiting events found dispatching network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.008 2 WARNING nova.compute.manager [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received unexpected event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.008 2 DEBUG nova.compute.manager [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.008 2 DEBUG oslo_concurrency.lockutils [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.009 2 DEBUG oslo_concurrency.lockutils [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.009 2 DEBUG oslo_concurrency.lockutils [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.009 2 DEBUG nova.compute.manager [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] No waiting events found dispatching network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.009 2 WARNING nova.compute.manager [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received unexpected event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.009 2 DEBUG nova.compute.manager [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.009 2 DEBUG oslo_concurrency.lockutils [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.010 2 DEBUG oslo_concurrency.lockutils [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.010 2 DEBUG oslo_concurrency.lockutils [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.010 2 DEBUG nova.compute.manager [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] No waiting events found dispatching network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.010 2 WARNING nova.compute.manager [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received unexpected event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.010 2 DEBUG nova.compute.manager [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-unplugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.010 2 DEBUG oslo_concurrency.lockutils [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.010 2 DEBUG oslo_concurrency.lockutils [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.010 2 DEBUG oslo_concurrency.lockutils [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.011 2 DEBUG nova.compute.manager [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] No waiting events found dispatching network-vif-unplugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:32 np0005465987 nova_compute[230713]: 2025-10-02 12:47:32.011 2 DEBUG nova.compute.manager [req-df55dde4-df24-42ee-a96e-c3fe798b9699 req-77fdd751-65eb-433e-9336-879d670bf111 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-unplugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:47:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:32.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:32.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:33 np0005465987 nova_compute[230713]: 2025-10-02 12:47:33.365 2 INFO nova.virt.libvirt.driver [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Deleting instance files /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae_del#033[00m
Oct  2 08:47:33 np0005465987 nova_compute[230713]: 2025-10-02 12:47:33.365 2 INFO nova.virt.libvirt.driver [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Deletion of /var/lib/nova/instances/781be3e3-efcf-46f6-aa85-dff8146f22ae_del complete#033[00m
Oct  2 08:47:33 np0005465987 nova_compute[230713]: 2025-10-02 12:47:33.645 2 INFO nova.compute.manager [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Took 3.40 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:47:33 np0005465987 nova_compute[230713]: 2025-10-02 12:47:33.646 2 DEBUG oslo.service.loopingcall [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:47:33 np0005465987 nova_compute[230713]: 2025-10-02 12:47:33.646 2 DEBUG nova.compute.manager [-] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:47:33 np0005465987 nova_compute[230713]: 2025-10-02 12:47:33.646 2 DEBUG nova.network.neutron [-] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:47:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:34.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:34.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:34 np0005465987 nova_compute[230713]: 2025-10-02 12:47:34.924 2 DEBUG nova.compute.manager [req-a09f1202-6c91-41a0-80fd-e6e436455798 req-962b9e04-ae43-4fd8-af59-bc998da15aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:34 np0005465987 nova_compute[230713]: 2025-10-02 12:47:34.925 2 DEBUG oslo_concurrency.lockutils [req-a09f1202-6c91-41a0-80fd-e6e436455798 req-962b9e04-ae43-4fd8-af59-bc998da15aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:34 np0005465987 nova_compute[230713]: 2025-10-02 12:47:34.925 2 DEBUG oslo_concurrency.lockutils [req-a09f1202-6c91-41a0-80fd-e6e436455798 req-962b9e04-ae43-4fd8-af59-bc998da15aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:34 np0005465987 nova_compute[230713]: 2025-10-02 12:47:34.926 2 DEBUG oslo_concurrency.lockutils [req-a09f1202-6c91-41a0-80fd-e6e436455798 req-962b9e04-ae43-4fd8-af59-bc998da15aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:34 np0005465987 nova_compute[230713]: 2025-10-02 12:47:34.926 2 DEBUG nova.compute.manager [req-a09f1202-6c91-41a0-80fd-e6e436455798 req-962b9e04-ae43-4fd8-af59-bc998da15aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] No waiting events found dispatching network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:34 np0005465987 nova_compute[230713]: 2025-10-02 12:47:34.926 2 WARNING nova.compute.manager [req-a09f1202-6c91-41a0-80fd-e6e436455798 req-962b9e04-ae43-4fd8-af59-bc998da15aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received unexpected event network-vif-plugged-2b948b4e-6553-4f5f-bf05-4bc8467c9898 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:47:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:35 np0005465987 nova_compute[230713]: 2025-10-02 12:47:35.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:36 np0005465987 nova_compute[230713]: 2025-10-02 12:47:36.249 2 DEBUG nova.network.neutron [-] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:47:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:36.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:47:36 np0005465987 nova_compute[230713]: 2025-10-02 12:47:36.296 2 INFO nova.compute.manager [-] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Took 2.65 seconds to deallocate network for instance.#033[00m
Oct  2 08:47:36 np0005465987 nova_compute[230713]: 2025-10-02 12:47:36.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:36 np0005465987 nova_compute[230713]: 2025-10-02 12:47:36.394 2 DEBUG oslo_concurrency.lockutils [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:36 np0005465987 nova_compute[230713]: 2025-10-02 12:47:36.394 2 DEBUG oslo_concurrency.lockutils [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:36 np0005465987 nova_compute[230713]: 2025-10-02 12:47:36.453 2 DEBUG nova.compute.manager [req-1b5664b1-144e-42ab-82e6-a79c1593d871 req-4689bfe4-3509-45d1-85dc-445c03c42889 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Received event network-vif-deleted-2b948b4e-6553-4f5f-bf05-4bc8467c9898 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:36 np0005465987 nova_compute[230713]: 2025-10-02 12:47:36.468 2 DEBUG oslo_concurrency.processutils [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:36.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4200530497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:36 np0005465987 nova_compute[230713]: 2025-10-02 12:47:36.968 2 DEBUG oslo_concurrency.processutils [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:36 np0005465987 nova_compute[230713]: 2025-10-02 12:47:36.975 2 DEBUG nova.compute.provider_tree [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:47:37 np0005465987 nova_compute[230713]: 2025-10-02 12:47:37.001 2 DEBUG nova.scheduler.client.report [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:47:37 np0005465987 nova_compute[230713]: 2025-10-02 12:47:37.033 2 DEBUG oslo_concurrency.lockutils [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:37 np0005465987 nova_compute[230713]: 2025-10-02 12:47:37.087 2 INFO nova.scheduler.client.report [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Deleted allocations for instance 781be3e3-efcf-46f6-aa85-dff8146f22ae#033[00m
Oct  2 08:47:37 np0005465987 nova_compute[230713]: 2025-10-02 12:47:37.199 2 DEBUG oslo_concurrency.lockutils [None req-31e98839-d57f-4ce3-88cc-7b3a3669b547 f9c1a967b21e4d05a1e9cb54949a7527 508fca18f76a46cba8f3b8b8d8169ef1 - - default default] Lock "781be3e3-efcf-46f6-aa85-dff8146f22ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:38.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:38.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:39 np0005465987 podman[294283]: 2025-10-02 12:47:39.842856828 +0000 UTC m=+0.056719196 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Oct  2 08:47:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:40.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:40 np0005465987 nova_compute[230713]: 2025-10-02 12:47:40.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:40.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:41 np0005465987 nova_compute[230713]: 2025-10-02 12:47:41.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:42.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:42.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:43 np0005465987 podman[294304]: 2025-10-02 12:47:43.849068775 +0000 UTC m=+0.056594482 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:47:43 np0005465987 podman[294303]: 2025-10-02 12:47:43.863903484 +0000 UTC m=+0.073502487 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:47:43 np0005465987 podman[294302]: 2025-10-02 12:47:43.870229134 +0000 UTC m=+0.087597536 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 08:47:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:44.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:44.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:45 np0005465987 nova_compute[230713]: 2025-10-02 12:47:45.480 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409250.4791734, 781be3e3-efcf-46f6-aa85-dff8146f22ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:45 np0005465987 nova_compute[230713]: 2025-10-02 12:47:45.480 2 INFO nova.compute.manager [-] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:47:45 np0005465987 nova_compute[230713]: 2025-10-02 12:47:45.530 2 DEBUG nova.compute.manager [None req-996af905-f7c6-457e-b0c7-d1e30fb283fc - - - - - -] [instance: 781be3e3-efcf-46f6-aa85-dff8146f22ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:45 np0005465987 nova_compute[230713]: 2025-10-02 12:47:45.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:46.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:46 np0005465987 nova_compute[230713]: 2025-10-02 12:47:46.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:47:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:46.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:47:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:47:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:48.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:47:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:48.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:50.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:50 np0005465987 nova_compute[230713]: 2025-10-02 12:47:50.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:50.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:51.236 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:47:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:47:51.237 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:47:51 np0005465987 nova_compute[230713]: 2025-10-02 12:47:51.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:51 np0005465987 nova_compute[230713]: 2025-10-02 12:47:51.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:52.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:47:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:52.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:47:54 np0005465987 nova_compute[230713]: 2025-10-02 12:47:54.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:47:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:54.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:47:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:54.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:47:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1821411203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:47:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:47:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1821411203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:47:55 np0005465987 nova_compute[230713]: 2025-10-02 12:47:55.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:56.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:56 np0005465987 nova_compute[230713]: 2025-10-02 12:47:56.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:56.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:47:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:58.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:47:58 np0005465987 systemd[1]: Starting dnf makecache...
Oct  2 08:47:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e359 e359: 3 total, 3 up, 3 in
Oct  2 08:47:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:47:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:47:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:58.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:47:58 np0005465987 dnf[294391]: Metadata cache refreshed recently.
Oct  2 08:47:58 np0005465987 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  2 08:47:58 np0005465987 systemd[1]: Finished dnf makecache.
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.044966) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409279044996, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 2397, "num_deletes": 253, "total_data_size": 5639120, "memory_usage": 5709760, "flush_reason": "Manual Compaction"}
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409279070537, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 3685261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60781, "largest_seqno": 63173, "table_properties": {"data_size": 3675699, "index_size": 5992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20366, "raw_average_key_size": 20, "raw_value_size": 3656361, "raw_average_value_size": 3693, "num_data_blocks": 261, "num_entries": 990, "num_filter_entries": 990, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409071, "oldest_key_time": 1759409071, "file_creation_time": 1759409279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 25627 microseconds, and 6724 cpu microseconds.
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.070587) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 3685261 bytes OK
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.070609) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.072304) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.072319) EVENT_LOG_v1 {"time_micros": 1759409279072314, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.072338) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 5628640, prev total WAL file size 5628640, number of live WAL files 2.
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.073824) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(3598KB)], [120(11MB)]
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409279073864, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 15781065, "oldest_snapshot_seqno": -1}
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 8927 keys, 13811603 bytes, temperature: kUnknown
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409279134584, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 13811603, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13750599, "index_size": 37596, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22341, "raw_key_size": 230780, "raw_average_key_size": 25, "raw_value_size": 13590767, "raw_average_value_size": 1522, "num_data_blocks": 1473, "num_entries": 8927, "num_filter_entries": 8927, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759409279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.134899) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 13811603 bytes
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.136529) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 259.6 rd, 227.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.5 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 9452, records dropped: 525 output_compression: NoCompression
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.136547) EVENT_LOG_v1 {"time_micros": 1759409279136539, "job": 76, "event": "compaction_finished", "compaction_time_micros": 60794, "compaction_time_cpu_micros": 29233, "output_level": 6, "num_output_files": 1, "total_output_size": 13811603, "num_input_records": 9452, "num_output_records": 8927, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409279137271, "job": 76, "event": "table_file_deletion", "file_number": 122}
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409279139496, "job": 76, "event": "table_file_deletion", "file_number": 120}
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.073703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.139579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.139588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.139590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.139592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:59 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:47:59.139594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:48:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:48:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:48:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:48:00 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:48:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:00.239 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:00.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:00 np0005465987 nova_compute[230713]: 2025-10-02 12:48:00.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:00.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:48:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:48:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:48:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:48:01 np0005465987 nova_compute[230713]: 2025-10-02 12:48:01.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:02.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:48:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:02.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:48:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e360 e360: 3 total, 3 up, 3 in
Oct  2 08:48:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:04.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:04.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e361 e361: 3 total, 3 up, 3 in
Oct  2 08:48:05 np0005465987 nova_compute[230713]: 2025-10-02 12:48:05.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:48:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:06.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:48:06 np0005465987 nova_compute[230713]: 2025-10-02 12:48:06.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e362 e362: 3 total, 3 up, 3 in
Oct  2 08:48:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:06.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:07 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:48:07 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:48:07 np0005465987 nova_compute[230713]: 2025-10-02 12:48:07.967 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:08.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:08.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:10.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:10 np0005465987 nova_compute[230713]: 2025-10-02 12:48:10.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:48:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:10.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:48:10 np0005465987 podman[294550]: 2025-10-02 12:48:10.825502813 +0000 UTC m=+0.047226370 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct  2 08:48:11 np0005465987 nova_compute[230713]: 2025-10-02 12:48:11.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:11 np0005465987 nova_compute[230713]: 2025-10-02 12:48:11.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:11 np0005465987 nova_compute[230713]: 2025-10-02 12:48:11.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:48:11 np0005465987 nova_compute[230713]: 2025-10-02 12:48:11.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:48:12 np0005465987 nova_compute[230713]: 2025-10-02 12:48:12.112 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:48:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:12.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:12.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e363 e363: 3 total, 3 up, 3 in
Oct  2 08:48:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:14.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:14.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:14 np0005465987 podman[294571]: 2025-10-02 12:48:14.828785532 +0000 UTC m=+0.048908625 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, managed_by=edpm_ansible)
Oct  2 08:48:14 np0005465987 podman[294572]: 2025-10-02 12:48:14.849599503 +0000 UTC m=+0.068236327 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:48:14 np0005465987 podman[294570]: 2025-10-02 12:48:14.862914471 +0000 UTC m=+0.086228320 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:48:14 np0005465987 nova_compute[230713]: 2025-10-02 12:48:14.931 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:14 np0005465987 nova_compute[230713]: 2025-10-02 12:48:14.931 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:14 np0005465987 nova_compute[230713]: 2025-10-02 12:48:14.955 2 DEBUG nova.compute.manager [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.057 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.057 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.065 2 DEBUG nova.virt.hardware [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.066 2 INFO nova.compute.claims [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.177 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Acquiring lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.178 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.199 2 DEBUG nova.compute.manager [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.223 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.291 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:48:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1080719275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.654 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.659 2 DEBUG nova.compute.provider_tree [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.675 2 DEBUG nova.scheduler.client.report [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.714 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.715 2 DEBUG nova.compute.manager [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.718 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.725 2 DEBUG nova.virt.hardware [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.726 2 INFO nova.compute.claims [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.842 2 DEBUG nova.compute.manager [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.842 2 DEBUG nova.network.neutron [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.867 2 INFO nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:48:15 np0005465987 nova_compute[230713]: 2025-10-02 12:48:15.891 2 DEBUG nova.compute.manager [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.008 2 DEBUG oslo_concurrency.processutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.076 2 DEBUG nova.compute.manager [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.078 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.079 2 INFO nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Creating image(s)
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.109 2 DEBUG nova.storage.rbd_utils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.142 2 DEBUG nova.storage.rbd_utils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.172 2 DEBUG nova.storage.rbd_utils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.176 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.202 2 DEBUG nova.policy [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16730f38111542e58a05fb4deb2b3914', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5ade962c517a483dbfe4bb13386f0006', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.240 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.241 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.242 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.242 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.277 2 DEBUG nova.storage.rbd_utils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.281 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:48:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:16.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:48:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:48:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1882792622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.475 2 DEBUG oslo_concurrency.processutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.482 2 DEBUG nova.compute.provider_tree [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.510 2 DEBUG nova.scheduler.client.report [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.539 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.540 2 DEBUG nova.compute.manager [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.604 2 DEBUG nova.compute.manager [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.604 2 DEBUG nova.network.neutron [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.614 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.643 2 INFO nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.692 2 DEBUG nova.storage.rbd_utils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] resizing rbd image a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.728 2 DEBUG nova.compute.manager [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.808 2 DEBUG nova.objects.instance [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'migration_context' on Instance uuid a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:48:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:16.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.853 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.853 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Ensure instance console log exists: /var/lib/nova/instances/a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.854 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.855 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.855 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.921 2 DEBUG nova.compute.manager [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.924 2 DEBUG nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.924 2 INFO nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Creating image(s)
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.951 2 DEBUG nova.storage.rbd_utils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] rbd image dc0b661c-6c12-4408-9e23-dc3883bd5138_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:48:16 np0005465987 nova_compute[230713]: 2025-10-02 12:48:16.979 2 DEBUG nova.storage.rbd_utils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] rbd image dc0b661c-6c12-4408-9e23-dc3883bd5138_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.008 2 DEBUG nova.storage.rbd_utils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] rbd image dc0b661c-6c12-4408-9e23-dc3883bd5138_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.011 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Acquiring lock "2a5e56526b0a579b52b734eac501a3817a9030eb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.012 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "2a5e56526b0a579b52b734eac501a3817a9030eb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.024 2 DEBUG nova.policy [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a24a7109471f4d96ad5f11b637fdb8e7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a837417d42da439cb794b4295bca2cee', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.105 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.447 2 DEBUG nova.virt.libvirt.imagebackend [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/a62d470f-f755-4a5a-b8e5-0dc1be0600d1/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/a62d470f-f755-4a5a-b8e5-0dc1be0600d1/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.506 2 DEBUG nova.virt.libvirt.imagebackend [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Selected location: {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/a62d470f-f755-4a5a-b8e5-0dc1be0600d1/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.507 2 DEBUG nova.storage.rbd_utils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] cloning images/a62d470f-f755-4a5a-b8e5-0dc1be0600d1@snap to None/dc0b661c-6c12-4408-9e23-dc3883bd5138_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.641 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "2a5e56526b0a579b52b734eac501a3817a9030eb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.769 2 DEBUG nova.objects.instance [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lazy-loading 'migration_context' on Instance uuid dc0b661c-6c12-4408-9e23-dc3883bd5138 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.775 2 DEBUG nova.network.neutron [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Successfully created port: 0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.798 2 DEBUG nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.799 2 DEBUG nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Ensure instance console log exists: /var/lib/nova/instances/dc0b661c-6c12-4408-9e23-dc3883bd5138/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.800 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.800 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:17 np0005465987 nova_compute[230713]: 2025-10-02 12:48:17.800 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:18 np0005465987 nova_compute[230713]: 2025-10-02 12:48:18.125 2 DEBUG nova.network.neutron [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Successfully created port: d031f59a-c8a6-4384-a905-41cab6c3b876 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:48:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:18.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:48:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:18.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:48:18 np0005465987 nova_compute[230713]: 2025-10-02 12:48:18.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:18 np0005465987 nova_compute[230713]: 2025-10-02 12:48:18.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.029 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.030 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.030 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.030 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.031 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.443 2 DEBUG nova.network.neutron [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Successfully updated port: 0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:48:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:48:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2259826827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.475 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "refresh_cache-a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.475 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquired lock "refresh_cache-a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.475 2 DEBUG nova.network.neutron [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.481 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.577 2 DEBUG nova.compute.manager [req-bf263ccb-a315-484f-a54a-c432e7ef6d06 req-15ab5712-2d36-4c6a-ba1d-0319f0fe2755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Received event network-changed-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.578 2 DEBUG nova.compute.manager [req-bf263ccb-a315-484f-a54a-c432e7ef6d06 req-15ab5712-2d36-4c6a-ba1d-0319f0fe2755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Refreshing instance network info cache due to event network-changed-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.578 2 DEBUG oslo_concurrency.lockutils [req-bf263ccb-a315-484f-a54a-c432e7ef6d06 req-15ab5712-2d36-4c6a-ba1d-0319f0fe2755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.632 2 DEBUG nova.network.neutron [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Successfully updated port: d031f59a-c8a6-4384-a905-41cab6c3b876 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.640 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.641 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4334MB free_disk=20.897113800048828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.641 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.641 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.667 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Acquiring lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.667 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Acquired lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.667 2 DEBUG nova.network.neutron [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.727 2 DEBUG nova.network.neutron [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.744 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.744 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance dc0b661c-6c12-4408-9e23-dc3883bd5138 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.745 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.745 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.825 2 DEBUG nova.compute.manager [req-124213dc-2fde-432a-bb25-effc028d4c7a req-9b4ecf8e-562c-4112-b67b-f7fa3792c04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Received event network-changed-d031f59a-c8a6-4384-a905-41cab6c3b876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.826 2 DEBUG nova.compute.manager [req-124213dc-2fde-432a-bb25-effc028d4c7a req-9b4ecf8e-562c-4112-b67b-f7fa3792c04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Refreshing instance network info cache due to event network-changed-d031f59a-c8a6-4384-a905-41cab6c3b876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.826 2 DEBUG oslo_concurrency.lockutils [req-124213dc-2fde-432a-bb25-effc028d4c7a req-9b4ecf8e-562c-4112-b67b-f7fa3792c04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:19 np0005465987 nova_compute[230713]: 2025-10-02 12:48:19.846 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:20 np0005465987 nova_compute[230713]: 2025-10-02 12:48:20.000 2 DEBUG nova.network.neutron [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:48:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:48:20 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1654658840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:48:20 np0005465987 nova_compute[230713]: 2025-10-02 12:48:20.278 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:20 np0005465987 nova_compute[230713]: 2025-10-02 12:48:20.283 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:48:20 np0005465987 nova_compute[230713]: 2025-10-02 12:48:20.312 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:48:20 np0005465987 nova_compute[230713]: 2025-10-02 12:48:20.337 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:48:20 np0005465987 nova_compute[230713]: 2025-10-02 12:48:20.337 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:20.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:20 np0005465987 nova_compute[230713]: 2025-10-02 12:48:20.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:20.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:20 np0005465987 nova_compute[230713]: 2025-10-02 12:48:20.981 2 DEBUG nova.network.neutron [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Updating instance_info_cache with network_info: [{"id": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "address": "fa:16:3e:2a:4a:32", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bc32725-9e", "ovs_interfaceid": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.032 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Releasing lock "refresh_cache-a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.032 2 DEBUG nova.compute.manager [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Instance network_info: |[{"id": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "address": "fa:16:3e:2a:4a:32", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bc32725-9e", "ovs_interfaceid": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.033 2 DEBUG oslo_concurrency.lockutils [req-bf263ccb-a315-484f-a54a-c432e7ef6d06 req-15ab5712-2d36-4c6a-ba1d-0319f0fe2755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.033 2 DEBUG nova.network.neutron [req-bf263ccb-a315-484f-a54a-c432e7ef6d06 req-15ab5712-2d36-4c6a-ba1d-0319f0fe2755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Refreshing network info cache for port 0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.037 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Start _get_guest_xml network_info=[{"id": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "address": "fa:16:3e:2a:4a:32", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bc32725-9e", "ovs_interfaceid": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.042 2 WARNING nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.051 2 DEBUG nova.virt.libvirt.host [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.052 2 DEBUG nova.virt.libvirt.host [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.055 2 DEBUG nova.virt.libvirt.host [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.056 2 DEBUG nova.virt.libvirt.host [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.057 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.057 2 DEBUG nova.virt.hardware [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.058 2 DEBUG nova.virt.hardware [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.058 2 DEBUG nova.virt.hardware [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.058 2 DEBUG nova.virt.hardware [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.058 2 DEBUG nova.virt.hardware [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.059 2 DEBUG nova.virt.hardware [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.059 2 DEBUG nova.virt.hardware [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.059 2 DEBUG nova.virt.hardware [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.060 2 DEBUG nova.virt.hardware [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.060 2 DEBUG nova.virt.hardware [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.060 2 DEBUG nova.virt.hardware [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.063 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.337 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:48:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/207111384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.485 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.515 2 DEBUG nova.storage.rbd_utils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.520 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.654 2 DEBUG nova.network.neutron [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Updating instance_info_cache with network_info: [{"id": "d031f59a-c8a6-4384-a905-41cab6c3b876", "address": "fa:16:3e:10:81:59", "network": {"id": "0739dafe-4d9b-4048-ac97-c017fd298447", "bridge": "br-int", "label": "tempest-TestStampPattern-656915325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a837417d42da439cb794b4295bca2cee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd031f59a-c8", "ovs_interfaceid": "d031f59a-c8a6-4384-a905-41cab6c3b876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.678 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Releasing lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.678 2 DEBUG nova.compute.manager [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Instance network_info: |[{"id": "d031f59a-c8a6-4384-a905-41cab6c3b876", "address": "fa:16:3e:10:81:59", "network": {"id": "0739dafe-4d9b-4048-ac97-c017fd298447", "bridge": "br-int", "label": "tempest-TestStampPattern-656915325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a837417d42da439cb794b4295bca2cee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd031f59a-c8", "ovs_interfaceid": "d031f59a-c8a6-4384-a905-41cab6c3b876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.679 2 DEBUG oslo_concurrency.lockutils [req-124213dc-2fde-432a-bb25-effc028d4c7a req-9b4ecf8e-562c-4112-b67b-f7fa3792c04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.679 2 DEBUG nova.network.neutron [req-124213dc-2fde-432a-bb25-effc028d4c7a req-9b4ecf8e-562c-4112-b67b-f7fa3792c04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Refreshing network info cache for port d031f59a-c8a6-4384-a905-41cab6c3b876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.683 2 DEBUG nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Start _get_guest_xml network_info=[{"id": "d031f59a-c8a6-4384-a905-41cab6c3b876", "address": "fa:16:3e:10:81:59", "network": {"id": "0739dafe-4d9b-4048-ac97-c017fd298447", "bridge": "br-int", "label": "tempest-TestStampPattern-656915325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a837417d42da439cb794b4295bca2cee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd031f59a-c8", "ovs_interfaceid": "d031f59a-c8a6-4384-a905-41cab6c3b876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:48:00Z,direct_url=<?>,disk_format='raw',id=a62d470f-f755-4a5a-b8e5-0dc1be0600d1,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1647522224',owner='a837417d42da439cb794b4295bca2cee',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:48:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'a62d470f-f755-4a5a-b8e5-0dc1be0600d1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.687 2 WARNING nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.695 2 DEBUG nova.virt.libvirt.host [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.696 2 DEBUG nova.virt.libvirt.host [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.703 2 DEBUG nova.virt.libvirt.host [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.703 2 DEBUG nova.virt.libvirt.host [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.704 2 DEBUG nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.704 2 DEBUG nova.virt.hardware [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:48:00Z,direct_url=<?>,disk_format='raw',id=a62d470f-f755-4a5a-b8e5-0dc1be0600d1,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1647522224',owner='a837417d42da439cb794b4295bca2cee',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:48:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.705 2 DEBUG nova.virt.hardware [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.705 2 DEBUG nova.virt.hardware [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.705 2 DEBUG nova.virt.hardware [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.705 2 DEBUG nova.virt.hardware [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.706 2 DEBUG nova.virt.hardware [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.706 2 DEBUG nova.virt.hardware [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.706 2 DEBUG nova.virt.hardware [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.706 2 DEBUG nova.virt.hardware [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.706 2 DEBUG nova.virt.hardware [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.707 2 DEBUG nova.virt.hardware [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.709 2 DEBUG oslo_concurrency.processutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:48:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2753409632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.990 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.993 2 DEBUG nova.virt.libvirt.vif [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:48:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-0-1140484248',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-0-1140484248',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ge',id=171,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJKVdwh9QlTYgLCl82UngDe09ls/tGoKAoumO3RPyeFokkc9UfX0iTNo9e+/yxLXmyBLnEjwGIuPQii5VKTmJn2JEq7Lrn8EeslRSQRtYOdFDT4FmHE55JNasW1Hhg+tIg==',key_name='tempest-TestSecurityGroupsBasicOps-1055572894',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-xpsjds7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:48:15Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "address": "fa:16:3e:2a:4a:32", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bc32725-9e", "ovs_interfaceid": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.994 2 DEBUG nova.network.os_vif_util [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "address": "fa:16:3e:2a:4a:32", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bc32725-9e", "ovs_interfaceid": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.996 2 DEBUG nova.network.os_vif_util [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:4a:32,bridge_name='br-int',has_traffic_filtering=True,id=0bc32725-9ef8-42c3-8c96-6b5c03e2ef63,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bc32725-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:48:21 np0005465987 nova_compute[230713]: 2025-10-02 12:48:21.999 2 DEBUG nova.objects.instance [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'pci_devices' on Instance uuid a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.028 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <uuid>a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e</uuid>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <name>instance-000000ab</name>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-0-1140484248</nova:name>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:48:21</nova:creationTime>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:user uuid="16730f38111542e58a05fb4deb2b3914">tempest-TestSecurityGroupsBasicOps-1031871880-project-member</nova:user>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:project uuid="5ade962c517a483dbfe4bb13386f0006">tempest-TestSecurityGroupsBasicOps-1031871880</nova:project>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:port uuid="0bc32725-9ef8-42c3-8c96-6b5c03e2ef63">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <entry name="serial">a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e</entry>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <entry name="uuid">a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e</entry>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk.config">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:2a:4a:32"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <target dev="tap0bc32725-9e"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e/console.log" append="off"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:48:22 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:48:22 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.031 2 DEBUG nova.compute.manager [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Preparing to wait for external event network-vif-plugged-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.032 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.033 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.033 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.035 2 DEBUG nova.virt.libvirt.vif [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:48:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-0-1140484248',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-0-1140484248',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ge',id=171,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJKVdwh9QlTYgLCl82UngDe09ls/tGoKAoumO3RPyeFokkc9UfX0iTNo9e+/yxLXmyBLnEjwGIuPQii5VKTmJn2JEq7Lrn8EeslRSQRtYOdFDT4FmHE55JNasW1Hhg+tIg==',key_name='tempest-TestSecurityGroupsBasicOps-1055572894',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-xpsjds7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:48:15Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "address": "fa:16:3e:2a:4a:32", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bc32725-9e", "ovs_interfaceid": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.036 2 DEBUG nova.network.os_vif_util [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "address": "fa:16:3e:2a:4a:32", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bc32725-9e", "ovs_interfaceid": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.037 2 DEBUG nova.network.os_vif_util [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:4a:32,bridge_name='br-int',has_traffic_filtering=True,id=0bc32725-9ef8-42c3-8c96-6b5c03e2ef63,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bc32725-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.038 2 DEBUG os_vif [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:4a:32,bridge_name='br-int',has_traffic_filtering=True,id=0bc32725-9ef8-42c3-8c96-6b5c03e2ef63,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bc32725-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.040 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.041 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0bc32725-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0bc32725-9e, col_values=(('external_ids', {'iface-id': '0bc32725-9ef8-42c3-8c96-6b5c03e2ef63', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:4a:32', 'vm-uuid': 'a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:22 np0005465987 NetworkManager[44910]: <info>  [1759409302.0511] manager: (tap0bc32725-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.060 2 INFO os_vif [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:4a:32,bridge_name='br-int',has_traffic_filtering=True,id=0bc32725-9ef8-42c3-8c96-6b5c03e2ef63,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bc32725-9e')#033[00m
Oct  2 08:48:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:48:22 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1462268955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.127 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.128 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.128 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No VIF found with MAC fa:16:3e:2a:4a:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.129 2 INFO nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Using config drive#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.157 2 DEBUG nova.storage.rbd_utils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.163 2 DEBUG oslo_concurrency.processutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.190 2 DEBUG nova.storage.rbd_utils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] rbd image dc0b661c-6c12-4408-9e23-dc3883bd5138_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.194 2 DEBUG oslo_concurrency.processutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:22.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:48:22 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1839463791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.613 2 DEBUG oslo_concurrency.processutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.615 2 DEBUG nova.virt.libvirt.vif [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:48:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1567525510',display_name='tempest-TestStampPattern-server-1567525510',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-teststamppattern-server-1567525510',id=172,image_ref='a62d470f-f755-4a5a-b8e5-0dc1be0600d1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCPnME7yj7AGpi8+zR0j6wiXokmK+k5Lh86YSlnXsP0prCCTi2saYZPKg3ZreiW8R+IqbWLBOHnWbtyyC7ToJeWaqTKxTG25O47OUrV5FbVX8vZbUi2AzjxwYa4KvWo3jw==',key_name='tempest-TestStampPattern-1303591770',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a837417d42da439cb794b4295bca2cee',ramdisk_id='',reservation_id='r-k5ikch2e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='bf9e8de1-5081-4daa-9041-1d329e06be86',image_min_disk='1',image_min_ram='0',image_owner_id='a837417d42da439cb794b4295bca2cee',image_owner_project_name='tempest-TestStampPattern-901207223',image_owner_user_name='tempest-TestStampPattern-901207223-project-member',image_user_id='a24a7109471f4d96ad5f11b637fdb8e7',network_allocated='True',owner_project_name='tempest-TestStampPattern-901207223',owner_user_name='tempest-TestStampPattern-901207223-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:48:16Z,user_data=None,user_id='a24a7109471f4d96ad5f11b637fdb8e7',uuid=dc0b661c-6c12-4408-9e23-dc3883bd5138,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d031f59a-c8a6-4384-a905-41cab6c3b876", "address": "fa:16:3e:10:81:59", "network": {"id": "0739dafe-4d9b-4048-ac97-c017fd298447", "bridge": "br-int", "label": "tempest-TestStampPattern-656915325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a837417d42da439cb794b4295bca2cee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd031f59a-c8", "ovs_interfaceid": "d031f59a-c8a6-4384-a905-41cab6c3b876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.615 2 DEBUG nova.network.os_vif_util [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Converting VIF {"id": "d031f59a-c8a6-4384-a905-41cab6c3b876", "address": "fa:16:3e:10:81:59", "network": {"id": "0739dafe-4d9b-4048-ac97-c017fd298447", "bridge": "br-int", "label": "tempest-TestStampPattern-656915325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a837417d42da439cb794b4295bca2cee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd031f59a-c8", "ovs_interfaceid": "d031f59a-c8a6-4384-a905-41cab6c3b876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.616 2 DEBUG nova.network.os_vif_util [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:81:59,bridge_name='br-int',has_traffic_filtering=True,id=d031f59a-c8a6-4384-a905-41cab6c3b876,network=Network(0739dafe-4d9b-4048-ac97-c017fd298447),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd031f59a-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.617 2 DEBUG nova.objects.instance [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lazy-loading 'pci_devices' on Instance uuid dc0b661c-6c12-4408-9e23-dc3883bd5138 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.642 2 DEBUG nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <uuid>dc0b661c-6c12-4408-9e23-dc3883bd5138</uuid>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <name>instance-000000ac</name>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestStampPattern-server-1567525510</nova:name>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:48:21</nova:creationTime>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:user uuid="a24a7109471f4d96ad5f11b637fdb8e7">tempest-TestStampPattern-901207223-project-member</nova:user>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:project uuid="a837417d42da439cb794b4295bca2cee">tempest-TestStampPattern-901207223</nova:project>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="a62d470f-f755-4a5a-b8e5-0dc1be0600d1"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <nova:port uuid="d031f59a-c8a6-4384-a905-41cab6c3b876">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <entry name="serial">dc0b661c-6c12-4408-9e23-dc3883bd5138</entry>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <entry name="uuid">dc0b661c-6c12-4408-9e23-dc3883bd5138</entry>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/dc0b661c-6c12-4408-9e23-dc3883bd5138_disk">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/dc0b661c-6c12-4408-9e23-dc3883bd5138_disk.config">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:10:81:59"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <target dev="tapd031f59a-c8"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/dc0b661c-6c12-4408-9e23-dc3883bd5138/console.log" append="off"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <input type="keyboard" bus="usb"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:48:22 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:48:22 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:48:22 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:48:22 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.644 2 DEBUG nova.compute.manager [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Preparing to wait for external event network-vif-plugged-d031f59a-c8a6-4384-a905-41cab6c3b876 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.645 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Acquiring lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.645 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.646 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.646 2 DEBUG nova.virt.libvirt.vif [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:48:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1567525510',display_name='tempest-TestStampPattern-server-1567525510',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-teststamppattern-server-1567525510',id=172,image_ref='a62d470f-f755-4a5a-b8e5-0dc1be0600d1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCPnME7yj7AGpi8+zR0j6wiXokmK+k5Lh86YSlnXsP0prCCTi2saYZPKg3ZreiW8R+IqbWLBOHnWbtyyC7ToJeWaqTKxTG25O47OUrV5FbVX8vZbUi2AzjxwYa4KvWo3jw==',key_name='tempest-TestStampPattern-1303591770',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a837417d42da439cb794b4295bca2cee',ramdisk_id='',reservation_id='r-k5ikch2e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='bf9e8de1-5081-4daa-9041-1d329e06be86',image_min_disk='1',image_min_ram='0',image_owner_id='a837417d42da439cb794b4295bca2cee',image_owner_project_name='tempest-TestStampPattern-901207223',image_owner_user_name='tempest-TestStampPattern-901207223-project-member',image_user_id='a24a7109471f4d96ad5f11b637fdb8e7',network_allocated='True',owner_project_name='tempest-TestStampPattern-901207223',owner_user_name='tempest-TestStampPattern-901207223-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:48:16Z,user_data=None,user_id='a24a7109471f4d96ad5f11b637fdb8e7',uuid=dc0b661c-6c12-4408-9e23-dc3883bd5138
,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d031f59a-c8a6-4384-a905-41cab6c3b876", "address": "fa:16:3e:10:81:59", "network": {"id": "0739dafe-4d9b-4048-ac97-c017fd298447", "bridge": "br-int", "label": "tempest-TestStampPattern-656915325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a837417d42da439cb794b4295bca2cee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd031f59a-c8", "ovs_interfaceid": "d031f59a-c8a6-4384-a905-41cab6c3b876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.647 2 DEBUG nova.network.os_vif_util [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Converting VIF {"id": "d031f59a-c8a6-4384-a905-41cab6c3b876", "address": "fa:16:3e:10:81:59", "network": {"id": "0739dafe-4d9b-4048-ac97-c017fd298447", "bridge": "br-int", "label": "tempest-TestStampPattern-656915325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a837417d42da439cb794b4295bca2cee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd031f59a-c8", "ovs_interfaceid": "d031f59a-c8a6-4384-a905-41cab6c3b876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.648 2 DEBUG nova.network.os_vif_util [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:81:59,bridge_name='br-int',has_traffic_filtering=True,id=d031f59a-c8a6-4384-a905-41cab6c3b876,network=Network(0739dafe-4d9b-4048-ac97-c017fd298447),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd031f59a-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.648 2 DEBUG os_vif [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:81:59,bridge_name='br-int',has_traffic_filtering=True,id=d031f59a-c8a6-4384-a905-41cab6c3b876,network=Network(0739dafe-4d9b-4048-ac97-c017fd298447),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd031f59a-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.649 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.654 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd031f59a-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.655 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd031f59a-c8, col_values=(('external_ids', {'iface-id': 'd031f59a-c8a6-4384-a905-41cab6c3b876', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:81:59', 'vm-uuid': 'dc0b661c-6c12-4408-9e23-dc3883bd5138'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:22 np0005465987 NetworkManager[44910]: <info>  [1759409302.6575] manager: (tapd031f59a-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.664 2 INFO os_vif [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:81:59,bridge_name='br-int',has_traffic_filtering=True,id=d031f59a-c8a6-4384-a905-41cab6c3b876,network=Network(0739dafe-4d9b-4048-ac97-c017fd298447),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd031f59a-c8')#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.720 2 DEBUG nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.720 2 DEBUG nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.720 2 DEBUG nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] No VIF found with MAC fa:16:3e:10:81:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.721 2 INFO nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Using config drive#033[00m
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.750 2 DEBUG nova.storage.rbd_utils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] rbd image dc0b661c-6c12-4408-9e23-dc3883bd5138_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:22.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:22 np0005465987 nova_compute[230713]: 2025-10-02 12:48:22.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.171 2 INFO nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Creating config drive at /var/lib/nova/instances/a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e/disk.config#033[00m
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.178 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6uitv4rk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.315 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6uitv4rk" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.350 2 DEBUG nova.storage.rbd_utils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.354 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e/disk.config a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.435 2 INFO nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Creating config drive at /var/lib/nova/instances/dc0b661c-6c12-4408-9e23-dc3883bd5138/disk.config
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.441 2 DEBUG oslo_concurrency.processutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc0b661c-6c12-4408-9e23-dc3883bd5138/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppjy56mg_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.547 2 DEBUG oslo_concurrency.processutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e/disk.config a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.549 2 INFO nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Deleting local config drive /var/lib/nova/instances/a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e/disk.config because it was imported into RBD.
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.594 2 DEBUG oslo_concurrency.processutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc0b661c-6c12-4408-9e23-dc3883bd5138/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppjy56mg_" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:48:23 np0005465987 NetworkManager[44910]: <info>  [1759409303.5953] manager: (tap0bc32725-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/301)
Oct  2 08:48:23 np0005465987 kernel: tap0bc32725-9e: entered promiscuous mode
Oct  2 08:48:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:23Z|00663|binding|INFO|Claiming lport 0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 for this chassis.
Oct  2 08:48:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:23Z|00664|binding|INFO|0bc32725-9ef8-42c3-8c96-6b5c03e2ef63: Claiming fa:16:3e:2a:4a:32 10.100.0.10
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.622 2 DEBUG nova.storage.rbd_utils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] rbd image dc0b661c-6c12-4408-9e23-dc3883bd5138_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:48:23 np0005465987 NetworkManager[44910]: <info>  [1759409303.6275] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Oct  2 08:48:23 np0005465987 NetworkManager[44910]: <info>  [1759409303.6286] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.629 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:4a:32 10.100.0.10'], port_security=['fa:16:3e:2a:4a:32 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ade962c517a483dbfe4bb13386f0006', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccdcfae6-20a7-43b2-a33d-9f1e1bd6bb8c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81b23ccf-1bf4-44ff-8965-dd1a37c9d290, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0bc32725-9ef8-42c3-8c96-6b5c03e2ef63) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.630 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 in datapath a0b40647-5bdc-42ab-8337-b3fcdc66ecfc bound to our chassis
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.632 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0b40647-5bdc-42ab-8337-b3fcdc66ecfc
Oct  2 08:48:23 np0005465987 systemd-udevd[295306]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:48:23 np0005465987 NetworkManager[44910]: <info>  [1759409303.6484] device (tap0bc32725-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.647 2 DEBUG oslo_concurrency.processutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc0b661c-6c12-4408-9e23-dc3883bd5138/disk.config dc0b661c-6c12-4408-9e23-dc3883bd5138_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.644 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dca130f4-2336-469e-a9e5-2c5a254d7521]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.646 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0b40647-51 in ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  2 08:48:23 np0005465987 NetworkManager[44910]: <info>  [1759409303.6508] device (tap0bc32725-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:48:23 np0005465987 systemd-machined[188335]: New machine qemu-78-instance-000000ab.
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.647 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0b40647-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.647 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[179f680b-8439-4871-a8a2-b6b2531c6c34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.652 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e1f569-b3dc-4772-a38d-200fb302deba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.663 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[a17c168e-f083-46e6-8c93-14c323cfaceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 systemd[1]: Started Virtual Machine qemu-78-instance-000000ab.
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.690 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6db704ef-27b9-4401-b94a-8d2d266599fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.719 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5caa7e99-0d76-437a-a32c-71e22e431482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 NetworkManager[44910]: <info>  [1759409303.7293] manager: (tapa0b40647-50): new Veth device (/org/freedesktop/NetworkManager/Devices/304)
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.728 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb77049-37a7-48fa-a719-eec7156f685b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.763 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[087b828a-8a3a-41f4-88a1-0440360ff2ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.766 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[3771192d-bce6-4268-94ce-e6d6a1d2e579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 NetworkManager[44910]: <info>  [1759409303.7869] device (tapa0b40647-50): carrier: link connected
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.793 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf8943e-40bd-46b3-b4fd-927b460bbfc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.803 2 DEBUG nova.network.neutron [req-bf263ccb-a315-484f-a54a-c432e7ef6d06 req-15ab5712-2d36-4c6a-ba1d-0319f0fe2755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Updated VIF entry in instance network info cache for port 0bc32725-9ef8-42c3-8c96-6b5c03e2ef63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.803 2 DEBUG nova.network.neutron [req-bf263ccb-a315-484f-a54a-c432e7ef6d06 req-15ab5712-2d36-4c6a-ba1d-0319f0fe2755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Updating instance_info_cache with network_info: [{"id": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "address": "fa:16:3e:2a:4a:32", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bc32725-9e", "ovs_interfaceid": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.810 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5b72cf-a69e-4df7-963c-2309f31af624]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0b40647-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:6a:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737236, 'reachable_time': 44133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295355, 'error': None, 'target': 'ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.824 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e5377788-dbb4-43d1-8403-e6d20144efa2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:6a85'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737236, 'tstamp': 737236}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295356, 'error': None, 'target': 'ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.832 2 DEBUG oslo_concurrency.lockutils [req-bf263ccb-a315-484f-a54a-c432e7ef6d06 req-15ab5712-2d36-4c6a-ba1d-0319f0fe2755 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.842 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2177b3c5-8a55-4028-8c20-9cf162e3fa13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0b40647-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:6a:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737236, 'reachable_time': 44133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295357, 'error': None, 'target': 'ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.871 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d118746e-2410-4f51-827f-fcd3be85d2fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.934 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ff735105-ef16-43fe-a96d-3ff1ab2d8442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.936 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0b40647-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.936 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.936 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0b40647-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:48:23 np0005465987 NetworkManager[44910]: <info>  [1759409303.9384] manager: (tapa0b40647-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:23Z|00665|binding|INFO|Setting lport 0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 ovn-installed in OVS
Oct  2 08:48:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:23Z|00666|binding|INFO|Setting lport 0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 up in Southbound
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:23 np0005465987 kernel: tapa0b40647-50: entered promiscuous mode
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.953 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0b40647-50, col_values=(('external_ids', {'iface-id': 'd1c4722b-043f-4662-89a4-55feb7508119'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:23 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:23Z|00667|binding|INFO|Releasing lport d1c4722b-043f-4662-89a4-55feb7508119 from this chassis (sb_readonly=0)
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.958 2 DEBUG oslo_concurrency.processutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc0b661c-6c12-4408-9e23-dc3883bd5138/disk.config dc0b661c-6c12-4408-9e23-dc3883bd5138_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.958 2 INFO nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Deleting local config drive /var/lib/nova/instances/dc0b661c-6c12-4408-9e23-dc3883bd5138/disk.config because it was imported into RBD.
Oct  2 08:48:23 np0005465987 nova_compute[230713]: 2025-10-02 12:48:23.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.973 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0b40647-5bdc-42ab-8337-b3fcdc66ecfc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0b40647-5bdc-42ab-8337-b3fcdc66ecfc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.974 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2696fd0b-7b80-4e4e-b04b-9290bb24f3c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.975 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/a0b40647-5bdc-42ab-8337-b3fcdc66ecfc.pid.haproxy
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID a0b40647-5bdc-42ab-8337-b3fcdc66ecfc
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:48:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:23.975 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'env', 'PROCESS_TAG=haproxy-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0b40647-5bdc-42ab-8337-b3fcdc66ecfc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:48:23 np0005465987 kernel: tapd031f59a-c8: entered promiscuous mode
Oct  2 08:48:23 np0005465987 NetworkManager[44910]: <info>  [1759409303.9990] manager: (tapd031f59a-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/306)
Oct  2 08:48:24 np0005465987 systemd-udevd[295343]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:23Z|00668|binding|INFO|Claiming lport d031f59a-c8a6-4384-a905-41cab6c3b876 for this chassis.
Oct  2 08:48:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:23Z|00669|binding|INFO|d031f59a-c8a6-4384-a905-41cab6c3b876: Claiming fa:16:3e:10:81:59 10.100.0.5
Oct  2 08:48:24 np0005465987 NetworkManager[44910]: <info>  [1759409304.0088] device (tapd031f59a-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:48:24 np0005465987 NetworkManager[44910]: <info>  [1759409304.0100] device (tapd031f59a-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:48:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:24Z|00670|binding|INFO|Setting lport d031f59a-c8a6-4384-a905-41cab6c3b876 ovn-installed in OVS
Oct  2 08:48:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:24Z|00671|binding|INFO|Setting lport d031f59a-c8a6-4384-a905-41cab6c3b876 up in Southbound
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.018 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:81:59 10.100.0.5'], port_security=['fa:16:3e:10:81:59 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'dc0b661c-6c12-4408-9e23-dc3883bd5138', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0739dafe-4d9b-4048-ac97-c017fd298447', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a837417d42da439cb794b4295bca2cee', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2135b395-ac43-463e-b267-fe36f0a53800', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57c22c18-07d9-4913-840f-1bcc05bb2313, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=d031f59a-c8a6-4384-a905-41cab6c3b876) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:48:24 np0005465987 systemd-machined[188335]: New machine qemu-79-instance-000000ac.
Oct  2 08:48:24 np0005465987 systemd[1]: Started Virtual Machine qemu-79-instance-000000ac.
Oct  2 08:48:24 np0005465987 podman[295488]: 2025-10-02 12:48:24.346965918 +0000 UTC m=+0.057030854 container create 00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:48:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:24.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:24 np0005465987 systemd[1]: Started libpod-conmon-00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf.scope.
Oct  2 08:48:24 np0005465987 podman[295488]: 2025-10-02 12:48:24.315812171 +0000 UTC m=+0.025877137 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:48:24 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:48:24 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ebd9edf4560ad449e5ba6438225b60e219117e73fffc2fa9e12271432c0d19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:48:24 np0005465987 podman[295488]: 2025-10-02 12:48:24.434201824 +0000 UTC m=+0.144266790 container init 00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:48:24 np0005465987 podman[295488]: 2025-10-02 12:48:24.439448385 +0000 UTC m=+0.149513311 container start 00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:48:24 np0005465987 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[295515]: [NOTICE]   (295519) : New worker (295521) forked
Oct  2 08:48:24 np0005465987 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[295515]: [NOTICE]   (295519) : Loading success.
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.472 2 DEBUG nova.compute.manager [req-4985a3c1-5588-40e9-a1b8-8e2a77150b1a req-b8b49f46-fbf7-4d9e-a409-957082b293c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Received event network-vif-plugged-d031f59a-c8a6-4384-a905-41cab6c3b876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.472 2 DEBUG oslo_concurrency.lockutils [req-4985a3c1-5588-40e9-a1b8-8e2a77150b1a req-b8b49f46-fbf7-4d9e-a409-957082b293c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.472 2 DEBUG oslo_concurrency.lockutils [req-4985a3c1-5588-40e9-a1b8-8e2a77150b1a req-b8b49f46-fbf7-4d9e-a409-957082b293c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.472 2 DEBUG oslo_concurrency.lockutils [req-4985a3c1-5588-40e9-a1b8-8e2a77150b1a req-b8b49f46-fbf7-4d9e-a409-957082b293c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.473 2 DEBUG nova.compute.manager [req-4985a3c1-5588-40e9-a1b8-8e2a77150b1a req-b8b49f46-fbf7-4d9e-a409-957082b293c8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Processing event network-vif-plugged-d031f59a-c8a6-4384-a905-41cab6c3b876 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.509 138325 INFO neutron.agent.ovn.metadata.agent [-] Port d031f59a-c8a6-4384-a905-41cab6c3b876 in datapath 0739dafe-4d9b-4048-ac97-c017fd298447 unbound from our chassis#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.512 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0739dafe-4d9b-4048-ac97-c017fd298447#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.525 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7b21fc97-a06f-46ea-8eba-8c2d140b5389]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.526 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0739dafe-41 in ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.527 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0739dafe-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.527 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3b87b42a-af2c-4f7d-ac79-f61d525b8f7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.528 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3b59a462-435c-491e-b18e-89bb8a5ffe94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.537 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[0eea17ea-0e48-4edf-93c3-a83a36727ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.549 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d7daaa0a-5f8d-4329-bc08-88bf40a74e65]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.578 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3a17c1-a16f-49de-b1a0-239a6bcbe5a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.583 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[14d5b242-9aac-4a30-ab5f-f06d4ad3813c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 NetworkManager[44910]: <info>  [1759409304.5851] manager: (tap0739dafe-40): new Veth device (/org/freedesktop/NetworkManager/Devices/307)
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.620 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6caeb966-c6be-4b38-885e-ffb1c23bcced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.626 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[d2439b76-b204-424b-b5af-55c6e243dc1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 NetworkManager[44910]: <info>  [1759409304.6474] device (tap0739dafe-40): carrier: link connected
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.655 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6f5027-1ba5-4831-ab4d-70897616ed76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.671 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9f4a26-f787-4d1a-8535-44246def04ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0739dafe-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:9f:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737322, 'reachable_time': 23162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295540, 'error': None, 'target': 'ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.687 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[21fec525-4686-4d38-9273-6532858489dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:9fc0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737322, 'tstamp': 737322}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295541, 'error': None, 'target': 'ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.705 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6e283082-32b9-469f-a814-b95587682e58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0739dafe-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:9f:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737322, 'reachable_time': 23162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295542, 'error': None, 'target': 'ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.746 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c4162fb2-23fd-4898-9563-b22464fcdadb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.749 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409304.7494667, a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.750 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] VM Started (Lifecycle Event)#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.787 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.790 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409304.750455, a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.791 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.803 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[37adec6e-d98e-4ade-b7ef-8d450b8d4bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.804 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0739dafe-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.805 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.805 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0739dafe-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:24 np0005465987 NetworkManager[44910]: <info>  [1759409304.8080] manager: (tap0739dafe-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Oct  2 08:48:24 np0005465987 kernel: tap0739dafe-40: entered promiscuous mode
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.810 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0739dafe-40, col_values=(('external_ids', {'iface-id': '28a672bd-7c4d-49bd-8937-0e065b62aa5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:24 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:24Z|00672|binding|INFO|Releasing lport 28a672bd-7c4d-49bd-8937-0e065b62aa5f from this chassis (sb_readonly=0)
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.819 2 DEBUG nova.compute.manager [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.824 2 DEBUG nova.virt.libvirt.driver [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.828 2 INFO nova.virt.libvirt.driver [-] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Instance spawned successfully.#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.828 2 INFO nova.compute.manager [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Took 7.91 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.829 2 DEBUG nova.compute.manager [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.828 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0739dafe-4d9b-4048-ac97-c017fd298447.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0739dafe-4d9b-4048-ac97-c017fd298447.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.830 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0d070b-01f5-4226-9867-4969f32b48ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.831 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-0739dafe-4d9b-4048-ac97-c017fd298447
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/0739dafe-4d9b-4048-ac97-c017fd298447.pid.haproxy
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:48:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:48:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:24.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 0739dafe-4d9b-4048-ac97-c017fd298447
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:48:24 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:24.832 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447', 'env', 'PROCESS_TAG=haproxy-0739dafe-4d9b-4048-ac97-c017fd298447', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0739dafe-4d9b-4048-ac97-c017fd298447.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.922 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:24 np0005465987 nova_compute[230713]: 2025-10-02 12:48:24.930 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.018 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.018 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409304.8188882, dc0b661c-6c12-4408-9e23-dc3883bd5138 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.019 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] VM Started (Lifecycle Event)#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.035 2 INFO nova.compute.manager [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Took 9.77 seconds to build instance.#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.128 2 DEBUG nova.compute.manager [req-fd915af4-a234-4b5a-9283-bd76d9c63c6d req-0b1ab5f2-03de-473c-8d37-5bc08295c8cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Received event network-vif-plugged-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.128 2 DEBUG oslo_concurrency.lockutils [req-fd915af4-a234-4b5a-9283-bd76d9c63c6d req-0b1ab5f2-03de-473c-8d37-5bc08295c8cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.129 2 DEBUG oslo_concurrency.lockutils [req-fd915af4-a234-4b5a-9283-bd76d9c63c6d req-0b1ab5f2-03de-473c-8d37-5bc08295c8cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.129 2 DEBUG oslo_concurrency.lockutils [req-fd915af4-a234-4b5a-9283-bd76d9c63c6d req-0b1ab5f2-03de-473c-8d37-5bc08295c8cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.129 2 DEBUG nova.compute.manager [req-fd915af4-a234-4b5a-9283-bd76d9c63c6d req-0b1ab5f2-03de-473c-8d37-5bc08295c8cb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Processing event network-vif-plugged-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.130 2 DEBUG nova.compute.manager [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.134 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.141 2 INFO nova.virt.libvirt.driver [-] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Instance spawned successfully.#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.141 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.156 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.163 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.170 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.170 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.170 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.171 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.171 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.172 2 DEBUG nova.virt.libvirt.driver [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:25 np0005465987 podman[295574]: 2025-10-02 12:48:25.201425271 +0000 UTC m=+0.054181678 container create df675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.227 2 DEBUG oslo_concurrency.lockutils [None req-1798cfa0-d33d-40e6-986e-9e09700d8c56 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:25 np0005465987 systemd[1]: Started libpod-conmon-df675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e.scope.
Oct  2 08:48:25 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:48:25 np0005465987 podman[295574]: 2025-10-02 12:48:25.176438879 +0000 UTC m=+0.029195306 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:48:25 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c5b7feb8ae0c3598ab8fe4838e02f81d8cc94d61d8dacd0ae91760d63540c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:48:25 np0005465987 podman[295574]: 2025-10-02 12:48:25.287618868 +0000 UTC m=+0.140375295 container init df675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:48:25 np0005465987 podman[295574]: 2025-10-02 12:48:25.294816801 +0000 UTC m=+0.147573208 container start df675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:48:25 np0005465987 neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447[295587]: [NOTICE]   (295591) : New worker (295593) forked
Oct  2 08:48:25 np0005465987 neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447[295587]: [NOTICE]   (295591) : Loading success.
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.322 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409304.8190312, dc0b661c-6c12-4408-9e23-dc3883bd5138 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.322 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:48:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.449 2 INFO nova.compute.manager [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Took 9.37 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.450 2 DEBUG nova.compute.manager [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.519 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.524 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409304.821208, dc0b661c-6c12-4408-9e23-dc3883bd5138 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.525 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.566 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.569 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.644 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409305.1338792, a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.644 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.647 2 INFO nova.compute.manager [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Took 10.62 seconds to build instance.#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.772 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.775 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:48:25 np0005465987 nova_compute[230713]: 2025-10-02 12:48:25.812 2 DEBUG oslo_concurrency.lockutils [None req-1a55acb1-bfc2-4c84-8a91-52a9dac7172a 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:26 np0005465987 nova_compute[230713]: 2025-10-02 12:48:26.228 2 DEBUG nova.network.neutron [req-124213dc-2fde-432a-bb25-effc028d4c7a req-9b4ecf8e-562c-4112-b67b-f7fa3792c04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Updated VIF entry in instance network info cache for port d031f59a-c8a6-4384-a905-41cab6c3b876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:48:26 np0005465987 nova_compute[230713]: 2025-10-02 12:48:26.229 2 DEBUG nova.network.neutron [req-124213dc-2fde-432a-bb25-effc028d4c7a req-9b4ecf8e-562c-4112-b67b-f7fa3792c04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Updating instance_info_cache with network_info: [{"id": "d031f59a-c8a6-4384-a905-41cab6c3b876", "address": "fa:16:3e:10:81:59", "network": {"id": "0739dafe-4d9b-4048-ac97-c017fd298447", "bridge": "br-int", "label": "tempest-TestStampPattern-656915325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a837417d42da439cb794b4295bca2cee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd031f59a-c8", "ovs_interfaceid": "d031f59a-c8a6-4384-a905-41cab6c3b876", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:26 np0005465987 nova_compute[230713]: 2025-10-02 12:48:26.317 2 DEBUG oslo_concurrency.lockutils [req-124213dc-2fde-432a-bb25-effc028d4c7a req-9b4ecf8e-562c-4112-b67b-f7fa3792c04d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:26 np0005465987 nova_compute[230713]: 2025-10-02 12:48:26.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:26.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:26 np0005465987 nova_compute[230713]: 2025-10-02 12:48:26.654 2 DEBUG nova.compute.manager [req-fc7ca4dc-8632-4968-a873-38ca436ee2fb req-63a8e219-3320-49a4-97fb-01108bab3051 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Received event network-vif-plugged-d031f59a-c8a6-4384-a905-41cab6c3b876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:26 np0005465987 nova_compute[230713]: 2025-10-02 12:48:26.655 2 DEBUG oslo_concurrency.lockutils [req-fc7ca4dc-8632-4968-a873-38ca436ee2fb req-63a8e219-3320-49a4-97fb-01108bab3051 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:26 np0005465987 nova_compute[230713]: 2025-10-02 12:48:26.655 2 DEBUG oslo_concurrency.lockutils [req-fc7ca4dc-8632-4968-a873-38ca436ee2fb req-63a8e219-3320-49a4-97fb-01108bab3051 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:26 np0005465987 nova_compute[230713]: 2025-10-02 12:48:26.655 2 DEBUG oslo_concurrency.lockutils [req-fc7ca4dc-8632-4968-a873-38ca436ee2fb req-63a8e219-3320-49a4-97fb-01108bab3051 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:26 np0005465987 nova_compute[230713]: 2025-10-02 12:48:26.655 2 DEBUG nova.compute.manager [req-fc7ca4dc-8632-4968-a873-38ca436ee2fb req-63a8e219-3320-49a4-97fb-01108bab3051 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] No waiting events found dispatching network-vif-plugged-d031f59a-c8a6-4384-a905-41cab6c3b876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:48:26 np0005465987 nova_compute[230713]: 2025-10-02 12:48:26.655 2 WARNING nova.compute.manager [req-fc7ca4dc-8632-4968-a873-38ca436ee2fb req-63a8e219-3320-49a4-97fb-01108bab3051 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Received unexpected event network-vif-plugged-d031f59a-c8a6-4384-a905-41cab6c3b876 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:48:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:26.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:27 np0005465987 nova_compute[230713]: 2025-10-02 12:48:27.292 2 DEBUG nova.compute.manager [req-3d141b38-ed01-4e5f-8d48-a39926ccadf5 req-9e15f607-f3a5-47ce-960c-34d9481d6fcb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Received event network-vif-plugged-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:27 np0005465987 nova_compute[230713]: 2025-10-02 12:48:27.292 2 DEBUG oslo_concurrency.lockutils [req-3d141b38-ed01-4e5f-8d48-a39926ccadf5 req-9e15f607-f3a5-47ce-960c-34d9481d6fcb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:27 np0005465987 nova_compute[230713]: 2025-10-02 12:48:27.292 2 DEBUG oslo_concurrency.lockutils [req-3d141b38-ed01-4e5f-8d48-a39926ccadf5 req-9e15f607-f3a5-47ce-960c-34d9481d6fcb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:27 np0005465987 nova_compute[230713]: 2025-10-02 12:48:27.293 2 DEBUG oslo_concurrency.lockutils [req-3d141b38-ed01-4e5f-8d48-a39926ccadf5 req-9e15f607-f3a5-47ce-960c-34d9481d6fcb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:27 np0005465987 nova_compute[230713]: 2025-10-02 12:48:27.293 2 DEBUG nova.compute.manager [req-3d141b38-ed01-4e5f-8d48-a39926ccadf5 req-9e15f607-f3a5-47ce-960c-34d9481d6fcb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] No waiting events found dispatching network-vif-plugged-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:48:27 np0005465987 nova_compute[230713]: 2025-10-02 12:48:27.293 2 WARNING nova.compute.manager [req-3d141b38-ed01-4e5f-8d48-a39926ccadf5 req-9e15f607-f3a5-47ce-960c-34d9481d6fcb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Received unexpected event network-vif-plugged-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:48:27 np0005465987 nova_compute[230713]: 2025-10-02 12:48:27.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:27.974 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:27.975 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:27.977 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:28.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:48:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:28.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:48:28 np0005465987 nova_compute[230713]: 2025-10-02 12:48:28.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:28.890 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:48:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:28.892 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:48:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:29.893 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:30.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:48:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:30.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:48:30 np0005465987 nova_compute[230713]: 2025-10-02 12:48:30.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:30 np0005465987 nova_compute[230713]: 2025-10-02 12:48:30.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:48:31 np0005465987 nova_compute[230713]: 2025-10-02 12:48:31.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:31 np0005465987 nova_compute[230713]: 2025-10-02 12:48:31.513 2 DEBUG nova.compute.manager [req-e475408a-9380-42b7-89fa-8401f3e98993 req-be7dacfa-cab6-432c-b222-f2b3abdb3f28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Received event network-changed-d031f59a-c8a6-4384-a905-41cab6c3b876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:31 np0005465987 nova_compute[230713]: 2025-10-02 12:48:31.513 2 DEBUG nova.compute.manager [req-e475408a-9380-42b7-89fa-8401f3e98993 req-be7dacfa-cab6-432c-b222-f2b3abdb3f28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Refreshing instance network info cache due to event network-changed-d031f59a-c8a6-4384-a905-41cab6c3b876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:48:31 np0005465987 nova_compute[230713]: 2025-10-02 12:48:31.513 2 DEBUG oslo_concurrency.lockutils [req-e475408a-9380-42b7-89fa-8401f3e98993 req-be7dacfa-cab6-432c-b222-f2b3abdb3f28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:31 np0005465987 nova_compute[230713]: 2025-10-02 12:48:31.514 2 DEBUG oslo_concurrency.lockutils [req-e475408a-9380-42b7-89fa-8401f3e98993 req-be7dacfa-cab6-432c-b222-f2b3abdb3f28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:31 np0005465987 nova_compute[230713]: 2025-10-02 12:48:31.514 2 DEBUG nova.network.neutron [req-e475408a-9380-42b7-89fa-8401f3e98993 req-be7dacfa-cab6-432c-b222-f2b3abdb3f28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Refreshing network info cache for port d031f59a-c8a6-4384-a905-41cab6c3b876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:48:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:48:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:32.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:48:32 np0005465987 nova_compute[230713]: 2025-10-02 12:48:32.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:32.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e364 e364: 3 total, 3 up, 3 in
Oct  2 08:48:34 np0005465987 nova_compute[230713]: 2025-10-02 12:48:34.124 2 DEBUG nova.network.neutron [req-e475408a-9380-42b7-89fa-8401f3e98993 req-be7dacfa-cab6-432c-b222-f2b3abdb3f28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Updated VIF entry in instance network info cache for port d031f59a-c8a6-4384-a905-41cab6c3b876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:48:34 np0005465987 nova_compute[230713]: 2025-10-02 12:48:34.124 2 DEBUG nova.network.neutron [req-e475408a-9380-42b7-89fa-8401f3e98993 req-be7dacfa-cab6-432c-b222-f2b3abdb3f28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Updating instance_info_cache with network_info: [{"id": "d031f59a-c8a6-4384-a905-41cab6c3b876", "address": "fa:16:3e:10:81:59", "network": {"id": "0739dafe-4d9b-4048-ac97-c017fd298447", "bridge": "br-int", "label": "tempest-TestStampPattern-656915325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a837417d42da439cb794b4295bca2cee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd031f59a-c8", "ovs_interfaceid": "d031f59a-c8a6-4384-a905-41cab6c3b876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:34 np0005465987 nova_compute[230713]: 2025-10-02 12:48:34.160 2 DEBUG oslo_concurrency.lockutils [req-e475408a-9380-42b7-89fa-8401f3e98993 req-be7dacfa-cab6-432c-b222-f2b3abdb3f28 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:34.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:34.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:35 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct  2 08:48:35 np0005465987 nova_compute[230713]: 2025-10-02 12:48:35.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e365 e365: 3 total, 3 up, 3 in
Oct  2 08:48:36 np0005465987 nova_compute[230713]: 2025-10-02 12:48:36.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:36.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:36.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e366 e366: 3 total, 3 up, 3 in
Oct  2 08:48:37 np0005465987 nova_compute[230713]: 2025-10-02 12:48:37.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:38 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:38Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2a:4a:32 10.100.0.10
Oct  2 08:48:38 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:38Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:4a:32 10.100.0.10
Oct  2 08:48:38 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:38Z|00071|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.5
Oct  2 08:48:38 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:38Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:10:81:59 10.100.0.5
Oct  2 08:48:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:38.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:38.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:40.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:40.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:41 np0005465987 nova_compute[230713]: 2025-10-02 12:48:41.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:41 np0005465987 podman[295603]: 2025-10-02 12:48:41.884821746 +0000 UTC m=+0.091189223 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:48:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:42Z|00073|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.5
Oct  2 08:48:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:42Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:10:81:59 10.100.0.5
Oct  2 08:48:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:48:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:42.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:48:42 np0005465987 nova_compute[230713]: 2025-10-02 12:48:42.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:42.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:43Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:81:59 10.100.0.5
Oct  2 08:48:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:43Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:81:59 10.100.0.5
Oct  2 08:48:43 np0005465987 nova_compute[230713]: 2025-10-02 12:48:43.754 2 DEBUG oslo_concurrency.lockutils [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:43 np0005465987 nova_compute[230713]: 2025-10-02 12:48:43.755 2 DEBUG oslo_concurrency.lockutils [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:43 np0005465987 nova_compute[230713]: 2025-10-02 12:48:43.755 2 DEBUG oslo_concurrency.lockutils [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:43 np0005465987 nova_compute[230713]: 2025-10-02 12:48:43.755 2 DEBUG oslo_concurrency.lockutils [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:43 np0005465987 nova_compute[230713]: 2025-10-02 12:48:43.755 2 DEBUG oslo_concurrency.lockutils [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:43 np0005465987 nova_compute[230713]: 2025-10-02 12:48:43.756 2 INFO nova.compute.manager [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Terminating instance#033[00m
Oct  2 08:48:43 np0005465987 nova_compute[230713]: 2025-10-02 12:48:43.757 2 DEBUG nova.compute.manager [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e367 e367: 3 total, 3 up, 3 in
Oct  2 08:48:44 np0005465987 kernel: tap0bc32725-9e (unregistering): left promiscuous mode
Oct  2 08:48:44 np0005465987 NetworkManager[44910]: <info>  [1759409324.2984] device (tap0bc32725-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:48:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:44Z|00673|binding|INFO|Releasing lport 0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 from this chassis (sb_readonly=0)
Oct  2 08:48:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:44Z|00674|binding|INFO|Setting lport 0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 down in Southbound
Oct  2 08:48:44 np0005465987 ovn_controller[129172]: 2025-10-02T12:48:44Z|00675|binding|INFO|Removing iface tap0bc32725-9e ovn-installed in OVS
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.319 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:4a:32 10.100.0.10'], port_security=['fa:16:3e:2a:4a:32 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ade962c517a483dbfe4bb13386f0006', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccdcfae6-20a7-43b2-a33d-9f1e1bd6bb8c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81b23ccf-1bf4-44ff-8965-dd1a37c9d290, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0bc32725-9ef8-42c3-8c96-6b5c03e2ef63) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.321 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 in datapath a0b40647-5bdc-42ab-8337-b3fcdc66ecfc unbound from our chassis#033[00m
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.322 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.326 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea0338d-7fff-44d7-aa75-d74ca2547f71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.326 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc namespace which is not needed anymore#033[00m
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:44 np0005465987 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000ab.scope: Deactivated successfully.
Oct  2 08:48:44 np0005465987 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000ab.scope: Consumed 13.023s CPU time.
Oct  2 08:48:44 np0005465987 systemd-machined[188335]: Machine qemu-78-instance-000000ab terminated.
Oct  2 08:48:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:44.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:44 np0005465987 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[295515]: [NOTICE]   (295519) : haproxy version is 2.8.14-c23fe91
Oct  2 08:48:44 np0005465987 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[295515]: [NOTICE]   (295519) : path to executable is /usr/sbin/haproxy
Oct  2 08:48:44 np0005465987 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[295515]: [WARNING]  (295519) : Exiting Master process...
Oct  2 08:48:44 np0005465987 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[295515]: [ALERT]    (295519) : Current worker (295521) exited with code 143 (Terminated)
Oct  2 08:48:44 np0005465987 neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc[295515]: [WARNING]  (295519) : All workers exited. Exiting... (0)
Oct  2 08:48:44 np0005465987 systemd[1]: libpod-00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf.scope: Deactivated successfully.
Oct  2 08:48:44 np0005465987 conmon[295515]: conmon 00ac296374b676686638 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf.scope/container/memory.events
Oct  2 08:48:44 np0005465987 podman[295647]: 2025-10-02 12:48:44.468019496 +0000 UTC m=+0.051259050 container died 00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:48:44 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf-userdata-shm.mount: Deactivated successfully.
Oct  2 08:48:44 np0005465987 systemd[1]: var-lib-containers-storage-overlay-e0ebd9edf4560ad449e5ba6438225b60e219117e73fffc2fa9e12271432c0d19-merged.mount: Deactivated successfully.
Oct  2 08:48:44 np0005465987 podman[295647]: 2025-10-02 12:48:44.511794933 +0000 UTC m=+0.095034467 container cleanup 00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.519069) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409324519112, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 869, "num_deletes": 253, "total_data_size": 1478644, "memory_usage": 1509256, "flush_reason": "Manual Compaction"}
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Oct  2 08:48:44 np0005465987 systemd[1]: libpod-conmon-00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf.scope: Deactivated successfully.
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409324531021, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 728548, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63179, "largest_seqno": 64042, "table_properties": {"data_size": 724805, "index_size": 1459, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9940, "raw_average_key_size": 21, "raw_value_size": 716877, "raw_average_value_size": 1538, "num_data_blocks": 62, "num_entries": 466, "num_filter_entries": 466, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409279, "oldest_key_time": 1759409279, "file_creation_time": 1759409324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 11993 microseconds, and 2757 cpu microseconds.
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.531061) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 728548 bytes OK
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.531078) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.532891) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.532912) EVENT_LOG_v1 {"time_micros": 1759409324532906, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.532932) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 1474156, prev total WAL file size 1474156, number of live WAL files 2.
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.533677) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303039' seq:72057594037927935, type:22 .. '6D6772737461740032323631' seq:0, type:0; will stop at (end)
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(711KB)], [123(13MB)]
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409324533832, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 14540151, "oldest_snapshot_seqno": -1}
Oct  2 08:48:44 np0005465987 podman[295676]: 2025-10-02 12:48:44.571064116 +0000 UTC m=+0.038097015 container remove 00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.577 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b72a9672-6100-4509-9c88-ef1f6a579cd0]: (4, ('Thu Oct  2 12:48:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc (00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf)\n00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf\nThu Oct  2 12:48:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc (00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf)\n00ac296374b6766866384fafd7b295791475857fad55a0302efd1002c5f915cf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.580 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[12a65182-4e66-47dd-a286-b817ea4f5ea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.581 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0b40647-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.592 2 INFO nova.virt.libvirt.driver [-] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Instance destroyed successfully.#033[00m
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.593 2 DEBUG nova.objects.instance [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'resources' on Instance uuid a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:44 np0005465987 kernel: tapa0b40647-50: left promiscuous mode
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.619 2 DEBUG nova.virt.libvirt.vif [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:48:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-0-1140484248',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-gen-0-1140484248',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ge',id=171,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJKVdwh9QlTYgLCl82UngDe09ls/tGoKAoumO3RPyeFokkc9UfX0iTNo9e+/yxLXmyBLnEjwGIuPQii5VKTmJn2JEq7Lrn8EeslRSQRtYOdFDT4FmHE55JNasW1Hhg+tIg==',key_name='tempest-TestSecurityGroupsBasicOps-1055572894',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-xpsjds7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:25Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "address": "fa:16:3e:2a:4a:32", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bc32725-9e", "ovs_interfaceid": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.620 2 DEBUG nova.network.os_vif_util [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "address": "fa:16:3e:2a:4a:32", "network": {"id": "a0b40647-5bdc-42ab-8337-b3fcdc66ecfc", "bridge": "br-int", "label": "tempest-network-smoke--749866782", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bc32725-9e", "ovs_interfaceid": "0bc32725-9ef8-42c3-8c96-6b5c03e2ef63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.620 2 DEBUG nova.network.os_vif_util [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:4a:32,bridge_name='br-int',has_traffic_filtering=True,id=0bc32725-9ef8-42c3-8c96-6b5c03e2ef63,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bc32725-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 8885 keys, 10918438 bytes, temperature: kUnknown
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.620 2 DEBUG os_vif [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:4a:32,bridge_name='br-int',has_traffic_filtering=True,id=0bc32725-9ef8-42c3-8c96-6b5c03e2ef63,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bc32725-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409324621075, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 10918438, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10861732, "index_size": 33362, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22277, "raw_key_size": 230215, "raw_average_key_size": 25, "raw_value_size": 10706616, "raw_average_value_size": 1205, "num_data_blocks": 1296, "num_entries": 8885, "num_filter_entries": 8885, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759409324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.621 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa0a6de-a730-4f54-8149-74cf4179fc75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bc32725-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.621766) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10918438 bytes
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.624064) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.8 rd, 124.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 13.2 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(34.9) write-amplify(15.0) OK, records in: 9393, records dropped: 508 output_compression: NoCompression
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.624096) EVENT_LOG_v1 {"time_micros": 1759409324624084, "job": 78, "event": "compaction_finished", "compaction_time_micros": 87689, "compaction_time_cpu_micros": 44947, "output_level": 6, "num_output_files": 1, "total_output_size": 10918438, "num_input_records": 9393, "num_output_records": 8885, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409324625273, "job": 78, "event": "table_file_deletion", "file_number": 125}
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409324628022, "job": 78, "event": "table_file_deletion", "file_number": 123}
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.533529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.628083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.628089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.628091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.628092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:48:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:48:44.628094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:48:44 np0005465987 nova_compute[230713]: 2025-10-02 12:48:44.629 2 INFO os_vif [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:4a:32,bridge_name='br-int',has_traffic_filtering=True,id=0bc32725-9ef8-42c3-8c96-6b5c03e2ef63,network=Network(a0b40647-5bdc-42ab-8337-b3fcdc66ecfc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bc32725-9e')#033[00m
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.655 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e7807e29-b52d-48d4-ac41-1fbb94839b8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.656 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9e24e9d2-da89-4f85-99ce-1c4a2c64a5fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.670 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[97572a74-876b-4614-a64c-0935a0ff49a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737228, 'reachable_time': 28976, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295718, 'error': None, 'target': 'ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.672 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0b40647-5bdc-42ab-8337-b3fcdc66ecfc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:48:44 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:48:44.673 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2f58c6-a710-4f70-840a-66b0799cecd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:44 np0005465987 systemd[1]: run-netns-ovnmeta\x2da0b40647\x2d5bdc\x2d42ab\x2d8337\x2db3fcdc66ecfc.mount: Deactivated successfully.
Oct  2 08:48:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:48:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:44.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:48:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:45 np0005465987 podman[295723]: 2025-10-02 12:48:45.836253741 +0000 UTC m=+0.054117656 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:48:45 np0005465987 podman[295724]: 2025-10-02 12:48:45.846855496 +0000 UTC m=+0.061739551 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:48:45 np0005465987 podman[295722]: 2025-10-02 12:48:45.8975792 +0000 UTC m=+0.121521168 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:48:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:46 np0005465987 nova_compute[230713]: 2025-10-02 12:48:46.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:48:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:46.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:48:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:48:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:46.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.539 2 INFO nova.virt.libvirt.driver [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Deleting instance files /var/lib/nova/instances/a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_del
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.540 2 INFO nova.virt.libvirt.driver [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Deletion of /var/lib/nova/instances/a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e_del complete
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.622 2 INFO nova.compute.manager [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Took 3.86 seconds to destroy the instance on the hypervisor.
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.622 2 DEBUG oslo.service.loopingcall [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.622 2 DEBUG nova.compute.manager [-] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.622 2 DEBUG nova.network.neutron [-] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.707 2 DEBUG nova.compute.manager [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Received event network-vif-unplugged-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.708 2 DEBUG oslo_concurrency.lockutils [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.708 2 DEBUG oslo_concurrency.lockutils [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.708 2 DEBUG oslo_concurrency.lockutils [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.708 2 DEBUG nova.compute.manager [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] No waiting events found dispatching network-vif-unplugged-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.709 2 DEBUG nova.compute.manager [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Received event network-vif-unplugged-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.709 2 DEBUG nova.compute.manager [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Received event network-vif-plugged-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.709 2 DEBUG oslo_concurrency.lockutils [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.709 2 DEBUG oslo_concurrency.lockutils [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.709 2 DEBUG oslo_concurrency.lockutils [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.709 2 DEBUG nova.compute.manager [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] No waiting events found dispatching network-vif-plugged-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:48:47 np0005465987 nova_compute[230713]: 2025-10-02 12:48:47.709 2 WARNING nova.compute.manager [req-43066dc9-229c-47cb-9110-b31e95f16c96 req-54e9b7d8-0cc1-4cdc-9223-e0244eef79ab d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Received unexpected event network-vif-plugged-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 for instance with vm_state active and task_state deleting.
Oct  2 08:48:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:48.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:48.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:49 np0005465987 nova_compute[230713]: 2025-10-02 12:48:49.272 2 DEBUG nova.network.neutron [-] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:48:49 np0005465987 nova_compute[230713]: 2025-10-02 12:48:49.319 2 INFO nova.compute.manager [-] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Took 1.70 seconds to deallocate network for instance.
Oct  2 08:48:49 np0005465987 nova_compute[230713]: 2025-10-02 12:48:49.385 2 DEBUG oslo_concurrency.lockutils [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:48:49 np0005465987 nova_compute[230713]: 2025-10-02 12:48:49.386 2 DEBUG oslo_concurrency.lockutils [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:48:49 np0005465987 nova_compute[230713]: 2025-10-02 12:48:49.425 2 DEBUG nova.compute.manager [req-3bb5dac1-4a89-4e58-94a7-42fd7821d8ec req-2921962f-de22-4c7b-acc7-3275ac8e0def d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Received event network-vif-deleted-0bc32725-9ef8-42c3-8c96-6b5c03e2ef63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:48:49 np0005465987 nova_compute[230713]: 2025-10-02 12:48:49.492 2 DEBUG oslo_concurrency.processutils [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:48:49 np0005465987 nova_compute[230713]: 2025-10-02 12:48:49.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:48:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2322862970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:48:49 np0005465987 nova_compute[230713]: 2025-10-02 12:48:49.928 2 DEBUG oslo_concurrency.processutils [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:48:49 np0005465987 nova_compute[230713]: 2025-10-02 12:48:49.934 2 DEBUG nova.compute.provider_tree [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:48:49 np0005465987 nova_compute[230713]: 2025-10-02 12:48:49.955 2 DEBUG nova.scheduler.client.report [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:48:50 np0005465987 nova_compute[230713]: 2025-10-02 12:48:50.007 2 DEBUG oslo_concurrency.lockutils [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:48:50 np0005465987 nova_compute[230713]: 2025-10-02 12:48:50.041 2 INFO nova.scheduler.client.report [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Deleted allocations for instance a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e
Oct  2 08:48:50 np0005465987 nova_compute[230713]: 2025-10-02 12:48:50.129 2 DEBUG oslo_concurrency.lockutils [None req-701a9001-2344-444f-96cf-afd1b91b47dd 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:48:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:50.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:50.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:51 np0005465987 nova_compute[230713]: 2025-10-02 12:48:51.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:48:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:52.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:48:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:52.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:48:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:54.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:48:54 np0005465987 nova_compute[230713]: 2025-10-02 12:48:54.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:48:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:54.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:48:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:48:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/411056511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:48:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:48:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/411056511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:48:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:56.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:56 np0005465987 nova_compute[230713]: 2025-10-02 12:48:56.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:48:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:56.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:48:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:58.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:48:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:48:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:58.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:59 np0005465987 nova_compute[230713]: 2025-10-02 12:48:59.592 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409324.5905905, a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:48:59 np0005465987 nova_compute[230713]: 2025-10-02 12:48:59.592 2 INFO nova.compute.manager [-] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] VM Stopped (Lifecycle Event)
Oct  2 08:48:59 np0005465987 nova_compute[230713]: 2025-10-02 12:48:59.610 2 DEBUG nova.compute.manager [None req-5495f3bd-a646-4d3a-84f1-6ca366fbac76 - - - - - -] [instance: a6bd3e2d-dcc2-4c05-ad2c-a48a915bee4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:48:59 np0005465987 nova_compute[230713]: 2025-10-02 12:48:59.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:49:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:00.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:49:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:00.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:49:01 np0005465987 nova_compute[230713]: 2025-10-02 12:49:01.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:49:02 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:02Z|00676|binding|INFO|Releasing lport 28a672bd-7c4d-49bd-8937-0e065b62aa5f from this chassis (sb_readonly=0)
Oct  2 08:49:02 np0005465987 nova_compute[230713]: 2025-10-02 12:49:02.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:49:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:02.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:02.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:04 np0005465987 nova_compute[230713]: 2025-10-02 12:49:04.339 2 DEBUG oslo_concurrency.lockutils [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Acquiring lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:49:04 np0005465987 nova_compute[230713]: 2025-10-02 12:49:04.339 2 DEBUG oslo_concurrency.lockutils [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:49:04 np0005465987 nova_compute[230713]: 2025-10-02 12:49:04.353 2 DEBUG nova.objects.instance [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lazy-loading 'flavor' on Instance uuid dc0b661c-6c12-4408-9e23-dc3883bd5138 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:49:04 np0005465987 nova_compute[230713]: 2025-10-02 12:49:04.397 2 DEBUG oslo_concurrency.lockutils [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:49:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:49:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:04.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:49:04 np0005465987 ceph-mgr[81180]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3158772141
Oct  2 08:49:04 np0005465987 nova_compute[230713]: 2025-10-02 12:49:04.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:49:04 np0005465987 nova_compute[230713]: 2025-10-02 12:49:04.868 2 DEBUG oslo_concurrency.lockutils [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Acquiring lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:49:04 np0005465987 nova_compute[230713]: 2025-10-02 12:49:04.868 2 DEBUG oslo_concurrency.lockutils [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:49:04 np0005465987 nova_compute[230713]: 2025-10-02 12:49:04.868 2 INFO nova.compute.manager [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Attaching volume 174dd3a1-2323-41da-aed2-49740942102d to /dev/vdb
Oct  2 08:49:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:49:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:04.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.054 2 DEBUG os_brick.utils [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.055 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.066 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.067 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[fe99b065-301d-4cf2-8662-7a373570bb64]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.068 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.075 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.075 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[d08db05c-4a85-4102-9042-8b985dff17df]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.077 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.083 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.084 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[28bfa94f-9cfb-41cb-807c-b97baf0b5a73]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.085 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[53452c7e-e8b7-4467-97c9-ee33bfb59fc4]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.086 2 DEBUG oslo_concurrency.processutils [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.118 2 DEBUG oslo_concurrency.processutils [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.121 2 DEBUG os_brick.initiator.connectors.lightos [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.121 2 DEBUG os_brick.initiator.connectors.lightos [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.122 2 DEBUG os_brick.initiator.connectors.lightos [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.122 2 DEBUG os_brick.utils [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:49:05 np0005465987 nova_compute[230713]: 2025-10-02 12:49:05.123 2 DEBUG nova.virt.block_device [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Updating existing volume attachment record: aa0c4477-8370-490e-ad5b-2deebd8bc7b6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:49:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:06.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:06 np0005465987 nova_compute[230713]: 2025-10-02 12:49:06.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:49:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2231071178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:49:06 np0005465987 nova_compute[230713]: 2025-10-02 12:49:06.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:06 np0005465987 nova_compute[230713]: 2025-10-02 12:49:06.798 2 DEBUG nova.objects.instance [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lazy-loading 'flavor' on Instance uuid dc0b661c-6c12-4408-9e23-dc3883bd5138 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:06 np0005465987 nova_compute[230713]: 2025-10-02 12:49:06.866 2 DEBUG nova.virt.libvirt.driver [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Attempting to attach volume 174dd3a1-2323-41da-aed2-49740942102d with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 08:49:06 np0005465987 nova_compute[230713]: 2025-10-02 12:49:06.869 2 DEBUG nova.virt.libvirt.guest [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:49:06 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:49:06 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-174dd3a1-2323-41da-aed2-49740942102d">
Oct  2 08:49:06 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:49:06 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:49:06 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:49:06 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:49:06 np0005465987 nova_compute[230713]:  <auth username="openstack">
Oct  2 08:49:06 np0005465987 nova_compute[230713]:    <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:49:06 np0005465987 nova_compute[230713]:  </auth>
Oct  2 08:49:06 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:49:06 np0005465987 nova_compute[230713]:  <serial>174dd3a1-2323-41da-aed2-49740942102d</serial>
Oct  2 08:49:06 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:49:06 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:49:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:49:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:49:07 np0005465987 nova_compute[230713]: 2025-10-02 12:49:07.089 2 DEBUG nova.virt.libvirt.driver [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:07 np0005465987 nova_compute[230713]: 2025-10-02 12:49:07.089 2 DEBUG nova.virt.libvirt.driver [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:07 np0005465987 nova_compute[230713]: 2025-10-02 12:49:07.089 2 DEBUG nova.virt.libvirt.driver [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:07 np0005465987 nova_compute[230713]: 2025-10-02 12:49:07.089 2 DEBUG nova.virt.libvirt.driver [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] No VIF found with MAC fa:16:3e:10:81:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:49:07 np0005465987 nova_compute[230713]: 2025-10-02 12:49:07.712 2 DEBUG oslo_concurrency.lockutils [None req-19de2242-90c6-4ef3-930f-1bfee5caa8b4 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:07 np0005465987 nova_compute[230713]: 2025-10-02 12:49:07.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:07 np0005465987 nova_compute[230713]: 2025-10-02 12:49:07.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:49:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:08.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:49:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:08.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:49:09 np0005465987 nova_compute[230713]: 2025-10-02 12:49:09.063 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:49:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:49:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:49:09 np0005465987 nova_compute[230713]: 2025-10-02 12:49:09.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:10 np0005465987 nova_compute[230713]: 2025-10-02 12:49:10.187 2 DEBUG oslo_concurrency.lockutils [None req-1f1184c7-bc66-462f-9a56-d6e48f2f05e7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Acquiring lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:10 np0005465987 nova_compute[230713]: 2025-10-02 12:49:10.187 2 DEBUG oslo_concurrency.lockutils [None req-1f1184c7-bc66-462f-9a56-d6e48f2f05e7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:10 np0005465987 nova_compute[230713]: 2025-10-02 12:49:10.270 2 INFO nova.compute.manager [None req-1f1184c7-bc66-462f-9a56-d6e48f2f05e7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Detaching volume 174dd3a1-2323-41da-aed2-49740942102d#033[00m
Oct  2 08:49:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:10.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:10.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:11 np0005465987 nova_compute[230713]: 2025-10-02 12:49:11.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:11 np0005465987 nova_compute[230713]: 2025-10-02 12:49:11.804 2 INFO nova.virt.block_device [None req-1f1184c7-bc66-462f-9a56-d6e48f2f05e7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Attempting to driver detach volume 174dd3a1-2323-41da-aed2-49740942102d from mountpoint /dev/vdb#033[00m
Oct  2 08:49:11 np0005465987 nova_compute[230713]: 2025-10-02 12:49:11.812 2 DEBUG nova.virt.libvirt.driver [None req-1f1184c7-bc66-462f-9a56-d6e48f2f05e7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Attempting to detach device vdb from instance dc0b661c-6c12-4408-9e23-dc3883bd5138 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:49:11 np0005465987 nova_compute[230713]: 2025-10-02 12:49:11.813 2 DEBUG nova.virt.libvirt.guest [None req-1f1184c7-bc66-462f-9a56-d6e48f2f05e7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:49:11 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-174dd3a1-2323-41da-aed2-49740942102d">
Oct  2 08:49:11 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:  <serial>174dd3a1-2323-41da-aed2-49740942102d</serial>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:49:11 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:49:11 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:49:11 np0005465987 nova_compute[230713]: 2025-10-02 12:49:11.821 2 INFO nova.virt.libvirt.driver [None req-1f1184c7-bc66-462f-9a56-d6e48f2f05e7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Successfully detached device vdb from instance dc0b661c-6c12-4408-9e23-dc3883bd5138 from the persistent domain config.#033[00m
Oct  2 08:49:11 np0005465987 nova_compute[230713]: 2025-10-02 12:49:11.821 2 DEBUG nova.virt.libvirt.driver [None req-1f1184c7-bc66-462f-9a56-d6e48f2f05e7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance dc0b661c-6c12-4408-9e23-dc3883bd5138 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:49:11 np0005465987 nova_compute[230713]: 2025-10-02 12:49:11.822 2 DEBUG nova.virt.libvirt.guest [None req-1f1184c7-bc66-462f-9a56-d6e48f2f05e7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:49:11 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-174dd3a1-2323-41da-aed2-49740942102d">
Oct  2 08:49:11 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:  <serial>174dd3a1-2323-41da-aed2-49740942102d</serial>
Oct  2 08:49:11 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:49:11 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:49:11 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:49:11 np0005465987 nova_compute[230713]: 2025-10-02 12:49:11.928 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Received event <DeviceRemovedEvent: 1759409351.9284456, dc0b661c-6c12-4408-9e23-dc3883bd5138 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:49:11 np0005465987 nova_compute[230713]: 2025-10-02 12:49:11.931 2 DEBUG nova.virt.libvirt.driver [None req-1f1184c7-bc66-462f-9a56-d6e48f2f05e7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance dc0b661c-6c12-4408-9e23-dc3883bd5138 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:49:11 np0005465987 nova_compute[230713]: 2025-10-02 12:49:11.933 2 INFO nova.virt.libvirt.driver [None req-1f1184c7-bc66-462f-9a56-d6e48f2f05e7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Successfully detached device vdb from instance dc0b661c-6c12-4408-9e23-dc3883bd5138 from the live domain config.#033[00m
Oct  2 08:49:11 np0005465987 nova_compute[230713]: 2025-10-02 12:49:11.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:11 np0005465987 nova_compute[230713]: 2025-10-02 12:49:11.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:49:11 np0005465987 nova_compute[230713]: 2025-10-02 12:49:11.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:49:12 np0005465987 nova_compute[230713]: 2025-10-02 12:49:12.272 2 DEBUG nova.objects.instance [None req-1f1184c7-bc66-462f-9a56-d6e48f2f05e7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lazy-loading 'flavor' on Instance uuid dc0b661c-6c12-4408-9e23-dc3883bd5138 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:12.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:12 np0005465987 nova_compute[230713]: 2025-10-02 12:49:12.527 2 DEBUG oslo_concurrency.lockutils [None req-1f1184c7-bc66-462f-9a56-d6e48f2f05e7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:12 np0005465987 nova_compute[230713]: 2025-10-02 12:49:12.576 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:49:12 np0005465987 nova_compute[230713]: 2025-10-02 12:49:12.576 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:49:12 np0005465987 nova_compute[230713]: 2025-10-02 12:49:12.576 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:49:12 np0005465987 nova_compute[230713]: 2025-10-02 12:49:12.576 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dc0b661c-6c12-4408-9e23-dc3883bd5138 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:12.678 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:49:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:12.679 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:49:12 np0005465987 nova_compute[230713]: 2025-10-02 12:49:12.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:12 np0005465987 podman[295972]: 2025-10-02 12:49:12.857226768 +0000 UTC m=+0.066854458 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:49:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:12.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:14 np0005465987 nova_compute[230713]: 2025-10-02 12:49:14.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:14.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:14 np0005465987 nova_compute[230713]: 2025-10-02 12:49:14.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:14.685 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:14.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.408 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Updating instance_info_cache with network_info: [{"id": "d031f59a-c8a6-4384-a905-41cab6c3b876", "address": "fa:16:3e:10:81:59", "network": {"id": "0739dafe-4d9b-4048-ac97-c017fd298447", "bridge": "br-int", "label": "tempest-TestStampPattern-656915325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a837417d42da439cb794b4295bca2cee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd031f59a-c8", "ovs_interfaceid": "d031f59a-c8a6-4384-a905-41cab6c3b876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:49:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.458 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.458 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.567 2 DEBUG nova.compute.manager [req-4c842aea-6969-43c1-abb1-ef1a447e258f req-8883250b-e209-4094-8f84-897678c94ae4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Received event network-changed-d031f59a-c8a6-4384-a905-41cab6c3b876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.567 2 DEBUG nova.compute.manager [req-4c842aea-6969-43c1-abb1-ef1a447e258f req-8883250b-e209-4094-8f84-897678c94ae4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Refreshing instance network info cache due to event network-changed-d031f59a-c8a6-4384-a905-41cab6c3b876. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.568 2 DEBUG oslo_concurrency.lockutils [req-4c842aea-6969-43c1-abb1-ef1a447e258f req-8883250b-e209-4094-8f84-897678c94ae4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.568 2 DEBUG oslo_concurrency.lockutils [req-4c842aea-6969-43c1-abb1-ef1a447e258f req-8883250b-e209-4094-8f84-897678c94ae4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.569 2 DEBUG nova.network.neutron [req-4c842aea-6969-43c1-abb1-ef1a447e258f req-8883250b-e209-4094-8f84-897678c94ae4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Refreshing network info cache for port d031f59a-c8a6-4384-a905-41cab6c3b876 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.921 2 DEBUG oslo_concurrency.lockutils [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Acquiring lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.922 2 DEBUG oslo_concurrency.lockutils [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.922 2 DEBUG oslo_concurrency.lockutils [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Acquiring lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.922 2 DEBUG oslo_concurrency.lockutils [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.922 2 DEBUG oslo_concurrency.lockutils [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.923 2 INFO nova.compute.manager [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Terminating instance#033[00m
Oct  2 08:49:15 np0005465987 nova_compute[230713]: 2025-10-02 12:49:15.924 2 DEBUG nova.compute.manager [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:49:15 np0005465987 podman[296018]: 2025-10-02 12:49:15.940795451 +0000 UTC m=+0.066363464 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:49:15 np0005465987 podman[296024]: 2025-10-02 12:49:15.945670092 +0000 UTC m=+0.066203331 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct  2 08:49:15 np0005465987 kernel: tapd031f59a-c8 (unregistering): left promiscuous mode
Oct  2 08:49:15 np0005465987 NetworkManager[44910]: <info>  [1759409355.9879] device (tapd031f59a-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:49:16 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:16Z|00677|binding|INFO|Releasing lport d031f59a-c8a6-4384-a905-41cab6c3b876 from this chassis (sb_readonly=0)
Oct  2 08:49:16 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:16Z|00678|binding|INFO|Setting lport d031f59a-c8a6-4384-a905-41cab6c3b876 down in Southbound
Oct  2 08:49:16 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:16Z|00679|binding|INFO|Removing iface tapd031f59a-c8 ovn-installed in OVS
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:16 np0005465987 podman[296071]: 2025-10-02 12:49:16.094672238 +0000 UTC m=+0.151303969 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller)
Oct  2 08:49:16 np0005465987 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000ac.scope: Deactivated successfully.
Oct  2 08:49:16 np0005465987 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000ac.scope: Consumed 16.680s CPU time.
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.110 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:81:59 10.100.0.5'], port_security=['fa:16:3e:10:81:59 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'dc0b661c-6c12-4408-9e23-dc3883bd5138', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0739dafe-4d9b-4048-ac97-c017fd298447', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a837417d42da439cb794b4295bca2cee', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2135b395-ac43-463e-b267-fe36f0a53800', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57c22c18-07d9-4913-840f-1bcc05bb2313, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=d031f59a-c8a6-4384-a905-41cab6c3b876) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.112 138325 INFO neutron.agent.ovn.metadata.agent [-] Port d031f59a-c8a6-4384-a905-41cab6c3b876 in datapath 0739dafe-4d9b-4048-ac97-c017fd298447 unbound from our chassis#033[00m
Oct  2 08:49:16 np0005465987 systemd-machined[188335]: Machine qemu-79-instance-000000ac terminated.
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.113 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0739dafe-4d9b-4048-ac97-c017fd298447, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.114 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b64d49e4-2b5d-409d-9e4b-77adf2a099f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.114 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447 namespace which is not needed anymore#033[00m
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.160 2 INFO nova.virt.libvirt.driver [-] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Instance destroyed successfully.#033[00m
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.160 2 DEBUG nova.objects.instance [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lazy-loading 'resources' on Instance uuid dc0b661c-6c12-4408-9e23-dc3883bd5138 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:16 np0005465987 neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447[295587]: [NOTICE]   (295591) : haproxy version is 2.8.14-c23fe91
Oct  2 08:49:16 np0005465987 neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447[295587]: [NOTICE]   (295591) : path to executable is /usr/sbin/haproxy
Oct  2 08:49:16 np0005465987 neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447[295587]: [WARNING]  (295591) : Exiting Master process...
Oct  2 08:49:16 np0005465987 neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447[295587]: [ALERT]    (295591) : Current worker (295593) exited with code 143 (Terminated)
Oct  2 08:49:16 np0005465987 neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447[295587]: [WARNING]  (295591) : All workers exited. Exiting... (0)
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.237 2 DEBUG nova.virt.libvirt.vif [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:48:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1567525510',display_name='tempest-TestStampPattern-server-1567525510',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-teststamppattern-server-1567525510',id=172,image_ref='a62d470f-f755-4a5a-b8e5-0dc1be0600d1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCPnME7yj7AGpi8+zR0j6wiXokmK+k5Lh86YSlnXsP0prCCTi2saYZPKg3ZreiW8R+IqbWLBOHnWbtyyC7ToJeWaqTKxTG25O47OUrV5FbVX8vZbUi2AzjxwYa4KvWo3jw==',key_name='tempest-TestStampPattern-1303591770',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:24Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a837417d42da439cb794b4295bca2cee',ramdisk_id='',reservation_id='r-k5ikch2e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='bf9e8de1-5081-4daa-9041-1d329e06be86',image_min_disk='1',image_min_ram='0',image_owner_id='a837417d42da439cb794b4295bca2cee',image_owner_project_name='tempest-TestStampPattern-901207223',image_owner_user_name='tempest-TestStampPattern-901207223-project-member',image_user_id='a24a7109471f4d96ad5f11b637fdb8e7',owner_project_name='tempest-TestStampPattern-901207223',owner_user_name='tempest-TestStampPattern-901207223-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:24Z,user_data=None,user_id='a24a7109471f4d96ad5f11b637fdb8e7',uuid=dc0b661c-6c12-4408-9e23-dc3883bd5138,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d031f59a-c8a6-4384-a905-41cab6c3b876", "address": "fa:16:3e:10:81:59", "network": {"id": "0739dafe-4d9b-4048-ac97-c017fd298447", "bridge": "br-int", "label": "tempest-TestStampPattern-656915325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a837417d42da439cb794b4295bca2cee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd031f59a-c8", "ovs_interfaceid": "d031f59a-c8a6-4384-a905-41cab6c3b876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.238 2 DEBUG nova.network.os_vif_util [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Converting VIF {"id": "d031f59a-c8a6-4384-a905-41cab6c3b876", "address": "fa:16:3e:10:81:59", "network": {"id": "0739dafe-4d9b-4048-ac97-c017fd298447", "bridge": "br-int", "label": "tempest-TestStampPattern-656915325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a837417d42da439cb794b4295bca2cee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd031f59a-c8", "ovs_interfaceid": "d031f59a-c8a6-4384-a905-41cab6c3b876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.238 2 DEBUG nova.network.os_vif_util [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:81:59,bridge_name='br-int',has_traffic_filtering=True,id=d031f59a-c8a6-4384-a905-41cab6c3b876,network=Network(0739dafe-4d9b-4048-ac97-c017fd298447),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd031f59a-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.239 2 DEBUG os_vif [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:81:59,bridge_name='br-int',has_traffic_filtering=True,id=d031f59a-c8a6-4384-a905-41cab6c3b876,network=Network(0739dafe-4d9b-4048-ac97-c017fd298447),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd031f59a-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:49:16 np0005465987 systemd[1]: libpod-df675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e.scope: Deactivated successfully.
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd031f59a-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.245 2 INFO os_vif [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:81:59,bridge_name='br-int',has_traffic_filtering=True,id=d031f59a-c8a6-4384-a905-41cab6c3b876,network=Network(0739dafe-4d9b-4048-ac97-c017fd298447),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd031f59a-c8')#033[00m
Oct  2 08:49:16 np0005465987 podman[296142]: 2025-10-02 12:49:16.247571119 +0000 UTC m=+0.043786188 container died df675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:49:16 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e-userdata-shm.mount: Deactivated successfully.
Oct  2 08:49:16 np0005465987 systemd[1]: var-lib-containers-storage-overlay-25c5b7feb8ae0c3598ab8fe4838e02f81d8cc94d61d8dacd0ae91760d63540c4-merged.mount: Deactivated successfully.
Oct  2 08:49:16 np0005465987 podman[296142]: 2025-10-02 12:49:16.319062791 +0000 UTC m=+0.115277900 container cleanup df675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:49:16 np0005465987 systemd[1]: libpod-conmon-df675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e.scope: Deactivated successfully.
Oct  2 08:49:16 np0005465987 podman[296188]: 2025-10-02 12:49:16.377533313 +0000 UTC m=+0.039072122 container remove df675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.383 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[78760114-c4b6-4be3-adcd-0ff6b2085a14]: (4, ('Thu Oct  2 12:49:16 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447 (df675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e)\ndf675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e\nThu Oct  2 12:49:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447 (df675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e)\ndf675ccd6f21a19def9b2868ded39049d018b922859abb241fcae8c324e9e07e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.387 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b09118-eb24-4194-8cb6-0395dfec4bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.388 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0739dafe-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:16 np0005465987 kernel: tap0739dafe-40: left promiscuous mode
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.409 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2634994d-fe48-4d43-9ca7-46b3eda2deef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.444 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf837bc-a05e-42d6-877e-c0f141e4143f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.445 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[49f28cbe-8c36-4a70-9b79-b9e226247459]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:16.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:16 np0005465987 nova_compute[230713]: 2025-10-02 12:49:16.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.461 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[af9b8cde-82c9-4284-9031-8fafbf0b4010]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737314, 'reachable_time': 26913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296203, 'error': None, 'target': 'ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.463 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0739dafe-4d9b-4048-ac97-c017fd298447 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:49:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:16.464 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[1ffbda57-ab76-49db-8d65-4d6ca20fc318]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:16 np0005465987 systemd[1]: run-netns-ovnmeta\x2d0739dafe\x2d4d9b\x2d4048\x2dac97\x2dc017fd298447.mount: Deactivated successfully.
Oct  2 08:49:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:49:16 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:49:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:16.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:17 np0005465987 nova_compute[230713]: 2025-10-02 12:49:17.415 2 INFO nova.virt.libvirt.driver [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Deleting instance files /var/lib/nova/instances/dc0b661c-6c12-4408-9e23-dc3883bd5138_del#033[00m
Oct  2 08:49:17 np0005465987 nova_compute[230713]: 2025-10-02 12:49:17.416 2 INFO nova.virt.libvirt.driver [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Deletion of /var/lib/nova/instances/dc0b661c-6c12-4408-9e23-dc3883bd5138_del complete#033[00m
Oct  2 08:49:17 np0005465987 nova_compute[230713]: 2025-10-02 12:49:17.659 2 INFO nova.compute.manager [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Took 1.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:49:17 np0005465987 nova_compute[230713]: 2025-10-02 12:49:17.660 2 DEBUG oslo.service.loopingcall [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:49:17 np0005465987 nova_compute[230713]: 2025-10-02 12:49:17.660 2 DEBUG nova.compute.manager [-] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:49:17 np0005465987 nova_compute[230713]: 2025-10-02 12:49:17.661 2 DEBUG nova.network.neutron [-] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:49:17 np0005465987 nova_compute[230713]: 2025-10-02 12:49:17.968 2 DEBUG nova.network.neutron [req-4c842aea-6969-43c1-abb1-ef1a447e258f req-8883250b-e209-4094-8f84-897678c94ae4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Updated VIF entry in instance network info cache for port d031f59a-c8a6-4384-a905-41cab6c3b876. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:49:17 np0005465987 nova_compute[230713]: 2025-10-02 12:49:17.969 2 DEBUG nova.network.neutron [req-4c842aea-6969-43c1-abb1-ef1a447e258f req-8883250b-e209-4094-8f84-897678c94ae4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Updating instance_info_cache with network_info: [{"id": "d031f59a-c8a6-4384-a905-41cab6c3b876", "address": "fa:16:3e:10:81:59", "network": {"id": "0739dafe-4d9b-4048-ac97-c017fd298447", "bridge": "br-int", "label": "tempest-TestStampPattern-656915325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a837417d42da439cb794b4295bca2cee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd031f59a-c8", "ovs_interfaceid": "d031f59a-c8a6-4384-a905-41cab6c3b876", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:49:17 np0005465987 nova_compute[230713]: 2025-10-02 12:49:17.989 2 DEBUG oslo_concurrency.lockutils [req-4c842aea-6969-43c1-abb1-ef1a447e258f req-8883250b-e209-4094-8f84-897678c94ae4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-dc0b661c-6c12-4408-9e23-dc3883bd5138" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:49:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:49:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:18.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:49:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:18.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:19 np0005465987 nova_compute[230713]: 2025-10-02 12:49:19.259 2 DEBUG nova.network.neutron [-] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:49:19 np0005465987 nova_compute[230713]: 2025-10-02 12:49:19.293 2 INFO nova.compute.manager [-] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Took 1.63 seconds to deallocate network for instance.#033[00m
Oct  2 08:49:19 np0005465987 nova_compute[230713]: 2025-10-02 12:49:19.357 2 DEBUG oslo_concurrency.lockutils [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:19 np0005465987 nova_compute[230713]: 2025-10-02 12:49:19.358 2 DEBUG oslo_concurrency.lockutils [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:19 np0005465987 nova_compute[230713]: 2025-10-02 12:49:19.390 2 DEBUG nova.compute.manager [req-65c4ccd7-514c-4040-ab81-ad4b8d6a4cfb req-23d833d7-fdc2-4ac4-9824-d9509b27846a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Received event network-vif-deleted-d031f59a-c8a6-4384-a905-41cab6c3b876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:49:19 np0005465987 nova_compute[230713]: 2025-10-02 12:49:19.500 2 DEBUG oslo_concurrency.processutils [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:49:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4095847724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:49:19 np0005465987 nova_compute[230713]: 2025-10-02 12:49:19.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:19 np0005465987 nova_compute[230713]: 2025-10-02 12:49:19.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:19 np0005465987 nova_compute[230713]: 2025-10-02 12:49:19.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:19 np0005465987 nova_compute[230713]: 2025-10-02 12:49:19.967 2 DEBUG oslo_concurrency.processutils [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:19 np0005465987 nova_compute[230713]: 2025-10-02 12:49:19.972 2 DEBUG nova.compute.provider_tree [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:49:19 np0005465987 nova_compute[230713]: 2025-10-02 12:49:19.999 2 DEBUG nova.scheduler.client.report [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.031 2 DEBUG oslo_concurrency.lockutils [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.034 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.034 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.034 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.035 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.097 2 INFO nova.scheduler.client.report [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Deleted allocations for instance dc0b661c-6c12-4408-9e23-dc3883bd5138#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.242 2 DEBUG oslo_concurrency.lockutils [None req-7876cec6-a3f7-4415-b62e-ed6f3be02cb7 a24a7109471f4d96ad5f11b637fdb8e7 a837417d42da439cb794b4295bca2cee - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:20.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:49:20 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/384528648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.509 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.665 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.666 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4340MB free_disk=20.830440521240234GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.666 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.666 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.762 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.763 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.801 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:49:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:20.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.947 2 DEBUG nova.compute.manager [req-1a4c7136-33af-47c2-9fec-ee9f9eb65f01 req-4100d4e3-f132-40eb-87a6-e0ad40bd929e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Received event network-vif-unplugged-d031f59a-c8a6-4384-a905-41cab6c3b876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.948 2 DEBUG oslo_concurrency.lockutils [req-1a4c7136-33af-47c2-9fec-ee9f9eb65f01 req-4100d4e3-f132-40eb-87a6-e0ad40bd929e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.948 2 DEBUG oslo_concurrency.lockutils [req-1a4c7136-33af-47c2-9fec-ee9f9eb65f01 req-4100d4e3-f132-40eb-87a6-e0ad40bd929e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.948 2 DEBUG oslo_concurrency.lockutils [req-1a4c7136-33af-47c2-9fec-ee9f9eb65f01 req-4100d4e3-f132-40eb-87a6-e0ad40bd929e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.948 2 DEBUG nova.compute.manager [req-1a4c7136-33af-47c2-9fec-ee9f9eb65f01 req-4100d4e3-f132-40eb-87a6-e0ad40bd929e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] No waiting events found dispatching network-vif-unplugged-d031f59a-c8a6-4384-a905-41cab6c3b876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.949 2 WARNING nova.compute.manager [req-1a4c7136-33af-47c2-9fec-ee9f9eb65f01 req-4100d4e3-f132-40eb-87a6-e0ad40bd929e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Received unexpected event network-vif-unplugged-d031f59a-c8a6-4384-a905-41cab6c3b876 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.949 2 DEBUG nova.compute.manager [req-1a4c7136-33af-47c2-9fec-ee9f9eb65f01 req-4100d4e3-f132-40eb-87a6-e0ad40bd929e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Received event network-vif-plugged-d031f59a-c8a6-4384-a905-41cab6c3b876 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.949 2 DEBUG oslo_concurrency.lockutils [req-1a4c7136-33af-47c2-9fec-ee9f9eb65f01 req-4100d4e3-f132-40eb-87a6-e0ad40bd929e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.949 2 DEBUG oslo_concurrency.lockutils [req-1a4c7136-33af-47c2-9fec-ee9f9eb65f01 req-4100d4e3-f132-40eb-87a6-e0ad40bd929e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.949 2 DEBUG oslo_concurrency.lockutils [req-1a4c7136-33af-47c2-9fec-ee9f9eb65f01 req-4100d4e3-f132-40eb-87a6-e0ad40bd929e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "dc0b661c-6c12-4408-9e23-dc3883bd5138-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.950 2 DEBUG nova.compute.manager [req-1a4c7136-33af-47c2-9fec-ee9f9eb65f01 req-4100d4e3-f132-40eb-87a6-e0ad40bd929e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] No waiting events found dispatching network-vif-plugged-d031f59a-c8a6-4384-a905-41cab6c3b876 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:49:20 np0005465987 nova_compute[230713]: 2025-10-02 12:49:20.950 2 WARNING nova.compute.manager [req-1a4c7136-33af-47c2-9fec-ee9f9eb65f01 req-4100d4e3-f132-40eb-87a6-e0ad40bd929e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Received unexpected event network-vif-plugged-d031f59a-c8a6-4384-a905-41cab6c3b876 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:49:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:49:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1572234307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:49:21 np0005465987 nova_compute[230713]: 2025-10-02 12:49:21.217 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:49:21 np0005465987 nova_compute[230713]: 2025-10-02 12:49:21.223 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:49:21 np0005465987 nova_compute[230713]: 2025-10-02 12:49:21.240 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:49:21 np0005465987 nova_compute[230713]: 2025-10-02 12:49:21.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:49:21 np0005465987 nova_compute[230713]: 2025-10-02 12:49:21.266 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:49:21 np0005465987 nova_compute[230713]: 2025-10-02 12:49:21.266 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:49:21 np0005465987 nova_compute[230713]: 2025-10-02 12:49:21.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:49:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:49:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/615274900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:49:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:49:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/615274900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.098 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "56230639-61e0-4b9f-9bc0-ccf51bb51195" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.099 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.145 2 DEBUG nova.compute.manager [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.231 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.231 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.236 2 DEBUG nova.virt.hardware [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.237 2 INFO nova.compute.claims [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.265 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.359 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:49:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:22.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e368 e368: 3 total, 3 up, 3 in
Oct  2 08:49:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:49:22 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2938637152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.795 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.802 2 DEBUG nova.compute.provider_tree [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.829 2 DEBUG nova.scheduler.client.report [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.861 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.862 2 DEBUG nova.compute.manager [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:49:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:22.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.942 2 DEBUG nova.compute.manager [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.943 2 DEBUG nova.network.neutron [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.965 2 INFO nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:49:22 np0005465987 nova_compute[230713]: 2025-10-02 12:49:22.967 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.002 2 DEBUG nova.compute.manager [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.128 2 DEBUG nova.compute.manager [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.130 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.130 2 INFO nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Creating image(s)
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.156 2 DEBUG nova.storage.rbd_utils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 56230639-61e0-4b9f-9bc0-ccf51bb51195_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.186 2 DEBUG nova.storage.rbd_utils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 56230639-61e0-4b9f-9bc0-ccf51bb51195_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.216 2 DEBUG nova.storage.rbd_utils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 56230639-61e0-4b9f-9bc0-ccf51bb51195_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.220 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.248 2 DEBUG nova.policy [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16730f38111542e58a05fb4deb2b3914', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5ade962c517a483dbfe4bb13386f0006', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.285 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.286 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.287 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.287 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.315 2 DEBUG nova.storage.rbd_utils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 56230639-61e0-4b9f-9bc0-ccf51bb51195_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:49:23 np0005465987 nova_compute[230713]: 2025-10-02 12:49:23.318 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 56230639-61e0-4b9f-9bc0-ccf51bb51195_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:49:24 np0005465987 nova_compute[230713]: 2025-10-02 12:49:24.403 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 56230639-61e0-4b9f-9bc0-ccf51bb51195_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:49:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:49:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:24.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:49:24 np0005465987 nova_compute[230713]: 2025-10-02 12:49:24.474 2 DEBUG nova.storage.rbd_utils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] resizing rbd image 56230639-61e0-4b9f-9bc0-ccf51bb51195_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:49:24 np0005465987 nova_compute[230713]: 2025-10-02 12:49:24.565 2 DEBUG nova.objects.instance [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'migration_context' on Instance uuid 56230639-61e0-4b9f-9bc0-ccf51bb51195 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:49:24 np0005465987 nova_compute[230713]: 2025-10-02 12:49:24.585 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:49:24 np0005465987 nova_compute[230713]: 2025-10-02 12:49:24.586 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Ensure instance console log exists: /var/lib/nova/instances/56230639-61e0-4b9f-9bc0-ccf51bb51195/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:49:24 np0005465987 nova_compute[230713]: 2025-10-02 12:49:24.586 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:49:24 np0005465987 nova_compute[230713]: 2025-10-02 12:49:24.587 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:49:24 np0005465987 nova_compute[230713]: 2025-10-02 12:49:24.587 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:49:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:49:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:24.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:49:25 np0005465987 nova_compute[230713]: 2025-10-02 12:49:25.153 2 DEBUG nova.network.neutron [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Successfully created port: 94c92796-5e8b-402b-bab2-2bb77f2148ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:49:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e369 e369: 3 total, 3 up, 3 in
Oct  2 08:49:25 np0005465987 nova_compute[230713]: 2025-10-02 12:49:25.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:49:26 np0005465987 nova_compute[230713]: 2025-10-02 12:49:26.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:49:26 np0005465987 nova_compute[230713]: 2025-10-02 12:49:26.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:49:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:26.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:26.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:27 np0005465987 nova_compute[230713]: 2025-10-02 12:49:27.100 2 DEBUG nova.network.neutron [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Successfully updated port: 94c92796-5e8b-402b-bab2-2bb77f2148ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:49:27 np0005465987 nova_compute[230713]: 2025-10-02 12:49:27.144 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "refresh_cache-56230639-61e0-4b9f-9bc0-ccf51bb51195" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:49:27 np0005465987 nova_compute[230713]: 2025-10-02 12:49:27.144 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquired lock "refresh_cache-56230639-61e0-4b9f-9bc0-ccf51bb51195" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:49:27 np0005465987 nova_compute[230713]: 2025-10-02 12:49:27.145 2 DEBUG nova.network.neutron [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:49:27 np0005465987 nova_compute[230713]: 2025-10-02 12:49:27.242 2 DEBUG nova.compute.manager [req-5e6355bf-abf1-4432-b42b-e25524ef8e95 req-d30c5b31-fc5a-487e-a2e4-b38d4b65db74 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Received event network-changed-94c92796-5e8b-402b-bab2-2bb77f2148ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:49:27 np0005465987 nova_compute[230713]: 2025-10-02 12:49:27.243 2 DEBUG nova.compute.manager [req-5e6355bf-abf1-4432-b42b-e25524ef8e95 req-d30c5b31-fc5a-487e-a2e4-b38d4b65db74 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Refreshing instance network info cache due to event network-changed-94c92796-5e8b-402b-bab2-2bb77f2148ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:49:27 np0005465987 nova_compute[230713]: 2025-10-02 12:49:27.243 2 DEBUG oslo_concurrency.lockutils [req-5e6355bf-abf1-4432-b42b-e25524ef8e95 req-d30c5b31-fc5a-487e-a2e4-b38d4b65db74 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-56230639-61e0-4b9f-9bc0-ccf51bb51195" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:49:27 np0005465987 nova_compute[230713]: 2025-10-02 12:49:27.580 2 DEBUG nova.network.neutron [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:49:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e370 e370: 3 total, 3 up, 3 in
Oct  2 08:49:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:27.974 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:49:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:27.975 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:49:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:27.975 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:49:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:28.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:28.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.943 2 DEBUG nova.network.neutron [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Updating instance_info_cache with network_info: [{"id": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "address": "fa:16:3e:76:4a:ed", "network": {"id": "482a0957-cb98-415e-82f4-fff1a8f9ce2d", "bridge": "br-int", "label": "tempest-network-smoke--238073404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c92796-5e", "ovs_interfaceid": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.970 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Releasing lock "refresh_cache-56230639-61e0-4b9f-9bc0-ccf51bb51195" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.971 2 DEBUG nova.compute.manager [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Instance network_info: |[{"id": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "address": "fa:16:3e:76:4a:ed", "network": {"id": "482a0957-cb98-415e-82f4-fff1a8f9ce2d", "bridge": "br-int", "label": "tempest-network-smoke--238073404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c92796-5e", "ovs_interfaceid": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.971 2 DEBUG oslo_concurrency.lockutils [req-5e6355bf-abf1-4432-b42b-e25524ef8e95 req-d30c5b31-fc5a-487e-a2e4-b38d4b65db74 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-56230639-61e0-4b9f-9bc0-ccf51bb51195" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.972 2 DEBUG nova.network.neutron [req-5e6355bf-abf1-4432-b42b-e25524ef8e95 req-d30c5b31-fc5a-487e-a2e4-b38d4b65db74 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Refreshing network info cache for port 94c92796-5e8b-402b-bab2-2bb77f2148ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.975 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Start _get_guest_xml network_info=[{"id": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "address": "fa:16:3e:76:4a:ed", "network": {"id": "482a0957-cb98-415e-82f4-fff1a8f9ce2d", "bridge": "br-int", "label": "tempest-network-smoke--238073404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c92796-5e", "ovs_interfaceid": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.979 2 WARNING nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.984 2 DEBUG nova.virt.libvirt.host [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.985 2 DEBUG nova.virt.libvirt.host [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.991 2 DEBUG nova.virt.libvirt.host [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.992 2 DEBUG nova.virt.libvirt.host [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.994 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.994 2 DEBUG nova.virt.hardware [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.995 2 DEBUG nova.virt.hardware [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.995 2 DEBUG nova.virt.hardware [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.995 2 DEBUG nova.virt.hardware [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.995 2 DEBUG nova.virt.hardware [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.996 2 DEBUG nova.virt.hardware [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.996 2 DEBUG nova.virt.hardware [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.996 2 DEBUG nova.virt.hardware [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.997 2 DEBUG nova.virt.hardware [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.997 2 DEBUG nova.virt.hardware [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:49:28 np0005465987 nova_compute[230713]: 2025-10-02 12:49:28.997 2 DEBUG nova.virt.hardware [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:49:29 np0005465987 nova_compute[230713]: 2025-10-02 12:49:29.001 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e371 e371: 3 total, 3 up, 3 in
Oct  2 08:49:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e372 e372: 3 total, 3 up, 3 in
Oct  2 08:49:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:49:29 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/232811485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:49:29 np0005465987 nova_compute[230713]: 2025-10-02 12:49:29.473 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:29 np0005465987 nova_compute[230713]: 2025-10-02 12:49:29.507 2 DEBUG nova.storage.rbd_utils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 56230639-61e0-4b9f-9bc0-ccf51bb51195_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:49:29 np0005465987 nova_compute[230713]: 2025-10-02 12:49:29.511 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:49:29 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/783607838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:49:29 np0005465987 nova_compute[230713]: 2025-10-02 12:49:29.983 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:29 np0005465987 nova_compute[230713]: 2025-10-02 12:49:29.985 2 DEBUG nova.virt.libvirt.vif [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:49:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1043137860',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1043137860',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ac',id=176,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI76ELGXWiA+GK+T9UIVIpe3RF7FnLdHt1r64FjSu6VP3p+NhW/jR9hBYQ+9GIG1srG7vAu2r17A8gA4IFD8GdKb+uaIKikiK9XRxTWoso5Ldl7KapMPNhvgKJ6EzOx9dQ==',key_name='tempest-TestSecurityGroupsBasicOps-359554467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-x6x8yyca',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:49:23Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=56230639-61e0-4b9f-9bc0-ccf51bb51195,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "address": "fa:16:3e:76:4a:ed", "network": {"id": "482a0957-cb98-415e-82f4-fff1a8f9ce2d", "bridge": "br-int", "label": "tempest-network-smoke--238073404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c92796-5e", "ovs_interfaceid": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:49:29 np0005465987 nova_compute[230713]: 2025-10-02 12:49:29.985 2 DEBUG nova.network.os_vif_util [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "address": "fa:16:3e:76:4a:ed", "network": {"id": "482a0957-cb98-415e-82f4-fff1a8f9ce2d", "bridge": "br-int", "label": "tempest-network-smoke--238073404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c92796-5e", "ovs_interfaceid": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:49:29 np0005465987 nova_compute[230713]: 2025-10-02 12:49:29.986 2 DEBUG nova.network.os_vif_util [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:4a:ed,bridge_name='br-int',has_traffic_filtering=True,id=94c92796-5e8b-402b-bab2-2bb77f2148ad,network=Network(482a0957-cb98-415e-82f4-fff1a8f9ce2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c92796-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:49:29 np0005465987 nova_compute[230713]: 2025-10-02 12:49:29.987 2 DEBUG nova.objects.instance [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'pci_devices' on Instance uuid 56230639-61e0-4b9f-9bc0-ccf51bb51195 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.018 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  <uuid>56230639-61e0-4b9f-9bc0-ccf51bb51195</uuid>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  <name>instance-000000b0</name>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1043137860</nova:name>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:49:28</nova:creationTime>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <nova:user uuid="16730f38111542e58a05fb4deb2b3914">tempest-TestSecurityGroupsBasicOps-1031871880-project-member</nova:user>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <nova:project uuid="5ade962c517a483dbfe4bb13386f0006">tempest-TestSecurityGroupsBasicOps-1031871880</nova:project>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <nova:port uuid="94c92796-5e8b-402b-bab2-2bb77f2148ad">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <entry name="serial">56230639-61e0-4b9f-9bc0-ccf51bb51195</entry>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <entry name="uuid">56230639-61e0-4b9f-9bc0-ccf51bb51195</entry>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/56230639-61e0-4b9f-9bc0-ccf51bb51195_disk">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/56230639-61e0-4b9f-9bc0-ccf51bb51195_disk.config">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:76:4a:ed"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <target dev="tap94c92796-5e"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/56230639-61e0-4b9f-9bc0-ccf51bb51195/console.log" append="off"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:49:30 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:49:30 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:49:30 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:49:30 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.020 2 DEBUG nova.compute.manager [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Preparing to wait for external event network-vif-plugged-94c92796-5e8b-402b-bab2-2bb77f2148ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.020 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.021 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.021 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.022 2 DEBUG nova.virt.libvirt.vif [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:49:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1043137860',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1043137860',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ac',id=176,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI76ELGXWiA+GK+T9UIVIpe3RF7FnLdHt1r64FjSu6VP3p+NhW/jR9hBYQ+9GIG1srG7vAu2r17A8gA4IFD8GdKb+uaIKikiK9XRxTWoso5Ldl7KapMPNhvgKJ6EzOx9dQ==',key_name='tempest-TestSecurityGroupsBasicOps-359554467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-x6x8yyca',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:49:23Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=56230639-61e0-4b9f-9bc0-ccf51bb51195,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "address": "fa:16:3e:76:4a:ed", "network": {"id": "482a0957-cb98-415e-82f4-fff1a8f9ce2d", "bridge": "br-int", "label": "tempest-network-smoke--238073404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c92796-5e", "ovs_interfaceid": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.022 2 DEBUG nova.network.os_vif_util [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "address": "fa:16:3e:76:4a:ed", "network": {"id": "482a0957-cb98-415e-82f4-fff1a8f9ce2d", "bridge": "br-int", "label": "tempest-network-smoke--238073404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c92796-5e", "ovs_interfaceid": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.023 2 DEBUG nova.network.os_vif_util [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:4a:ed,bridge_name='br-int',has_traffic_filtering=True,id=94c92796-5e8b-402b-bab2-2bb77f2148ad,network=Network(482a0957-cb98-415e-82f4-fff1a8f9ce2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c92796-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.023 2 DEBUG os_vif [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:4a:ed,bridge_name='br-int',has_traffic_filtering=True,id=94c92796-5e8b-402b-bab2-2bb77f2148ad,network=Network(482a0957-cb98-415e-82f4-fff1a8f9ce2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c92796-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.025 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.025 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.030 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94c92796-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.030 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94c92796-5e, col_values=(('external_ids', {'iface-id': '94c92796-5e8b-402b-bab2-2bb77f2148ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:4a:ed', 'vm-uuid': '56230639-61e0-4b9f-9bc0-ccf51bb51195'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:30 np0005465987 NetworkManager[44910]: <info>  [1759409370.0331] manager: (tap94c92796-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.039 2 INFO os_vif [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:4a:ed,bridge_name='br-int',has_traffic_filtering=True,id=94c92796-5e8b-402b-bab2-2bb77f2148ad,network=Network(482a0957-cb98-415e-82f4-fff1a8f9ce2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c92796-5e')#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.095 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.096 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.096 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No VIF found with MAC fa:16:3e:76:4a:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.097 2 INFO nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Using config drive#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.123 2 DEBUG nova.storage.rbd_utils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 56230639-61e0-4b9f-9bc0-ccf51bb51195_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:49:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:30.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.982 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:30 np0005465987 nova_compute[230713]: 2025-10-02 12:49:30.983 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.158 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409356.1568983, dc0b661c-6c12-4408-9e23-dc3883bd5138 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.159 2 INFO nova.compute.manager [-] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.219 2 DEBUG nova.compute.manager [None req-135bfcb1-5723-48ba-8e67-6cba5c677062 - - - - - -] [instance: dc0b661c-6c12-4408-9e23-dc3883bd5138] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.313 2 INFO nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Creating config drive at /var/lib/nova/instances/56230639-61e0-4b9f-9bc0-ccf51bb51195/disk.config#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.319 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/56230639-61e0-4b9f-9bc0-ccf51bb51195/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_virjnv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.459 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/56230639-61e0-4b9f-9bc0-ccf51bb51195/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_virjnv" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.487 2 DEBUG nova.storage.rbd_utils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 56230639-61e0-4b9f-9bc0-ccf51bb51195_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.491 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/56230639-61e0-4b9f-9bc0-ccf51bb51195/disk.config 56230639-61e0-4b9f-9bc0-ccf51bb51195_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.651 2 DEBUG nova.network.neutron [req-5e6355bf-abf1-4432-b42b-e25524ef8e95 req-d30c5b31-fc5a-487e-a2e4-b38d4b65db74 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Updated VIF entry in instance network info cache for port 94c92796-5e8b-402b-bab2-2bb77f2148ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.652 2 DEBUG nova.network.neutron [req-5e6355bf-abf1-4432-b42b-e25524ef8e95 req-d30c5b31-fc5a-487e-a2e4-b38d4b65db74 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Updating instance_info_cache with network_info: [{"id": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "address": "fa:16:3e:76:4a:ed", "network": {"id": "482a0957-cb98-415e-82f4-fff1a8f9ce2d", "bridge": "br-int", "label": "tempest-network-smoke--238073404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c92796-5e", "ovs_interfaceid": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.655 2 DEBUG oslo_concurrency.processutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/56230639-61e0-4b9f-9bc0-ccf51bb51195/disk.config 56230639-61e0-4b9f-9bc0-ccf51bb51195_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.655 2 INFO nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Deleting local config drive /var/lib/nova/instances/56230639-61e0-4b9f-9bc0-ccf51bb51195/disk.config because it was imported into RBD.#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.669 2 DEBUG oslo_concurrency.lockutils [req-5e6355bf-abf1-4432-b42b-e25524ef8e95 req-d30c5b31-fc5a-487e-a2e4-b38d4b65db74 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-56230639-61e0-4b9f-9bc0-ccf51bb51195" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:49:31 np0005465987 kernel: tap94c92796-5e: entered promiscuous mode
Oct  2 08:49:31 np0005465987 NetworkManager[44910]: <info>  [1759409371.7155] manager: (tap94c92796-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Oct  2 08:49:31 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:31Z|00680|binding|INFO|Claiming lport 94c92796-5e8b-402b-bab2-2bb77f2148ad for this chassis.
Oct  2 08:49:31 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:31Z|00681|binding|INFO|94c92796-5e8b-402b-bab2-2bb77f2148ad: Claiming fa:16:3e:76:4a:ed 10.100.0.6
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:31 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:31Z|00682|binding|INFO|Setting lport 94c92796-5e8b-402b-bab2-2bb77f2148ad ovn-installed in OVS
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:31 np0005465987 nova_compute[230713]: 2025-10-02 12:49:31.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:31 np0005465987 systemd-udevd[296595]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:49:31 np0005465987 NetworkManager[44910]: <info>  [1759409371.7611] device (tap94c92796-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:49:31 np0005465987 NetworkManager[44910]: <info>  [1759409371.7621] device (tap94c92796-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:49:31 np0005465987 systemd-machined[188335]: New machine qemu-80-instance-000000b0.
Oct  2 08:49:31 np0005465987 systemd[1]: Started Virtual Machine qemu-80-instance-000000b0.
Oct  2 08:49:31 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:31Z|00683|binding|INFO|Setting lport 94c92796-5e8b-402b-bab2-2bb77f2148ad up in Southbound
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.785 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:4a:ed 10.100.0.6'], port_security=['fa:16:3e:76:4a:ed 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '56230639-61e0-4b9f-9bc0-ccf51bb51195', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-482a0957-cb98-415e-82f4-fff1a8f9ce2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ade962c517a483dbfe4bb13386f0006', 'neutron:revision_number': '2', 'neutron:security_group_ids': '54e0c5d5-4c43-48bb-a326-52f9491a60f6 8029b78d-6854-4737-b038-a3f23de8b7f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e93e939-794c-411b-80f2-69fd82927838, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=94c92796-5e8b-402b-bab2-2bb77f2148ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.788 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 94c92796-5e8b-402b-bab2-2bb77f2148ad in datapath 482a0957-cb98-415e-82f4-fff1a8f9ce2d bound to our chassis#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.790 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 482a0957-cb98-415e-82f4-fff1a8f9ce2d#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.806 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e0ce2e-0a97-4274-acdb-70db4882960c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.807 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap482a0957-c1 in ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.809 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap482a0957-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.809 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[47641201-97fc-4a3b-abf6-6aa48a11ca1d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.810 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a5bd7ed7-ba65-4f55-9f32-aea971a98d36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.824 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[a064140f-0a6c-44ff-abb1-647c73b03b4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.849 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f699f0ac-5cb7-4bff-88e9-520f9d88f044]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.882 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[826697b8-d42e-40ea-9ea2-dab9ea5a3c46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.889 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[33ba800a-6ec1-42b3-9a53-287cf1381704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:31 np0005465987 NetworkManager[44910]: <info>  [1759409371.8906] manager: (tap482a0957-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/311)
Oct  2 08:49:31 np0005465987 systemd-udevd[296599]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.924 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1cab5e1a-1c92-407e-ad11-2cf406c8c7aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.927 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9e387fc2-f766-4270-a6b7-2443a8e68959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:31 np0005465987 NetworkManager[44910]: <info>  [1759409371.9511] device (tap482a0957-c0): carrier: link connected
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.956 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[3938b5e8-20d2-4c5a-9428-da94e35a9597]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.975 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[39b2d9b3-825e-4ce8-bf68-ddba06b8defb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap482a0957-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:a3:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 744052, 'reachable_time': 32889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296630, 'error': None, 'target': 'ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:31 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:31.991 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[449ff7e6-33bc-4f11-a33d-53381bf2b6f5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:a35d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 744052, 'tstamp': 744052}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296631, 'error': None, 'target': 'ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:32.006 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f9d3ac-d3a4-4676-919e-6c10576b5e24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap482a0957-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:a3:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 744052, 'reachable_time': 32889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296640, 'error': None, 'target': 'ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:32.051 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa4dd86-f879-4d54-b5fb-10b34393ae99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:32.122 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[92f1575d-7a2c-42fb-a610-888d0139f377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:32.124 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap482a0957-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:32.124 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:32.125 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap482a0957-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:32 np0005465987 nova_compute[230713]: 2025-10-02 12:49:32.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:32 np0005465987 kernel: tap482a0957-c0: entered promiscuous mode
Oct  2 08:49:32 np0005465987 NetworkManager[44910]: <info>  [1759409372.1652] manager: (tap482a0957-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Oct  2 08:49:32 np0005465987 nova_compute[230713]: 2025-10-02 12:49:32.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:32.169 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap482a0957-c0, col_values=(('external_ids', {'iface-id': '1429dcbf-68c2-4c1a-a413-db3c21475646'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:32 np0005465987 nova_compute[230713]: 2025-10-02 12:49:32.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:32 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:32Z|00684|binding|INFO|Releasing lport 1429dcbf-68c2-4c1a-a413-db3c21475646 from this chassis (sb_readonly=0)
Oct  2 08:49:32 np0005465987 nova_compute[230713]: 2025-10-02 12:49:32.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:32.173 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/482a0957-cb98-415e-82f4-fff1a8f9ce2d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/482a0957-cb98-415e-82f4-fff1a8f9ce2d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:32.174 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1d4d42-e90b-49fc-8b3d-da4e26ea53dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:32.175 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-482a0957-cb98-415e-82f4-fff1a8f9ce2d
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/482a0957-cb98-415e-82f4-fff1a8f9ce2d.pid.haproxy
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 482a0957-cb98-415e-82f4-fff1a8f9ce2d
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:49:32 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:32.176 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d', 'env', 'PROCESS_TAG=haproxy-482a0957-cb98-415e-82f4-fff1a8f9ce2d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/482a0957-cb98-415e-82f4-fff1a8f9ce2d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:49:32 np0005465987 nova_compute[230713]: 2025-10-02 12:49:32.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:32.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:32 np0005465987 podman[296706]: 2025-10-02 12:49:32.541641405 +0000 UTC m=+0.050484738 container create d9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 08:49:32 np0005465987 systemd[1]: Started libpod-conmon-d9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d.scope.
Oct  2 08:49:32 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:49:32 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a14bb88e5ec8004753960c1164c246ca6f1548286e500aa5d71714ee63391cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:49:32 np0005465987 podman[296706]: 2025-10-02 12:49:32.515631976 +0000 UTC m=+0.024475359 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:49:32 np0005465987 podman[296706]: 2025-10-02 12:49:32.623424054 +0000 UTC m=+0.132267387 container init d9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 08:49:32 np0005465987 podman[296706]: 2025-10-02 12:49:32.628980253 +0000 UTC m=+0.137823586 container start d9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:49:32 np0005465987 neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d[296720]: [NOTICE]   (296726) : New worker (296728) forked
Oct  2 08:49:32 np0005465987 neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d[296720]: [NOTICE]   (296726) : Loading success.
Oct  2 08:49:32 np0005465987 nova_compute[230713]: 2025-10-02 12:49:32.659 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409372.6586366, 56230639-61e0-4b9f-9bc0-ccf51bb51195 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:49:32 np0005465987 nova_compute[230713]: 2025-10-02 12:49:32.659 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] VM Started (Lifecycle Event)#033[00m
Oct  2 08:49:32 np0005465987 nova_compute[230713]: 2025-10-02 12:49:32.711 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:49:32 np0005465987 nova_compute[230713]: 2025-10-02 12:49:32.715 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409372.6588855, 56230639-61e0-4b9f-9bc0-ccf51bb51195 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:49:32 np0005465987 nova_compute[230713]: 2025-10-02 12:49:32.715 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:49:32 np0005465987 nova_compute[230713]: 2025-10-02 12:49:32.809 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:49:32 np0005465987 nova_compute[230713]: 2025-10-02 12:49:32.813 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:49:32 np0005465987 nova_compute[230713]: 2025-10-02 12:49:32.904 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:49:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:32.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:34.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e373 e373: 3 total, 3 up, 3 in
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.651 2 DEBUG nova.compute.manager [req-ddd8ac28-bf6c-4f46-8e87-10b7b68d533c req-2a5bb63d-64ee-4703-8ca6-b7ea122ffc79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Received event network-vif-plugged-94c92796-5e8b-402b-bab2-2bb77f2148ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.652 2 DEBUG oslo_concurrency.lockutils [req-ddd8ac28-bf6c-4f46-8e87-10b7b68d533c req-2a5bb63d-64ee-4703-8ca6-b7ea122ffc79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.652 2 DEBUG oslo_concurrency.lockutils [req-ddd8ac28-bf6c-4f46-8e87-10b7b68d533c req-2a5bb63d-64ee-4703-8ca6-b7ea122ffc79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.653 2 DEBUG oslo_concurrency.lockutils [req-ddd8ac28-bf6c-4f46-8e87-10b7b68d533c req-2a5bb63d-64ee-4703-8ca6-b7ea122ffc79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.653 2 DEBUG nova.compute.manager [req-ddd8ac28-bf6c-4f46-8e87-10b7b68d533c req-2a5bb63d-64ee-4703-8ca6-b7ea122ffc79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Processing event network-vif-plugged-94c92796-5e8b-402b-bab2-2bb77f2148ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.653 2 DEBUG nova.compute.manager [req-ddd8ac28-bf6c-4f46-8e87-10b7b68d533c req-2a5bb63d-64ee-4703-8ca6-b7ea122ffc79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Received event network-vif-plugged-94c92796-5e8b-402b-bab2-2bb77f2148ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.654 2 DEBUG oslo_concurrency.lockutils [req-ddd8ac28-bf6c-4f46-8e87-10b7b68d533c req-2a5bb63d-64ee-4703-8ca6-b7ea122ffc79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.654 2 DEBUG oslo_concurrency.lockutils [req-ddd8ac28-bf6c-4f46-8e87-10b7b68d533c req-2a5bb63d-64ee-4703-8ca6-b7ea122ffc79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.655 2 DEBUG oslo_concurrency.lockutils [req-ddd8ac28-bf6c-4f46-8e87-10b7b68d533c req-2a5bb63d-64ee-4703-8ca6-b7ea122ffc79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.655 2 DEBUG nova.compute.manager [req-ddd8ac28-bf6c-4f46-8e87-10b7b68d533c req-2a5bb63d-64ee-4703-8ca6-b7ea122ffc79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] No waiting events found dispatching network-vif-plugged-94c92796-5e8b-402b-bab2-2bb77f2148ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.655 2 WARNING nova.compute.manager [req-ddd8ac28-bf6c-4f46-8e87-10b7b68d533c req-2a5bb63d-64ee-4703-8ca6-b7ea122ffc79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Received unexpected event network-vif-plugged-94c92796-5e8b-402b-bab2-2bb77f2148ad for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.656 2 DEBUG nova.compute.manager [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.665 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409374.6642606, 56230639-61e0-4b9f-9bc0-ccf51bb51195 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.665 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.666 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.669 2 INFO nova.virt.libvirt.driver [-] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Instance spawned successfully.#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.669 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.796 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.802 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.803 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.803 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.804 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.805 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.805 2 DEBUG nova.virt.libvirt.driver [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.811 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:49:34 np0005465987 nova_compute[230713]: 2025-10-02 12:49:34.896 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:49:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:34.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:35 np0005465987 nova_compute[230713]: 2025-10-02 12:49:35.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:35 np0005465987 nova_compute[230713]: 2025-10-02 12:49:35.082 2 INFO nova.compute.manager [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Took 11.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:49:35 np0005465987 nova_compute[230713]: 2025-10-02 12:49:35.083 2 DEBUG nova.compute.manager [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:49:35 np0005465987 nova_compute[230713]: 2025-10-02 12:49:35.225 2 INFO nova.compute.manager [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Took 13.02 seconds to build instance.#033[00m
Oct  2 08:49:35 np0005465987 nova_compute[230713]: 2025-10-02 12:49:35.322 2 DEBUG oslo_concurrency.lockutils [None req-ef62780a-9b6e-4050-b50c-7691d569f166 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:36 np0005465987 nova_compute[230713]: 2025-10-02 12:49:36.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:49:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:36.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:49:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:36.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:38.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:38 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:38Z|00685|binding|INFO|Releasing lport 1429dcbf-68c2-4c1a-a413-db3c21475646 from this chassis (sb_readonly=0)
Oct  2 08:49:38 np0005465987 nova_compute[230713]: 2025-10-02 12:49:38.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:38.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:38 np0005465987 nova_compute[230713]: 2025-10-02 12:49:38.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:38 np0005465987 nova_compute[230713]: 2025-10-02 12:49:38.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:49:39 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:39Z|00686|binding|INFO|Releasing lport 1429dcbf-68c2-4c1a-a413-db3c21475646 from this chassis (sb_readonly=0)
Oct  2 08:49:39 np0005465987 nova_compute[230713]: 2025-10-02 12:49:39.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:39 np0005465987 nova_compute[230713]: 2025-10-02 12:49:39.100 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:49:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e374 e374: 3 total, 3 up, 3 in
Oct  2 08:49:39 np0005465987 nova_compute[230713]: 2025-10-02 12:49:39.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:39 np0005465987 NetworkManager[44910]: <info>  [1759409379.3165] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Oct  2 08:49:39 np0005465987 NetworkManager[44910]: <info>  [1759409379.3180] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Oct  2 08:49:39 np0005465987 nova_compute[230713]: 2025-10-02 12:49:39.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:39 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:39Z|00687|binding|INFO|Releasing lport 1429dcbf-68c2-4c1a-a413-db3c21475646 from this chassis (sb_readonly=0)
Oct  2 08:49:39 np0005465987 nova_compute[230713]: 2025-10-02 12:49:39.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:39 np0005465987 nova_compute[230713]: 2025-10-02 12:49:39.844 2 DEBUG nova.compute.manager [req-1a6a293f-fdfe-483d-818c-70dd3d2a6a55 req-3ed8533d-fce8-4aad-a342-5b316dd382e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Received event network-changed-94c92796-5e8b-402b-bab2-2bb77f2148ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:49:39 np0005465987 nova_compute[230713]: 2025-10-02 12:49:39.844 2 DEBUG nova.compute.manager [req-1a6a293f-fdfe-483d-818c-70dd3d2a6a55 req-3ed8533d-fce8-4aad-a342-5b316dd382e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Refreshing instance network info cache due to event network-changed-94c92796-5e8b-402b-bab2-2bb77f2148ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:49:39 np0005465987 nova_compute[230713]: 2025-10-02 12:49:39.844 2 DEBUG oslo_concurrency.lockutils [req-1a6a293f-fdfe-483d-818c-70dd3d2a6a55 req-3ed8533d-fce8-4aad-a342-5b316dd382e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-56230639-61e0-4b9f-9bc0-ccf51bb51195" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:49:39 np0005465987 nova_compute[230713]: 2025-10-02 12:49:39.845 2 DEBUG oslo_concurrency.lockutils [req-1a6a293f-fdfe-483d-818c-70dd3d2a6a55 req-3ed8533d-fce8-4aad-a342-5b316dd382e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-56230639-61e0-4b9f-9bc0-ccf51bb51195" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:49:39 np0005465987 nova_compute[230713]: 2025-10-02 12:49:39.845 2 DEBUG nova.network.neutron [req-1a6a293f-fdfe-483d-818c-70dd3d2a6a55 req-3ed8533d-fce8-4aad-a342-5b316dd382e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Refreshing network info cache for port 94c92796-5e8b-402b-bab2-2bb77f2148ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:49:40 np0005465987 nova_compute[230713]: 2025-10-02 12:49:40.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.003000079s ======
Oct  2 08:49:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:40.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Oct  2 08:49:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:49:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:40.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:49:41 np0005465987 nova_compute[230713]: 2025-10-02 12:49:41.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:49:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:42.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:49:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:49:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:42.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:49:43 np0005465987 nova_compute[230713]: 2025-10-02 12:49:43.380 2 DEBUG nova.network.neutron [req-1a6a293f-fdfe-483d-818c-70dd3d2a6a55 req-3ed8533d-fce8-4aad-a342-5b316dd382e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Updated VIF entry in instance network info cache for port 94c92796-5e8b-402b-bab2-2bb77f2148ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:49:43 np0005465987 nova_compute[230713]: 2025-10-02 12:49:43.381 2 DEBUG nova.network.neutron [req-1a6a293f-fdfe-483d-818c-70dd3d2a6a55 req-3ed8533d-fce8-4aad-a342-5b316dd382e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Updating instance_info_cache with network_info: [{"id": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "address": "fa:16:3e:76:4a:ed", "network": {"id": "482a0957-cb98-415e-82f4-fff1a8f9ce2d", "bridge": "br-int", "label": "tempest-network-smoke--238073404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c92796-5e", "ovs_interfaceid": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:49:43 np0005465987 nova_compute[230713]: 2025-10-02 12:49:43.417 2 DEBUG oslo_concurrency.lockutils [req-1a6a293f-fdfe-483d-818c-70dd3d2a6a55 req-3ed8533d-fce8-4aad-a342-5b316dd382e8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-56230639-61e0-4b9f-9bc0-ccf51bb51195" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:49:43 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:43Z|00688|binding|INFO|Releasing lport 1429dcbf-68c2-4c1a-a413-db3c21475646 from this chassis (sb_readonly=0)
Oct  2 08:49:43 np0005465987 nova_compute[230713]: 2025-10-02 12:49:43.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:43 np0005465987 podman[296738]: 2025-10-02 12:49:43.843425955 +0000 UTC m=+0.052294908 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.142669) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409384142765, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 963, "num_deletes": 258, "total_data_size": 1789407, "memory_usage": 1809584, "flush_reason": "Manual Compaction"}
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409384150604, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 1179227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64047, "largest_seqno": 65005, "table_properties": {"data_size": 1174768, "index_size": 2046, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10372, "raw_average_key_size": 19, "raw_value_size": 1165541, "raw_average_value_size": 2245, "num_data_blocks": 89, "num_entries": 519, "num_filter_entries": 519, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409324, "oldest_key_time": 1759409324, "file_creation_time": 1759409384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 7968 microseconds, and 3569 cpu microseconds.
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.150637) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 1179227 bytes OK
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.150654) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.152759) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.152771) EVENT_LOG_v1 {"time_micros": 1759409384152767, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.152788) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1784496, prev total WAL file size 1784496, number of live WAL files 2.
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.153405) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323536' seq:72057594037927935, type:22 .. '6C6F676D0032353037' seq:0, type:0; will stop at (end)
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(1151KB)], [126(10MB)]
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409384153463, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 12097665, "oldest_snapshot_seqno": -1}
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 8872 keys, 11946323 bytes, temperature: kUnknown
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409384218192, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 11946323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11888371, "index_size": 34671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 231076, "raw_average_key_size": 26, "raw_value_size": 11732092, "raw_average_value_size": 1322, "num_data_blocks": 1349, "num_entries": 8872, "num_filter_entries": 8872, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759409384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.218433) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 11946323 bytes
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.219798) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.6 rd, 184.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.4 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(20.4) write-amplify(10.1) OK, records in: 9404, records dropped: 532 output_compression: NoCompression
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.219813) EVENT_LOG_v1 {"time_micros": 1759409384219806, "job": 80, "event": "compaction_finished", "compaction_time_micros": 64821, "compaction_time_cpu_micros": 28634, "output_level": 6, "num_output_files": 1, "total_output_size": 11946323, "num_input_records": 9404, "num_output_records": 8872, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409384220061, "job": 80, "event": "table_file_deletion", "file_number": 128}
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409384221871, "job": 80, "event": "table_file_deletion", "file_number": 126}
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.153283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.221962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.221968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.221970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.221972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:49:44 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:49:44.221974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:49:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:49:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:44.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:49:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:44.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:45 np0005465987 nova_compute[230713]: 2025-10-02 12:49:45.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:46.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:46 np0005465987 nova_compute[230713]: 2025-10-02 12:49:46.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:46 np0005465987 podman[296759]: 2025-10-02 12:49:46.869336906 +0000 UTC m=+0.066485439 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:49:46 np0005465987 podman[296758]: 2025-10-02 12:49:46.872639345 +0000 UTC m=+0.071648168 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:49:46 np0005465987 podman[296757]: 2025-10-02 12:49:46.905113178 +0000 UTC m=+0.108781546 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct  2 08:49:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:49:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:46.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:49:47 np0005465987 nova_compute[230713]: 2025-10-02 12:49:47.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:48.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:48Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:4a:ed 10.100.0.6
Oct  2 08:49:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:48Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:4a:ed 10.100.0.6
Oct  2 08:49:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:49:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:48.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:49:50 np0005465987 nova_compute[230713]: 2025-10-02 12:49:50.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:50 np0005465987 ovn_controller[129172]: 2025-10-02T12:49:50Z|00689|binding|INFO|Releasing lport 1429dcbf-68c2-4c1a-a413-db3c21475646 from this chassis (sb_readonly=0)
Oct  2 08:49:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:50.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:50 np0005465987 nova_compute[230713]: 2025-10-02 12:49:50.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:50.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:51 np0005465987 nova_compute[230713]: 2025-10-02 12:49:51.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:52.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:52.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:52 np0005465987 nova_compute[230713]: 2025-10-02 12:49:52.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:54.034 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:49:54 np0005465987 nova_compute[230713]: 2025-10-02 12:49:54.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:54.035 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:49:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:54.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:54.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:49:55.038 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:55 np0005465987 nova_compute[230713]: 2025-10-02 12:49:55.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:56 np0005465987 nova_compute[230713]: 2025-10-02 12:49:56.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:49:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:56.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:49:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:56.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:58.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:49:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:58.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:59 np0005465987 nova_compute[230713]: 2025-10-02 12:49:59.495 2 DEBUG nova.compute.manager [req-f5fb26aa-46c2-488e-b9c8-387cc4f6616a req-37fb9175-4a01-43cc-9b1f-34801c346bc9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Received event network-changed-94c92796-5e8b-402b-bab2-2bb77f2148ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:49:59 np0005465987 nova_compute[230713]: 2025-10-02 12:49:59.496 2 DEBUG nova.compute.manager [req-f5fb26aa-46c2-488e-b9c8-387cc4f6616a req-37fb9175-4a01-43cc-9b1f-34801c346bc9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Refreshing instance network info cache due to event network-changed-94c92796-5e8b-402b-bab2-2bb77f2148ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:49:59 np0005465987 nova_compute[230713]: 2025-10-02 12:49:59.496 2 DEBUG oslo_concurrency.lockutils [req-f5fb26aa-46c2-488e-b9c8-387cc4f6616a req-37fb9175-4a01-43cc-9b1f-34801c346bc9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-56230639-61e0-4b9f-9bc0-ccf51bb51195" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:49:59 np0005465987 nova_compute[230713]: 2025-10-02 12:49:59.496 2 DEBUG oslo_concurrency.lockutils [req-f5fb26aa-46c2-488e-b9c8-387cc4f6616a req-37fb9175-4a01-43cc-9b1f-34801c346bc9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-56230639-61e0-4b9f-9bc0-ccf51bb51195" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:49:59 np0005465987 nova_compute[230713]: 2025-10-02 12:49:59.496 2 DEBUG nova.network.neutron [req-f5fb26aa-46c2-488e-b9c8-387cc4f6616a req-37fb9175-4a01-43cc-9b1f-34801c346bc9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Refreshing network info cache for port 94c92796-5e8b-402b-bab2-2bb77f2148ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:49:59 np0005465987 nova_compute[230713]: 2025-10-02 12:49:59.758 2 DEBUG oslo_concurrency.lockutils [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "56230639-61e0-4b9f-9bc0-ccf51bb51195" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:59 np0005465987 nova_compute[230713]: 2025-10-02 12:49:59.759 2 DEBUG oslo_concurrency.lockutils [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:59 np0005465987 nova_compute[230713]: 2025-10-02 12:49:59.759 2 DEBUG oslo_concurrency.lockutils [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:59 np0005465987 nova_compute[230713]: 2025-10-02 12:49:59.759 2 DEBUG oslo_concurrency.lockutils [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:59 np0005465987 nova_compute[230713]: 2025-10-02 12:49:59.760 2 DEBUG oslo_concurrency.lockutils [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:59 np0005465987 nova_compute[230713]: 2025-10-02 12:49:59.761 2 INFO nova.compute.manager [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Terminating instance#033[00m
Oct  2 08:49:59 np0005465987 nova_compute[230713]: 2025-10-02 12:49:59.761 2 DEBUG nova.compute.manager [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:00 np0005465987 kernel: tap94c92796-5e (unregistering): left promiscuous mode
Oct  2 08:50:00 np0005465987 NetworkManager[44910]: <info>  [1759409400.1723] device (tap94c92796-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:50:00Z|00690|binding|INFO|Releasing lport 94c92796-5e8b-402b-bab2-2bb77f2148ad from this chassis (sb_readonly=0)
Oct  2 08:50:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:50:00Z|00691|binding|INFO|Setting lport 94c92796-5e8b-402b-bab2-2bb77f2148ad down in Southbound
Oct  2 08:50:00 np0005465987 ovn_controller[129172]: 2025-10-02T12:50:00Z|00692|binding|INFO|Removing iface tap94c92796-5e ovn-installed in OVS
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:00.225 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:4a:ed 10.100.0.6'], port_security=['fa:16:3e:76:4a:ed 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '56230639-61e0-4b9f-9bc0-ccf51bb51195', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-482a0957-cb98-415e-82f4-fff1a8f9ce2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ade962c517a483dbfe4bb13386f0006', 'neutron:revision_number': '4', 'neutron:security_group_ids': '54e0c5d5-4c43-48bb-a326-52f9491a60f6 8029b78d-6854-4737-b038-a3f23de8b7f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e93e939-794c-411b-80f2-69fd82927838, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=94c92796-5e8b-402b-bab2-2bb77f2148ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:50:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:00.226 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 94c92796-5e8b-402b-bab2-2bb77f2148ad in datapath 482a0957-cb98-415e-82f4-fff1a8f9ce2d unbound from our chassis#033[00m
Oct  2 08:50:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:00.227 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 482a0957-cb98-415e-82f4-fff1a8f9ce2d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:50:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:00.228 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1bae2b69-9eb6-4e21-9237-ac3112509ae9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:00.229 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d namespace which is not needed anymore#033[00m
Oct  2 08:50:00 np0005465987 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000b0.scope: Deactivated successfully.
Oct  2 08:50:00 np0005465987 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000b0.scope: Consumed 14.331s CPU time.
Oct  2 08:50:00 np0005465987 systemd-machined[188335]: Machine qemu-80-instance-000000b0 terminated.
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.407 2 INFO nova.virt.libvirt.driver [-] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Instance destroyed successfully.#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.407 2 DEBUG nova.objects.instance [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'resources' on Instance uuid 56230639-61e0-4b9f-9bc0-ccf51bb51195 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:00 np0005465987 neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d[296720]: [NOTICE]   (296726) : haproxy version is 2.8.14-c23fe91
Oct  2 08:50:00 np0005465987 neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d[296720]: [NOTICE]   (296726) : path to executable is /usr/sbin/haproxy
Oct  2 08:50:00 np0005465987 neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d[296720]: [WARNING]  (296726) : Exiting Master process...
Oct  2 08:50:00 np0005465987 neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d[296720]: [WARNING]  (296726) : Exiting Master process...
Oct  2 08:50:00 np0005465987 neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d[296720]: [ALERT]    (296726) : Current worker (296728) exited with code 143 (Terminated)
Oct  2 08:50:00 np0005465987 neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d[296720]: [WARNING]  (296726) : All workers exited. Exiting... (0)
Oct  2 08:50:00 np0005465987 systemd[1]: libpod-d9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d.scope: Deactivated successfully.
Oct  2 08:50:00 np0005465987 podman[296843]: 2025-10-02 12:50:00.450178889 +0000 UTC m=+0.144387894 container died d9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:50:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.468 2 DEBUG nova.virt.libvirt.vif [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:49:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1043137860',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-1043137860',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ac',id=176,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI76ELGXWiA+GK+T9UIVIpe3RF7FnLdHt1r64FjSu6VP3p+NhW/jR9hBYQ+9GIG1srG7vAu2r17A8gA4IFD8GdKb+uaIKikiK9XRxTWoso5Ldl7KapMPNhvgKJ6EzOx9dQ==',key_name='tempest-TestSecurityGroupsBasicOps-359554467',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:49:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-x6x8yyca',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:49:35Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=56230639-61e0-4b9f-9bc0-ccf51bb51195,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "address": "fa:16:3e:76:4a:ed", "network": {"id": "482a0957-cb98-415e-82f4-fff1a8f9ce2d", "bridge": "br-int", "label": "tempest-network-smoke--238073404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c92796-5e", "ovs_interfaceid": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.468 2 DEBUG nova.network.os_vif_util [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "address": "fa:16:3e:76:4a:ed", "network": {"id": "482a0957-cb98-415e-82f4-fff1a8f9ce2d", "bridge": "br-int", "label": "tempest-network-smoke--238073404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c92796-5e", "ovs_interfaceid": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.469 2 DEBUG nova.network.os_vif_util [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:4a:ed,bridge_name='br-int',has_traffic_filtering=True,id=94c92796-5e8b-402b-bab2-2bb77f2148ad,network=Network(482a0957-cb98-415e-82f4-fff1a8f9ce2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c92796-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.469 2 DEBUG os_vif [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:4a:ed,bridge_name='br-int',has_traffic_filtering=True,id=94c92796-5e8b-402b-bab2-2bb77f2148ad,network=Network(482a0957-cb98-415e-82f4-fff1a8f9ce2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c92796-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.471 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94c92796-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.475 2 INFO os_vif [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:4a:ed,bridge_name='br-int',has_traffic_filtering=True,id=94c92796-5e8b-402b-bab2-2bb77f2148ad,network=Network(482a0957-cb98-415e-82f4-fff1a8f9ce2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c92796-5e')#033[00m
Oct  2 08:50:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:00.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:00 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 08:50:00 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d-userdata-shm.mount: Deactivated successfully.
Oct  2 08:50:00 np0005465987 systemd[1]: var-lib-containers-storage-overlay-1a14bb88e5ec8004753960c1164c246ca6f1548286e500aa5d71714ee63391cf-merged.mount: Deactivated successfully.
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.717 2 DEBUG nova.compute.manager [req-26991f72-b2e9-4c9d-8e02-2e1bb33e0530 req-62c24c1f-baac-4413-a6d8-325dbd6ff6bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Received event network-vif-unplugged-94c92796-5e8b-402b-bab2-2bb77f2148ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.718 2 DEBUG oslo_concurrency.lockutils [req-26991f72-b2e9-4c9d-8e02-2e1bb33e0530 req-62c24c1f-baac-4413-a6d8-325dbd6ff6bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.719 2 DEBUG oslo_concurrency.lockutils [req-26991f72-b2e9-4c9d-8e02-2e1bb33e0530 req-62c24c1f-baac-4413-a6d8-325dbd6ff6bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.719 2 DEBUG oslo_concurrency.lockutils [req-26991f72-b2e9-4c9d-8e02-2e1bb33e0530 req-62c24c1f-baac-4413-a6d8-325dbd6ff6bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.719 2 DEBUG nova.compute.manager [req-26991f72-b2e9-4c9d-8e02-2e1bb33e0530 req-62c24c1f-baac-4413-a6d8-325dbd6ff6bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] No waiting events found dispatching network-vif-unplugged-94c92796-5e8b-402b-bab2-2bb77f2148ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:50:00 np0005465987 nova_compute[230713]: 2025-10-02 12:50:00.720 2 DEBUG nova.compute.manager [req-26991f72-b2e9-4c9d-8e02-2e1bb33e0530 req-62c24c1f-baac-4413-a6d8-325dbd6ff6bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Received event network-vif-unplugged-94c92796-5e8b-402b-bab2-2bb77f2148ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:50:00 np0005465987 podman[296843]: 2025-10-02 12:50:00.894032951 +0000 UTC m=+0.588241956 container cleanup d9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:50:00 np0005465987 systemd[1]: libpod-conmon-d9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d.scope: Deactivated successfully.
Oct  2 08:50:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:50:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:00.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:50:01 np0005465987 podman[296902]: 2025-10-02 12:50:01.105190799 +0000 UTC m=+0.186728622 container remove d9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:50:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:01.112 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ecf99b-e8de-4065-a346-db1b29aeefb0]: (4, ('Thu Oct  2 12:50:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d (d9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d)\nd9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d\nThu Oct  2 12:50:00 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d (d9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d)\nd9d959700f90371d6adde626ba5cf29206856e8a956e6f5e34d5447a5275e58d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:01.114 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3900709d-804d-41fd-a5ca-d9b84d82d2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:01.115 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap482a0957-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:01 np0005465987 nova_compute[230713]: 2025-10-02 12:50:01.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:01 np0005465987 kernel: tap482a0957-c0: left promiscuous mode
Oct  2 08:50:01 np0005465987 nova_compute[230713]: 2025-10-02 12:50:01.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:01.122 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6158af29-e252-4098-9bec-8512b7488fa8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:01 np0005465987 nova_compute[230713]: 2025-10-02 12:50:01.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:01.150 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bdfc7a76-f32d-46e0-a1bc-4aab42e170f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:01.151 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[11d34db0-488f-43b7-bf0c-cb005f820860]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:01.168 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce3f791-34fe-471e-8375-045ca624b582]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 744045, 'reachable_time': 36526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296916, 'error': None, 'target': 'ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:01 np0005465987 systemd[1]: run-netns-ovnmeta\x2d482a0957\x2dcb98\x2d415e\x2d82f4\x2dfff1a8f9ce2d.mount: Deactivated successfully.
Oct  2 08:50:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:01.171 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-482a0957-cb98-415e-82f4-fff1a8f9ce2d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:50:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:01.171 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[5f144795-7b1c-43e5-8717-d8db1234bf6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:01 np0005465987 nova_compute[230713]: 2025-10-02 12:50:01.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:01 np0005465987 nova_compute[230713]: 2025-10-02 12:50:01.892 2 DEBUG nova.network.neutron [req-f5fb26aa-46c2-488e-b9c8-387cc4f6616a req-37fb9175-4a01-43cc-9b1f-34801c346bc9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Updated VIF entry in instance network info cache for port 94c92796-5e8b-402b-bab2-2bb77f2148ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:01 np0005465987 nova_compute[230713]: 2025-10-02 12:50:01.893 2 DEBUG nova.network.neutron [req-f5fb26aa-46c2-488e-b9c8-387cc4f6616a req-37fb9175-4a01-43cc-9b1f-34801c346bc9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Updating instance_info_cache with network_info: [{"id": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "address": "fa:16:3e:76:4a:ed", "network": {"id": "482a0957-cb98-415e-82f4-fff1a8f9ce2d", "bridge": "br-int", "label": "tempest-network-smoke--238073404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c92796-5e", "ovs_interfaceid": "94c92796-5e8b-402b-bab2-2bb77f2148ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:02 np0005465987 nova_compute[230713]: 2025-10-02 12:50:02.056 2 DEBUG oslo_concurrency.lockutils [req-f5fb26aa-46c2-488e-b9c8-387cc4f6616a req-37fb9175-4a01-43cc-9b1f-34801c346bc9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-56230639-61e0-4b9f-9bc0-ccf51bb51195" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:02.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:50:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:02.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:50:03 np0005465987 nova_compute[230713]: 2025-10-02 12:50:03.004 2 DEBUG nova.compute.manager [req-0be2959c-61e4-44ac-b65e-0f9af2e292fd req-3878abe2-1f38-45f2-a567-d575b77505a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Received event network-vif-plugged-94c92796-5e8b-402b-bab2-2bb77f2148ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:03 np0005465987 nova_compute[230713]: 2025-10-02 12:50:03.004 2 DEBUG oslo_concurrency.lockutils [req-0be2959c-61e4-44ac-b65e-0f9af2e292fd req-3878abe2-1f38-45f2-a567-d575b77505a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:03 np0005465987 nova_compute[230713]: 2025-10-02 12:50:03.004 2 DEBUG oslo_concurrency.lockutils [req-0be2959c-61e4-44ac-b65e-0f9af2e292fd req-3878abe2-1f38-45f2-a567-d575b77505a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:03 np0005465987 nova_compute[230713]: 2025-10-02 12:50:03.004 2 DEBUG oslo_concurrency.lockutils [req-0be2959c-61e4-44ac-b65e-0f9af2e292fd req-3878abe2-1f38-45f2-a567-d575b77505a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:03 np0005465987 nova_compute[230713]: 2025-10-02 12:50:03.004 2 DEBUG nova.compute.manager [req-0be2959c-61e4-44ac-b65e-0f9af2e292fd req-3878abe2-1f38-45f2-a567-d575b77505a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] No waiting events found dispatching network-vif-plugged-94c92796-5e8b-402b-bab2-2bb77f2148ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:50:03 np0005465987 nova_compute[230713]: 2025-10-02 12:50:03.005 2 WARNING nova.compute.manager [req-0be2959c-61e4-44ac-b65e-0f9af2e292fd req-3878abe2-1f38-45f2-a567-d575b77505a1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Received unexpected event network-vif-plugged-94c92796-5e8b-402b-bab2-2bb77f2148ad for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:50:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:04.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:04 np0005465987 nova_compute[230713]: 2025-10-02 12:50:04.981 2 INFO nova.virt.libvirt.driver [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Deleting instance files /var/lib/nova/instances/56230639-61e0-4b9f-9bc0-ccf51bb51195_del#033[00m
Oct  2 08:50:04 np0005465987 nova_compute[230713]: 2025-10-02 12:50:04.982 2 INFO nova.virt.libvirt.driver [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Deletion of /var/lib/nova/instances/56230639-61e0-4b9f-9bc0-ccf51bb51195_del complete#033[00m
Oct  2 08:50:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:05.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:05 np0005465987 nova_compute[230713]: 2025-10-02 12:50:05.227 2 INFO nova.compute.manager [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Took 5.46 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:50:05 np0005465987 nova_compute[230713]: 2025-10-02 12:50:05.227 2 DEBUG oslo.service.loopingcall [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:50:05 np0005465987 nova_compute[230713]: 2025-10-02 12:50:05.228 2 DEBUG nova.compute.manager [-] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:50:05 np0005465987 nova_compute[230713]: 2025-10-02 12:50:05.228 2 DEBUG nova.network.neutron [-] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:50:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:05 np0005465987 nova_compute[230713]: 2025-10-02 12:50:05.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:06 np0005465987 nova_compute[230713]: 2025-10-02 12:50:06.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:06.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:50:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:07.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:50:07 np0005465987 nova_compute[230713]: 2025-10-02 12:50:07.036 2 DEBUG nova.compute.manager [req-ad6449cc-c241-4f4e-b700-9b5aa78d4312 req-4938fe9b-f1cf-4cb0-8f31-020630420795 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Received event network-vif-deleted-94c92796-5e8b-402b-bab2-2bb77f2148ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:07 np0005465987 nova_compute[230713]: 2025-10-02 12:50:07.037 2 INFO nova.compute.manager [req-ad6449cc-c241-4f4e-b700-9b5aa78d4312 req-4938fe9b-f1cf-4cb0-8f31-020630420795 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Neutron deleted interface 94c92796-5e8b-402b-bab2-2bb77f2148ad; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:50:07 np0005465987 nova_compute[230713]: 2025-10-02 12:50:07.037 2 DEBUG nova.network.neutron [req-ad6449cc-c241-4f4e-b700-9b5aa78d4312 req-4938fe9b-f1cf-4cb0-8f31-020630420795 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:07 np0005465987 nova_compute[230713]: 2025-10-02 12:50:07.089 2 DEBUG nova.network.neutron [-] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:07 np0005465987 nova_compute[230713]: 2025-10-02 12:50:07.281 2 DEBUG nova.compute.manager [req-ad6449cc-c241-4f4e-b700-9b5aa78d4312 req-4938fe9b-f1cf-4cb0-8f31-020630420795 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Detach interface failed, port_id=94c92796-5e8b-402b-bab2-2bb77f2148ad, reason: Instance 56230639-61e0-4b9f-9bc0-ccf51bb51195 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:50:07 np0005465987 nova_compute[230713]: 2025-10-02 12:50:07.285 2 INFO nova.compute.manager [-] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Took 2.06 seconds to deallocate network for instance.#033[00m
Oct  2 08:50:07 np0005465987 nova_compute[230713]: 2025-10-02 12:50:07.497 2 DEBUG oslo_concurrency.lockutils [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:07 np0005465987 nova_compute[230713]: 2025-10-02 12:50:07.498 2 DEBUG oslo_concurrency.lockutils [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:08 np0005465987 nova_compute[230713]: 2025-10-02 12:50:08.342 2 DEBUG oslo_concurrency.processutils [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:08.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:50:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3377597495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:50:08 np0005465987 nova_compute[230713]: 2025-10-02 12:50:08.770 2 DEBUG oslo_concurrency.processutils [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:08 np0005465987 nova_compute[230713]: 2025-10-02 12:50:08.776 2 DEBUG nova.compute.provider_tree [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:50:08 np0005465987 nova_compute[230713]: 2025-10-02 12:50:08.857 2 DEBUG nova.scheduler.client.report [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:50:08 np0005465987 nova_compute[230713]: 2025-10-02 12:50:08.965 2 DEBUG oslo_concurrency.lockutils [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:09.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:09 np0005465987 nova_compute[230713]: 2025-10-02 12:50:09.117 2 INFO nova.scheduler.client.report [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Deleted allocations for instance 56230639-61e0-4b9f-9bc0-ccf51bb51195#033[00m
Oct  2 08:50:09 np0005465987 nova_compute[230713]: 2025-10-02 12:50:09.376 2 DEBUG oslo_concurrency.lockutils [None req-a2930270-e51b-4514-8a95-4214700143cb 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "56230639-61e0-4b9f-9bc0-ccf51bb51195" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:10 np0005465987 nova_compute[230713]: 2025-10-02 12:50:10.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:50:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:10.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:50:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:11.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:11 np0005465987 nova_compute[230713]: 2025-10-02 12:50:11.100 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:11 np0005465987 nova_compute[230713]: 2025-10-02 12:50:11.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:12.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:13.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:13 np0005465987 nova_compute[230713]: 2025-10-02 12:50:13.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:13 np0005465987 nova_compute[230713]: 2025-10-02 12:50:13.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:50:13 np0005465987 nova_compute[230713]: 2025-10-02 12:50:13.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:50:13 np0005465987 nova_compute[230713]: 2025-10-02 12:50:13.999 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:50:14 np0005465987 nova_compute[230713]: 2025-10-02 12:50:14.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:14 np0005465987 nova_compute[230713]: 2025-10-02 12:50:14.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:14.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:14 np0005465987 podman[296942]: 2025-10-02 12:50:14.832811354 +0000 UTC m=+0.053414617 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  2 08:50:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:15.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:15 np0005465987 nova_compute[230713]: 2025-10-02 12:50:15.405 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409400.4042647, 56230639-61e0-4b9f-9bc0-ccf51bb51195 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:50:15 np0005465987 nova_compute[230713]: 2025-10-02 12:50:15.406 2 INFO nova.compute.manager [-] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:50:15 np0005465987 nova_compute[230713]: 2025-10-02 12:50:15.422 2 DEBUG nova.compute.manager [None req-8e48eb23-f464-4210-9560-cb4d8cad39e8 - - - - - -] [instance: 56230639-61e0-4b9f-9bc0-ccf51bb51195] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:50:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:15 np0005465987 nova_compute[230713]: 2025-10-02 12:50:15.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:16 np0005465987 nova_compute[230713]: 2025-10-02 12:50:16.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:16.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:50:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:17.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:50:17 np0005465987 podman[297093]: 2025-10-02 12:50:17.876502814 +0000 UTC m=+0.071657817 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:50:17 np0005465987 podman[297094]: 2025-10-02 12:50:17.896971205 +0000 UTC m=+0.098182621 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct  2 08:50:17 np0005465987 podman[297092]: 2025-10-02 12:50:17.905740271 +0000 UTC m=+0.111298484 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:50:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:50:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:50:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:50:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e375 e375: 3 total, 3 up, 3 in
Oct  2 08:50:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:18.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:18 np0005465987 nova_compute[230713]: 2025-10-02 12:50:18.992 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:19.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e376 e376: 3 total, 3 up, 3 in
Oct  2 08:50:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:20 np0005465987 nova_compute[230713]: 2025-10-02 12:50:20.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:50:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:20.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:50:20 np0005465987 nova_compute[230713]: 2025-10-02 12:50:20.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:20 np0005465987 nova_compute[230713]: 2025-10-02 12:50:20.990 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:20 np0005465987 nova_compute[230713]: 2025-10-02 12:50:20.991 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:20 np0005465987 nova_compute[230713]: 2025-10-02 12:50:20.991 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:20 np0005465987 nova_compute[230713]: 2025-10-02 12:50:20.991 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:50:20 np0005465987 nova_compute[230713]: 2025-10-02 12:50:20.991 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:50:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:21.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:50:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:50:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/425157005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.433 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e377 e377: 3 total, 3 up, 3 in
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.602 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.603 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4370MB free_disk=20.855426788330078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.604 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.604 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.735 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.735 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.775 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.805 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.805 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.820 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.841 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:50:21 np0005465987 nova_compute[230713]: 2025-10-02 12:50:21.869 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:50:22 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3775429891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:50:22 np0005465987 nova_compute[230713]: 2025-10-02 12:50:22.279 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:22 np0005465987 nova_compute[230713]: 2025-10-02 12:50:22.285 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:50:22 np0005465987 nova_compute[230713]: 2025-10-02 12:50:22.364 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:50:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:22.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:22 np0005465987 nova_compute[230713]: 2025-10-02 12:50:22.565 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:50:22 np0005465987 nova_compute[230713]: 2025-10-02 12:50:22.565 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:50:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:23.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:50:23 np0005465987 nova_compute[230713]: 2025-10-02 12:50:23.567 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:23 np0005465987 nova_compute[230713]: 2025-10-02 12:50:23.567 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:50:23 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.185053) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409424185190, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 721, "num_deletes": 250, "total_data_size": 1183116, "memory_usage": 1198096, "flush_reason": "Manual Compaction"}
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409424194695, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 780614, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65010, "largest_seqno": 65726, "table_properties": {"data_size": 777105, "index_size": 1352, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7440, "raw_average_key_size": 17, "raw_value_size": 769916, "raw_average_value_size": 1803, "num_data_blocks": 59, "num_entries": 427, "num_filter_entries": 427, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409384, "oldest_key_time": 1759409384, "file_creation_time": 1759409424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 9749 microseconds, and 4420 cpu microseconds.
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.194816) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 780614 bytes OK
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.194850) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.196787) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.196817) EVENT_LOG_v1 {"time_micros": 1759409424196807, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.196849) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 1179247, prev total WAL file size 1179247, number of live WAL files 2.
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.197685) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353031' seq:0, type:0; will stop at (end)
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(762KB)], [129(11MB)]
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409424197781, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 12726937, "oldest_snapshot_seqno": -1}
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 8782 keys, 11639378 bytes, temperature: kUnknown
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409424308167, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 11639378, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11582204, "index_size": 34085, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 230985, "raw_average_key_size": 26, "raw_value_size": 11427540, "raw_average_value_size": 1301, "num_data_blocks": 1306, "num_entries": 8782, "num_filter_entries": 8782, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759409424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.308426) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11639378 bytes
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.310058) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.2 rd, 105.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(31.2) write-amplify(14.9) OK, records in: 9299, records dropped: 517 output_compression: NoCompression
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.310092) EVENT_LOG_v1 {"time_micros": 1759409424310081, "job": 82, "event": "compaction_finished", "compaction_time_micros": 110458, "compaction_time_cpu_micros": 33493, "output_level": 6, "num_output_files": 1, "total_output_size": 11639378, "num_input_records": 9299, "num_output_records": 8782, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409424310316, "job": 82, "event": "table_file_deletion", "file_number": 131}
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409424312183, "job": 82, "event": "table_file_deletion", "file_number": 129}
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.197538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.312228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.312233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.312234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.312236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:24 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:50:24.312237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:24.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:24 np0005465987 nova_compute[230713]: 2025-10-02 12:50:24.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:24 np0005465987 nova_compute[230713]: 2025-10-02 12:50:24.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:25.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:25 np0005465987 nova_compute[230713]: 2025-10-02 12:50:25.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:26 np0005465987 nova_compute[230713]: 2025-10-02 12:50:26.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:26.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:50:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:27.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:50:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:27.975 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:27.975 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:27.975 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:28 np0005465987 nova_compute[230713]: 2025-10-02 12:50:28.045 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "80938103-e53b-4780-b241-fc1e8a76f526" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:28 np0005465987 nova_compute[230713]: 2025-10-02 12:50:28.046 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:28 np0005465987 nova_compute[230713]: 2025-10-02 12:50:28.068 2 DEBUG nova.compute.manager [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:50:28 np0005465987 nova_compute[230713]: 2025-10-02 12:50:28.153 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:28 np0005465987 nova_compute[230713]: 2025-10-02 12:50:28.154 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:28 np0005465987 nova_compute[230713]: 2025-10-02 12:50:28.159 2 DEBUG nova.virt.hardware [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:50:28 np0005465987 nova_compute[230713]: 2025-10-02 12:50:28.159 2 INFO nova.compute.claims [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:50:28 np0005465987 nova_compute[230713]: 2025-10-02 12:50:28.260 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:28.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:50:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/569344712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:50:28 np0005465987 nova_compute[230713]: 2025-10-02 12:50:28.749 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:28 np0005465987 nova_compute[230713]: 2025-10-02 12:50:28.755 2 DEBUG nova.compute.provider_tree [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:50:28 np0005465987 nova_compute[230713]: 2025-10-02 12:50:28.905 2 DEBUG nova.scheduler.client.report [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:50:28 np0005465987 nova_compute[230713]: 2025-10-02 12:50:28.965 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:28 np0005465987 nova_compute[230713]: 2025-10-02 12:50:28.966 2 DEBUG nova.compute.manager [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.027 2 DEBUG nova.compute.manager [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.027 2 DEBUG nova.network.neutron [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:50:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:29.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.055 2 INFO nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.090 2 DEBUG nova.compute.manager [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.215 2 DEBUG nova.compute.manager [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.216 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.217 2 INFO nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Creating image(s)
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.248 2 DEBUG nova.storage.rbd_utils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 80938103-e53b-4780-b241-fc1e8a76f526_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.281 2 DEBUG nova.storage.rbd_utils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 80938103-e53b-4780-b241-fc1e8a76f526_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.311 2 DEBUG nova.storage.rbd_utils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 80938103-e53b-4780-b241-fc1e8a76f526_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.316 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.350 2 DEBUG nova.policy [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16730f38111542e58a05fb4deb2b3914', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5ade962c517a483dbfe4bb13386f0006', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.394 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.395 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.395 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.396 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.426 2 DEBUG nova.storage.rbd_utils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 80938103-e53b-4780-b241-fc1e8a76f526_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:50:29 np0005465987 nova_compute[230713]: 2025-10-02 12:50:29.430 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 80938103-e53b-4780-b241-fc1e8a76f526_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:50:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 e378: 3 total, 3 up, 3 in
Oct  2 08:50:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:30 np0005465987 nova_compute[230713]: 2025-10-02 12:50:30.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:50:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:30.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:31.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:31 np0005465987 nova_compute[230713]: 2025-10-02 12:50:31.119 2 DEBUG nova.network.neutron [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Successfully created port: 27dd1a62-ec6c-4786-bb38-37a5553ad6e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:50:31 np0005465987 nova_compute[230713]: 2025-10-02 12:50:31.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:50:31 np0005465987 nova_compute[230713]: 2025-10-02 12:50:31.543 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 80938103-e53b-4780-b241-fc1e8a76f526_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:50:31 np0005465987 nova_compute[230713]: 2025-10-02 12:50:31.609 2 DEBUG nova.storage.rbd_utils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] resizing rbd image 80938103-e53b-4780-b241-fc1e8a76f526_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:50:31 np0005465987 nova_compute[230713]: 2025-10-02 12:50:31.736 2 DEBUG nova.objects.instance [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'migration_context' on Instance uuid 80938103-e53b-4780-b241-fc1e8a76f526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:50:31 np0005465987 nova_compute[230713]: 2025-10-02 12:50:31.781 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:50:31 np0005465987 nova_compute[230713]: 2025-10-02 12:50:31.781 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Ensure instance console log exists: /var/lib/nova/instances/80938103-e53b-4780-b241-fc1e8a76f526/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:50:31 np0005465987 nova_compute[230713]: 2025-10-02 12:50:31.782 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:50:31 np0005465987 nova_compute[230713]: 2025-10-02 12:50:31.783 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:50:31 np0005465987 nova_compute[230713]: 2025-10-02 12:50:31.783 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:50:31 np0005465987 nova_compute[230713]: 2025-10-02 12:50:31.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:50:31 np0005465987 nova_compute[230713]: 2025-10-02 12:50:31.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:50:32 np0005465987 nova_compute[230713]: 2025-10-02 12:50:32.234 2 DEBUG nova.network.neutron [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Successfully updated port: 27dd1a62-ec6c-4786-bb38-37a5553ad6e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:50:32 np0005465987 nova_compute[230713]: 2025-10-02 12:50:32.455 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:50:32 np0005465987 nova_compute[230713]: 2025-10-02 12:50:32.455 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquired lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:50:32 np0005465987 nova_compute[230713]: 2025-10-02 12:50:32.455 2 DEBUG nova.network.neutron [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:50:32 np0005465987 nova_compute[230713]: 2025-10-02 12:50:32.492 2 DEBUG nova.compute.manager [req-acb299ea-5cd0-4450-9dfa-87baded51d72 req-6b9504bc-e8a8-4596-9851-2e0be56517a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Received event network-changed-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:50:32 np0005465987 nova_compute[230713]: 2025-10-02 12:50:32.493 2 DEBUG nova.compute.manager [req-acb299ea-5cd0-4450-9dfa-87baded51d72 req-6b9504bc-e8a8-4596-9851-2e0be56517a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Refreshing instance network info cache due to event network-changed-27dd1a62-ec6c-4786-bb38-37a5553ad6e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:50:32 np0005465987 nova_compute[230713]: 2025-10-02 12:50:32.493 2 DEBUG oslo_concurrency.lockutils [req-acb299ea-5cd0-4450-9dfa-87baded51d72 req-6b9504bc-e8a8-4596-9851-2e0be56517a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:50:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:32.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:32 np0005465987 nova_compute[230713]: 2025-10-02 12:50:32.945 2 DEBUG nova.network.neutron [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:50:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:50:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:33.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:50:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:50:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:34.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:50:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:35.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.362 2 DEBUG nova.network.neutron [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Updating instance_info_cache with network_info: [{"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.436 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Releasing lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.436 2 DEBUG nova.compute.manager [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Instance network_info: |[{"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.437 2 DEBUG oslo_concurrency.lockutils [req-acb299ea-5cd0-4450-9dfa-87baded51d72 req-6b9504bc-e8a8-4596-9851-2e0be56517a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.437 2 DEBUG nova.network.neutron [req-acb299ea-5cd0-4450-9dfa-87baded51d72 req-6b9504bc-e8a8-4596-9851-2e0be56517a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Refreshing network info cache for port 27dd1a62-ec6c-4786-bb38-37a5553ad6e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.439 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Start _get_guest_xml network_info=[{"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.443 2 WARNING nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.448 2 DEBUG nova.virt.libvirt.host [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.449 2 DEBUG nova.virt.libvirt.host [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.459 2 DEBUG nova.virt.libvirt.host [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.460 2 DEBUG nova.virt.libvirt.host [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:50:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.461 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.461 2 DEBUG nova.virt.hardware [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.462 2 DEBUG nova.virt.hardware [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.462 2 DEBUG nova.virt.hardware [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.462 2 DEBUG nova.virt.hardware [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.462 2 DEBUG nova.virt.hardware [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.462 2 DEBUG nova.virt.hardware [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.463 2 DEBUG nova.virt.hardware [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.463 2 DEBUG nova.virt.hardware [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.463 2 DEBUG nova.virt.hardware [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.463 2 DEBUG nova.virt.hardware [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.464 2 DEBUG nova.virt.hardware [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.467 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:50:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:50:35 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4256102784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.903 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.928 2 DEBUG nova.storage.rbd_utils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 80938103-e53b-4780-b241-fc1e8a76f526_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:50:35 np0005465987 nova_compute[230713]: 2025-10-02 12:50:35.932 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:50:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/28671049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.367 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.369 2 DEBUG nova.virt.libvirt.vif [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:50:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-471874977',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-471874977',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ac',id=179,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNswVb0qzE60u4BIwTdoxN1wCq1mjTg0LI0f0dRfL9Dy5OWE5514P4onN+QOd1Rw6rOzpCQmFmdmXCTuqTp4M4QoUJGeM1vbbJ7M/8I3p45f/7+fxTSWHkhjk1j/aCR05g==',key_name='tempest-TestSecurityGroupsBasicOps-605389685',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-g4qiumxb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:29Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=80938103-e53b-4780-b241-fc1e8a76f526,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.369 2 DEBUG nova.network.os_vif_util [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.370 2 DEBUG nova.network.os_vif_util [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:fa:0e,bridge_name='br-int',has_traffic_filtering=True,id=27dd1a62-ec6c-4786-bb38-37a5553ad6e1,network=Network(9da548d8-bbbf-425f-9d2c-5b231ba235e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27dd1a62-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.371 2 DEBUG nova.objects.instance [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'pci_devices' on Instance uuid 80938103-e53b-4780-b241-fc1e8a76f526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.408 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  <uuid>80938103-e53b-4780-b241-fc1e8a76f526</uuid>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  <name>instance-000000b3</name>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-471874977</nova:name>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:50:35</nova:creationTime>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <nova:user uuid="16730f38111542e58a05fb4deb2b3914">tempest-TestSecurityGroupsBasicOps-1031871880-project-member</nova:user>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <nova:project uuid="5ade962c517a483dbfe4bb13386f0006">tempest-TestSecurityGroupsBasicOps-1031871880</nova:project>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <nova:port uuid="27dd1a62-ec6c-4786-bb38-37a5553ad6e1">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <entry name="serial">80938103-e53b-4780-b241-fc1e8a76f526</entry>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <entry name="uuid">80938103-e53b-4780-b241-fc1e8a76f526</entry>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/80938103-e53b-4780-b241-fc1e8a76f526_disk">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/80938103-e53b-4780-b241-fc1e8a76f526_disk.config">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:0a:fa:0e"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <target dev="tap27dd1a62-ec"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/80938103-e53b-4780-b241-fc1e8a76f526/console.log" append="off"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:50:36 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:50:36 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:50:36 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:50:36 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.409 2 DEBUG nova.compute.manager [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Preparing to wait for external event network-vif-plugged-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.410 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "80938103-e53b-4780-b241-fc1e8a76f526-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.410 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.410 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.411 2 DEBUG nova.virt.libvirt.vif [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:50:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-471874977',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-471874977',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ac',id=179,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNswVb0qzE60u4BIwTdoxN1wCq1mjTg0LI0f0dRfL9Dy5OWE5514P4onN+QOd1Rw6rOzpCQmFmdmXCTuqTp4M4QoUJGeM1vbbJ7M/8I3p45f/7+fxTSWHkhjk1j/aCR05g==',key_name='tempest-TestSecurityGroupsBasicOps-605389685',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-g4qiumxb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:29Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=80938103-e53b-4780-b241-fc1e8a76f526,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.412 2 DEBUG nova.network.os_vif_util [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.412 2 DEBUG nova.network.os_vif_util [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:fa:0e,bridge_name='br-int',has_traffic_filtering=True,id=27dd1a62-ec6c-4786-bb38-37a5553ad6e1,network=Network(9da548d8-bbbf-425f-9d2c-5b231ba235e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27dd1a62-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.413 2 DEBUG os_vif [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:fa:0e,bridge_name='br-int',has_traffic_filtering=True,id=27dd1a62-ec6c-4786-bb38-37a5553ad6e1,network=Network(9da548d8-bbbf-425f-9d2c-5b231ba235e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27dd1a62-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27dd1a62-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27dd1a62-ec, col_values=(('external_ids', {'iface-id': '27dd1a62-ec6c-4786-bb38-37a5553ad6e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:fa:0e', 'vm-uuid': '80938103-e53b-4780-b241-fc1e8a76f526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:36 np0005465987 NetworkManager[44910]: <info>  [1759409436.4897] manager: (tap27dd1a62-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.495 2 INFO os_vif [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:fa:0e,bridge_name='br-int',has_traffic_filtering=True,id=27dd1a62-ec6c-4786-bb38-37a5553ad6e1,network=Network(9da548d8-bbbf-425f-9d2c-5b231ba235e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27dd1a62-ec')#033[00m
Oct  2 08:50:36 np0005465987 nova_compute[230713]: 2025-10-02 12:50:36.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:36.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:37.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:37 np0005465987 nova_compute[230713]: 2025-10-02 12:50:37.659 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:50:37 np0005465987 nova_compute[230713]: 2025-10-02 12:50:37.660 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:50:37 np0005465987 nova_compute[230713]: 2025-10-02 12:50:37.660 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] No VIF found with MAC fa:16:3e:0a:fa:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:50:37 np0005465987 nova_compute[230713]: 2025-10-02 12:50:37.660 2 INFO nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Using config drive#033[00m
Oct  2 08:50:37 np0005465987 nova_compute[230713]: 2025-10-02 12:50:37.683 2 DEBUG nova.storage.rbd_utils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 80938103-e53b-4780-b241-fc1e8a76f526_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:50:38 np0005465987 nova_compute[230713]: 2025-10-02 12:50:38.263 2 INFO nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Creating config drive at /var/lib/nova/instances/80938103-e53b-4780-b241-fc1e8a76f526/disk.config#033[00m
Oct  2 08:50:38 np0005465987 nova_compute[230713]: 2025-10-02 12:50:38.268 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80938103-e53b-4780-b241-fc1e8a76f526/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwb6y28az execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:38 np0005465987 nova_compute[230713]: 2025-10-02 12:50:38.402 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80938103-e53b-4780-b241-fc1e8a76f526/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwb6y28az" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:38 np0005465987 nova_compute[230713]: 2025-10-02 12:50:38.430 2 DEBUG nova.storage.rbd_utils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] rbd image 80938103-e53b-4780-b241-fc1e8a76f526_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:50:38 np0005465987 nova_compute[230713]: 2025-10-02 12:50:38.433 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80938103-e53b-4780-b241-fc1e8a76f526/disk.config 80938103-e53b-4780-b241-fc1e8a76f526_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:38 np0005465987 nova_compute[230713]: 2025-10-02 12:50:38.465 2 DEBUG nova.network.neutron [req-acb299ea-5cd0-4450-9dfa-87baded51d72 req-6b9504bc-e8a8-4596-9851-2e0be56517a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Updated VIF entry in instance network info cache for port 27dd1a62-ec6c-4786-bb38-37a5553ad6e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:38 np0005465987 nova_compute[230713]: 2025-10-02 12:50:38.466 2 DEBUG nova.network.neutron [req-acb299ea-5cd0-4450-9dfa-87baded51d72 req-6b9504bc-e8a8-4596-9851-2e0be56517a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Updating instance_info_cache with network_info: [{"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:38 np0005465987 nova_compute[230713]: 2025-10-02 12:50:38.560 2 DEBUG oslo_concurrency.lockutils [req-acb299ea-5cd0-4450-9dfa-87baded51d72 req-6b9504bc-e8a8-4596-9851-2e0be56517a4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:38.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:39.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:39 np0005465987 nova_compute[230713]: 2025-10-02 12:50:39.874 2 DEBUG oslo_concurrency.processutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80938103-e53b-4780-b241-fc1e8a76f526/disk.config 80938103-e53b-4780-b241-fc1e8a76f526_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:39 np0005465987 nova_compute[230713]: 2025-10-02 12:50:39.875 2 INFO nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Deleting local config drive /var/lib/nova/instances/80938103-e53b-4780-b241-fc1e8a76f526/disk.config because it was imported into RBD.#033[00m
Oct  2 08:50:39 np0005465987 kernel: tap27dd1a62-ec: entered promiscuous mode
Oct  2 08:50:39 np0005465987 NetworkManager[44910]: <info>  [1759409439.9366] manager: (tap27dd1a62-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/316)
Oct  2 08:50:39 np0005465987 ovn_controller[129172]: 2025-10-02T12:50:39Z|00693|binding|INFO|Claiming lport 27dd1a62-ec6c-4786-bb38-37a5553ad6e1 for this chassis.
Oct  2 08:50:39 np0005465987 ovn_controller[129172]: 2025-10-02T12:50:39Z|00694|binding|INFO|27dd1a62-ec6c-4786-bb38-37a5553ad6e1: Claiming fa:16:3e:0a:fa:0e 10.100.0.12
Oct  2 08:50:39 np0005465987 nova_compute[230713]: 2025-10-02 12:50:39.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:39 np0005465987 nova_compute[230713]: 2025-10-02 12:50:39.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:39 np0005465987 nova_compute[230713]: 2025-10-02 12:50:39.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:39 np0005465987 systemd-udevd[297578]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:50:39 np0005465987 systemd-machined[188335]: New machine qemu-81-instance-000000b3.
Oct  2 08:50:39 np0005465987 NetworkManager[44910]: <info>  [1759409439.9851] device (tap27dd1a62-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:50:39 np0005465987 NetworkManager[44910]: <info>  [1759409439.9859] device (tap27dd1a62-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:50:40 np0005465987 systemd[1]: Started Virtual Machine qemu-81-instance-000000b3.
Oct  2 08:50:40 np0005465987 nova_compute[230713]: 2025-10-02 12:50:40.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:50:40Z|00695|binding|INFO|Setting lport 27dd1a62-ec6c-4786-bb38-37a5553ad6e1 ovn-installed in OVS
Oct  2 08:50:40 np0005465987 nova_compute[230713]: 2025-10-02 12:50:40.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:50:40Z|00696|binding|INFO|Setting lport 27dd1a62-ec6c-4786-bb38-37a5553ad6e1 up in Southbound
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.108 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:fa:0e 10.100.0.12'], port_security=['fa:16:3e:0a:fa:0e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '80938103-e53b-4780-b241-fc1e8a76f526', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da548d8-bbbf-425f-9d2c-5b231ba235e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ade962c517a483dbfe4bb13386f0006', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e949d296-273e-41b7-8397-6bc2e09daa3e eab0f3a3-d674-4d86-8dcb-4153be335100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb6054ec-09a9-4745-8f7e-27b822d71fe5, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=27dd1a62-ec6c-4786-bb38-37a5553ad6e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.109 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 27dd1a62-ec6c-4786-bb38-37a5553ad6e1 in datapath 9da548d8-bbbf-425f-9d2c-5b231ba235e1 bound to our chassis#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.110 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9da548d8-bbbf-425f-9d2c-5b231ba235e1#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.123 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bc8c4570-2c46-42d5-9bdb-c8502fac6af9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.124 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9da548d8-b1 in ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.127 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9da548d8-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.127 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6400238b-3531-4061-a17d-dbe927706baa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.128 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a99900-2d69-4a1c-9480-211db1696a5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.139 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[e53cd282-db7f-461d-81e0-40487da5067b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.154 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cd0c5b94-d8c0-47b2-99c9-c8bd87ea2527]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.190 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[8e8b0a23-f10c-4289-b353-36bcd69447d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 NetworkManager[44910]: <info>  [1759409440.2001] manager: (tap9da548d8-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/317)
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.201 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ce40aa-9865-4ff1-9630-edb063ba6565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 systemd-udevd[297580]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.235 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[28bfe769-ba81-436d-867e-fdb17f4cee37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.241 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2bb38c-a50b-46f8-83da-317d943cedc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 NetworkManager[44910]: <info>  [1759409440.2698] device (tap9da548d8-b0): carrier: link connected
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.275 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[bc86df5f-23bd-4ff9-b2fa-06a5494214e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.292 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ab89634f-2c62-49e9-bff1-bae66f17c198]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9da548d8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:aa:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750884, 'reachable_time': 44537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297629, 'error': None, 'target': 'ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.308 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb0ccf3-8b8b-41ea-afc2-5940a40fbf45]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:aa7c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750884, 'tstamp': 750884}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297630, 'error': None, 'target': 'ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.325 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bb892c75-8b66-40d3-a892-185f08c0f0c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9da548d8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:aa:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750884, 'reachable_time': 44537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297631, 'error': None, 'target': 'ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.352 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e5105e2d-bb2d-4c42-8c5f-0468d7aca99d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.410 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2c8677-617b-4fac-94cb-f3b2510ca7a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.412 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9da548d8-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.412 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.413 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9da548d8-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:40 np0005465987 nova_compute[230713]: 2025-10-02 12:50:40.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:40 np0005465987 NetworkManager[44910]: <info>  [1759409440.4155] manager: (tap9da548d8-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Oct  2 08:50:40 np0005465987 kernel: tap9da548d8-b0: entered promiscuous mode
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.418 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9da548d8-b0, col_values=(('external_ids', {'iface-id': '82fa0cfe-92b9-4165-b82b-8f8d8f846f87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:40 np0005465987 nova_compute[230713]: 2025-10-02 12:50:40.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:40 np0005465987 ovn_controller[129172]: 2025-10-02T12:50:40Z|00697|binding|INFO|Releasing lport 82fa0cfe-92b9-4165-b82b-8f8d8f846f87 from this chassis (sb_readonly=0)
Oct  2 08:50:40 np0005465987 nova_compute[230713]: 2025-10-02 12:50:40.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.435 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9da548d8-bbbf-425f-9d2c-5b231ba235e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9da548d8-bbbf-425f-9d2c-5b231ba235e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.436 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d593363c-a993-4028-a75d-986d4f2466b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.437 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-9da548d8-bbbf-425f-9d2c-5b231ba235e1
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/9da548d8-bbbf-425f-9d2c-5b231ba235e1.pid.haproxy
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 9da548d8-bbbf-425f-9d2c-5b231ba235e1
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:50:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:50:40.438 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1', 'env', 'PROCESS_TAG=haproxy-9da548d8-bbbf-425f-9d2c-5b231ba235e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9da548d8-bbbf-425f-9d2c-5b231ba235e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:50:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:40.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:40 np0005465987 podman[297687]: 2025-10-02 12:50:40.768751403 +0000 UTC m=+0.023762389 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:50:40 np0005465987 podman[297687]: 2025-10-02 12:50:40.874402164 +0000 UTC m=+0.129413140 container create b38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:50:40 np0005465987 systemd[1]: Started libpod-conmon-b38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc.scope.
Oct  2 08:50:40 np0005465987 nova_compute[230713]: 2025-10-02 12:50:40.984 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409440.9837575, 80938103-e53b-4780-b241-fc1e8a76f526 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:50:40 np0005465987 nova_compute[230713]: 2025-10-02 12:50:40.985 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] VM Started (Lifecycle Event)#033[00m
Oct  2 08:50:40 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:50:40 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbe1399605ef4d20915a75364b1e9b18eb3978ef1b3df88295f9c7b14a78841/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:50:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:41.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:41 np0005465987 podman[297687]: 2025-10-02 12:50:41.099653199 +0000 UTC m=+0.354664225 container init b38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:50:41 np0005465987 podman[297687]: 2025-10-02 12:50:41.105938178 +0000 UTC m=+0.360949134 container start b38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:50:41 np0005465987 nova_compute[230713]: 2025-10-02 12:50:41.112 2 DEBUG nova.compute.manager [req-6da8a37e-a65d-4474-8239-04291033d162 req-ef987c77-785b-453b-9fb4-6ccbad2fcd58 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Received event network-vif-plugged-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:41 np0005465987 nova_compute[230713]: 2025-10-02 12:50:41.114 2 DEBUG oslo_concurrency.lockutils [req-6da8a37e-a65d-4474-8239-04291033d162 req-ef987c77-785b-453b-9fb4-6ccbad2fcd58 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "80938103-e53b-4780-b241-fc1e8a76f526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:41 np0005465987 nova_compute[230713]: 2025-10-02 12:50:41.114 2 DEBUG oslo_concurrency.lockutils [req-6da8a37e-a65d-4474-8239-04291033d162 req-ef987c77-785b-453b-9fb4-6ccbad2fcd58 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:41 np0005465987 nova_compute[230713]: 2025-10-02 12:50:41.115 2 DEBUG oslo_concurrency.lockutils [req-6da8a37e-a65d-4474-8239-04291033d162 req-ef987c77-785b-453b-9fb4-6ccbad2fcd58 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:41 np0005465987 nova_compute[230713]: 2025-10-02 12:50:41.116 2 DEBUG nova.compute.manager [req-6da8a37e-a65d-4474-8239-04291033d162 req-ef987c77-785b-453b-9fb4-6ccbad2fcd58 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Processing event network-vif-plugged-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:50:41 np0005465987 nova_compute[230713]: 2025-10-02 12:50:41.117 2 DEBUG nova.compute.manager [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:50:41 np0005465987 nova_compute[230713]: 2025-10-02 12:50:41.121 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:50:41 np0005465987 nova_compute[230713]: 2025-10-02 12:50:41.127 2 INFO nova.virt.libvirt.driver [-] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Instance spawned successfully.#033[00m
Oct  2 08:50:41 np0005465987 nova_compute[230713]: 2025-10-02 12:50:41.128 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:50:41 np0005465987 neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1[297702]: [NOTICE]   (297706) : New worker (297708) forked
Oct  2 08:50:41 np0005465987 neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1[297702]: [NOTICE]   (297706) : Loading success.
Oct  2 08:50:41 np0005465987 nova_compute[230713]: 2025-10-02 12:50:41.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:41 np0005465987 nova_compute[230713]: 2025-10-02 12:50:41.508 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:50:41 np0005465987 nova_compute[230713]: 2025-10-02 12:50:41.511 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:50:41 np0005465987 nova_compute[230713]: 2025-10-02 12:50:41.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.040 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.041 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.041 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.042 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.042 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.043 2 DEBUG nova.virt.libvirt.driver [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.135 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.135 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409440.984606, 80938103-e53b-4780-b241-fc1e8a76f526 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.136 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.415 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.419 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409441.1210244, 80938103-e53b-4780-b241-fc1e8a76f526 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.419 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.482 2 INFO nova.compute.manager [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Took 13.27 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.483 2 DEBUG nova.compute.manager [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.507 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.510 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:50:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:42.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:42 np0005465987 nova_compute[230713]: 2025-10-02 12:50:42.806 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:50:43 np0005465987 nova_compute[230713]: 2025-10-02 12:50:43.049 2 INFO nova.compute.manager [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Took 14.93 seconds to build instance.#033[00m
Oct  2 08:50:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:43.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:44.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:45.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:45 np0005465987 nova_compute[230713]: 2025-10-02 12:50:45.602 2 DEBUG nova.compute.manager [req-d2b26889-1b1f-4935-a521-7933787abf97 req-2c5bc871-da4b-4d96-82d0-6dbca890d00b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Received event network-vif-plugged-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:45 np0005465987 nova_compute[230713]: 2025-10-02 12:50:45.602 2 DEBUG oslo_concurrency.lockutils [req-d2b26889-1b1f-4935-a521-7933787abf97 req-2c5bc871-da4b-4d96-82d0-6dbca890d00b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "80938103-e53b-4780-b241-fc1e8a76f526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:45 np0005465987 nova_compute[230713]: 2025-10-02 12:50:45.603 2 DEBUG oslo_concurrency.lockutils [req-d2b26889-1b1f-4935-a521-7933787abf97 req-2c5bc871-da4b-4d96-82d0-6dbca890d00b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:45 np0005465987 nova_compute[230713]: 2025-10-02 12:50:45.603 2 DEBUG oslo_concurrency.lockutils [req-d2b26889-1b1f-4935-a521-7933787abf97 req-2c5bc871-da4b-4d96-82d0-6dbca890d00b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:45 np0005465987 nova_compute[230713]: 2025-10-02 12:50:45.603 2 DEBUG nova.compute.manager [req-d2b26889-1b1f-4935-a521-7933787abf97 req-2c5bc871-da4b-4d96-82d0-6dbca890d00b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] No waiting events found dispatching network-vif-plugged-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:50:45 np0005465987 nova_compute[230713]: 2025-10-02 12:50:45.603 2 WARNING nova.compute.manager [req-d2b26889-1b1f-4935-a521-7933787abf97 req-2c5bc871-da4b-4d96-82d0-6dbca890d00b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Received unexpected event network-vif-plugged-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:50:45 np0005465987 nova_compute[230713]: 2025-10-02 12:50:45.681 2 DEBUG oslo_concurrency.lockutils [None req-696ebdc1-5d60-4fca-9d6d-49f722f523b7 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:45 np0005465987 podman[297717]: 2025-10-02 12:50:45.831565775 +0000 UTC m=+0.050370415 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:50:46 np0005465987 nova_compute[230713]: 2025-10-02 12:50:46.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:50:46 np0005465987 nova_compute[230713]: 2025-10-02 12:50:46.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:50:46 np0005465987 nova_compute[230713]: 2025-10-02 12:50:46.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 08:50:46 np0005465987 nova_compute[230713]: 2025-10-02 12:50:46.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:50:46 np0005465987 nova_compute[230713]: 2025-10-02 12:50:46.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:50:46 np0005465987 nova_compute[230713]: 2025-10-02 12:50:46.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:50:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:46.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:50:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:50:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:47.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:50:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:48.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:48 np0005465987 podman[297737]: 2025-10-02 12:50:48.842784542 +0000 UTC m=+0.052139892 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible)
Oct  2 08:50:48 np0005465987 podman[297738]: 2025-10-02 12:50:48.875515023 +0000 UTC m=+0.078576194 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:50:48 np0005465987 podman[297736]: 2025-10-02 12:50:48.918852807 +0000 UTC m=+0.134706662 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller)
Oct  2 08:50:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:49.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:49 np0005465987 NetworkManager[44910]: <info>  [1759409449.6348] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Oct  2 08:50:49 np0005465987 nova_compute[230713]: 2025-10-02 12:50:49.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:49 np0005465987 NetworkManager[44910]: <info>  [1759409449.6357] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Oct  2 08:50:49 np0005465987 nova_compute[230713]: 2025-10-02 12:50:49.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:49 np0005465987 ovn_controller[129172]: 2025-10-02T12:50:49Z|00698|binding|INFO|Releasing lport 82fa0cfe-92b9-4165-b82b-8f8d8f846f87 from this chassis (sb_readonly=0)
Oct  2 08:50:49 np0005465987 nova_compute[230713]: 2025-10-02 12:50:49.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:50:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:50.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:50:50 np0005465987 nova_compute[230713]: 2025-10-02 12:50:50.616 2 DEBUG nova.compute.manager [req-e89faaa6-8dcd-4a1d-a3e0-5c9a25a9db6c req-577ea783-bebf-4631-94a1-b4afcb26c64a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Received event network-changed-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:50 np0005465987 nova_compute[230713]: 2025-10-02 12:50:50.617 2 DEBUG nova.compute.manager [req-e89faaa6-8dcd-4a1d-a3e0-5c9a25a9db6c req-577ea783-bebf-4631-94a1-b4afcb26c64a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Refreshing instance network info cache due to event network-changed-27dd1a62-ec6c-4786-bb38-37a5553ad6e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:50:50 np0005465987 nova_compute[230713]: 2025-10-02 12:50:50.617 2 DEBUG oslo_concurrency.lockutils [req-e89faaa6-8dcd-4a1d-a3e0-5c9a25a9db6c req-577ea783-bebf-4631-94a1-b4afcb26c64a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:50 np0005465987 nova_compute[230713]: 2025-10-02 12:50:50.618 2 DEBUG oslo_concurrency.lockutils [req-e89faaa6-8dcd-4a1d-a3e0-5c9a25a9db6c req-577ea783-bebf-4631-94a1-b4afcb26c64a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:50 np0005465987 nova_compute[230713]: 2025-10-02 12:50:50.618 2 DEBUG nova.network.neutron [req-e89faaa6-8dcd-4a1d-a3e0-5c9a25a9db6c req-577ea783-bebf-4631-94a1-b4afcb26c64a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Refreshing network info cache for port 27dd1a62-ec6c-4786-bb38-37a5553ad6e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:50:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:51.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:51 np0005465987 nova_compute[230713]: 2025-10-02 12:50:51.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:52 np0005465987 nova_compute[230713]: 2025-10-02 12:50:52.330 2 DEBUG nova.network.neutron [req-e89faaa6-8dcd-4a1d-a3e0-5c9a25a9db6c req-577ea783-bebf-4631-94a1-b4afcb26c64a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Updated VIF entry in instance network info cache for port 27dd1a62-ec6c-4786-bb38-37a5553ad6e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:52 np0005465987 nova_compute[230713]: 2025-10-02 12:50:52.331 2 DEBUG nova.network.neutron [req-e89faaa6-8dcd-4a1d-a3e0-5c9a25a9db6c req-577ea783-bebf-4631-94a1-b4afcb26c64a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Updating instance_info_cache with network_info: [{"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:50:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:52.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:50:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:53.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:53 np0005465987 nova_compute[230713]: 2025-10-02 12:50:53.360 2 DEBUG oslo_concurrency.lockutils [req-e89faaa6-8dcd-4a1d-a3e0-5c9a25a9db6c req-577ea783-bebf-4631-94a1-b4afcb26c64a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:50:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:54.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:50:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:55.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:56 np0005465987 nova_compute[230713]: 2025-10-02 12:50:56.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:56.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:57.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:57 np0005465987 ovn_controller[129172]: 2025-10-02T12:50:57Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0a:fa:0e 10.100.0.12
Oct  2 08:50:57 np0005465987 ovn_controller[129172]: 2025-10-02T12:50:57Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0a:fa:0e 10.100.0.12
Oct  2 08:50:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:58.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:50:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:59.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:00.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:51:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:01.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:51:01 np0005465987 nova_compute[230713]: 2025-10-02 12:51:01.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:02.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:03.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:04.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:51:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:05.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:51:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:05.425 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:51:05 np0005465987 nova_compute[230713]: 2025-10-02 12:51:05.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:05.426 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:51:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:06 np0005465987 nova_compute[230713]: 2025-10-02 12:51:06.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:51:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:06.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:51:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:07.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.673481) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409467673549, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 683, "num_deletes": 252, "total_data_size": 1137634, "memory_usage": 1164544, "flush_reason": "Manual Compaction"}
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409467685487, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 749791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65732, "largest_seqno": 66409, "table_properties": {"data_size": 746508, "index_size": 1190, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7898, "raw_average_key_size": 19, "raw_value_size": 739770, "raw_average_value_size": 1822, "num_data_blocks": 53, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409424, "oldest_key_time": 1759409424, "file_creation_time": 1759409467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 12052 microseconds, and 3621 cpu microseconds.
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.685541) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 749791 bytes OK
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.685564) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.688588) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.688616) EVENT_LOG_v1 {"time_micros": 1759409467688606, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.688642) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1133945, prev total WAL file size 1133945, number of live WAL files 2.
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.689577) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(732KB)], [132(11MB)]
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409467689653, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 12389169, "oldest_snapshot_seqno": -1}
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 8670 keys, 10503439 bytes, temperature: kUnknown
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409467749116, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 10503439, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10448063, "index_size": 32607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21701, "raw_key_size": 229356, "raw_average_key_size": 26, "raw_value_size": 10296222, "raw_average_value_size": 1187, "num_data_blocks": 1237, "num_entries": 8670, "num_filter_entries": 8670, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759409467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.749354) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 10503439 bytes
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.750517) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.1 rd, 176.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.1 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(30.5) write-amplify(14.0) OK, records in: 9188, records dropped: 518 output_compression: NoCompression
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.750533) EVENT_LOG_v1 {"time_micros": 1759409467750526, "job": 84, "event": "compaction_finished", "compaction_time_micros": 59533, "compaction_time_cpu_micros": 35415, "output_level": 6, "num_output_files": 1, "total_output_size": 10503439, "num_input_records": 9188, "num_output_records": 8670, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409467750773, "job": 84, "event": "table_file_deletion", "file_number": 134}
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409467752562, "job": 84, "event": "table_file_deletion", "file_number": 132}
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.689465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.752642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.752647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.752649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.752651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:51:07 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:51:07.752652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:51:08 np0005465987 ovn_controller[129172]: 2025-10-02T12:51:08Z|00699|binding|INFO|Releasing lport 82fa0cfe-92b9-4165-b82b-8f8d8f846f87 from this chassis (sb_readonly=0)
Oct  2 08:51:08 np0005465987 nova_compute[230713]: 2025-10-02 12:51:08.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:08.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:09.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:10.428 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:51:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:10.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:51:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:11.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:11 np0005465987 nova_compute[230713]: 2025-10-02 12:51:11.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:11 np0005465987 nova_compute[230713]: 2025-10-02 12:51:11.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:12.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:12 np0005465987 nova_compute[230713]: 2025-10-02 12:51:12.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:13.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:51:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:14.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:51:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:51:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:15.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:51:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:15 np0005465987 nova_compute[230713]: 2025-10-02 12:51:15.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:15 np0005465987 nova_compute[230713]: 2025-10-02 12:51:15.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:51:15 np0005465987 nova_compute[230713]: 2025-10-02 12:51:15.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:51:16 np0005465987 nova_compute[230713]: 2025-10-02 12:51:16.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:16.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:16 np0005465987 podman[297798]: 2025-10-02 12:51:16.828440808 +0000 UTC m=+0.052759449 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:51:16 np0005465987 nova_compute[230713]: 2025-10-02 12:51:16.919 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:51:16 np0005465987 nova_compute[230713]: 2025-10-02 12:51:16.920 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:51:16 np0005465987 nova_compute[230713]: 2025-10-02 12:51:16.920 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:51:16 np0005465987 nova_compute[230713]: 2025-10-02 12:51:16.920 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 80938103-e53b-4780-b241-fc1e8a76f526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:51:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:17.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:17 np0005465987 nova_compute[230713]: 2025-10-02 12:51:17.713 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:17 np0005465987 nova_compute[230713]: 2025-10-02 12:51:17.714 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:17 np0005465987 nova_compute[230713]: 2025-10-02 12:51:17.978 2 DEBUG nova.compute.manager [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:51:18 np0005465987 nova_compute[230713]: 2025-10-02 12:51:18.511 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:18 np0005465987 nova_compute[230713]: 2025-10-02 12:51:18.512 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:18 np0005465987 nova_compute[230713]: 2025-10-02 12:51:18.519 2 DEBUG nova.virt.hardware [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:51:18 np0005465987 nova_compute[230713]: 2025-10-02 12:51:18.520 2 INFO nova.compute.claims [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:51:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:18.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:18 np0005465987 nova_compute[230713]: 2025-10-02 12:51:18.946 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:19.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:51:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2431425398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:51:19 np0005465987 nova_compute[230713]: 2025-10-02 12:51:19.390 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:19 np0005465987 nova_compute[230713]: 2025-10-02 12:51:19.397 2 DEBUG nova.compute.provider_tree [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:51:19 np0005465987 nova_compute[230713]: 2025-10-02 12:51:19.488 2 DEBUG nova.scheduler.client.report [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:51:19 np0005465987 nova_compute[230713]: 2025-10-02 12:51:19.671 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Updating instance_info_cache with network_info: [{"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:51:19 np0005465987 nova_compute[230713]: 2025-10-02 12:51:19.712 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:19 np0005465987 nova_compute[230713]: 2025-10-02 12:51:19.714 2 DEBUG nova.compute.manager [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:51:19 np0005465987 nova_compute[230713]: 2025-10-02 12:51:19.741 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:51:19 np0005465987 nova_compute[230713]: 2025-10-02 12:51:19.741 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:51:19 np0005465987 podman[297841]: 2025-10-02 12:51:19.867522803 +0000 UTC m=+0.070776634 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:51:19 np0005465987 podman[297840]: 2025-10-02 12:51:19.873080942 +0000 UTC m=+0.079465717 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:51:19 np0005465987 nova_compute[230713]: 2025-10-02 12:51:19.885 2 DEBUG nova.compute.manager [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:51:19 np0005465987 nova_compute[230713]: 2025-10-02 12:51:19.886 2 DEBUG nova.network.neutron [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:51:19 np0005465987 podman[297839]: 2025-10-02 12:51:19.923698843 +0000 UTC m=+0.135327279 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:51:19 np0005465987 nova_compute[230713]: 2025-10-02 12:51:19.969 2 INFO nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.055 2 DEBUG nova.compute.manager [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.219 2 DEBUG nova.policy [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6785ffe5d6554514b4ed9fd47665eca0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a442bc513e14406b73e96e70396e6c3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:51:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.607 2 DEBUG nova.compute.manager [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.609 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.610 2 INFO nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Creating image(s)#033[00m
Oct  2 08:51:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:51:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:20.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.652 2 DEBUG nova.storage.rbd_utils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image f5646848-ae0c-448c-9912-d5ee5734bd1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.702 2 DEBUG nova.storage.rbd_utils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image f5646848-ae0c-448c-9912-d5ee5734bd1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.732 2 DEBUG nova.storage.rbd_utils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image f5646848-ae0c-448c-9912-d5ee5734bd1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.737 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.819 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.820 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.821 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.821 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.848 2 DEBUG nova.storage.rbd_utils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image f5646848-ae0c-448c-9912-d5ee5734bd1e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:51:20 np0005465987 nova_compute[230713]: 2025-10-02 12:51:20.851 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 f5646848-ae0c-448c-9912-d5ee5734bd1e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:51:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:21.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:51:21 np0005465987 nova_compute[230713]: 2025-10-02 12:51:21.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:51:21 np0005465987 nova_compute[230713]: 2025-10-02 12:51:21.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:21 np0005465987 nova_compute[230713]: 2025-10-02 12:51:21.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:22 np0005465987 nova_compute[230713]: 2025-10-02 12:51:22.068 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:22 np0005465987 nova_compute[230713]: 2025-10-02 12:51:22.070 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:22 np0005465987 nova_compute[230713]: 2025-10-02 12:51:22.071 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:22 np0005465987 nova_compute[230713]: 2025-10-02 12:51:22.072 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:51:22 np0005465987 nova_compute[230713]: 2025-10-02 12:51:22.072 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:22 np0005465987 nova_compute[230713]: 2025-10-02 12:51:22.114 2 DEBUG nova.network.neutron [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Successfully created port: 0656c688-251b-4b39-a4de-f3697ea3b306 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:51:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:51:22 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4005157176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:51:22 np0005465987 nova_compute[230713]: 2025-10-02 12:51:22.518 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:22.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:23 np0005465987 nova_compute[230713]: 2025-10-02 12:51:23.027 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:51:23 np0005465987 nova_compute[230713]: 2025-10-02 12:51:23.027 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:51:23 np0005465987 nova_compute[230713]: 2025-10-02 12:51:23.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:23.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:23 np0005465987 nova_compute[230713]: 2025-10-02 12:51:23.187 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:51:23 np0005465987 nova_compute[230713]: 2025-10-02 12:51:23.188 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4106MB free_disk=20.806190490722656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:51:23 np0005465987 nova_compute[230713]: 2025-10-02 12:51:23.188 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:23 np0005465987 nova_compute[230713]: 2025-10-02 12:51:23.188 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:23 np0005465987 nova_compute[230713]: 2025-10-02 12:51:23.444 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 f5646848-ae0c-448c-9912-d5ee5734bd1e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:23 np0005465987 nova_compute[230713]: 2025-10-02 12:51:23.577 2 DEBUG nova.network.neutron [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Successfully updated port: 0656c688-251b-4b39-a4de-f3697ea3b306 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:51:23 np0005465987 nova_compute[230713]: 2025-10-02 12:51:23.636 2 DEBUG nova.storage.rbd_utils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] resizing rbd image f5646848-ae0c-448c-9912-d5ee5734bd1e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:51:23 np0005465987 nova_compute[230713]: 2025-10-02 12:51:23.978 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:51:23 np0005465987 nova_compute[230713]: 2025-10-02 12:51:23.979 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquired lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:51:23 np0005465987 nova_compute[230713]: 2025-10-02 12:51:23.979 2 DEBUG nova.network.neutron [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:51:24 np0005465987 nova_compute[230713]: 2025-10-02 12:51:24.016 2 DEBUG nova.compute.manager [req-a11e2359-86d3-4ef8-a15e-cc7ac36276cf req-ff57543c-a6eb-4ec7-a76e-1306550f8644 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-changed-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:24 np0005465987 nova_compute[230713]: 2025-10-02 12:51:24.017 2 DEBUG nova.compute.manager [req-a11e2359-86d3-4ef8-a15e-cc7ac36276cf req-ff57543c-a6eb-4ec7-a76e-1306550f8644 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Refreshing instance network info cache due to event network-changed-0656c688-251b-4b39-a4de-f3697ea3b306. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:51:24 np0005465987 nova_compute[230713]: 2025-10-02 12:51:24.017 2 DEBUG oslo_concurrency.lockutils [req-a11e2359-86d3-4ef8-a15e-cc7ac36276cf req-ff57543c-a6eb-4ec7-a76e-1306550f8644 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:51:24 np0005465987 nova_compute[230713]: 2025-10-02 12:51:24.070 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 80938103-e53b-4780-b241-fc1e8a76f526 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:51:24 np0005465987 nova_compute[230713]: 2025-10-02 12:51:24.070 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance f5646848-ae0c-448c-9912-d5ee5734bd1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:51:24 np0005465987 nova_compute[230713]: 2025-10-02 12:51:24.071 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:51:24 np0005465987 nova_compute[230713]: 2025-10-02 12:51:24.071 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:51:24 np0005465987 nova_compute[230713]: 2025-10-02 12:51:24.282 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:24 np0005465987 podman[298246]: 2025-10-02 12:51:24.294909483 +0000 UTC m=+0.064243277 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 08:51:24 np0005465987 podman[298246]: 2025-10-02 12:51:24.427121358 +0000 UTC m=+0.196455152 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 08:51:24 np0005465987 nova_compute[230713]: 2025-10-02 12:51:24.528 2 DEBUG nova.network.neutron [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:51:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:24.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:24 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:51:24 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3190617108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:51:24 np0005465987 nova_compute[230713]: 2025-10-02 12:51:24.752 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:24 np0005465987 nova_compute[230713]: 2025-10-02 12:51:24.757 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:51:24 np0005465987 nova_compute[230713]: 2025-10-02 12:51:24.851 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:51:24 np0005465987 nova_compute[230713]: 2025-10-02 12:51:24.859 2 DEBUG nova.objects.instance [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'migration_context' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.015 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.016 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Ensure instance console log exists: /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.017 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.017 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.018 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.037 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.038 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:51:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:25.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:51:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.571 2 DEBUG nova.network.neutron [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updating instance_info_cache with network_info: [{"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.687 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Releasing lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.688 2 DEBUG nova.compute.manager [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Instance network_info: |[{"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.688 2 DEBUG oslo_concurrency.lockutils [req-a11e2359-86d3-4ef8-a15e-cc7ac36276cf req-ff57543c-a6eb-4ec7-a76e-1306550f8644 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.689 2 DEBUG nova.network.neutron [req-a11e2359-86d3-4ef8-a15e-cc7ac36276cf req-ff57543c-a6eb-4ec7-a76e-1306550f8644 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Refreshing network info cache for port 0656c688-251b-4b39-a4de-f3697ea3b306 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.692 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Start _get_guest_xml network_info=[{"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.695 2 WARNING nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.700 2 DEBUG nova.virt.libvirt.host [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.701 2 DEBUG nova.virt.libvirt.host [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.703 2 DEBUG nova.virt.libvirt.host [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.704 2 DEBUG nova.virt.libvirt.host [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.705 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.705 2 DEBUG nova.virt.hardware [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.705 2 DEBUG nova.virt.hardware [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.706 2 DEBUG nova.virt.hardware [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.706 2 DEBUG nova.virt.hardware [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.706 2 DEBUG nova.virt.hardware [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.706 2 DEBUG nova.virt.hardware [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.706 2 DEBUG nova.virt.hardware [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.707 2 DEBUG nova.virt.hardware [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.707 2 DEBUG nova.virt.hardware [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.707 2 DEBUG nova.virt.hardware [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.707 2 DEBUG nova.virt.hardware [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:51:25 np0005465987 nova_compute[230713]: 2025-10-02 12:51:25.710 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.037 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.038 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.038 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:51:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3858465189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.179 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.204 2 DEBUG nova.storage.rbd_utils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.209 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:26 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:51:26 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:51:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:26.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:51:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3655991857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.799 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.801 2 DEBUG nova.virt.libvirt.vif [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-356002637',display_name='tempest-ServerStableDeviceRescueTest-server-356002637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-356002637',id=181,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoKV7bc1Ihcwh5u/ucbycc4MJFxvJi6ozOl0T1nn+zqbeWg5iK0RFaya9xHSofolJ2eQYNrdhLMrQSXhcg4FfGkUU62Zm08MSKDrCZr3cYV0q0VS0+lbILv9n1ItWEizA==',key_name='tempest-keypair-310042596',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a442bc513e14406b73e96e70396e6c3',ramdisk_id='',reservation_id='r-z5xm6cn2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-454391960',owner_user_name='tempest-ServerStableDeviceRescueTest-454391960-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:51:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6785ffe5d6554514b4ed9fd47665eca0',uuid=f5646848-ae0c-448c-9912-d5ee5734bd1e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.802 2 DEBUG nova.network.os_vif_util [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converting VIF {"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.803 2 DEBUG nova.network.os_vif_util [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:50:61,bridge_name='br-int',has_traffic_filtering=True,id=0656c688-251b-4b39-a4de-f3697ea3b306,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0656c688-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.804 2 DEBUG nova.objects.instance [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.914 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  <uuid>f5646848-ae0c-448c-9912-d5ee5734bd1e</uuid>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  <name>instance-000000b5</name>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-356002637</nova:name>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:51:25</nova:creationTime>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <nova:user uuid="6785ffe5d6554514b4ed9fd47665eca0">tempest-ServerStableDeviceRescueTest-454391960-project-member</nova:user>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <nova:project uuid="6a442bc513e14406b73e96e70396e6c3">tempest-ServerStableDeviceRescueTest-454391960</nova:project>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <nova:port uuid="0656c688-251b-4b39-a4de-f3697ea3b306">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <entry name="serial">f5646848-ae0c-448c-9912-d5ee5734bd1e</entry>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <entry name="uuid">f5646848-ae0c-448c-9912-d5ee5734bd1e</entry>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/f5646848-ae0c-448c-9912-d5ee5734bd1e_disk">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.config">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:2d:50:61"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <target dev="tap0656c688-25"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/console.log" append="off"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:51:26 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:51:26 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:51:26 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:51:26 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.915 2 DEBUG nova.compute.manager [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Preparing to wait for external event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.916 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.916 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.916 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.917 2 DEBUG nova.virt.libvirt.vif [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-356002637',display_name='tempest-ServerStableDeviceRescueTest-server-356002637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-356002637',id=181,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoKV7bc1Ihcwh5u/ucbycc4MJFxvJi6ozOl0T1nn+zqbeWg5iK0RFaya9xHSofolJ2eQYNrdhLMrQSXhcg4FfGkUU62Zm08MSKDrCZr3cYV0q0VS0+lbILv9n1ItWEizA==',key_name='tempest-keypair-310042596',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a442bc513e14406b73e96e70396e6c3',ramdisk_id='',reservation_id='r-z5xm6cn2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-454391960',owner_user_name='tempest-ServerStableDeviceRescueTest-454391960-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:51:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6785ffe5d6554514b4ed9fd47665eca0',uuid=f5646848-ae0c-448c-9912-d5ee5734bd1e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.917 2 DEBUG nova.network.os_vif_util [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converting VIF {"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.918 2 DEBUG nova.network.os_vif_util [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:50:61,bridge_name='br-int',has_traffic_filtering=True,id=0656c688-251b-4b39-a4de-f3697ea3b306,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0656c688-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.918 2 DEBUG os_vif [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:50:61,bridge_name='br-int',has_traffic_filtering=True,id=0656c688-251b-4b39-a4de-f3697ea3b306,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0656c688-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.924 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0656c688-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0656c688-25, col_values=(('external_ids', {'iface-id': '0656c688-251b-4b39-a4de-f3697ea3b306', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:50:61', 'vm-uuid': 'f5646848-ae0c-448c-9912-d5ee5734bd1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:26 np0005465987 NetworkManager[44910]: <info>  [1759409486.9279] manager: (tap0656c688-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.935 2 INFO os_vif [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:50:61,bridge_name='br-int',has_traffic_filtering=True,id=0656c688-251b-4b39-a4de-f3697ea3b306,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0656c688-25')#033[00m
Oct  2 08:51:26 np0005465987 nova_compute[230713]: 2025-10-02 12:51:26.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:27.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:27 np0005465987 nova_compute[230713]: 2025-10-02 12:51:27.246 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:51:27 np0005465987 nova_compute[230713]: 2025-10-02 12:51:27.246 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:51:27 np0005465987 nova_compute[230713]: 2025-10-02 12:51:27.246 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No VIF found with MAC fa:16:3e:2d:50:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:51:27 np0005465987 nova_compute[230713]: 2025-10-02 12:51:27.247 2 INFO nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Using config drive#033[00m
Oct  2 08:51:27 np0005465987 nova_compute[230713]: 2025-10-02 12:51:27.271 2 DEBUG nova.storage.rbd_utils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:51:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:51:27 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:51:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:27.976 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:27.976 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:27.977 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:28 np0005465987 nova_compute[230713]: 2025-10-02 12:51:28.067 2 INFO nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Creating config drive at /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/disk.config#033[00m
Oct  2 08:51:28 np0005465987 nova_compute[230713]: 2025-10-02 12:51:28.078 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp094idziv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:28 np0005465987 nova_compute[230713]: 2025-10-02 12:51:28.225 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp094idziv" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:28 np0005465987 nova_compute[230713]: 2025-10-02 12:51:28.258 2 DEBUG nova.storage.rbd_utils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:51:28 np0005465987 nova_compute[230713]: 2025-10-02 12:51:28.261 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/disk.config f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:28.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:28 np0005465987 nova_compute[230713]: 2025-10-02 12:51:28.704 2 DEBUG nova.network.neutron [req-a11e2359-86d3-4ef8-a15e-cc7ac36276cf req-ff57543c-a6eb-4ec7-a76e-1306550f8644 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updated VIF entry in instance network info cache for port 0656c688-251b-4b39-a4de-f3697ea3b306. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:51:28 np0005465987 nova_compute[230713]: 2025-10-02 12:51:28.705 2 DEBUG nova.network.neutron [req-a11e2359-86d3-4ef8-a15e-cc7ac36276cf req-ff57543c-a6eb-4ec7-a76e-1306550f8644 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updating instance_info_cache with network_info: [{"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:51:28 np0005465987 nova_compute[230713]: 2025-10-02 12:51:28.825 2 DEBUG oslo_concurrency.lockutils [req-a11e2359-86d3-4ef8-a15e-cc7ac36276cf req-ff57543c-a6eb-4ec7-a76e-1306550f8644 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:51:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:51:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:51:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:51:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:51:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:29.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:51:30 np0005465987 nova_compute[230713]: 2025-10-02 12:51:30.304 2 DEBUG oslo_concurrency.processutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/disk.config f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:30 np0005465987 nova_compute[230713]: 2025-10-02 12:51:30.304 2 INFO nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Deleting local config drive /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/disk.config because it was imported into RBD.#033[00m
Oct  2 08:51:30 np0005465987 NetworkManager[44910]: <info>  [1759409490.3763] manager: (tap0656c688-25): new Tun device (/org/freedesktop/NetworkManager/Devices/322)
Oct  2 08:51:30 np0005465987 kernel: tap0656c688-25: entered promiscuous mode
Oct  2 08:51:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:51:30Z|00700|binding|INFO|Claiming lport 0656c688-251b-4b39-a4de-f3697ea3b306 for this chassis.
Oct  2 08:51:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:51:30Z|00701|binding|INFO|0656c688-251b-4b39-a4de-f3697ea3b306: Claiming fa:16:3e:2d:50:61 10.100.0.7
Oct  2 08:51:30 np0005465987 nova_compute[230713]: 2025-10-02 12:51:30.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:51:30Z|00702|binding|INFO|Setting lport 0656c688-251b-4b39-a4de-f3697ea3b306 ovn-installed in OVS
Oct  2 08:51:30 np0005465987 nova_compute[230713]: 2025-10-02 12:51:30.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:30 np0005465987 nova_compute[230713]: 2025-10-02 12:51:30.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:30 np0005465987 systemd-udevd[298675]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:51:30 np0005465987 systemd-machined[188335]: New machine qemu-82-instance-000000b5.
Oct  2 08:51:30 np0005465987 NetworkManager[44910]: <info>  [1759409490.4338] device (tap0656c688-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:51:30 np0005465987 NetworkManager[44910]: <info>  [1759409490.4344] device (tap0656c688-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:51:30 np0005465987 systemd[1]: Started Virtual Machine qemu-82-instance-000000b5.
Oct  2 08:51:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:51:30Z|00703|binding|INFO|Setting lport 0656c688-251b-4b39-a4de-f3697ea3b306 up in Southbound
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.559 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:50:61 10.100.0.7'], port_security=['fa:16:3e:2d:50:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f5646848-ae0c-448c-9912-d5ee5734bd1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e4ff16-1388-40c7-a27a-83a3b4869808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a442bc513e14406b73e96e70396e6c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd6795e4e-d84d-4947-be00-1b032e1a18b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2acef0-eb35-44b4-ad52-c0266ea4784a, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0656c688-251b-4b39-a4de-f3697ea3b306) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.561 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0656c688-251b-4b39-a4de-f3697ea3b306 in datapath 48e4ff16-1388-40c7-a27a-83a3b4869808 bound to our chassis#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.563 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48e4ff16-1388-40c7-a27a-83a3b4869808#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.576 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0088f3-5dbe-40cd-8b39-c7e9e8186460]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.578 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48e4ff16-11 in ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.581 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48e4ff16-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.581 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a0487715-a5a1-4c61-affb-bb202c484506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.582 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b06d141c-dc14-4a86-a224-55e9bb6d2c4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.593 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[25d96c6e-11de-4a81-a089-bfcbe74c4d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.619 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1b6e94-bb84-4f73-bda1-37302e2db57d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:30.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.655 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5e19e82a-d505-4a6d-98f3-e4d25bc938bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 NetworkManager[44910]: <info>  [1759409490.6649] manager: (tap48e4ff16-10): new Veth device (/org/freedesktop/NetworkManager/Devices/323)
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.665 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b312d737-57a9-48dc-a009-5493d81585d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.696 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[f949586f-9734-4bfe-b018-037e572f2ad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.698 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[78058b2a-e96a-438b-9dd9-5888fab7b0aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 NetworkManager[44910]: <info>  [1759409490.7195] device (tap48e4ff16-10): carrier: link connected
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.725 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a4ef69-8e34-4521-94a8-bd14db3ed893]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.741 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[83f38752-2459-4901-8331-8b655459e7bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e4ff16-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:53:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755929, 'reachable_time': 23795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298726, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.758 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[259ecd56-6a8e-49b1-9b71-35aa001eaaca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:53bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 755929, 'tstamp': 755929}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298727, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.775 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3cd208-b3a2-4b9a-867e-b9741c821050]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e4ff16-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:53:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755929, 'reachable_time': 23795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298743, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.802 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1af8b653-04c4-4a1e-ae5a-1d4909884faf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.858 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[90a21ba7-fd48-4923-a95a-fb0965427bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.861 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e4ff16-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.861 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.862 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48e4ff16-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:30 np0005465987 nova_compute[230713]: 2025-10-02 12:51:30.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:30 np0005465987 kernel: tap48e4ff16-10: entered promiscuous mode
Oct  2 08:51:30 np0005465987 NetworkManager[44910]: <info>  [1759409490.8986] manager: (tap48e4ff16-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Oct  2 08:51:30 np0005465987 nova_compute[230713]: 2025-10-02 12:51:30.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.900 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48e4ff16-10, col_values=(('external_ids', {'iface-id': '1fc80788-89b8-413a-b0b0-d36f1a11a2b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:30 np0005465987 nova_compute[230713]: 2025-10-02 12:51:30.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:51:30Z|00704|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct  2 08:51:30 np0005465987 nova_compute[230713]: 2025-10-02 12:51:30.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.915 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.916 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b3277629-f7aa-4fb3-aff1-9cee5ac13c6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.916 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-48e4ff16-1388-40c7-a27a-83a3b4869808
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 48e4ff16-1388-40c7-a27a-83a3b4869808
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:51:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:51:30.917 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'env', 'PROCESS_TAG=haproxy-48e4ff16-1388-40c7-a27a-83a3b4869808', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48e4ff16-1388-40c7-a27a-83a3b4869808.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:51:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:31.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:31 np0005465987 podman[298784]: 2025-10-02 12:51:31.268419116 +0000 UTC m=+0.043938492 container create 3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:51:31 np0005465987 systemd[1]: Started libpod-conmon-3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130.scope.
Oct  2 08:51:31 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:51:31 np0005465987 podman[298784]: 2025-10-02 12:51:31.243869976 +0000 UTC m=+0.019389362 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:51:31 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde6a3844d8e071eaa54a99edba0dcd13d06b9a7c00c3dacf0f62547634b406d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:51:31 np0005465987 podman[298784]: 2025-10-02 12:51:31.363559684 +0000 UTC m=+0.139079070 container init 3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 08:51:31 np0005465987 podman[298784]: 2025-10-02 12:51:31.368382483 +0000 UTC m=+0.143901859 container start 3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:51:31 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[298799]: [NOTICE]   (298803) : New worker (298805) forked
Oct  2 08:51:31 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[298799]: [NOTICE]   (298803) : Loading success.
Oct  2 08:51:31 np0005465987 nova_compute[230713]: 2025-10-02 12:51:31.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:31 np0005465987 nova_compute[230713]: 2025-10-02 12:51:31.662 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409491.6617558, f5646848-ae0c-448c-9912-d5ee5734bd1e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:51:31 np0005465987 nova_compute[230713]: 2025-10-02 12:51:31.663 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] VM Started (Lifecycle Event)#033[00m
Oct  2 08:51:31 np0005465987 nova_compute[230713]: 2025-10-02 12:51:31.831 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:51:31 np0005465987 nova_compute[230713]: 2025-10-02 12:51:31.836 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409491.6619904, f5646848-ae0c-448c-9912-d5ee5734bd1e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:51:31 np0005465987 nova_compute[230713]: 2025-10-02 12:51:31.836 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:51:31 np0005465987 nova_compute[230713]: 2025-10-02 12:51:31.887 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:51:31 np0005465987 nova_compute[230713]: 2025-10-02 12:51:31.890 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:51:31 np0005465987 nova_compute[230713]: 2025-10-02 12:51:31.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.016 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.172 2 DEBUG nova.compute.manager [req-c98cb46e-4c3d-4f6d-a194-014cd09ee2f3 req-7f6e269b-5ae6-4ff2-89b9-9c56536ef1b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.173 2 DEBUG oslo_concurrency.lockutils [req-c98cb46e-4c3d-4f6d-a194-014cd09ee2f3 req-7f6e269b-5ae6-4ff2-89b9-9c56536ef1b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.173 2 DEBUG oslo_concurrency.lockutils [req-c98cb46e-4c3d-4f6d-a194-014cd09ee2f3 req-7f6e269b-5ae6-4ff2-89b9-9c56536ef1b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.174 2 DEBUG oslo_concurrency.lockutils [req-c98cb46e-4c3d-4f6d-a194-014cd09ee2f3 req-7f6e269b-5ae6-4ff2-89b9-9c56536ef1b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.174 2 DEBUG nova.compute.manager [req-c98cb46e-4c3d-4f6d-a194-014cd09ee2f3 req-7f6e269b-5ae6-4ff2-89b9-9c56536ef1b2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Processing event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.175 2 DEBUG nova.compute.manager [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.179 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409492.179473, f5646848-ae0c-448c-9912-d5ee5734bd1e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.180 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.182 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.185 2 INFO nova.virt.libvirt.driver [-] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Instance spawned successfully.#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.185 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.207 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.212 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.215 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.216 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.216 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.217 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.217 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.217 2 DEBUG nova.virt.libvirt.driver [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.295 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.339 2 INFO nova.compute.manager [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Took 11.73 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.340 2 DEBUG nova.compute.manager [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.440 2 INFO nova.compute.manager [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Took 13.97 seconds to build instance.#033[00m
Oct  2 08:51:32 np0005465987 nova_compute[230713]: 2025-10-02 12:51:32.531 2 DEBUG oslo_concurrency.lockutils [None req-8cbed444-1cb8-466d-9211-c0b981411f99 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:51:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:32.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:51:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:33.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:33 np0005465987 nova_compute[230713]: 2025-10-02 12:51:33.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:51:33 np0005465987 nova_compute[230713]: 2025-10-02 12:51:33.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:51:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:51:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:34.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:51:34 np0005465987 nova_compute[230713]: 2025-10-02 12:51:34.831 2 DEBUG nova.compute.manager [req-6a0ef22e-2486-4fcb-816a-8cf74c7240a3 req-3e987771-85c7-42d3-a2bf-006e05456c79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:51:34 np0005465987 nova_compute[230713]: 2025-10-02 12:51:34.831 2 DEBUG oslo_concurrency.lockutils [req-6a0ef22e-2486-4fcb-816a-8cf74c7240a3 req-3e987771-85c7-42d3-a2bf-006e05456c79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:51:34 np0005465987 nova_compute[230713]: 2025-10-02 12:51:34.832 2 DEBUG oslo_concurrency.lockutils [req-6a0ef22e-2486-4fcb-816a-8cf74c7240a3 req-3e987771-85c7-42d3-a2bf-006e05456c79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:51:34 np0005465987 nova_compute[230713]: 2025-10-02 12:51:34.832 2 DEBUG oslo_concurrency.lockutils [req-6a0ef22e-2486-4fcb-816a-8cf74c7240a3 req-3e987771-85c7-42d3-a2bf-006e05456c79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:51:34 np0005465987 nova_compute[230713]: 2025-10-02 12:51:34.832 2 DEBUG nova.compute.manager [req-6a0ef22e-2486-4fcb-816a-8cf74c7240a3 req-3e987771-85c7-42d3-a2bf-006e05456c79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] No waiting events found dispatching network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:51:34 np0005465987 nova_compute[230713]: 2025-10-02 12:51:34.832 2 WARNING nova.compute.manager [req-6a0ef22e-2486-4fcb-816a-8cf74c7240a3 req-3e987771-85c7-42d3-a2bf-006e05456c79 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received unexpected event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 for instance with vm_state active and task_state None.
Oct  2 08:51:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:35.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:51:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:51:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:35 np0005465987 nova_compute[230713]: 2025-10-02 12:51:35.869 2 DEBUG nova.compute.manager [req-275dba22-89cf-4b3c-91a0-df91f7a7f213 req-9093048a-ec1b-4fcf-be9a-b2fbfa5cc5fc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-changed-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:51:35 np0005465987 nova_compute[230713]: 2025-10-02 12:51:35.870 2 DEBUG nova.compute.manager [req-275dba22-89cf-4b3c-91a0-df91f7a7f213 req-9093048a-ec1b-4fcf-be9a-b2fbfa5cc5fc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Refreshing instance network info cache due to event network-changed-0656c688-251b-4b39-a4de-f3697ea3b306. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:51:35 np0005465987 nova_compute[230713]: 2025-10-02 12:51:35.870 2 DEBUG oslo_concurrency.lockutils [req-275dba22-89cf-4b3c-91a0-df91f7a7f213 req-9093048a-ec1b-4fcf-be9a-b2fbfa5cc5fc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:51:35 np0005465987 nova_compute[230713]: 2025-10-02 12:51:35.870 2 DEBUG oslo_concurrency.lockutils [req-275dba22-89cf-4b3c-91a0-df91f7a7f213 req-9093048a-ec1b-4fcf-be9a-b2fbfa5cc5fc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:51:35 np0005465987 nova_compute[230713]: 2025-10-02 12:51:35.870 2 DEBUG nova.network.neutron [req-275dba22-89cf-4b3c-91a0-df91f7a7f213 req-9093048a-ec1b-4fcf-be9a-b2fbfa5cc5fc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Refreshing network info cache for port 0656c688-251b-4b39-a4de-f3697ea3b306 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:51:36 np0005465987 nova_compute[230713]: 2025-10-02 12:51:36.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:36.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:36 np0005465987 nova_compute[230713]: 2025-10-02 12:51:36.923 2 DEBUG nova.network.neutron [req-275dba22-89cf-4b3c-91a0-df91f7a7f213 req-9093048a-ec1b-4fcf-be9a-b2fbfa5cc5fc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updated VIF entry in instance network info cache for port 0656c688-251b-4b39-a4de-f3697ea3b306. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:51:36 np0005465987 nova_compute[230713]: 2025-10-02 12:51:36.924 2 DEBUG nova.network.neutron [req-275dba22-89cf-4b3c-91a0-df91f7a7f213 req-9093048a-ec1b-4fcf-be9a-b2fbfa5cc5fc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updating instance_info_cache with network_info: [{"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:51:36 np0005465987 nova_compute[230713]: 2025-10-02 12:51:36.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:51:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:37.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:51:37 np0005465987 nova_compute[230713]: 2025-10-02 12:51:37.234 2 DEBUG oslo_concurrency.lockutils [req-275dba22-89cf-4b3c-91a0-df91f7a7f213 req-9093048a-ec1b-4fcf-be9a-b2fbfa5cc5fc d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:51:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:38.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:39.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:40.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:41.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:41 np0005465987 nova_compute[230713]: 2025-10-02 12:51:41.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:42 np0005465987 nova_compute[230713]: 2025-10-02 12:51:41.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:42.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:43.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:44.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:51:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:45.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:51:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:51:46Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:50:61 10.100.0.7
Oct  2 08:51:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:51:46Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:50:61 10.100.0.7
Oct  2 08:51:46 np0005465987 nova_compute[230713]: 2025-10-02 12:51:46.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:51:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:46.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:51:47 np0005465987 nova_compute[230713]: 2025-10-02 12:51:47.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:47.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:47 np0005465987 podman[298864]: 2025-10-02 12:51:47.860501875 +0000 UTC m=+0.065970855 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:51:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:51:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:48.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:51:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:49.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:50.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:50 np0005465987 podman[298885]: 2025-10-02 12:51:50.840949714 +0000 UTC m=+0.052972315 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:51:50 np0005465987 podman[298886]: 2025-10-02 12:51:50.929029852 +0000 UTC m=+0.134166978 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct  2 08:51:50 np0005465987 podman[298884]: 2025-10-02 12:51:50.940948452 +0000 UTC m=+0.158573823 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:51:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:51.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:51 np0005465987 nova_compute[230713]: 2025-10-02 12:51:51.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:52 np0005465987 nova_compute[230713]: 2025-10-02 12:51:52.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:52.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:51:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:53.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:51:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:54.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:55.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:56 np0005465987 nova_compute[230713]: 2025-10-02 12:51:56.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:56.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:57 np0005465987 nova_compute[230713]: 2025-10-02 12:51:57.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:51:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:57.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:51:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:58.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:51:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:51:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:59.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:52:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:52:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:00.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:52:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:01.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:01 np0005465987 nova_compute[230713]: 2025-10-02 12:52:01.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:02 np0005465987 nova_compute[230713]: 2025-10-02 12:52:02.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:02.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:03.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:04.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:05.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:06 np0005465987 nova_compute[230713]: 2025-10-02 12:52:06.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:06.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:07 np0005465987 nova_compute[230713]: 2025-10-02 12:52:07.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:07.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:08.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:09.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:09 np0005465987 nova_compute[230713]: 2025-10-02 12:52:09.571 2 DEBUG nova.compute.manager [None req-912de000-dcdf-414f-b5a5-02d8fead3588 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:52:09 np0005465987 nova_compute[230713]: 2025-10-02 12:52:09.650 2 INFO nova.compute.manager [None req-912de000-dcdf-414f-b5a5-02d8fead3588 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] instance snapshotting#033[00m
Oct  2 08:52:09 np0005465987 nova_compute[230713]: 2025-10-02 12:52:09.999 2 INFO nova.virt.libvirt.driver [None req-912de000-dcdf-414f-b5a5-02d8fead3588 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Beginning live snapshot process#033[00m
Oct  2 08:52:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:10.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:11.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:11 np0005465987 nova_compute[230713]: 2025-10-02 12:52:11.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:12 np0005465987 nova_compute[230713]: 2025-10-02 12:52:12.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:12.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:13.146 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:52:13 np0005465987 nova_compute[230713]: 2025-10-02 12:52:13.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:13.148 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:52:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:13.149 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:52:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:13.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:52:13 np0005465987 nova_compute[230713]: 2025-10-02 12:52:13.967 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:14 np0005465987 nova_compute[230713]: 2025-10-02 12:52:14.246 2 DEBUG nova.virt.libvirt.imagebackend [None req-912de000-dcdf-414f-b5a5-02d8fead3588 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:52:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:14.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:15.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:16 np0005465987 nova_compute[230713]: 2025-10-02 12:52:16.154 2 DEBUG nova.storage.rbd_utils [None req-912de000-dcdf-414f-b5a5-02d8fead3588 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] creating snapshot(eb2565a1811844fba9deff412039a2fc) on rbd image(f5646848-ae0c-448c-9912-d5ee5734bd1e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:52:16 np0005465987 nova_compute[230713]: 2025-10-02 12:52:16.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:16.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:17 np0005465987 nova_compute[230713]: 2025-10-02 12:52:17.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:17.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e379 e379: 3 total, 3 up, 3 in
Oct  2 08:52:17 np0005465987 nova_compute[230713]: 2025-10-02 12:52:17.676 2 DEBUG nova.storage.rbd_utils [None req-912de000-dcdf-414f-b5a5-02d8fead3588 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] cloning vms/f5646848-ae0c-448c-9912-d5ee5734bd1e_disk@eb2565a1811844fba9deff412039a2fc to images/c899ce5c-6879-41fc-92e4-79bd307b464d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:52:17 np0005465987 nova_compute[230713]: 2025-10-02 12:52:17.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:17 np0005465987 nova_compute[230713]: 2025-10-02 12:52:17.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:52:17 np0005465987 nova_compute[230713]: 2025-10-02 12:52:17.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:52:17 np0005465987 nova_compute[230713]: 2025-10-02 12:52:17.996 2 DEBUG nova.storage.rbd_utils [None req-912de000-dcdf-414f-b5a5-02d8fead3588 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] flattening images/c899ce5c-6879-41fc-92e4-79bd307b464d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:52:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:18.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:18 np0005465987 podman[299054]: 2025-10-02 12:52:18.828412475 +0000 UTC m=+0.053507610 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:52:19 np0005465987 nova_compute[230713]: 2025-10-02 12:52:19.023 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:52:19 np0005465987 nova_compute[230713]: 2025-10-02 12:52:19.023 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:52:19 np0005465987 nova_compute[230713]: 2025-10-02 12:52:19.023 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:52:19 np0005465987 nova_compute[230713]: 2025-10-02 12:52:19.024 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 80938103-e53b-4780-b241-fc1e8a76f526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:19.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:20.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:21.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:21 np0005465987 nova_compute[230713]: 2025-10-02 12:52:21.224 2 DEBUG nova.storage.rbd_utils [None req-912de000-dcdf-414f-b5a5-02d8fead3588 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] removing snapshot(eb2565a1811844fba9deff412039a2fc) on rbd image(f5646848-ae0c-448c-9912-d5ee5734bd1e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:52:21 np0005465987 nova_compute[230713]: 2025-10-02 12:52:21.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:21 np0005465987 podman[299093]: 2025-10-02 12:52:21.865051194 +0000 UTC m=+0.070243149 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct  2 08:52:21 np0005465987 podman[299092]: 2025-10-02 12:52:21.874017235 +0000 UTC m=+0.081158193 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:52:21 np0005465987 podman[299091]: 2025-10-02 12:52:21.888663069 +0000 UTC m=+0.100238566 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:52:22 np0005465987 nova_compute[230713]: 2025-10-02 12:52:22.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:22.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:23.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e380 e380: 3 total, 3 up, 3 in
Oct  2 08:52:24 np0005465987 nova_compute[230713]: 2025-10-02 12:52:24.642 2 DEBUG nova.storage.rbd_utils [None req-912de000-dcdf-414f-b5a5-02d8fead3588 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] creating snapshot(snap) on rbd image(c899ce5c-6879-41fc-92e4-79bd307b464d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:52:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:24.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:25.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e381 e381: 3 total, 3 up, 3 in
Oct  2 08:52:26 np0005465987 nova_compute[230713]: 2025-10-02 12:52:26.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:52:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:26.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:52:27 np0005465987 nova_compute[230713]: 2025-10-02 12:52:27.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:27.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:27.979 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:27.979 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:27.980 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:28.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:29.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 e382: 3 total, 3 up, 3 in
Oct  2 08:52:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:30.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:52:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:31.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:52:31 np0005465987 nova_compute[230713]: 2025-10-02 12:52:31.571 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Updating instance_info_cache with network_info: [{"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:52:31 np0005465987 nova_compute[230713]: 2025-10-02 12:52:31.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:32 np0005465987 nova_compute[230713]: 2025-10-02 12:52:32.042 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:52:32 np0005465987 nova_compute[230713]: 2025-10-02 12:52:32.042 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:52:32 np0005465987 nova_compute[230713]: 2025-10-02 12:52:32.043 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:32 np0005465987 nova_compute[230713]: 2025-10-02 12:52:32.043 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:32 np0005465987 nova_compute[230713]: 2025-10-02 12:52:32.043 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:32 np0005465987 nova_compute[230713]: 2025-10-02 12:52:32.043 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:32 np0005465987 nova_compute[230713]: 2025-10-02 12:52:32.044 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:32 np0005465987 nova_compute[230713]: 2025-10-02 12:52:32.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:32 np0005465987 nova_compute[230713]: 2025-10-02 12:52:32.632 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:32 np0005465987 nova_compute[230713]: 2025-10-02 12:52:32.632 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:32 np0005465987 nova_compute[230713]: 2025-10-02 12:52:32.633 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:32 np0005465987 nova_compute[230713]: 2025-10-02 12:52:32.633 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:52:32 np0005465987 nova_compute[230713]: 2025-10-02 12:52:32.633 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:32.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:52:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3443426736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:52:33 np0005465987 nova_compute[230713]: 2025-10-02 12:52:33.074 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:33.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:33 np0005465987 nova_compute[230713]: 2025-10-02 12:52:33.810 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:52:33 np0005465987 nova_compute[230713]: 2025-10-02 12:52:33.811 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:52:33 np0005465987 nova_compute[230713]: 2025-10-02 12:52:33.815 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:52:33 np0005465987 nova_compute[230713]: 2025-10-02 12:52:33.816 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:52:34 np0005465987 nova_compute[230713]: 2025-10-02 12:52:34.020 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:52:34 np0005465987 nova_compute[230713]: 2025-10-02 12:52:34.022 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3906MB free_disk=20.714889526367188GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:52:34 np0005465987 nova_compute[230713]: 2025-10-02 12:52:34.022 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:34 np0005465987 nova_compute[230713]: 2025-10-02 12:52:34.022 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:34.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:34 np0005465987 nova_compute[230713]: 2025-10-02 12:52:34.976 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 80938103-e53b-4780-b241-fc1e8a76f526 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:52:34 np0005465987 nova_compute[230713]: 2025-10-02 12:52:34.976 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance f5646848-ae0c-448c-9912-d5ee5734bd1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:52:34 np0005465987 nova_compute[230713]: 2025-10-02 12:52:34.977 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:52:34 np0005465987 nova_compute[230713]: 2025-10-02 12:52:34.977 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:52:35 np0005465987 nova_compute[230713]: 2025-10-02 12:52:35.062 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:35.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:52:35 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1840656855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:52:35 np0005465987 nova_compute[230713]: 2025-10-02 12:52:35.482 2 INFO nova.virt.libvirt.driver [None req-912de000-dcdf-414f-b5a5-02d8fead3588 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Snapshot image upload complete#033[00m
Oct  2 08:52:35 np0005465987 nova_compute[230713]: 2025-10-02 12:52:35.483 2 INFO nova.compute.manager [None req-912de000-dcdf-414f-b5a5-02d8fead3588 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Took 25.83 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:52:35 np0005465987 nova_compute[230713]: 2025-10-02 12:52:35.489 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:35 np0005465987 nova_compute[230713]: 2025-10-02 12:52:35.495 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:52:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:36 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:52:36 np0005465987 nova_compute[230713]: 2025-10-02 12:52:36.705 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:52:36 np0005465987 nova_compute[230713]: 2025-10-02 12:52:36.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:36.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:37 np0005465987 nova_compute[230713]: 2025-10-02 12:52:37.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:37.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:37 np0005465987 nova_compute[230713]: 2025-10-02 12:52:37.460 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:52:37 np0005465987 nova_compute[230713]: 2025-10-02 12:52:37.460 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:37 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:52:37 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:52:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:38.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:39.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:40 np0005465987 nova_compute[230713]: 2025-10-02 12:52:40.383 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:40 np0005465987 nova_compute[230713]: 2025-10-02 12:52:40.383 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:40 np0005465987 nova_compute[230713]: 2025-10-02 12:52:40.384 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:52:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:40.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:41.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:41 np0005465987 nova_compute[230713]: 2025-10-02 12:52:41.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:42 np0005465987 nova_compute[230713]: 2025-10-02 12:52:42.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:42.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:43.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:43 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:52:43 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:52:44 np0005465987 nova_compute[230713]: 2025-10-02 12:52:44.550 2 DEBUG nova.compute.manager [req-2dd4bca2-8faa-4268-acde-581bca510c2d req-a18472b0-975a-4457-b851-871c44f0337b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Received event network-changed-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:52:44 np0005465987 nova_compute[230713]: 2025-10-02 12:52:44.551 2 DEBUG nova.compute.manager [req-2dd4bca2-8faa-4268-acde-581bca510c2d req-a18472b0-975a-4457-b851-871c44f0337b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Refreshing instance network info cache due to event network-changed-27dd1a62-ec6c-4786-bb38-37a5553ad6e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:52:44 np0005465987 nova_compute[230713]: 2025-10-02 12:52:44.551 2 DEBUG oslo_concurrency.lockutils [req-2dd4bca2-8faa-4268-acde-581bca510c2d req-a18472b0-975a-4457-b851-871c44f0337b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:52:44 np0005465987 nova_compute[230713]: 2025-10-02 12:52:44.551 2 DEBUG oslo_concurrency.lockutils [req-2dd4bca2-8faa-4268-acde-581bca510c2d req-a18472b0-975a-4457-b851-871c44f0337b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:52:44 np0005465987 nova_compute[230713]: 2025-10-02 12:52:44.551 2 DEBUG nova.network.neutron [req-2dd4bca2-8faa-4268-acde-581bca510c2d req-a18472b0-975a-4457-b851-871c44f0337b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Refreshing network info cache for port 27dd1a62-ec6c-4786-bb38-37a5553ad6e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:52:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:52:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:44.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:52:44 np0005465987 nova_compute[230713]: 2025-10-02 12:52:44.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:45.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.385 2 DEBUG oslo_concurrency.lockutils [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "80938103-e53b-4780-b241-fc1e8a76f526" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.385 2 DEBUG oslo_concurrency.lockutils [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.385 2 DEBUG oslo_concurrency.lockutils [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "80938103-e53b-4780-b241-fc1e8a76f526-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.386 2 DEBUG oslo_concurrency.lockutils [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.386 2 DEBUG oslo_concurrency.lockutils [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.387 2 INFO nova.compute.manager [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Terminating instance#033[00m
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.387 2 DEBUG nova.compute.manager [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:52:46 np0005465987 kernel: tap27dd1a62-ec (unregistering): left promiscuous mode
Oct  2 08:52:46 np0005465987 NetworkManager[44910]: <info>  [1759409566.6280] device (tap27dd1a62-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:52:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:52:46Z|00705|binding|INFO|Releasing lport 27dd1a62-ec6c-4786-bb38-37a5553ad6e1 from this chassis (sb_readonly=0)
Oct  2 08:52:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:52:46Z|00706|binding|INFO|Setting lport 27dd1a62-ec6c-4786-bb38-37a5553ad6e1 down in Southbound
Oct  2 08:52:46 np0005465987 ovn_controller[129172]: 2025-10-02T12:52:46Z|00707|binding|INFO|Removing iface tap27dd1a62-ec ovn-installed in OVS
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:46 np0005465987 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000b3.scope: Deactivated successfully.
Oct  2 08:52:46 np0005465987 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000b3.scope: Consumed 18.465s CPU time.
Oct  2 08:52:46 np0005465987 systemd-machined[188335]: Machine qemu-81-instance-000000b3 terminated.
Oct  2 08:52:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:46.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.826 2 INFO nova.virt.libvirt.driver [-] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Instance destroyed successfully.#033[00m
Oct  2 08:52:46 np0005465987 nova_compute[230713]: 2025-10-02 12:52:46.826 2 DEBUG nova.objects.instance [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lazy-loading 'resources' on Instance uuid 80938103-e53b-4780-b241-fc1e8a76f526 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.079 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:fa:0e 10.100.0.12'], port_security=['fa:16:3e:0a:fa:0e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '80938103-e53b-4780-b241-fc1e8a76f526', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da548d8-bbbf-425f-9d2c-5b231ba235e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ade962c517a483dbfe4bb13386f0006', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e949d296-273e-41b7-8397-6bc2e09daa3e eab0f3a3-d674-4d86-8dcb-4153be335100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb6054ec-09a9-4745-8f7e-27b822d71fe5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=27dd1a62-ec6c-4786-bb38-37a5553ad6e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.080 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 27dd1a62-ec6c-4786-bb38-37a5553ad6e1 in datapath 9da548d8-bbbf-425f-9d2c-5b231ba235e1 unbound from our chassis#033[00m
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.082 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9da548d8-bbbf-425f-9d2c-5b231ba235e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.083 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e25a4186-76db-4fc6-8cb2-0a7b09eff18b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.084 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1 namespace which is not needed anymore#033[00m
Oct  2 08:52:47 np0005465987 neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1[297702]: [NOTICE]   (297706) : haproxy version is 2.8.14-c23fe91
Oct  2 08:52:47 np0005465987 neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1[297702]: [NOTICE]   (297706) : path to executable is /usr/sbin/haproxy
Oct  2 08:52:47 np0005465987 neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1[297702]: [WARNING]  (297706) : Exiting Master process...
Oct  2 08:52:47 np0005465987 neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1[297702]: [ALERT]    (297706) : Current worker (297708) exited with code 143 (Terminated)
Oct  2 08:52:47 np0005465987 neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1[297702]: [WARNING]  (297706) : All workers exited. Exiting... (0)
Oct  2 08:52:47 np0005465987 systemd[1]: libpod-b38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc.scope: Deactivated successfully.
Oct  2 08:52:47 np0005465987 podman[299430]: 2025-10-02 12:52:47.230910976 +0000 UTC m=+0.046085081 container died b38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:52:47 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc-userdata-shm.mount: Deactivated successfully.
Oct  2 08:52:47 np0005465987 systemd[1]: var-lib-containers-storage-overlay-4bbe1399605ef4d20915a75364b1e9b18eb3978ef1b3df88295f9c7b14a78841-merged.mount: Deactivated successfully.
Oct  2 08:52:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:47.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:47 np0005465987 podman[299430]: 2025-10-02 12:52:47.267487929 +0000 UTC m=+0.082662034 container cleanup b38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:52:47 np0005465987 systemd[1]: libpod-conmon-b38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc.scope: Deactivated successfully.
Oct  2 08:52:47 np0005465987 podman[299460]: 2025-10-02 12:52:47.332891627 +0000 UTC m=+0.041013903 container remove b38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.339 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f4076396-4905-4fc3-98cc-c22222fb83d3]: (4, ('Thu Oct  2 12:52:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1 (b38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc)\nb38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc\nThu Oct  2 12:52:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1 (b38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc)\nb38246e4592d7ff4d902266e4ddad037fd430ebe08b5aadaedf71994576328dc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.342 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[643d4e86-9d90-45dc-96aa-d7b741e353e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.343 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9da548d8-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:47 np0005465987 kernel: tap9da548d8-b0: left promiscuous mode
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.364 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[723d329f-30d9-4973-892e-3a50bedfe886]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.381 2 DEBUG oslo_concurrency.lockutils [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.382 2 DEBUG oslo_concurrency.lockutils [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.389 2 DEBUG nova.virt.libvirt.vif [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:50:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-471874977',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1031871880-access_point-471874977',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1031871880-ac',id=179,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNswVb0qzE60u4BIwTdoxN1wCq1mjTg0LI0f0dRfL9Dy5OWE5514P4onN+QOd1Rw6rOzpCQmFmdmXCTuqTp4M4QoUJGeM1vbbJ7M/8I3p45f/7+fxTSWHkhjk1j/aCR05g==',key_name='tempest-TestSecurityGroupsBasicOps-605389685',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5ade962c517a483dbfe4bb13386f0006',ramdisk_id='',reservation_id='r-g4qiumxb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1031871880',owner_user_name='tempest-TestSecurityGroupsBasicOps-1031871880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:50:42Z,user_data=None,user_id='16730f38111542e58a05fb4deb2b3914',uuid=80938103-e53b-4780-b241-fc1e8a76f526,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.390 2 DEBUG nova.network.os_vif_util [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converting VIF {"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.391 2 DEBUG nova.network.os_vif_util [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0a:fa:0e,bridge_name='br-int',has_traffic_filtering=True,id=27dd1a62-ec6c-4786-bb38-37a5553ad6e1,network=Network(9da548d8-bbbf-425f-9d2c-5b231ba235e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27dd1a62-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.391 2 DEBUG os_vif [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:fa:0e,bridge_name='br-int',has_traffic_filtering=True,id=27dd1a62-ec6c-4786-bb38-37a5553ad6e1,network=Network(9da548d8-bbbf-425f-9d2c-5b231ba235e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27dd1a62-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.394 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27dd1a62-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.398 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9d86b997-187d-4a7a-91ff-0346fc52eeaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.399 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ca79c6ee-9660-4af5-8fe0-5d6a3c697d07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:47 np0005465987 nova_compute[230713]: 2025-10-02 12:52:47.401 2 INFO os_vif [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:fa:0e,bridge_name='br-int',has_traffic_filtering=True,id=27dd1a62-ec6c-4786-bb38-37a5553ad6e1,network=Network(9da548d8-bbbf-425f-9d2c-5b231ba235e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27dd1a62-ec')#033[00m
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.413 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cd752ecf-7ee7-439e-b934-78f65ffd9236]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750875, 'reachable_time': 30519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299481, 'error': None, 'target': 'ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:47 np0005465987 systemd[1]: run-netns-ovnmeta\x2d9da548d8\x2dbbbf\x2d425f\x2d9d2c\x2d5b231ba235e1.mount: Deactivated successfully.
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.419 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9da548d8-bbbf-425f-9d2c-5b231ba235e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:52:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:52:47.419 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[3336d439-00fa-4fbe-8276-618d9eb43b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:48 np0005465987 nova_compute[230713]: 2025-10-02 12:52:48.033 2 DEBUG nova.network.neutron [req-2dd4bca2-8faa-4268-acde-581bca510c2d req-a18472b0-975a-4457-b851-871c44f0337b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Updated VIF entry in instance network info cache for port 27dd1a62-ec6c-4786-bb38-37a5553ad6e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:52:48 np0005465987 nova_compute[230713]: 2025-10-02 12:52:48.034 2 DEBUG nova.network.neutron [req-2dd4bca2-8faa-4268-acde-581bca510c2d req-a18472b0-975a-4457-b851-871c44f0337b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Updating instance_info_cache with network_info: [{"id": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "address": "fa:16:3e:0a:fa:0e", "network": {"id": "9da548d8-bbbf-425f-9d2c-5b231ba235e1", "bridge": "br-int", "label": "tempest-network-smoke--587721749", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ade962c517a483dbfe4bb13386f0006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27dd1a62-ec", "ovs_interfaceid": "27dd1a62-ec6c-4786-bb38-37a5553ad6e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:52:48 np0005465987 nova_compute[230713]: 2025-10-02 12:52:48.296 2 DEBUG nova.objects.instance [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'flavor' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:48.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:49.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:49 np0005465987 nova_compute[230713]: 2025-10-02 12:52:49.393 2 DEBUG oslo_concurrency.lockutils [req-2dd4bca2-8faa-4268-acde-581bca510c2d req-a18472b0-975a-4457-b851-871c44f0337b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-80938103-e53b-4780-b241-fc1e8a76f526" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:52:49 np0005465987 nova_compute[230713]: 2025-10-02 12:52:49.722 2 INFO nova.virt.libvirt.driver [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Deleting instance files /var/lib/nova/instances/80938103-e53b-4780-b241-fc1e8a76f526_del#033[00m
Oct  2 08:52:49 np0005465987 nova_compute[230713]: 2025-10-02 12:52:49.723 2 INFO nova.virt.libvirt.driver [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Deletion of /var/lib/nova/instances/80938103-e53b-4780-b241-fc1e8a76f526_del complete#033[00m
Oct  2 08:52:49 np0005465987 podman[299501]: 2025-10-02 12:52:49.838296346 +0000 UTC m=+0.050625443 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 08:52:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:50.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:51.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:51 np0005465987 nova_compute[230713]: 2025-10-02 12:52:51.447 2 DEBUG oslo_concurrency.lockutils [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 4.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:51 np0005465987 nova_compute[230713]: 2025-10-02 12:52:51.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:52 np0005465987 nova_compute[230713]: 2025-10-02 12:52:52.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:52.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:52 np0005465987 podman[299523]: 2025-10-02 12:52:52.839325238 +0000 UTC m=+0.052698868 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:52:52 np0005465987 podman[299521]: 2025-10-02 12:52:52.8572486 +0000 UTC m=+0.075349217 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:52:52 np0005465987 podman[299522]: 2025-10-02 12:52:52.857796924 +0000 UTC m=+0.069890750 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:52:53 np0005465987 nova_compute[230713]: 2025-10-02 12:52:53.044 2 INFO nova.compute.manager [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Took 6.66 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:52:53 np0005465987 nova_compute[230713]: 2025-10-02 12:52:53.044 2 DEBUG oslo.service.loopingcall [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:52:53 np0005465987 nova_compute[230713]: 2025-10-02 12:52:53.044 2 DEBUG nova.compute.manager [-] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:52:53 np0005465987 nova_compute[230713]: 2025-10-02 12:52:53.045 2 DEBUG nova.network.neutron [-] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:52:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:53.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:54.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:52:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 51K writes, 195K keys, 51K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s#012Cumulative WAL: 51K writes, 18K syncs, 2.74 writes per sync, written: 0.19 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6403 writes, 23K keys, 6403 commit groups, 1.0 writes per commit group, ingest: 25.11 MB, 0.04 MB/s#012Interval WAL: 6403 writes, 2590 syncs, 2.47 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:52:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:55.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:56.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:56 np0005465987 nova_compute[230713]: 2025-10-02 12:52:56.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:52:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:57.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:52:57 np0005465987 nova_compute[230713]: 2025-10-02 12:52:57.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.087 2 DEBUG oslo_concurrency.lockutils [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.088 2 DEBUG oslo_concurrency.lockutils [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.088 2 INFO nova.compute.manager [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Attaching volume 7708c28d-8dd9-455b-9cf8-550d89fddce1 to /dev/vdb#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.355 2 DEBUG os_brick.utils [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.356 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.367 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.367 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[75143654-a40f-4281-802e-5685897fb142]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.368 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.377 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.377 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[27b1a5f1-dac8-4809-b825-c4f2f6f9f24e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.379 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.389 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.389 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb3a693-6636-45ce-b17b-9dab03eff2ca]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.391 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[ec0730c4-8e07-46e6-8f85-128afce6543a]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.391 2 DEBUG oslo_concurrency.processutils [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.419 2 DEBUG oslo_concurrency.processutils [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.422 2 DEBUG os_brick.initiator.connectors.lightos [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.422 2 DEBUG os_brick.initiator.connectors.lightos [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.422 2 DEBUG os_brick.initiator.connectors.lightos [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.423 2 DEBUG os_brick.utils [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.423 2 DEBUG nova.virt.block_device [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updating existing volume attachment record: 02900367-394e-4e77-ba2a-547333f78a5a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:52:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:58.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.793 2 DEBUG nova.compute.manager [req-992ff91c-47bc-40de-85db-f38a8798dffa req-a632ca8a-a7ac-4dc2-8b80-a58ea5986de0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Received event network-vif-unplugged-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.794 2 DEBUG oslo_concurrency.lockutils [req-992ff91c-47bc-40de-85db-f38a8798dffa req-a632ca8a-a7ac-4dc2-8b80-a58ea5986de0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "80938103-e53b-4780-b241-fc1e8a76f526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.794 2 DEBUG oslo_concurrency.lockutils [req-992ff91c-47bc-40de-85db-f38a8798dffa req-a632ca8a-a7ac-4dc2-8b80-a58ea5986de0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.794 2 DEBUG oslo_concurrency.lockutils [req-992ff91c-47bc-40de-85db-f38a8798dffa req-a632ca8a-a7ac-4dc2-8b80-a58ea5986de0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.795 2 DEBUG nova.compute.manager [req-992ff91c-47bc-40de-85db-f38a8798dffa req-a632ca8a-a7ac-4dc2-8b80-a58ea5986de0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] No waiting events found dispatching network-vif-unplugged-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:52:58 np0005465987 nova_compute[230713]: 2025-10-02 12:52:58.795 2 DEBUG nova.compute.manager [req-992ff91c-47bc-40de-85db-f38a8798dffa req-a632ca8a-a7ac-4dc2-8b80-a58ea5986de0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Received event network-vif-unplugged-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:52:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:52:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:59.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:00.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:53:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:01.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:53:01 np0005465987 nova_compute[230713]: 2025-10-02 12:53:01.826 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409566.8242157, 80938103-e53b-4780-b241-fc1e8a76f526 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:53:01 np0005465987 nova_compute[230713]: 2025-10-02 12:53:01.827 2 INFO nova.compute.manager [-] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:53:01 np0005465987 nova_compute[230713]: 2025-10-02 12:53:01.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:02 np0005465987 nova_compute[230713]: 2025-10-02 12:53:02.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:53:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:02.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:53:03 np0005465987 nova_compute[230713]: 2025-10-02 12:53:03.199 2 DEBUG nova.compute.manager [None req-7984ee19-7710-4774-b1bf-38ee328de2e2 - - - - - -] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:53:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:53:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:03.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:53:03 np0005465987 nova_compute[230713]: 2025-10-02 12:53:03.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:03.363 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:53:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:03.364 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:53:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:04.365 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:53:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:04.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:53:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:05.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:06 np0005465987 nova_compute[230713]: 2025-10-02 12:53:06.241 2 DEBUG nova.objects.instance [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'flavor' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:53:06 np0005465987 nova_compute[230713]: 2025-10-02 12:53:06.776 2 DEBUG nova.compute.manager [req-f0603800-ab8b-4eeb-9b2e-be70e0d60897 req-9bc70355-142b-4ac5-a662-f59e42210404 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Received event network-vif-deleted-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:53:06 np0005465987 nova_compute[230713]: 2025-10-02 12:53:06.776 2 INFO nova.compute.manager [req-f0603800-ab8b-4eeb-9b2e-be70e0d60897 req-9bc70355-142b-4ac5-a662-f59e42210404 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Neutron deleted interface 27dd1a62-ec6c-4786-bb38-37a5553ad6e1; detaching it from the instance and deleting it from the info cache
Oct  2 08:53:06 np0005465987 nova_compute[230713]: 2025-10-02 12:53:06.776 2 DEBUG nova.network.neutron [req-f0603800-ab8b-4eeb-9b2e-be70e0d60897 req-9bc70355-142b-4ac5-a662-f59e42210404 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:53:06 np0005465987 nova_compute[230713]: 2025-10-02 12:53:06.778 2 DEBUG nova.compute.manager [req-93d0e7eb-7dcb-41e7-b7a3-bacda098243c req-6a40e259-debf-4980-9ae5-afa111bf02f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Received event network-vif-plugged-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:53:06 np0005465987 nova_compute[230713]: 2025-10-02 12:53:06.779 2 DEBUG oslo_concurrency.lockutils [req-93d0e7eb-7dcb-41e7-b7a3-bacda098243c req-6a40e259-debf-4980-9ae5-afa111bf02f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "80938103-e53b-4780-b241-fc1e8a76f526-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:53:06 np0005465987 nova_compute[230713]: 2025-10-02 12:53:06.779 2 DEBUG oslo_concurrency.lockutils [req-93d0e7eb-7dcb-41e7-b7a3-bacda098243c req-6a40e259-debf-4980-9ae5-afa111bf02f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:53:06 np0005465987 nova_compute[230713]: 2025-10-02 12:53:06.779 2 DEBUG oslo_concurrency.lockutils [req-93d0e7eb-7dcb-41e7-b7a3-bacda098243c req-6a40e259-debf-4980-9ae5-afa111bf02f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:53:06 np0005465987 nova_compute[230713]: 2025-10-02 12:53:06.779 2 DEBUG nova.compute.manager [req-93d0e7eb-7dcb-41e7-b7a3-bacda098243c req-6a40e259-debf-4980-9ae5-afa111bf02f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] No waiting events found dispatching network-vif-plugged-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:53:06 np0005465987 nova_compute[230713]: 2025-10-02 12:53:06.780 2 WARNING nova.compute.manager [req-93d0e7eb-7dcb-41e7-b7a3-bacda098243c req-6a40e259-debf-4980-9ae5-afa111bf02f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Received unexpected event network-vif-plugged-27dd1a62-ec6c-4786-bb38-37a5553ad6e1 for instance with vm_state active and task_state deleting.
Oct  2 08:53:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:06.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:06 np0005465987 nova_compute[230713]: 2025-10-02 12:53:06.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:07 np0005465987 nova_compute[230713]: 2025-10-02 12:53:07.042 2 DEBUG nova.network.neutron [-] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:53:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:53:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:07.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:53:07 np0005465987 nova_compute[230713]: 2025-10-02 12:53:07.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:08 np0005465987 nova_compute[230713]: 2025-10-02 12:53:08.071 2 DEBUG nova.virt.libvirt.driver [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Attempting to attach volume 7708c28d-8dd9-455b-9cf8-550d89fddce1 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct  2 08:53:08 np0005465987 nova_compute[230713]: 2025-10-02 12:53:08.075 2 DEBUG nova.virt.libvirt.guest [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:53:08 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:53:08 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-7708c28d-8dd9-455b-9cf8-550d89fddce1">
Oct  2 08:53:08 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:53:08 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:53:08 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:53:08 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:53:08 np0005465987 nova_compute[230713]:  <auth username="openstack">
Oct  2 08:53:08 np0005465987 nova_compute[230713]:    <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:53:08 np0005465987 nova_compute[230713]:  </auth>
Oct  2 08:53:08 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:53:08 np0005465987 nova_compute[230713]:  <serial>7708c28d-8dd9-455b-9cf8-550d89fddce1</serial>
Oct  2 08:53:08 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:53:08 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct  2 08:53:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:08.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:09.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:10 np0005465987 nova_compute[230713]: 2025-10-02 12:53:10.237 2 INFO nova.compute.manager [-] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Took 17.19 seconds to deallocate network for instance.
Oct  2 08:53:10 np0005465987 nova_compute[230713]: 2025-10-02 12:53:10.242 2 DEBUG nova.compute.manager [req-f0603800-ab8b-4eeb-9b2e-be70e0d60897 req-9bc70355-142b-4ac5-a662-f59e42210404 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80938103-e53b-4780-b241-fc1e8a76f526] Detach interface failed, port_id=27dd1a62-ec6c-4786-bb38-37a5553ad6e1, reason: Instance 80938103-e53b-4780-b241-fc1e8a76f526 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct  2 08:53:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:10.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:11.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:11 np0005465987 nova_compute[230713]: 2025-10-02 12:53:11.785 2 DEBUG nova.virt.libvirt.driver [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:53:11 np0005465987 nova_compute[230713]: 2025-10-02 12:53:11.786 2 DEBUG nova.virt.libvirt.driver [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:53:11 np0005465987 nova_compute[230713]: 2025-10-02 12:53:11.786 2 DEBUG nova.virt.libvirt.driver [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:53:11 np0005465987 nova_compute[230713]: 2025-10-02 12:53:11.786 2 DEBUG nova.virt.libvirt.driver [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No VIF found with MAC fa:16:3e:2d:50:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 08:53:11 np0005465987 nova_compute[230713]: 2025-10-02 12:53:11.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:12 np0005465987 nova_compute[230713]: 2025-10-02 12:53:12.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:12 np0005465987 nova_compute[230713]: 2025-10-02 12:53:12.715 2 DEBUG oslo_concurrency.lockutils [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:53:12 np0005465987 nova_compute[230713]: 2025-10-02 12:53:12.715 2 DEBUG oslo_concurrency.lockutils [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:53:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:12.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:13.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:13 np0005465987 nova_compute[230713]: 2025-10-02 12:53:13.643 2 DEBUG oslo_concurrency.processutils [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:53:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:53:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2364978983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:53:14 np0005465987 nova_compute[230713]: 2025-10-02 12:53:14.082 2 DEBUG oslo_concurrency.processutils [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:53:14 np0005465987 nova_compute[230713]: 2025-10-02 12:53:14.089 2 DEBUG nova.compute.provider_tree [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:53:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:14.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:15.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:15 np0005465987 nova_compute[230713]: 2025-10-02 12:53:15.829 2 DEBUG nova.scheduler.client.report [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:53:15 np0005465987 nova_compute[230713]: 2025-10-02 12:53:15.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:53:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:16.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:16 np0005465987 nova_compute[230713]: 2025-10-02 12:53:16.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:17.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:17 np0005465987 nova_compute[230713]: 2025-10-02 12:53:17.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:17 np0005465987 nova_compute[230713]: 2025-10-02 12:53:17.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:53:17 np0005465987 nova_compute[230713]: 2025-10-02 12:53:17.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:53:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:53:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:18.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:53:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:53:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:19.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:53:20 np0005465987 nova_compute[230713]: 2025-10-02 12:53:20.340 2 DEBUG oslo_concurrency.lockutils [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 7.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:53:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:20.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:20 np0005465987 podman[299636]: 2025-10-02 12:53:20.834527158 +0000 UTC m=+0.052141782 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:53:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:21.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:21 np0005465987 nova_compute[230713]: 2025-10-02 12:53:21.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:22 np0005465987 nova_compute[230713]: 2025-10-02 12:53:22.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:22.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:53:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:23.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:53:23 np0005465987 podman[299659]: 2025-10-02 12:53:23.856528204 +0000 UTC m=+0.066140080 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct  2 08:53:23 np0005465987 podman[299657]: 2025-10-02 12:53:23.863177752 +0000 UTC m=+0.084063361 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:53:23 np0005465987 podman[299658]: 2025-10-02 12:53:23.874379473 +0000 UTC m=+0.086571358 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:53:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:24.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:25.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:26 np0005465987 nova_compute[230713]: 2025-10-02 12:53:26.399 2 INFO nova.scheduler.client.report [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Deleted allocations for instance 80938103-e53b-4780-b241-fc1e8a76f526#033[00m
Oct  2 08:53:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:53:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:26.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:53:26 np0005465987 nova_compute[230713]: 2025-10-02 12:53:26.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:27.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:27 np0005465987 nova_compute[230713]: 2025-10-02 12:53:27.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:27.980 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:27.980 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:27.980 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:28 np0005465987 nova_compute[230713]: 2025-10-02 12:53:28.343 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:28 np0005465987 nova_compute[230713]: 2025-10-02 12:53:28.344 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:28 np0005465987 nova_compute[230713]: 2025-10-02 12:53:28.344 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:53:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:28.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:29.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:30 np0005465987 nova_compute[230713]: 2025-10-02 12:53:30.174 2 DEBUG oslo_concurrency.lockutils [None req-5bedba27-f73f-4b3c-84a1-fbca82a78ea4 16730f38111542e58a05fb4deb2b3914 5ade962c517a483dbfe4bb13386f0006 - - default default] Lock "80938103-e53b-4780-b241-fc1e8a76f526" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 43.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:53:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:30.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:53:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:53:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 13K writes, 67K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1647 writes, 8015 keys, 1647 commit groups, 1.0 writes per commit group, ingest: 16.11 MB, 0.03 MB/s#012Interval WAL: 1647 writes, 1647 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     56.0      1.44              0.25        42    0.034       0      0       0.0       0.0#012  L6      1/0   10.02 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0    113.9     97.0      4.17              1.20        41    0.102    279K    22K       0.0       0.0#012 Sum      1/0   10.02 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     84.7     86.5      5.61              1.45        83    0.068    279K    22K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.4    149.0    144.8      0.52              0.23        12    0.044     55K   3123       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    113.9     97.0      4.17              1.20        41    0.102    279K    22K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     56.1      1.44              0.25        41    0.035       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.079, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.47 GB write, 0.10 MB/s write, 0.46 GB read, 0.10 MB/s read, 5.6 seconds#012Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9644871f0#2 capacity: 304.00 MB usage: 51.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000288 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2983,49.70 MB,16.3502%) FilterBlock(83,787.42 KB,0.25295%) IndexBlock(83,1.31 MB,0.42954%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 08:53:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:53:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:31.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:53:31 np0005465987 ovn_controller[129172]: 2025-10-02T12:53:31Z|00708|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Oct  2 08:53:31 np0005465987 nova_compute[230713]: 2025-10-02 12:53:31.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:31 np0005465987 nova_compute[230713]: 2025-10-02 12:53:31.987 2 DEBUG oslo_concurrency.lockutils [None req-e6afc395-07cc-45f9-bf55-f65f786b59cf 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 33.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:32 np0005465987 nova_compute[230713]: 2025-10-02 12:53:32.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:32.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:33.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:53:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:34.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:53:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:35.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:36 np0005465987 nova_compute[230713]: 2025-10-02 12:53:36.582 2 INFO nova.compute.manager [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Rescuing#033[00m
Oct  2 08:53:36 np0005465987 nova_compute[230713]: 2025-10-02 12:53:36.582 2 DEBUG oslo_concurrency.lockutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:53:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:36.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:53:36 np0005465987 nova_compute[230713]: 2025-10-02 12:53:36.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:53:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:37.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:53:37 np0005465987 nova_compute[230713]: 2025-10-02 12:53:37.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:38.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:53:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:39.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:53:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:40.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:40.865 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:53:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:40.866 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:53:40 np0005465987 nova_compute[230713]: 2025-10-02 12:53:40.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.123 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updating instance_info_cache with network_info: [{"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.174 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.174 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.174 2 DEBUG oslo_concurrency.lockutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquired lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.175 2 DEBUG nova.network.neutron [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.176 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.177 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.177 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.177 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.177 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.178 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.178 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.178 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.224 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.224 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.225 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.225 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.225 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:53:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:41.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:53:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:53:41 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2715256421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.670 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.832 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.833 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.833 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.991 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.992 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4145MB free_disk=20.760520935058594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.993 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:41 np0005465987 nova_compute[230713]: 2025-10-02 12:53:41.993 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:42 np0005465987 nova_compute[230713]: 2025-10-02 12:53:42.123 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance f5646848-ae0c-448c-9912-d5ee5734bd1e actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:53:42 np0005465987 nova_compute[230713]: 2025-10-02 12:53:42.123 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:53:42 np0005465987 nova_compute[230713]: 2025-10-02 12:53:42.123 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:53:42 np0005465987 nova_compute[230713]: 2025-10-02 12:53:42.166 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:42 np0005465987 nova_compute[230713]: 2025-10-02 12:53:42.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:53:42 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1226854057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:53:42 np0005465987 nova_compute[230713]: 2025-10-02 12:53:42.614 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:42 np0005465987 nova_compute[230713]: 2025-10-02 12:53:42.620 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:53:42 np0005465987 nova_compute[230713]: 2025-10-02 12:53:42.643 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:53:42 np0005465987 nova_compute[230713]: 2025-10-02 12:53:42.688 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:53:42 np0005465987 nova_compute[230713]: 2025-10-02 12:53:42.689 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:42.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:53:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:43.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:53:43 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:53:43 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:53:43 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:53:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:43.869 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:44.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:45 np0005465987 nova_compute[230713]: 2025-10-02 12:53:45.080 2 DEBUG nova.network.neutron [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updating instance_info_cache with network_info: [{"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:45 np0005465987 nova_compute[230713]: 2025-10-02 12:53:45.236 2 DEBUG oslo_concurrency.lockutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Releasing lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:53:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:45.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:53:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:45 np0005465987 nova_compute[230713]: 2025-10-02 12:53:45.709 2 DEBUG nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:53:46 np0005465987 nova_compute[230713]: 2025-10-02 12:53:46.681 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:46.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:46 np0005465987 nova_compute[230713]: 2025-10-02 12:53:46.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:53:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:47.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:53:47 np0005465987 nova_compute[230713]: 2025-10-02 12:53:47.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:47 np0005465987 kernel: tap0656c688-25 (unregistering): left promiscuous mode
Oct  2 08:53:47 np0005465987 NetworkManager[44910]: <info>  [1759409627.9707] device (tap0656c688-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:53:47 np0005465987 ovn_controller[129172]: 2025-10-02T12:53:47Z|00709|binding|INFO|Releasing lport 0656c688-251b-4b39-a4de-f3697ea3b306 from this chassis (sb_readonly=0)
Oct  2 08:53:47 np0005465987 ovn_controller[129172]: 2025-10-02T12:53:47Z|00710|binding|INFO|Setting lport 0656c688-251b-4b39-a4de-f3697ea3b306 down in Southbound
Oct  2 08:53:47 np0005465987 ovn_controller[129172]: 2025-10-02T12:53:47Z|00711|binding|INFO|Removing iface tap0656c688-25 ovn-installed in OVS
Oct  2 08:53:47 np0005465987 nova_compute[230713]: 2025-10-02 12:53:47.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:47 np0005465987 nova_compute[230713]: 2025-10-02 12:53:47.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:47.993 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:50:61 10.100.0.7'], port_security=['fa:16:3e:2d:50:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f5646848-ae0c-448c-9912-d5ee5734bd1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e4ff16-1388-40c7-a27a-83a3b4869808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a442bc513e14406b73e96e70396e6c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd6795e4e-d84d-4947-be00-1b032e1a18b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2acef0-eb35-44b4-ad52-c0266ea4784a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0656c688-251b-4b39-a4de-f3697ea3b306) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:53:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:47.994 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0656c688-251b-4b39-a4de-f3697ea3b306 in datapath 48e4ff16-1388-40c7-a27a-83a3b4869808 unbound from our chassis#033[00m
Oct  2 08:53:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:47.996 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48e4ff16-1388-40c7-a27a-83a3b4869808, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:53:47 np0005465987 nova_compute[230713]: 2025-10-02 12:53:47.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:47.997 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3c01a1e0-4b04-43f1-a972-39ea71530bf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:47.998 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 namespace which is not needed anymore#033[00m
Oct  2 08:53:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:53:48Z|00712|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:48 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[298799]: [NOTICE]   (298803) : haproxy version is 2.8.14-c23fe91
Oct  2 08:53:48 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[298799]: [NOTICE]   (298803) : path to executable is /usr/sbin/haproxy
Oct  2 08:53:48 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[298799]: [WARNING]  (298803) : Exiting Master process...
Oct  2 08:53:48 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[298799]: [ALERT]    (298803) : Current worker (298805) exited with code 143 (Terminated)
Oct  2 08:53:48 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[298799]: [WARNING]  (298803) : All workers exited. Exiting... (0)
Oct  2 08:53:48 np0005465987 systemd[1]: libpod-3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130.scope: Deactivated successfully.
Oct  2 08:53:48 np0005465987 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b5.scope: Deactivated successfully.
Oct  2 08:53:48 np0005465987 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b5.scope: Consumed 19.225s CPU time.
Oct  2 08:53:48 np0005465987 podman[299924]: 2025-10-02 12:53:48.134007996 +0000 UTC m=+0.050118489 container died 3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:53:48 np0005465987 systemd-machined[188335]: Machine qemu-82-instance-000000b5 terminated.
Oct  2 08:53:48 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130-userdata-shm.mount: Deactivated successfully.
Oct  2 08:53:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:53:48Z|00713|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct  2 08:53:48 np0005465987 systemd[1]: var-lib-containers-storage-overlay-dde6a3844d8e071eaa54a99edba0dcd13d06b9a7c00c3dacf0f62547634b406d-merged.mount: Deactivated successfully.
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:48 np0005465987 podman[299924]: 2025-10-02 12:53:48.244974589 +0000 UTC m=+0.161085092 container cleanup 3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:53:48 np0005465987 systemd[1]: libpod-conmon-3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130.scope: Deactivated successfully.
Oct  2 08:53:48 np0005465987 podman[299967]: 2025-10-02 12:53:48.339017857 +0000 UTC m=+0.072993703 container remove 3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:53:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:48.344 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[da6c5043-e28f-4c8f-993c-374eafd4b99a]: (4, ('Thu Oct  2 12:53:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 (3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130)\n3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130\nThu Oct  2 12:53:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 (3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130)\n3bdee7ec836f44d90234f6454db40f970af900865cac2846b344279eb1c7d130\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:48.345 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d84200cf-ce11-46f9-b85f-e63b6820151e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:48.347 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e4ff16-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:48 np0005465987 kernel: tap48e4ff16-10: left promiscuous mode
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.352 2 DEBUG nova.compute.manager [req-a0e01820-1e10-446f-a60c-404b49715c86 req-8635881b-1a26-41d6-aee0-6ebf25dba933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-unplugged-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.353 2 DEBUG oslo_concurrency.lockutils [req-a0e01820-1e10-446f-a60c-404b49715c86 req-8635881b-1a26-41d6-aee0-6ebf25dba933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.353 2 DEBUG oslo_concurrency.lockutils [req-a0e01820-1e10-446f-a60c-404b49715c86 req-8635881b-1a26-41d6-aee0-6ebf25dba933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.353 2 DEBUG oslo_concurrency.lockutils [req-a0e01820-1e10-446f-a60c-404b49715c86 req-8635881b-1a26-41d6-aee0-6ebf25dba933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.354 2 DEBUG nova.compute.manager [req-a0e01820-1e10-446f-a60c-404b49715c86 req-8635881b-1a26-41d6-aee0-6ebf25dba933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] No waiting events found dispatching network-vif-unplugged-0656c688-251b-4b39-a4de-f3697ea3b306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.354 2 WARNING nova.compute.manager [req-a0e01820-1e10-446f-a60c-404b49715c86 req-8635881b-1a26-41d6-aee0-6ebf25dba933 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received unexpected event network-vif-unplugged-0656c688-251b-4b39-a4de-f3697ea3b306 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:48.368 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[34b42e95-71a5-4017-a4b3-09f20b3b6e04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:48.393 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[524e2c43-17de-42a3-9635-d116e4a92dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:48.394 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e50fdef8-5289-4a32-b294-008e3cce32be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:48.409 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a8d1bc-3f4c-4e3d-b165-ac4506a27338]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755922, 'reachable_time': 27095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299985, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:48.412 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:53:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:48.412 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[8e67e389-87f1-4f1e-83de-46d3eb1f9a2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:48 np0005465987 systemd[1]: run-netns-ovnmeta\x2d48e4ff16\x2d1388\x2d40c7\x2da27a\x2d83a3b4869808.mount: Deactivated successfully.
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.726 2 INFO nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.732 2 INFO nova.virt.libvirt.driver [-] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Instance destroyed successfully.#033[00m
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.733 2 DEBUG nova.objects.instance [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'numa_topology' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:48 np0005465987 nova_compute[230713]: 2025-10-02 12:53:48.810 2 INFO nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Attempting a stable device rescue#033[00m
Oct  2 08:53:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:53:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:48.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:53:49 np0005465987 nova_compute[230713]: 2025-10-02 12:53:49.180 2 DEBUG nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct  2 08:53:49 np0005465987 nova_compute[230713]: 2025-10-02 12:53:49.185 2 DEBUG nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:53:49 np0005465987 nova_compute[230713]: 2025-10-02 12:53:49.185 2 INFO nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Creating image(s)#033[00m
Oct  2 08:53:49 np0005465987 nova_compute[230713]: 2025-10-02 12:53:49.218 2 DEBUG nova.storage.rbd_utils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:53:49 np0005465987 nova_compute[230713]: 2025-10-02 12:53:49.222 2 DEBUG nova.objects.instance [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:49.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 08:53:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:50.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 08:53:50 np0005465987 nova_compute[230713]: 2025-10-02 12:53:50.986 2 DEBUG nova.storage.rbd_utils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:53:51 np0005465987 nova_compute[230713]: 2025-10-02 12:53:51.020 2 DEBUG nova.storage.rbd_utils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:53:51 np0005465987 nova_compute[230713]: 2025-10-02 12:53:51.024 2 DEBUG oslo_concurrency.lockutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "b878f66dab92f55e97cac910a447f3f6fa74c080" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:51 np0005465987 nova_compute[230713]: 2025-10-02 12:53:51.025 2 DEBUG oslo_concurrency.lockutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "b878f66dab92f55e97cac910a447f3f6fa74c080" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:51.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:53:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:53:51 np0005465987 podman[300090]: 2025-10-02 12:53:51.873890962 +0000 UTC m=+0.075564163 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:53:51 np0005465987 nova_compute[230713]: 2025-10-02 12:53:51.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:52 np0005465987 nova_compute[230713]: 2025-10-02 12:53:52.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:53:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:52.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:53:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:53.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.077 2 DEBUG nova.virt.libvirt.imagebackend [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Image locations are: [{'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/c899ce5c-6879-41fc-92e4-79bd307b464d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/c899ce5c-6879-41fc-92e4-79bd307b464d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.143 2 DEBUG nova.virt.libvirt.imagebackend [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Selected location: {'url': 'rbd://fd4c5763-22d1-50ea-ad0b-96a3dc3040b2/images/c899ce5c-6879-41fc-92e4-79bd307b464d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.143 2 DEBUG nova.storage.rbd_utils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] cloning images/c899ce5c-6879-41fc-92e4-79bd307b464d@snap to None/f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.444 2 DEBUG oslo_concurrency.lockutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "b878f66dab92f55e97cac910a447f3f6fa74c080" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.490 2 DEBUG nova.objects.instance [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'migration_context' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.525 2 DEBUG nova.compute.manager [req-1b0a4c3c-cc17-43d3-93b4-7d7404d414ce req-07e57501-7c93-401c-b797-d52038cfe2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.525 2 DEBUG oslo_concurrency.lockutils [req-1b0a4c3c-cc17-43d3-93b4-7d7404d414ce req-07e57501-7c93-401c-b797-d52038cfe2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.525 2 DEBUG oslo_concurrency.lockutils [req-1b0a4c3c-cc17-43d3-93b4-7d7404d414ce req-07e57501-7c93-401c-b797-d52038cfe2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.526 2 DEBUG oslo_concurrency.lockutils [req-1b0a4c3c-cc17-43d3-93b4-7d7404d414ce req-07e57501-7c93-401c-b797-d52038cfe2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.526 2 DEBUG nova.compute.manager [req-1b0a4c3c-cc17-43d3-93b4-7d7404d414ce req-07e57501-7c93-401c-b797-d52038cfe2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] No waiting events found dispatching network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.526 2 WARNING nova.compute.manager [req-1b0a4c3c-cc17-43d3-93b4-7d7404d414ce req-07e57501-7c93-401c-b797-d52038cfe2c3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received unexpected event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.553 2 DEBUG nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.556 2 DEBUG nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Start _get_guest_xml network_info=[{"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "vif_mac": "fa:16:3e:2d:50:61"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'c899ce5c-6879-41fc-92e4-79bd307b464d', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '02900367-394e-4e77-ba2a-547333f78a5a', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7708c28d-8dd9-455b-9cf8-550d89fddce1', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7708c28d-8dd9-455b-9cf8-550d89fddce1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'f5646848-ae0c-448c-9912-d5ee5734bd1e', 'attached_at': '', 'detached_at': '', 'volume_id': '7708c28d-8dd9-455b-9cf8-550d89fddce1', 'serial': '7708c28d-8dd9-455b-9cf8-550d89fddce1'}, 'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'boot_index': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.556 2 DEBUG nova.objects.instance [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'resources' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.618 2 WARNING nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.630 2 DEBUG nova.virt.libvirt.host [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.631 2 DEBUG nova.virt.libvirt.host [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.636 2 DEBUG nova.virt.libvirt.host [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.636 2 DEBUG nova.virt.libvirt.host [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.637 2 DEBUG nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.637 2 DEBUG nova.virt.hardware [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.638 2 DEBUG nova.virt.hardware [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.638 2 DEBUG nova.virt.hardware [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.638 2 DEBUG nova.virt.hardware [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.638 2 DEBUG nova.virt.hardware [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.639 2 DEBUG nova.virt.hardware [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.639 2 DEBUG nova.virt.hardware [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.639 2 DEBUG nova.virt.hardware [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.639 2 DEBUG nova.virt.hardware [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.639 2 DEBUG nova.virt.hardware [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.640 2 DEBUG nova.virt.hardware [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.640 2 DEBUG nova.objects.instance [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:54 np0005465987 nova_compute[230713]: 2025-10-02 12:53:54.675 2 DEBUG oslo_concurrency.processutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:54 np0005465987 podman[300199]: 2025-10-02 12:53:54.851921197 +0000 UTC m=+0.063449616 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:53:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:54.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:54 np0005465987 podman[300197]: 2025-10-02 12:53:54.868900604 +0000 UTC m=+0.085545111 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:53:54 np0005465987 podman[300198]: 2025-10-02 12:53:54.874692649 +0000 UTC m=+0.089762754 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct  2 08:53:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:53:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3449421909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:53:55 np0005465987 nova_compute[230713]: 2025-10-02 12:53:55.133 2 DEBUG oslo_concurrency.processutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:55 np0005465987 nova_compute[230713]: 2025-10-02 12:53:55.179 2 DEBUG oslo_concurrency.processutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:55.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:53:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2875896490' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:53:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:55 np0005465987 nova_compute[230713]: 2025-10-02 12:53:55.625 2 DEBUG oslo_concurrency.processutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:56 np0005465987 nova_compute[230713]: 2025-10-02 12:53:56.574 2 DEBUG oslo_concurrency.processutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:56.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:53:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/703401347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:53:56 np0005465987 nova_compute[230713]: 2025-10-02 12:53:56.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.014 2 DEBUG oslo_concurrency.processutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.016 2 DEBUG nova.virt.libvirt.vif [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-356002637',display_name='tempest-ServerStableDeviceRescueTest-server-356002637',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-356002637',id=181,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoKV7bc1Ihcwh5u/ucbycc4MJFxvJi6ozOl0T1nn+zqbeWg5iK0RFaya9xHSofolJ2eQYNrdhLMrQSXhcg4FfGkUU62Zm08MSKDrCZr3cYV0q0VS0+lbILv9n1ItWEizA==',key_name='tempest-keypair-310042596',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:51:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a442bc513e14406b73e96e70396e6c3',ramdisk_id='',reservation_id='r-z5xm6cn2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-454391960',owner_user_name='tempest-ServerStableDeviceRescueTest-454391960-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:52:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6785ffe5d6554514b4ed9fd47665eca0',uuid=f5646848-ae0c-448c-9912-d5ee5734bd1e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "vif_mac": "fa:16:3e:2d:50:61"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.017 2 DEBUG nova.network.os_vif_util [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converting VIF {"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "vif_mac": "fa:16:3e:2d:50:61"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.018 2 DEBUG nova.network.os_vif_util [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:50:61,bridge_name='br-int',has_traffic_filtering=True,id=0656c688-251b-4b39-a4de-f3697ea3b306,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0656c688-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.019 2 DEBUG nova.objects.instance [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.172 2 DEBUG nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  <uuid>f5646848-ae0c-448c-9912-d5ee5734bd1e</uuid>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  <name>instance-000000b5</name>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-356002637</nova:name>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:53:54</nova:creationTime>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <nova:user uuid="6785ffe5d6554514b4ed9fd47665eca0">tempest-ServerStableDeviceRescueTest-454391960-project-member</nova:user>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <nova:project uuid="6a442bc513e14406b73e96e70396e6c3">tempest-ServerStableDeviceRescueTest-454391960</nova:project>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <nova:port uuid="0656c688-251b-4b39-a4de-f3697ea3b306">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <entry name="serial">f5646848-ae0c-448c-9912-d5ee5734bd1e</entry>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <entry name="uuid">f5646848-ae0c-448c-9912-d5ee5734bd1e</entry>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/f5646848-ae0c-448c-9912-d5ee5734bd1e_disk">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.config">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="volumes/volume-7708c28d-8dd9-455b-9cf8-550d89fddce1">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <serial>7708c28d-8dd9-455b-9cf8-550d89fddce1</serial>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.rescue">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <target dev="vdc" bus="virtio"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <boot order="1"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:2d:50:61"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <target dev="tap0656c688-25"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/console.log" append="off"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:53:57 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:53:57 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:53:57 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:53:57 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.185 2 INFO nova.virt.libvirt.driver [-] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Instance destroyed successfully.#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.346 2 DEBUG nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.346 2 DEBUG nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.347 2 DEBUG nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.347 2 DEBUG nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.348 2 DEBUG nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] No VIF found with MAC fa:16:3e:2d:50:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.348 2 INFO nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Using config drive#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.376 2 DEBUG nova.storage.rbd_utils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:53:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:57.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.509 2 DEBUG nova.objects.instance [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:57 np0005465987 nova_compute[230713]: 2025-10-02 12:53:57.634 2 DEBUG nova.objects.instance [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'keypairs' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:58 np0005465987 nova_compute[230713]: 2025-10-02 12:53:58.154 2 INFO nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Creating config drive at /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/disk.config.rescue#033[00m
Oct  2 08:53:58 np0005465987 nova_compute[230713]: 2025-10-02 12:53:58.160 2 DEBUG oslo_concurrency.processutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5qy7_71y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:58 np0005465987 nova_compute[230713]: 2025-10-02 12:53:58.292 2 DEBUG oslo_concurrency.processutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5qy7_71y" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:58 np0005465987 nova_compute[230713]: 2025-10-02 12:53:58.322 2 DEBUG nova.storage.rbd_utils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] rbd image f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:53:58 np0005465987 nova_compute[230713]: 2025-10-02 12:53:58.325 2 DEBUG oslo_concurrency.processutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/disk.config.rescue f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:58 np0005465987 nova_compute[230713]: 2025-10-02 12:53:58.753 2 DEBUG oslo_concurrency.processutils [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/disk.config.rescue f5646848-ae0c-448c-9912-d5ee5734bd1e_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:58 np0005465987 nova_compute[230713]: 2025-10-02 12:53:58.754 2 INFO nova.virt.libvirt.driver [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Deleting local config drive /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e/disk.config.rescue because it was imported into RBD.#033[00m
Oct  2 08:53:58 np0005465987 kernel: tap0656c688-25: entered promiscuous mode
Oct  2 08:53:58 np0005465987 NetworkManager[44910]: <info>  [1759409638.8033] manager: (tap0656c688-25): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Oct  2 08:53:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:53:58Z|00714|binding|INFO|Claiming lport 0656c688-251b-4b39-a4de-f3697ea3b306 for this chassis.
Oct  2 08:53:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:53:58Z|00715|binding|INFO|0656c688-251b-4b39-a4de-f3697ea3b306: Claiming fa:16:3e:2d:50:61 10.100.0.7
Oct  2 08:53:58 np0005465987 nova_compute[230713]: 2025-10-02 12:53:58.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:58 np0005465987 nova_compute[230713]: 2025-10-02 12:53:58.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:58 np0005465987 NetworkManager[44910]: <info>  [1759409638.8140] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Oct  2 08:53:58 np0005465987 NetworkManager[44910]: <info>  [1759409638.8145] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Oct  2 08:53:58 np0005465987 systemd-udevd[300414]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:53:58 np0005465987 NetworkManager[44910]: <info>  [1759409638.8452] device (tap0656c688-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:53:58 np0005465987 NetworkManager[44910]: <info>  [1759409638.8459] device (tap0656c688-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:53:58 np0005465987 systemd-machined[188335]: New machine qemu-83-instance-000000b5.
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.861 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:50:61 10.100.0.7'], port_security=['fa:16:3e:2d:50:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f5646848-ae0c-448c-9912-d5ee5734bd1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e4ff16-1388-40c7-a27a-83a3b4869808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a442bc513e14406b73e96e70396e6c3', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd6795e4e-d84d-4947-be00-1b032e1a18b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2acef0-eb35-44b4-ad52-c0266ea4784a, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0656c688-251b-4b39-a4de-f3697ea3b306) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.862 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0656c688-251b-4b39-a4de-f3697ea3b306 in datapath 48e4ff16-1388-40c7-a27a-83a3b4869808 bound to our chassis#033[00m
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.863 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48e4ff16-1388-40c7-a27a-83a3b4869808#033[00m
Oct  2 08:53:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:58.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.873 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0fb098-24cb-434c-9a31-b5d3e21fd64a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.874 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48e4ff16-11 in ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.876 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48e4ff16-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.876 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[013aeff1-63a6-4362-bea9-3fffe334dd8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.877 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c06587a1-6098-4a4e-a874-11648ea24156]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.886 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f2776a-d561-4c86-85c8-cf49b95ed9a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:58 np0005465987 systemd[1]: Started Virtual Machine qemu-83-instance-000000b5.
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.910 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[332aa6b5-415f-4bb2-a71d-d02c8d4b0d1e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:58 np0005465987 nova_compute[230713]: 2025-10-02 12:53:58.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.939 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4a1f8b-8f47-463c-8035-43a72f7473ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:58 np0005465987 NetworkManager[44910]: <info>  [1759409638.9454] manager: (tap48e4ff16-10): new Veth device (/org/freedesktop/NetworkManager/Devices/328)
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.944 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3da962-41d2-492f-afa2-96f003554a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:58 np0005465987 nova_compute[230713]: 2025-10-02 12:53:58.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:53:58Z|00716|binding|INFO|Setting lport 0656c688-251b-4b39-a4de-f3697ea3b306 ovn-installed in OVS
Oct  2 08:53:58 np0005465987 ovn_controller[129172]: 2025-10-02T12:53:58Z|00717|binding|INFO|Setting lport 0656c688-251b-4b39-a4de-f3697ea3b306 up in Southbound
Oct  2 08:53:58 np0005465987 nova_compute[230713]: 2025-10-02 12:53:58.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.983 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a91ca42f-971a-4dd2-b16f-f081044e3437]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:58 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:58.986 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ef119b95-aefd-4722-9430-77c0f3b8c934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:59 np0005465987 NetworkManager[44910]: <info>  [1759409639.0067] device (tap48e4ff16-10): carrier: link connected
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.011 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4d47be-8f97-4fc3-a66a-532684ba7e7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.025 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[153c7076-8a94-4109-9348-c73e7390fcb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e4ff16-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:53:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770758, 'reachable_time': 44772, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300448, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.039 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b0619edb-7b81-45fa-ac28-9e094ff28ecd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:53bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 770758, 'tstamp': 770758}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300449, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.054 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b98fc25c-5952-4531-9ef4-19fe226205ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e4ff16-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:53:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770758, 'reachable_time': 44772, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300450, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.090 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d53824-e9c0-4c9e-a284-ab305a9fd97b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.137 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f5862a14-f224-4602-854b-b59a665ab3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.138 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e4ff16-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.139 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.139 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48e4ff16-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:59 np0005465987 nova_compute[230713]: 2025-10-02 12:53:59.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:59 np0005465987 kernel: tap48e4ff16-10: entered promiscuous mode
Oct  2 08:53:59 np0005465987 nova_compute[230713]: 2025-10-02 12:53:59.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:59 np0005465987 NetworkManager[44910]: <info>  [1759409639.1429] manager: (tap48e4ff16-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.143 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48e4ff16-10, col_values=(('external_ids', {'iface-id': '1fc80788-89b8-413a-b0b0-d36f1a11a2b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:59 np0005465987 nova_compute[230713]: 2025-10-02 12:53:59.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:59 np0005465987 ovn_controller[129172]: 2025-10-02T12:53:59Z|00718|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct  2 08:53:59 np0005465987 nova_compute[230713]: 2025-10-02 12:53:59.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.157 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.158 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ace857-7503-4e9e-966d-6df2b715f966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.159 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-48e4ff16-1388-40c7-a27a-83a3b4869808
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 48e4ff16-1388-40c7-a27a-83a3b4869808
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:53:59 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:53:59.160 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'env', 'PROCESS_TAG=haproxy-48e4ff16-1388-40c7-a27a-83a3b4869808', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48e4ff16-1388-40c7-a27a-83a3b4869808.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:53:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:53:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:59.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:59 np0005465987 podman[300537]: 2025-10-02 12:53:59.516776158 +0000 UTC m=+0.045599162 container create 5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:53:59 np0005465987 systemd[1]: Started libpod-conmon-5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725.scope.
Oct  2 08:53:59 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:53:59 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46bf80cd1a3e7ca66b08ba7befa7a3f9946ba33fd42c6c9320e564a317276f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:53:59 np0005465987 podman[300537]: 2025-10-02 12:53:59.493063177 +0000 UTC m=+0.021886201 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:53:59 np0005465987 podman[300537]: 2025-10-02 12:53:59.598868871 +0000 UTC m=+0.127691895 container init 5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:53:59 np0005465987 podman[300537]: 2025-10-02 12:53:59.608530235 +0000 UTC m=+0.137353239 container start 5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:53:59 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300574]: [NOTICE]   (300580) : New worker (300582) forked
Oct  2 08:53:59 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300574]: [NOTICE]   (300580) : Loading success.
Oct  2 08:54:00 np0005465987 nova_compute[230713]: 2025-10-02 12:54:00.044 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for f5646848-ae0c-448c-9912-d5ee5734bd1e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:54:00 np0005465987 nova_compute[230713]: 2025-10-02 12:54:00.045 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409640.0425427, f5646848-ae0c-448c-9912-d5ee5734bd1e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:00 np0005465987 nova_compute[230713]: 2025-10-02 12:54:00.045 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:54:00 np0005465987 nova_compute[230713]: 2025-10-02 12:54:00.051 2 DEBUG nova.compute.manager [None req-bb480304-9c46-4e57-8fe2-b6e2f7217c44 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:00.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:00 np0005465987 nova_compute[230713]: 2025-10-02 12:54:00.915 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:00 np0005465987 nova_compute[230713]: 2025-10-02 12:54:00.918 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:54:01 np0005465987 nova_compute[230713]: 2025-10-02 12:54:01.262 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409640.0446222, f5646848-ae0c-448c-9912-d5ee5734bd1e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:01 np0005465987 nova_compute[230713]: 2025-10-02 12:54:01.262 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] VM Started (Lifecycle Event)#033[00m
Oct  2 08:54:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:01.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:01 np0005465987 nova_compute[230713]: 2025-10-02 12:54:01.491 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:01 np0005465987 nova_compute[230713]: 2025-10-02 12:54:01.494 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:54:01 np0005465987 nova_compute[230713]: 2025-10-02 12:54:01.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:02 np0005465987 nova_compute[230713]: 2025-10-02 12:54:02.097 2 DEBUG nova.compute.manager [req-d269e570-98d4-4e47-8a28-47ee1b26d75c req-a968ca1c-17a9-4f78-868f-30e7e1b2a0ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:02 np0005465987 nova_compute[230713]: 2025-10-02 12:54:02.097 2 DEBUG oslo_concurrency.lockutils [req-d269e570-98d4-4e47-8a28-47ee1b26d75c req-a968ca1c-17a9-4f78-868f-30e7e1b2a0ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:02 np0005465987 nova_compute[230713]: 2025-10-02 12:54:02.097 2 DEBUG oslo_concurrency.lockutils [req-d269e570-98d4-4e47-8a28-47ee1b26d75c req-a968ca1c-17a9-4f78-868f-30e7e1b2a0ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:02 np0005465987 nova_compute[230713]: 2025-10-02 12:54:02.098 2 DEBUG oslo_concurrency.lockutils [req-d269e570-98d4-4e47-8a28-47ee1b26d75c req-a968ca1c-17a9-4f78-868f-30e7e1b2a0ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:02 np0005465987 nova_compute[230713]: 2025-10-02 12:54:02.098 2 DEBUG nova.compute.manager [req-d269e570-98d4-4e47-8a28-47ee1b26d75c req-a968ca1c-17a9-4f78-868f-30e7e1b2a0ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] No waiting events found dispatching network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:02 np0005465987 nova_compute[230713]: 2025-10-02 12:54:02.098 2 WARNING nova.compute.manager [req-d269e570-98d4-4e47-8a28-47ee1b26d75c req-a968ca1c-17a9-4f78-868f-30e7e1b2a0ea d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received unexpected event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 for instance with vm_state rescued and task_state None.#033[00m
Oct  2 08:54:02 np0005465987 nova_compute[230713]: 2025-10-02 12:54:02.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:02.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:03 np0005465987 nova_compute[230713]: 2025-10-02 12:54:03.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:03.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:04 np0005465987 nova_compute[230713]: 2025-10-02 12:54:04.312 2 INFO nova.compute.manager [None req-dacc824e-62f9-46b5-a275-e2b327fc4477 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Unrescuing#033[00m
Oct  2 08:54:04 np0005465987 nova_compute[230713]: 2025-10-02 12:54:04.312 2 DEBUG oslo_concurrency.lockutils [None req-dacc824e-62f9-46b5-a275-e2b327fc4477 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:04 np0005465987 nova_compute[230713]: 2025-10-02 12:54:04.312 2 DEBUG oslo_concurrency.lockutils [None req-dacc824e-62f9-46b5-a275-e2b327fc4477 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquired lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:04 np0005465987 nova_compute[230713]: 2025-10-02 12:54:04.313 2 DEBUG nova.network.neutron [None req-dacc824e-62f9-46b5-a275-e2b327fc4477 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:54:04 np0005465987 nova_compute[230713]: 2025-10-02 12:54:04.486 2 DEBUG nova.compute.manager [req-262d6ee3-248e-45a3-9af1-f341253cdc1d req-5653dcf1-514d-4220-ad0a-a98e58af4c4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:04 np0005465987 nova_compute[230713]: 2025-10-02 12:54:04.486 2 DEBUG oslo_concurrency.lockutils [req-262d6ee3-248e-45a3-9af1-f341253cdc1d req-5653dcf1-514d-4220-ad0a-a98e58af4c4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:04 np0005465987 nova_compute[230713]: 2025-10-02 12:54:04.486 2 DEBUG oslo_concurrency.lockutils [req-262d6ee3-248e-45a3-9af1-f341253cdc1d req-5653dcf1-514d-4220-ad0a-a98e58af4c4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:04 np0005465987 nova_compute[230713]: 2025-10-02 12:54:04.487 2 DEBUG oslo_concurrency.lockutils [req-262d6ee3-248e-45a3-9af1-f341253cdc1d req-5653dcf1-514d-4220-ad0a-a98e58af4c4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:04 np0005465987 nova_compute[230713]: 2025-10-02 12:54:04.487 2 DEBUG nova.compute.manager [req-262d6ee3-248e-45a3-9af1-f341253cdc1d req-5653dcf1-514d-4220-ad0a-a98e58af4c4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] No waiting events found dispatching network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:04 np0005465987 nova_compute[230713]: 2025-10-02 12:54:04.487 2 WARNING nova.compute.manager [req-262d6ee3-248e-45a3-9af1-f341253cdc1d req-5653dcf1-514d-4220-ad0a-a98e58af4c4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received unexpected event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:54:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:04.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:05.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:06.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:06 np0005465987 nova_compute[230713]: 2025-10-02 12:54:06.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:07.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:07 np0005465987 nova_compute[230713]: 2025-10-02 12:54:07.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:08.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:09.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:10.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:11.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:11 np0005465987 nova_compute[230713]: 2025-10-02 12:54:11.437 2 DEBUG nova.network.neutron [None req-dacc824e-62f9-46b5-a275-e2b327fc4477 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updating instance_info_cache with network_info: [{"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:11 np0005465987 nova_compute[230713]: 2025-10-02 12:54:11.495 2 DEBUG oslo_concurrency.lockutils [None req-dacc824e-62f9-46b5-a275-e2b327fc4477 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Releasing lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:11 np0005465987 nova_compute[230713]: 2025-10-02 12:54:11.496 2 DEBUG nova.objects.instance [None req-dacc824e-62f9-46b5-a275-e2b327fc4477 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'flavor' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:11 np0005465987 nova_compute[230713]: 2025-10-02 12:54:11.653 2 DEBUG nova.compute.manager [req-07c535d1-b3a2-4dbe-a072-4c7cb44d1ea5 req-a54e9d90-d89f-4152-ab90-01df3b44574e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-changed-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:11 np0005465987 nova_compute[230713]: 2025-10-02 12:54:11.653 2 DEBUG nova.compute.manager [req-07c535d1-b3a2-4dbe-a072-4c7cb44d1ea5 req-a54e9d90-d89f-4152-ab90-01df3b44574e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Refreshing instance network info cache due to event network-changed-0656c688-251b-4b39-a4de-f3697ea3b306. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:54:11 np0005465987 nova_compute[230713]: 2025-10-02 12:54:11.653 2 DEBUG oslo_concurrency.lockutils [req-07c535d1-b3a2-4dbe-a072-4c7cb44d1ea5 req-a54e9d90-d89f-4152-ab90-01df3b44574e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:11 np0005465987 nova_compute[230713]: 2025-10-02 12:54:11.653 2 DEBUG oslo_concurrency.lockutils [req-07c535d1-b3a2-4dbe-a072-4c7cb44d1ea5 req-a54e9d90-d89f-4152-ab90-01df3b44574e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:11 np0005465987 nova_compute[230713]: 2025-10-02 12:54:11.654 2 DEBUG nova.network.neutron [req-07c535d1-b3a2-4dbe-a072-4c7cb44d1ea5 req-a54e9d90-d89f-4152-ab90-01df3b44574e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Refreshing network info cache for port 0656c688-251b-4b39-a4de-f3697ea3b306 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:54:11 np0005465987 kernel: tap0656c688-25 (unregistering): left promiscuous mode
Oct  2 08:54:11 np0005465987 NetworkManager[44910]: <info>  [1759409651.8109] device (tap0656c688-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:54:11 np0005465987 ovn_controller[129172]: 2025-10-02T12:54:11Z|00719|binding|INFO|Releasing lport 0656c688-251b-4b39-a4de-f3697ea3b306 from this chassis (sb_readonly=0)
Oct  2 08:54:11 np0005465987 nova_compute[230713]: 2025-10-02 12:54:11.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:11 np0005465987 ovn_controller[129172]: 2025-10-02T12:54:11Z|00720|binding|INFO|Setting lport 0656c688-251b-4b39-a4de-f3697ea3b306 down in Southbound
Oct  2 08:54:11 np0005465987 ovn_controller[129172]: 2025-10-02T12:54:11Z|00721|binding|INFO|Removing iface tap0656c688-25 ovn-installed in OVS
Oct  2 08:54:11 np0005465987 nova_compute[230713]: 2025-10-02 12:54:11.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:11 np0005465987 nova_compute[230713]: 2025-10-02 12:54:11.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:11 np0005465987 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b5.scope: Deactivated successfully.
Oct  2 08:54:11 np0005465987 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b5.scope: Consumed 12.859s CPU time.
Oct  2 08:54:11 np0005465987 systemd-machined[188335]: Machine qemu-83-instance-000000b5 terminated.
Oct  2 08:54:12 np0005465987 nova_compute[230713]: 2025-10-02 12:54:12.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:12 np0005465987 nova_compute[230713]: 2025-10-02 12:54:12.079 2 INFO nova.virt.libvirt.driver [-] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Instance destroyed successfully.#033[00m
Oct  2 08:54:12 np0005465987 nova_compute[230713]: 2025-10-02 12:54:12.080 2 DEBUG nova.objects.instance [None req-dacc824e-62f9-46b5-a275-e2b327fc4477 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'numa_topology' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.365 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:50:61 10.100.0.7'], port_security=['fa:16:3e:2d:50:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f5646848-ae0c-448c-9912-d5ee5734bd1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e4ff16-1388-40c7-a27a-83a3b4869808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a442bc513e14406b73e96e70396e6c3', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd6795e4e-d84d-4947-be00-1b032e1a18b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2acef0-eb35-44b4-ad52-c0266ea4784a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0656c688-251b-4b39-a4de-f3697ea3b306) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.367 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0656c688-251b-4b39-a4de-f3697ea3b306 in datapath 48e4ff16-1388-40c7-a27a-83a3b4869808 unbound from our chassis#033[00m
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.368 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48e4ff16-1388-40c7-a27a-83a3b4869808, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.369 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a5259c-0a0b-44bb-b1ae-d67a638de717]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.369 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 namespace which is not needed anymore#033[00m
Oct  2 08:54:12 np0005465987 nova_compute[230713]: 2025-10-02 12:54:12.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:12 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300574]: [NOTICE]   (300580) : haproxy version is 2.8.14-c23fe91
Oct  2 08:54:12 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300574]: [NOTICE]   (300580) : path to executable is /usr/sbin/haproxy
Oct  2 08:54:12 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300574]: [WARNING]  (300580) : Exiting Master process...
Oct  2 08:54:12 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300574]: [ALERT]    (300580) : Current worker (300582) exited with code 143 (Terminated)
Oct  2 08:54:12 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300574]: [WARNING]  (300580) : All workers exited. Exiting... (0)
Oct  2 08:54:12 np0005465987 systemd[1]: libpod-5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725.scope: Deactivated successfully.
Oct  2 08:54:12 np0005465987 podman[300627]: 2025-10-02 12:54:12.548597532 +0000 UTC m=+0.055983828 container died 5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  2 08:54:12 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725-userdata-shm.mount: Deactivated successfully.
Oct  2 08:54:12 np0005465987 systemd[1]: var-lib-containers-storage-overlay-e46bf80cd1a3e7ca66b08ba7befa7a3f9946ba33fd42c6c9320e564a317276f7-merged.mount: Deactivated successfully.
Oct  2 08:54:12 np0005465987 podman[300627]: 2025-10-02 12:54:12.602890792 +0000 UTC m=+0.110277068 container cleanup 5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:54:12 np0005465987 systemd[1]: libpod-conmon-5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725.scope: Deactivated successfully.
Oct  2 08:54:12 np0005465987 podman[300656]: 2025-10-02 12:54:12.67532475 +0000 UTC m=+0.045038707 container remove 5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.683 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[05e7a3f7-7114-42db-96c1-e2001fda9414]: (4, ('Thu Oct  2 12:54:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 (5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725)\n5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725\nThu Oct  2 12:54:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 (5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725)\n5c9bac27382dafc63528828e6ae3e7ef71c08ca6d472fe5e2b608ce77185b725\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.685 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[24e59d2a-ccd4-4f82-a058-4fabf2cee93c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.686 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e4ff16-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:12 np0005465987 nova_compute[230713]: 2025-10-02 12:54:12.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:12 np0005465987 kernel: tap48e4ff16-10: left promiscuous mode
Oct  2 08:54:12 np0005465987 NetworkManager[44910]: <info>  [1759409652.6933] manager: (tap0656c688-25): new Tun device (/org/freedesktop/NetworkManager/Devices/330)
Oct  2 08:54:12 np0005465987 kernel: tap0656c688-25: entered promiscuous mode
Oct  2 08:54:12 np0005465987 systemd-udevd[300594]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:54:12 np0005465987 NetworkManager[44910]: <info>  [1759409652.7044] device (tap0656c688-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:54:12 np0005465987 NetworkManager[44910]: <info>  [1759409652.7053] device (tap0656c688-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:54:12 np0005465987 nova_compute[230713]: 2025-10-02 12:54:12.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:12 np0005465987 ovn_controller[129172]: 2025-10-02T12:54:12Z|00722|binding|INFO|Claiming lport 0656c688-251b-4b39-a4de-f3697ea3b306 for this chassis.
Oct  2 08:54:12 np0005465987 ovn_controller[129172]: 2025-10-02T12:54:12Z|00723|binding|INFO|0656c688-251b-4b39-a4de-f3697ea3b306: Claiming fa:16:3e:2d:50:61 10.100.0.7
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.710 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[eb62d849-4ca5-4d2e-8476-3c9e7070bb2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:12 np0005465987 ovn_controller[129172]: 2025-10-02T12:54:12Z|00724|binding|INFO|Setting lport 0656c688-251b-4b39-a4de-f3697ea3b306 ovn-installed in OVS
Oct  2 08:54:12 np0005465987 nova_compute[230713]: 2025-10-02 12:54:12.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:12 np0005465987 nova_compute[230713]: 2025-10-02 12:54:12.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:12 np0005465987 systemd-machined[188335]: New machine qemu-84-instance-000000b5.
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.738 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9aa3d4-bc32-430d-be0b-3c638ce62c4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.739 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7232eea1-750b-40eb-b85b-ffa50b0cdb0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:12 np0005465987 systemd[1]: Started Virtual Machine qemu-84-instance-000000b5.
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.752 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb116de-36b3-4837-8b41-67093ac39247]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770750, 'reachable_time': 36527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300686, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.754 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:54:12 np0005465987 systemd[1]: run-netns-ovnmeta\x2d48e4ff16\x2d1388\x2d40c7\x2da27a\x2d83a3b4869808.mount: Deactivated successfully.
Oct  2 08:54:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:12.755 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[99898cc0-5d2d-41c2-ae27-5bb003378041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:12.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:54:13Z|00725|binding|INFO|Setting lport 0656c688-251b-4b39-a4de-f3697ea3b306 up in Southbound
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.266 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:50:61 10.100.0.7'], port_security=['fa:16:3e:2d:50:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f5646848-ae0c-448c-9912-d5ee5734bd1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e4ff16-1388-40c7-a27a-83a3b4869808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a442bc513e14406b73e96e70396e6c3', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd6795e4e-d84d-4947-be00-1b032e1a18b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2acef0-eb35-44b4-ad52-c0266ea4784a, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0656c688-251b-4b39-a4de-f3697ea3b306) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.267 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0656c688-251b-4b39-a4de-f3697ea3b306 in datapath 48e4ff16-1388-40c7-a27a-83a3b4869808 bound to our chassis#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.268 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48e4ff16-1388-40c7-a27a-83a3b4869808#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.279 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7a53a3d7-117a-47c5-9610-0d5320b47cc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.279 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48e4ff16-11 in ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.281 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48e4ff16-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.282 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[62792ea5-1f4b-44ed-8ec8-bee01e929d18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.282 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[475c843b-dfc9-4a46-a529-5e4afe7cb838]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.292 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[9904e7dc-7601-4854-a497-bd3459b3be6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.309 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff224b0-5f9d-4122-97d5-d2d1587af985]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.343 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[7dfc4568-8cd0-41fb-9471-e08afea23c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.349 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[93b4db95-df1b-4f9b-aef1-9b7e68eccc23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 NetworkManager[44910]: <info>  [1759409653.3519] manager: (tap48e4ff16-10): new Veth device (/org/freedesktop/NetworkManager/Devices/331)
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.384 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba55a48-fa8c-4cc6-92de-039e61cee49b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.388 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[415dc5cc-91a8-4cbd-8bc5-379cf914e1ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:13.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:13 np0005465987 NetworkManager[44910]: <info>  [1759409653.4169] device (tap48e4ff16-10): carrier: link connected
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.423 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[7c63c02e-31b9-4bd7-bbff-1f6228688bb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.440 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0c8c17-4df2-40eb-b464-74b6227f94b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e4ff16-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:53:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772199, 'reachable_time': 18125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300774, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.458 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fe1f6718-1432-4302-80b8-3d4fe107b63c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:53bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772199, 'tstamp': 772199}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300776, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.474 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8baa929f-2945-4cb1-8bf1-65b28478cd27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48e4ff16-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:53:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772199, 'reachable_time': 18125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300777, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.505 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[80152b6e-dd57-4dfb-9968-4389b225c8c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.565 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1950e473-d5c4-4058-9494-37ac063dbed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.566 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e4ff16-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.566 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.566 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48e4ff16-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:13 np0005465987 nova_compute[230713]: 2025-10-02 12:54:13.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:13 np0005465987 NetworkManager[44910]: <info>  [1759409653.5693] manager: (tap48e4ff16-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Oct  2 08:54:13 np0005465987 kernel: tap48e4ff16-10: entered promiscuous mode
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.572 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48e4ff16-10, col_values=(('external_ids', {'iface-id': '1fc80788-89b8-413a-b0b0-d36f1a11a2b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:13 np0005465987 nova_compute[230713]: 2025-10-02 12:54:13.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:13 np0005465987 ovn_controller[129172]: 2025-10-02T12:54:13Z|00726|binding|INFO|Releasing lport 1fc80788-89b8-413a-b0b0-d36f1a11a2b1 from this chassis (sb_readonly=0)
Oct  2 08:54:13 np0005465987 nova_compute[230713]: 2025-10-02 12:54:13.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.586 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct  2 08:54:13 np0005465987 nova_compute[230713]: 2025-10-02 12:54:13.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.586 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4c407062-c65f-44cb-bfef-44623c8d4ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.587 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-48e4ff16-1388-40c7-a27a-83a3b4869808
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/48e4ff16-1388-40c7-a27a-83a3b4869808.pid.haproxy
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 48e4ff16-1388-40c7-a27a-83a3b4869808
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  2 08:54:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:13.588 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'env', 'PROCESS_TAG=haproxy-48e4ff16-1388-40c7-a27a-83a3b4869808', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48e4ff16-1388-40c7-a27a-83a3b4869808.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  2 08:54:13 np0005465987 nova_compute[230713]: 2025-10-02 12:54:13.786 2 DEBUG nova.compute.manager [req-aa1e428c-3689-4af9-9a3d-c50d9ad82509 req-e52c2153-d9a1-4aca-b3b0-7505e37d544d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-changed-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:54:13 np0005465987 nova_compute[230713]: 2025-10-02 12:54:13.787 2 DEBUG nova.compute.manager [req-aa1e428c-3689-4af9-9a3d-c50d9ad82509 req-e52c2153-d9a1-4aca-b3b0-7505e37d544d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Refreshing instance network info cache due to event network-changed-0656c688-251b-4b39-a4de-f3697ea3b306. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:54:13 np0005465987 nova_compute[230713]: 2025-10-02 12:54:13.787 2 DEBUG oslo_concurrency.lockutils [req-aa1e428c-3689-4af9-9a3d-c50d9ad82509 req-e52c2153-d9a1-4aca-b3b0-7505e37d544d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:54:13 np0005465987 nova_compute[230713]: 2025-10-02 12:54:13.923 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for f5646848-ae0c-448c-9912-d5ee5734bd1e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct  2 08:54:13 np0005465987 nova_compute[230713]: 2025-10-02 12:54:13.923 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409653.9029016, f5646848-ae0c-448c-9912-d5ee5734bd1e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:54:13 np0005465987 nova_compute[230713]: 2025-10-02 12:54:13.924 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] VM Resumed (Lifecycle Event)
Oct  2 08:54:14 np0005465987 podman[300810]: 2025-10-02 12:54:13.946695243 +0000 UTC m=+0.027591637 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:54:14 np0005465987 nova_compute[230713]: 2025-10-02 12:54:14.116 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:54:14 np0005465987 nova_compute[230713]: 2025-10-02 12:54:14.120 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:54:14 np0005465987 podman[300810]: 2025-10-02 12:54:14.157018815 +0000 UTC m=+0.237915189 container create 16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:54:14 np0005465987 nova_compute[230713]: 2025-10-02 12:54:14.202 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct  2 08:54:14 np0005465987 nova_compute[230713]: 2025-10-02 12:54:14.203 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409653.9032252, f5646848-ae0c-448c-9912-d5ee5734bd1e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:54:14 np0005465987 nova_compute[230713]: 2025-10-02 12:54:14.203 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] VM Started (Lifecycle Event)
Oct  2 08:54:14 np0005465987 systemd[1]: Started libpod-conmon-16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3.scope.
Oct  2 08:54:14 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:54:14 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1baf4cebe3630828ef87e94637192d4de3bc15249e21e7c0cfac2d8140d84c76/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:54:14 np0005465987 podman[300810]: 2025-10-02 12:54:14.257918774 +0000 UTC m=+0.338815168 container init 16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:54:14 np0005465987 podman[300810]: 2025-10-02 12:54:14.263405343 +0000 UTC m=+0.344301717 container start 16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:54:14 np0005465987 nova_compute[230713]: 2025-10-02 12:54:14.272 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:54:14 np0005465987 nova_compute[230713]: 2025-10-02 12:54:14.276 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:54:14 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300842]: [NOTICE]   (300846) : New worker (300848) forked
Oct  2 08:54:14 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300842]: [NOTICE]   (300846) : Loading success.
Oct  2 08:54:14 np0005465987 nova_compute[230713]: 2025-10-02 12:54:14.339 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct  2 08:54:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:14.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:15.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:15 np0005465987 nova_compute[230713]: 2025-10-02 12:54:15.832 2 DEBUG nova.compute.manager [None req-dacc824e-62f9-46b5-a275-e2b327fc4477 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:54:15 np0005465987 nova_compute[230713]: 2025-10-02 12:54:15.895 2 DEBUG nova.network.neutron [req-07c535d1-b3a2-4dbe-a072-4c7cb44d1ea5 req-a54e9d90-d89f-4152-ab90-01df3b44574e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updated VIF entry in instance network info cache for port 0656c688-251b-4b39-a4de-f3697ea3b306. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:54:15 np0005465987 nova_compute[230713]: 2025-10-02 12:54:15.896 2 DEBUG nova.network.neutron [req-07c535d1-b3a2-4dbe-a072-4c7cb44d1ea5 req-a54e9d90-d89f-4152-ab90-01df3b44574e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updating instance_info_cache with network_info: [{"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:54:15 np0005465987 nova_compute[230713]: 2025-10-02 12:54:15.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:54:15 np0005465987 nova_compute[230713]: 2025-10-02 12:54:15.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  2 08:54:16 np0005465987 nova_compute[230713]: 2025-10-02 12:54:16.000 2 DEBUG nova.compute.manager [req-48102383-9bef-4dd4-a9dd-2f0358815585 req-a02a96bb-952f-4e83-8774-20b477525afb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-unplugged-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:54:16 np0005465987 nova_compute[230713]: 2025-10-02 12:54:16.001 2 DEBUG oslo_concurrency.lockutils [req-48102383-9bef-4dd4-a9dd-2f0358815585 req-a02a96bb-952f-4e83-8774-20b477525afb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:54:16 np0005465987 nova_compute[230713]: 2025-10-02 12:54:16.001 2 DEBUG oslo_concurrency.lockutils [req-48102383-9bef-4dd4-a9dd-2f0358815585 req-a02a96bb-952f-4e83-8774-20b477525afb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:54:16 np0005465987 nova_compute[230713]: 2025-10-02 12:54:16.002 2 DEBUG oslo_concurrency.lockutils [req-48102383-9bef-4dd4-a9dd-2f0358815585 req-a02a96bb-952f-4e83-8774-20b477525afb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:54:16 np0005465987 nova_compute[230713]: 2025-10-02 12:54:16.003 2 DEBUG nova.compute.manager [req-48102383-9bef-4dd4-a9dd-2f0358815585 req-a02a96bb-952f-4e83-8774-20b477525afb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] No waiting events found dispatching network-vif-unplugged-0656c688-251b-4b39-a4de-f3697ea3b306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:54:16 np0005465987 nova_compute[230713]: 2025-10-02 12:54:16.003 2 WARNING nova.compute.manager [req-48102383-9bef-4dd4-a9dd-2f0358815585 req-a02a96bb-952f-4e83-8774-20b477525afb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received unexpected event network-vif-unplugged-0656c688-251b-4b39-a4de-f3697ea3b306 for instance with vm_state rescued and task_state unrescuing.
Oct  2 08:54:16 np0005465987 nova_compute[230713]: 2025-10-02 12:54:16.025 2 DEBUG oslo_concurrency.lockutils [req-07c535d1-b3a2-4dbe-a072-4c7cb44d1ea5 req-a54e9d90-d89f-4152-ab90-01df3b44574e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:54:16 np0005465987 nova_compute[230713]: 2025-10-02 12:54:16.026 2 DEBUG oslo_concurrency.lockutils [req-aa1e428c-3689-4af9-9a3d-c50d9ad82509 req-e52c2153-d9a1-4aca-b3b0-7505e37d544d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:54:16 np0005465987 nova_compute[230713]: 2025-10-02 12:54:16.027 2 DEBUG nova.network.neutron [req-aa1e428c-3689-4af9-9a3d-c50d9ad82509 req-e52c2153-d9a1-4aca-b3b0-7505e37d544d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Refreshing network info cache for port 0656c688-251b-4b39-a4de-f3697ea3b306 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:54:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:16.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:17 np0005465987 nova_compute[230713]: 2025-10-02 12:54:17.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:54:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:17.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:17 np0005465987 nova_compute[230713]: 2025-10-02 12:54:17.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:54:18 np0005465987 nova_compute[230713]: 2025-10-02 12:54:18.024 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:54:18 np0005465987 nova_compute[230713]: 2025-10-02 12:54:18.178 2 DEBUG nova.compute.manager [req-8bc651e1-c807-428f-87f9-7cc7e4821b46 req-cd2a19c4-5a30-4efe-b1f9-6bcc283997bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:54:18 np0005465987 nova_compute[230713]: 2025-10-02 12:54:18.179 2 DEBUG oslo_concurrency.lockutils [req-8bc651e1-c807-428f-87f9-7cc7e4821b46 req-cd2a19c4-5a30-4efe-b1f9-6bcc283997bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:54:18 np0005465987 nova_compute[230713]: 2025-10-02 12:54:18.179 2 DEBUG oslo_concurrency.lockutils [req-8bc651e1-c807-428f-87f9-7cc7e4821b46 req-cd2a19c4-5a30-4efe-b1f9-6bcc283997bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:54:18 np0005465987 nova_compute[230713]: 2025-10-02 12:54:18.180 2 DEBUG oslo_concurrency.lockutils [req-8bc651e1-c807-428f-87f9-7cc7e4821b46 req-cd2a19c4-5a30-4efe-b1f9-6bcc283997bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:54:18 np0005465987 nova_compute[230713]: 2025-10-02 12:54:18.180 2 DEBUG nova.compute.manager [req-8bc651e1-c807-428f-87f9-7cc7e4821b46 req-cd2a19c4-5a30-4efe-b1f9-6bcc283997bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] No waiting events found dispatching network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:54:18 np0005465987 nova_compute[230713]: 2025-10-02 12:54:18.180 2 WARNING nova.compute.manager [req-8bc651e1-c807-428f-87f9-7cc7e4821b46 req-cd2a19c4-5a30-4efe-b1f9-6bcc283997bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received unexpected event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 for instance with vm_state active and task_state None.
Oct  2 08:54:18 np0005465987 nova_compute[230713]: 2025-10-02 12:54:18.181 2 DEBUG nova.compute.manager [req-8bc651e1-c807-428f-87f9-7cc7e4821b46 req-cd2a19c4-5a30-4efe-b1f9-6bcc283997bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:54:18 np0005465987 nova_compute[230713]: 2025-10-02 12:54:18.181 2 DEBUG oslo_concurrency.lockutils [req-8bc651e1-c807-428f-87f9-7cc7e4821b46 req-cd2a19c4-5a30-4efe-b1f9-6bcc283997bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:54:18 np0005465987 nova_compute[230713]: 2025-10-02 12:54:18.181 2 DEBUG oslo_concurrency.lockutils [req-8bc651e1-c807-428f-87f9-7cc7e4821b46 req-cd2a19c4-5a30-4efe-b1f9-6bcc283997bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:54:18 np0005465987 nova_compute[230713]: 2025-10-02 12:54:18.181 2 DEBUG oslo_concurrency.lockutils [req-8bc651e1-c807-428f-87f9-7cc7e4821b46 req-cd2a19c4-5a30-4efe-b1f9-6bcc283997bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:54:18 np0005465987 nova_compute[230713]: 2025-10-02 12:54:18.182 2 DEBUG nova.compute.manager [req-8bc651e1-c807-428f-87f9-7cc7e4821b46 req-cd2a19c4-5a30-4efe-b1f9-6bcc283997bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] No waiting events found dispatching network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:54:18 np0005465987 nova_compute[230713]: 2025-10-02 12:54:18.182 2 WARNING nova.compute.manager [req-8bc651e1-c807-428f-87f9-7cc7e4821b46 req-cd2a19c4-5a30-4efe-b1f9-6bcc283997bb d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received unexpected event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 for instance with vm_state active and task_state None.
Oct  2 08:54:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:18.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:19 np0005465987 nova_compute[230713]: 2025-10-02 12:54:19.345 2 DEBUG nova.network.neutron [req-aa1e428c-3689-4af9-9a3d-c50d9ad82509 req-e52c2153-d9a1-4aca-b3b0-7505e37d544d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updated VIF entry in instance network info cache for port 0656c688-251b-4b39-a4de-f3697ea3b306. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:54:19 np0005465987 nova_compute[230713]: 2025-10-02 12:54:19.346 2 DEBUG nova.network.neutron [req-aa1e428c-3689-4af9-9a3d-c50d9ad82509 req-e52c2153-d9a1-4aca-b3b0-7505e37d544d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updating instance_info_cache with network_info: [{"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:54:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:19.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:19 np0005465987 nova_compute[230713]: 2025-10-02 12:54:19.529 2 DEBUG oslo_concurrency.lockutils [req-aa1e428c-3689-4af9-9a3d-c50d9ad82509 req-e52c2153-d9a1-4aca-b3b0-7505e37d544d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:54:19 np0005465987 nova_compute[230713]: 2025-10-02 12:54:19.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:54:19 np0005465987 nova_compute[230713]: 2025-10-02 12:54:19.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:54:19 np0005465987 nova_compute[230713]: 2025-10-02 12:54:19.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:54:20 np0005465987 nova_compute[230713]: 2025-10-02 12:54:20.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:54:20 np0005465987 nova_compute[230713]: 2025-10-02 12:54:20.337 2 DEBUG nova.compute.manager [req-598f9bbb-93f3-4f07-ad5b-cd89685a0abc req-e5578558-9d86-499d-b0c1-817878b94977 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:54:20 np0005465987 nova_compute[230713]: 2025-10-02 12:54:20.337 2 DEBUG oslo_concurrency.lockutils [req-598f9bbb-93f3-4f07-ad5b-cd89685a0abc req-e5578558-9d86-499d-b0c1-817878b94977 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:54:20 np0005465987 nova_compute[230713]: 2025-10-02 12:54:20.338 2 DEBUG oslo_concurrency.lockutils [req-598f9bbb-93f3-4f07-ad5b-cd89685a0abc req-e5578558-9d86-499d-b0c1-817878b94977 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:54:20 np0005465987 nova_compute[230713]: 2025-10-02 12:54:20.338 2 DEBUG oslo_concurrency.lockutils [req-598f9bbb-93f3-4f07-ad5b-cd89685a0abc req-e5578558-9d86-499d-b0c1-817878b94977 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:54:20 np0005465987 nova_compute[230713]: 2025-10-02 12:54:20.338 2 DEBUG nova.compute.manager [req-598f9bbb-93f3-4f07-ad5b-cd89685a0abc req-e5578558-9d86-499d-b0c1-817878b94977 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] No waiting events found dispatching network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:20 np0005465987 nova_compute[230713]: 2025-10-02 12:54:20.338 2 WARNING nova.compute.manager [req-598f9bbb-93f3-4f07-ad5b-cd89685a0abc req-e5578558-9d86-499d-b0c1-817878b94977 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received unexpected event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:54:20 np0005465987 nova_compute[230713]: 2025-10-02 12:54:20.460 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:20 np0005465987 nova_compute[230713]: 2025-10-02 12:54:20.460 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:20 np0005465987 nova_compute[230713]: 2025-10-02 12:54:20.461 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:54:20 np0005465987 nova_compute[230713]: 2025-10-02 12:54:20.461 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:20.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:21.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:22 np0005465987 nova_compute[230713]: 2025-10-02 12:54:22.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:22 np0005465987 nova_compute[230713]: 2025-10-02 12:54:22.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:22 np0005465987 podman[300858]: 2025-10-02 12:54:22.863799507 +0000 UTC m=+0.070443515 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:54:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:22.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:23.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:24 np0005465987 nova_compute[230713]: 2025-10-02 12:54:24.605 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updating instance_info_cache with network_info: [{"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:24 np0005465987 nova_compute[230713]: 2025-10-02 12:54:24.627 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-f5646848-ae0c-448c-9912-d5ee5734bd1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:24 np0005465987 nova_compute[230713]: 2025-10-02 12:54:24.627 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:54:24 np0005465987 nova_compute[230713]: 2025-10-02 12:54:24.627 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:24.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:25.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:25 np0005465987 podman[300879]: 2025-10-02 12:54:25.866015653 +0000 UTC m=+0.086609048 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct  2 08:54:25 np0005465987 podman[300881]: 2025-10-02 12:54:25.872691906 +0000 UTC m=+0.070681201 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:54:25 np0005465987 podman[300880]: 2025-10-02 12:54:25.874263869 +0000 UTC m=+0.086715740 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:54:25 np0005465987 nova_compute[230713]: 2025-10-02 12:54:25.993 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:25 np0005465987 nova_compute[230713]: 2025-10-02 12:54:25.993 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:26.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:26 np0005465987 nova_compute[230713]: 2025-10-02 12:54:26.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:26 np0005465987 nova_compute[230713]: 2025-10-02 12:54:26.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:26 np0005465987 nova_compute[230713]: 2025-10-02 12:54:26.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.028 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.029 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.029 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.029 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.029 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:27.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3357589349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.454 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.564 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.564 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.564 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.699 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.700 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4144MB free_disk=20.717445373535156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.700 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:27 np0005465987 nova_compute[230713]: 2025-10-02 12:54:27.700 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:27 np0005465987 ovn_controller[129172]: 2025-10-02T12:54:27Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:50:61 10.100.0.7
Oct  2 08:54:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:27.981 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:27.982 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:27.982 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:28 np0005465987 nova_compute[230713]: 2025-10-02 12:54:28.097 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance f5646848-ae0c-448c-9912-d5ee5734bd1e actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:54:28 np0005465987 nova_compute[230713]: 2025-10-02 12:54:28.098 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:54:28 np0005465987 nova_compute[230713]: 2025-10-02 12:54:28.098 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:54:28 np0005465987 nova_compute[230713]: 2025-10-02 12:54:28.206 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1128630450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:28 np0005465987 nova_compute[230713]: 2025-10-02 12:54:28.623 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:28 np0005465987 nova_compute[230713]: 2025-10-02 12:54:28.628 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:54:28 np0005465987 nova_compute[230713]: 2025-10-02 12:54:28.660 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:54:28 np0005465987 nova_compute[230713]: 2025-10-02 12:54:28.714 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:54:28 np0005465987 nova_compute[230713]: 2025-10-02 12:54:28.715 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:28 np0005465987 nova_compute[230713]: 2025-10-02 12:54:28.715 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:28.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:29.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:30.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:31.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:32 np0005465987 nova_compute[230713]: 2025-10-02 12:54:32.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:32 np0005465987 nova_compute[230713]: 2025-10-02 12:54:32.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:32.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:33.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:33 np0005465987 nova_compute[230713]: 2025-10-02 12:54:33.738 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:34.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:35.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:35 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2066395569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:35 np0005465987 nova_compute[230713]: 2025-10-02 12:54:35.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:35 np0005465987 nova_compute[230713]: 2025-10-02 12:54:35.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:54:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:36.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:37 np0005465987 nova_compute[230713]: 2025-10-02 12:54:37.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:37.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:37 np0005465987 nova_compute[230713]: 2025-10-02 12:54:37.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:38.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:39.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:40.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:40 np0005465987 nova_compute[230713]: 2025-10-02 12:54:40.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:40 np0005465987 nova_compute[230713]: 2025-10-02 12:54:40.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:54:40 np0005465987 nova_compute[230713]: 2025-10-02 12:54:40.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:40.974 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:54:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:40.975 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:54:40 np0005465987 nova_compute[230713]: 2025-10-02 12:54:40.995 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:54:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:41.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:42 np0005465987 nova_compute[230713]: 2025-10-02 12:54:42.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:42 np0005465987 nova_compute[230713]: 2025-10-02 12:54:42.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:42.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:43.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.635467) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409683635496, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2389, "num_deletes": 252, "total_data_size": 5807881, "memory_usage": 5902496, "flush_reason": "Manual Compaction"}
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409683653133, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 3797588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66414, "largest_seqno": 68798, "table_properties": {"data_size": 3787963, "index_size": 6118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19873, "raw_average_key_size": 20, "raw_value_size": 3768675, "raw_average_value_size": 3877, "num_data_blocks": 267, "num_entries": 972, "num_filter_entries": 972, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409468, "oldest_key_time": 1759409468, "file_creation_time": 1759409683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 17692 microseconds, and 6365 cpu microseconds.
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.653159) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 3797588 bytes OK
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.653173) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.654613) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.654622) EVENT_LOG_v1 {"time_micros": 1759409683654619, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.654635) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 5797494, prev total WAL file size 5797494, number of live WAL files 2.
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.655691) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(3708KB)], [135(10MB)]
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409683655769, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 14301027, "oldest_snapshot_seqno": -1}
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 9119 keys, 12395736 bytes, temperature: kUnknown
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409683761745, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 12395736, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12335662, "index_size": 36153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22853, "raw_key_size": 239457, "raw_average_key_size": 26, "raw_value_size": 12174346, "raw_average_value_size": 1335, "num_data_blocks": 1384, "num_entries": 9119, "num_filter_entries": 9119, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759409683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.762001) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12395736 bytes
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.765913) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.8 rd, 116.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.0 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(7.0) write-amplify(3.3) OK, records in: 9642, records dropped: 523 output_compression: NoCompression
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.765929) EVENT_LOG_v1 {"time_micros": 1759409683765921, "job": 86, "event": "compaction_finished", "compaction_time_micros": 106068, "compaction_time_cpu_micros": 45877, "output_level": 6, "num_output_files": 1, "total_output_size": 12395736, "num_input_records": 9642, "num_output_records": 9119, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409683766612, "job": 86, "event": "table_file_deletion", "file_number": 137}
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409683768387, "job": 86, "event": "table_file_deletion", "file_number": 135}
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.655611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.768436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.768440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.768442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.768444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:54:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:54:43.768445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:54:44 np0005465987 nova_compute[230713]: 2025-10-02 12:54:44.203 2 DEBUG oslo_concurrency.lockutils [None req-353df715-f2b9-4950-a87a-565a64376da2 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:44 np0005465987 nova_compute[230713]: 2025-10-02 12:54:44.203 2 DEBUG oslo_concurrency.lockutils [None req-353df715-f2b9-4950-a87a-565a64376da2 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:44 np0005465987 nova_compute[230713]: 2025-10-02 12:54:44.234 2 INFO nova.compute.manager [None req-353df715-f2b9-4950-a87a-565a64376da2 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Detaching volume 7708c28d-8dd9-455b-9cf8-550d89fddce1#033[00m
Oct  2 08:54:44 np0005465987 nova_compute[230713]: 2025-10-02 12:54:44.435 2 INFO nova.virt.block_device [None req-353df715-f2b9-4950-a87a-565a64376da2 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Attempting to driver detach volume 7708c28d-8dd9-455b-9cf8-550d89fddce1 from mountpoint /dev/vdb#033[00m
Oct  2 08:54:44 np0005465987 nova_compute[230713]: 2025-10-02 12:54:44.443 2 DEBUG nova.virt.libvirt.driver [None req-353df715-f2b9-4950-a87a-565a64376da2 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Attempting to detach device vdb from instance f5646848-ae0c-448c-9912-d5ee5734bd1e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:54:44 np0005465987 nova_compute[230713]: 2025-10-02 12:54:44.444 2 DEBUG nova.virt.libvirt.guest [None req-353df715-f2b9-4950-a87a-565a64376da2 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:54:44 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-7708c28d-8dd9-455b-9cf8-550d89fddce1">
Oct  2 08:54:44 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:  <serial>7708c28d-8dd9-455b-9cf8-550d89fddce1</serial>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:54:44 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:54:44 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:54:44 np0005465987 nova_compute[230713]: 2025-10-02 12:54:44.451 2 INFO nova.virt.libvirt.driver [None req-353df715-f2b9-4950-a87a-565a64376da2 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Successfully detached device vdb from instance f5646848-ae0c-448c-9912-d5ee5734bd1e from the persistent domain config.#033[00m
Oct  2 08:54:44 np0005465987 nova_compute[230713]: 2025-10-02 12:54:44.452 2 DEBUG nova.virt.libvirt.driver [None req-353df715-f2b9-4950-a87a-565a64376da2 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f5646848-ae0c-448c-9912-d5ee5734bd1e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:54:44 np0005465987 nova_compute[230713]: 2025-10-02 12:54:44.453 2 DEBUG nova.virt.libvirt.guest [None req-353df715-f2b9-4950-a87a-565a64376da2 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:54:44 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-7708c28d-8dd9-455b-9cf8-550d89fddce1">
Oct  2 08:54:44 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:  </source>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:  <serial>7708c28d-8dd9-455b-9cf8-550d89fddce1</serial>
Oct  2 08:54:44 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:54:44 np0005465987 nova_compute[230713]: </disk>
Oct  2 08:54:44 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:54:44 np0005465987 nova_compute[230713]: 2025-10-02 12:54:44.556 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Received event <DeviceRemovedEvent: 1759409684.5553172, f5646848-ae0c-448c-9912-d5ee5734bd1e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:54:44 np0005465987 nova_compute[230713]: 2025-10-02 12:54:44.557 2 DEBUG nova.virt.libvirt.driver [None req-353df715-f2b9-4950-a87a-565a64376da2 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f5646848-ae0c-448c-9912-d5ee5734bd1e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:54:44 np0005465987 nova_compute[230713]: 2025-10-02 12:54:44.558 2 INFO nova.virt.libvirt.driver [None req-353df715-f2b9-4950-a87a-565a64376da2 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Successfully detached device vdb from instance f5646848-ae0c-448c-9912-d5ee5734bd1e from the live domain config.#033[00m
Oct  2 08:54:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:44.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:44 np0005465987 nova_compute[230713]: 2025-10-02 12:54:44.987 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:45 np0005465987 nova_compute[230713]: 2025-10-02 12:54:45.078 2 DEBUG nova.objects.instance [None req-353df715-f2b9-4950-a87a-565a64376da2 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'flavor' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:45 np0005465987 nova_compute[230713]: 2025-10-02 12:54:45.155 2 DEBUG oslo_concurrency.lockutils [None req-353df715-f2b9-4950-a87a-565a64376da2 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:45.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e383 e383: 3 total, 3 up, 3 in
Oct  2 08:54:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:46.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:47 np0005465987 nova_compute[230713]: 2025-10-02 12:54:47.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:47.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:47 np0005465987 nova_compute[230713]: 2025-10-02 12:54:47.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:48 np0005465987 nova_compute[230713]: 2025-10-02 12:54:48.295 2 DEBUG oslo_concurrency.lockutils [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:48 np0005465987 nova_compute[230713]: 2025-10-02 12:54:48.296 2 DEBUG oslo_concurrency.lockutils [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:48 np0005465987 nova_compute[230713]: 2025-10-02 12:54:48.296 2 DEBUG oslo_concurrency.lockutils [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:48 np0005465987 nova_compute[230713]: 2025-10-02 12:54:48.296 2 DEBUG oslo_concurrency.lockutils [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:48 np0005465987 nova_compute[230713]: 2025-10-02 12:54:48.296 2 DEBUG oslo_concurrency.lockutils [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:48 np0005465987 nova_compute[230713]: 2025-10-02 12:54:48.297 2 INFO nova.compute.manager [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Terminating instance#033[00m
Oct  2 08:54:48 np0005465987 nova_compute[230713]: 2025-10-02 12:54:48.298 2 DEBUG nova.compute.manager [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:54:48 np0005465987 kernel: tap0656c688-25 (unregistering): left promiscuous mode
Oct  2 08:54:48 np0005465987 NetworkManager[44910]: <info>  [1759409688.3609] device (tap0656c688-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:54:48 np0005465987 nova_compute[230713]: 2025-10-02 12:54:48.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:54:48Z|00727|binding|INFO|Releasing lport 0656c688-251b-4b39-a4de-f3697ea3b306 from this chassis (sb_readonly=0)
Oct  2 08:54:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:54:48Z|00728|binding|INFO|Setting lport 0656c688-251b-4b39-a4de-f3697ea3b306 down in Southbound
Oct  2 08:54:48 np0005465987 ovn_controller[129172]: 2025-10-02T12:54:48Z|00729|binding|INFO|Removing iface tap0656c688-25 ovn-installed in OVS
Oct  2 08:54:48 np0005465987 nova_compute[230713]: 2025-10-02 12:54:48.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:48 np0005465987 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b5.scope: Deactivated successfully.
Oct  2 08:54:48 np0005465987 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b5.scope: Consumed 14.355s CPU time.
Oct  2 08:54:48 np0005465987 systemd-machined[188335]: Machine qemu-84-instance-000000b5 terminated.
Oct  2 08:54:48 np0005465987 nova_compute[230713]: 2025-10-02 12:54:48.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:48 np0005465987 nova_compute[230713]: 2025-10-02 12:54:48.568 2 INFO nova.virt.libvirt.driver [-] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Instance destroyed successfully.#033[00m
Oct  2 08:54:48 np0005465987 nova_compute[230713]: 2025-10-02 12:54:48.569 2 DEBUG nova.objects.instance [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lazy-loading 'resources' on Instance uuid f5646848-ae0c-448c-9912-d5ee5734bd1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:48.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:49 np0005465987 nova_compute[230713]: 2025-10-02 12:54:49.029 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:49.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:49.767 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:50:61 10.100.0.7'], port_security=['fa:16:3e:2d:50:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f5646848-ae0c-448c-9912-d5ee5734bd1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48e4ff16-1388-40c7-a27a-83a3b4869808', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a442bc513e14406b73e96e70396e6c3', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd6795e4e-d84d-4947-be00-1b032e1a18b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2acef0-eb35-44b4-ad52-c0266ea4784a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0656c688-251b-4b39-a4de-f3697ea3b306) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:54:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:49.769 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0656c688-251b-4b39-a4de-f3697ea3b306 in datapath 48e4ff16-1388-40c7-a27a-83a3b4869808 unbound from our chassis#033[00m
Oct  2 08:54:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:49.772 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48e4ff16-1388-40c7-a27a-83a3b4869808, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:54:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:49.774 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e38b6df8-bbaa-4803-9d78-4f8b89c05c84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:49.775 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 namespace which is not needed anymore#033[00m
Oct  2 08:54:49 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300842]: [NOTICE]   (300846) : haproxy version is 2.8.14-c23fe91
Oct  2 08:54:49 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300842]: [NOTICE]   (300846) : path to executable is /usr/sbin/haproxy
Oct  2 08:54:49 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300842]: [WARNING]  (300846) : Exiting Master process...
Oct  2 08:54:49 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300842]: [ALERT]    (300846) : Current worker (300848) exited with code 143 (Terminated)
Oct  2 08:54:49 np0005465987 neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808[300842]: [WARNING]  (300846) : All workers exited. Exiting... (0)
Oct  2 08:54:49 np0005465987 systemd[1]: libpod-16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3.scope: Deactivated successfully.
Oct  2 08:54:49 np0005465987 podman[301018]: 2025-10-02 12:54:49.968839712 +0000 UTC m=+0.056412509 container died 16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:54:49 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:49.978 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:50 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3-userdata-shm.mount: Deactivated successfully.
Oct  2 08:54:50 np0005465987 systemd[1]: var-lib-containers-storage-overlay-1baf4cebe3630828ef87e94637192d4de3bc15249e21e7c0cfac2d8140d84c76-merged.mount: Deactivated successfully.
Oct  2 08:54:50 np0005465987 podman[301018]: 2025-10-02 12:54:50.024755886 +0000 UTC m=+0.112328703 container cleanup 16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:54:50 np0005465987 systemd[1]: libpod-conmon-16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3.scope: Deactivated successfully.
Oct  2 08:54:50 np0005465987 podman[301047]: 2025-10-02 12:54:50.10216212 +0000 UTC m=+0.046502677 container remove 16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:54:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:50.110 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d164d025-4616-490c-aab2-b42542f68692]: (4, ('Thu Oct  2 12:54:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 (16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3)\n16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3\nThu Oct  2 12:54:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 (16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3)\n16e8df809f5989cb3d0086efdf1efad4d1532d94b8ac61fef6d790be4ac48ed3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:50.112 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e2ec6a73-2857-4e9b-af82-e2d75e04a927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:50.113 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48e4ff16-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:50 np0005465987 nova_compute[230713]: 2025-10-02 12:54:50.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:50 np0005465987 kernel: tap48e4ff16-10: left promiscuous mode
Oct  2 08:54:50 np0005465987 nova_compute[230713]: 2025-10-02 12:54:50.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:50.139 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5cdf681f-14a5-4744-a3e3-d2d418f4edcf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:50.163 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4e366346-efab-4461-8ff3-14985e2682f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:50.164 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec1ce0d-73fd-472c-b4e5-45afb0fab878]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:50.182 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5744a1-14e1-4d08-94a5-4405f99a37b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772191, 'reachable_time': 29539, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301067, 'error': None, 'target': 'ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:50 np0005465987 systemd[1]: run-netns-ovnmeta\x2d48e4ff16\x2d1388\x2d40c7\x2da27a\x2d83a3b4869808.mount: Deactivated successfully.
Oct  2 08:54:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:50.188 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48e4ff16-1388-40c7-a27a-83a3b4869808 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:54:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:54:50.189 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[d88c1d0c-c477-41a5-b583-0d4ba25866b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:50 np0005465987 nova_compute[230713]: 2025-10-02 12:54:50.785 2 DEBUG nova.virt.libvirt.vif [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-356002637',display_name='tempest-ServerStableDeviceRescueTest-server-356002637',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-356002637',id=181,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoKV7bc1Ihcwh5u/ucbycc4MJFxvJi6ozOl0T1nn+zqbeWg5iK0RFaya9xHSofolJ2eQYNrdhLMrQSXhcg4FfGkUU62Zm08MSKDrCZr3cYV0q0VS0+lbILv9n1ItWEizA==',key_name='tempest-keypair-310042596',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a442bc513e14406b73e96e70396e6c3',ramdisk_id='',reservation_id='r-z5xm6cn2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-454391960',owner_user_name='tempest-ServerStableDeviceRescueTest-454391960-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:54:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6785ffe5d6554514b4ed9fd47665eca0',uuid=f5646848-ae0c-448c-9912-d5ee5734bd1e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:54:50 np0005465987 nova_compute[230713]: 2025-10-02 12:54:50.785 2 DEBUG nova.network.os_vif_util [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converting VIF {"id": "0656c688-251b-4b39-a4de-f3697ea3b306", "address": "fa:16:3e:2d:50:61", "network": {"id": "48e4ff16-1388-40c7-a27a-83a3b4869808", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-271672558-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a442bc513e14406b73e96e70396e6c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0656c688-25", "ovs_interfaceid": "0656c688-251b-4b39-a4de-f3697ea3b306", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:54:50 np0005465987 nova_compute[230713]: 2025-10-02 12:54:50.786 2 DEBUG nova.network.os_vif_util [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:50:61,bridge_name='br-int',has_traffic_filtering=True,id=0656c688-251b-4b39-a4de-f3697ea3b306,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0656c688-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:54:50 np0005465987 nova_compute[230713]: 2025-10-02 12:54:50.786 2 DEBUG os_vif [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:50:61,bridge_name='br-int',has_traffic_filtering=True,id=0656c688-251b-4b39-a4de-f3697ea3b306,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0656c688-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:54:50 np0005465987 nova_compute[230713]: 2025-10-02 12:54:50.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:50 np0005465987 nova_compute[230713]: 2025-10-02 12:54:50.788 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0656c688-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:50 np0005465987 nova_compute[230713]: 2025-10-02 12:54:50.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:50 np0005465987 nova_compute[230713]: 2025-10-02 12:54:50.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:50 np0005465987 nova_compute[230713]: 2025-10-02 12:54:50.801 2 INFO os_vif [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:50:61,bridge_name='br-int',has_traffic_filtering=True,id=0656c688-251b-4b39-a4de-f3697ea3b306,network=Network(48e4ff16-1388-40c7-a27a-83a3b4869808),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0656c688-25')#033[00m
Oct  2 08:54:50 np0005465987 nova_compute[230713]: 2025-10-02 12:54:50.824 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Triggering sync for uuid f5646848-ae0c-448c-9912-d5ee5734bd1e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:54:50 np0005465987 nova_compute[230713]: 2025-10-02 12:54:50.825 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:50.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:51 np0005465987 nova_compute[230713]: 2025-10-02 12:54:51.225 2 DEBUG nova.compute.manager [req-2761471e-9395-4535-99cc-24bd107733d0 req-6476d887-7b70-42f5-87ac-24e229233532 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-unplugged-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:51 np0005465987 nova_compute[230713]: 2025-10-02 12:54:51.225 2 DEBUG oslo_concurrency.lockutils [req-2761471e-9395-4535-99cc-24bd107733d0 req-6476d887-7b70-42f5-87ac-24e229233532 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:51 np0005465987 nova_compute[230713]: 2025-10-02 12:54:51.226 2 DEBUG oslo_concurrency.lockutils [req-2761471e-9395-4535-99cc-24bd107733d0 req-6476d887-7b70-42f5-87ac-24e229233532 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:51 np0005465987 nova_compute[230713]: 2025-10-02 12:54:51.226 2 DEBUG oslo_concurrency.lockutils [req-2761471e-9395-4535-99cc-24bd107733d0 req-6476d887-7b70-42f5-87ac-24e229233532 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:51 np0005465987 nova_compute[230713]: 2025-10-02 12:54:51.226 2 DEBUG nova.compute.manager [req-2761471e-9395-4535-99cc-24bd107733d0 req-6476d887-7b70-42f5-87ac-24e229233532 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] No waiting events found dispatching network-vif-unplugged-0656c688-251b-4b39-a4de-f3697ea3b306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:51 np0005465987 nova_compute[230713]: 2025-10-02 12:54:51.226 2 DEBUG nova.compute.manager [req-2761471e-9395-4535-99cc-24bd107733d0 req-6476d887-7b70-42f5-87ac-24e229233532 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-unplugged-0656c688-251b-4b39-a4de-f3697ea3b306 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:54:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:51.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:52 np0005465987 nova_compute[230713]: 2025-10-02 12:54:52.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:52.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:53 np0005465987 nova_compute[230713]: 2025-10-02 12:54:53.324 2 INFO nova.virt.libvirt.driver [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Deleting instance files /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e_del#033[00m
Oct  2 08:54:53 np0005465987 nova_compute[230713]: 2025-10-02 12:54:53.324 2 INFO nova.virt.libvirt.driver [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Deletion of /var/lib/nova/instances/f5646848-ae0c-448c-9912-d5ee5734bd1e_del complete#033[00m
Oct  2 08:54:53 np0005465987 nova_compute[230713]: 2025-10-02 12:54:53.341 2 DEBUG nova.compute.manager [req-305795fe-6381-4a71-a1c0-e2af0cb78049 req-92d1bd53-82e0-4bfd-98c7-d132a97cfd36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:53 np0005465987 nova_compute[230713]: 2025-10-02 12:54:53.341 2 DEBUG oslo_concurrency.lockutils [req-305795fe-6381-4a71-a1c0-e2af0cb78049 req-92d1bd53-82e0-4bfd-98c7-d132a97cfd36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:53 np0005465987 nova_compute[230713]: 2025-10-02 12:54:53.341 2 DEBUG oslo_concurrency.lockutils [req-305795fe-6381-4a71-a1c0-e2af0cb78049 req-92d1bd53-82e0-4bfd-98c7-d132a97cfd36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:53 np0005465987 nova_compute[230713]: 2025-10-02 12:54:53.342 2 DEBUG oslo_concurrency.lockutils [req-305795fe-6381-4a71-a1c0-e2af0cb78049 req-92d1bd53-82e0-4bfd-98c7-d132a97cfd36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:53 np0005465987 nova_compute[230713]: 2025-10-02 12:54:53.342 2 DEBUG nova.compute.manager [req-305795fe-6381-4a71-a1c0-e2af0cb78049 req-92d1bd53-82e0-4bfd-98c7-d132a97cfd36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] No waiting events found dispatching network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:53 np0005465987 nova_compute[230713]: 2025-10-02 12:54:53.342 2 WARNING nova.compute.manager [req-305795fe-6381-4a71-a1c0-e2af0cb78049 req-92d1bd53-82e0-4bfd-98c7-d132a97cfd36 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received unexpected event network-vif-plugged-0656c688-251b-4b39-a4de-f3697ea3b306 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:54:53 np0005465987 nova_compute[230713]: 2025-10-02 12:54:53.437 2 INFO nova.compute.manager [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Took 5.14 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:54:53 np0005465987 nova_compute[230713]: 2025-10-02 12:54:53.438 2 DEBUG oslo.service.loopingcall [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:54:53 np0005465987 nova_compute[230713]: 2025-10-02 12:54:53.438 2 DEBUG nova.compute.manager [-] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:54:53 np0005465987 nova_compute[230713]: 2025-10-02 12:54:53.438 2 DEBUG nova.network.neutron [-] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:54:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:53.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:53 np0005465987 podman[301218]: 2025-10-02 12:54:53.846635522 +0000 UTC m=+0.062072934 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:54:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e384 e384: 3 total, 3 up, 3 in
Oct  2 08:54:54 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:54:54 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:54:54 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:54:54 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:54:54 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:54:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:54.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:54 np0005465987 nova_compute[230713]: 2025-10-02 12:54:54.992 2 DEBUG nova.network.neutron [-] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:55 np0005465987 nova_compute[230713]: 2025-10-02 12:54:55.027 2 INFO nova.compute.manager [-] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Took 1.59 seconds to deallocate network for instance.#033[00m
Oct  2 08:54:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:54:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019123386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:54:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:54:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019123386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:54:55 np0005465987 nova_compute[230713]: 2025-10-02 12:54:55.192 2 DEBUG oslo_concurrency.lockutils [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:55 np0005465987 nova_compute[230713]: 2025-10-02 12:54:55.192 2 DEBUG oslo_concurrency.lockutils [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:55 np0005465987 nova_compute[230713]: 2025-10-02 12:54:55.239 2 DEBUG oslo_concurrency.processutils [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:54:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:55.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:54:55 np0005465987 nova_compute[230713]: 2025-10-02 12:54:55.591 2 DEBUG nova.compute.manager [req-32dceb1c-194c-409b-ae96-414d9884e8af req-c9545726-a1c9-4862-81d8-aa939455230c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Received event network-vif-deleted-0656c688-251b-4b39-a4de-f3697ea3b306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/835688787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:55 np0005465987 nova_compute[230713]: 2025-10-02 12:54:55.651 2 DEBUG oslo_concurrency.processutils [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:55 np0005465987 nova_compute[230713]: 2025-10-02 12:54:55.661 2 DEBUG nova.compute.provider_tree [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:54:55 np0005465987 nova_compute[230713]: 2025-10-02 12:54:55.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:55 np0005465987 nova_compute[230713]: 2025-10-02 12:54:55.834 2 DEBUG nova.scheduler.client.report [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:54:56 np0005465987 nova_compute[230713]: 2025-10-02 12:54:56.439 2 DEBUG oslo_concurrency.lockutils [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:56 np0005465987 nova_compute[230713]: 2025-10-02 12:54:56.465 2 INFO nova.scheduler.client.report [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Deleted allocations for instance f5646848-ae0c-448c-9912-d5ee5734bd1e#033[00m
Oct  2 08:54:56 np0005465987 nova_compute[230713]: 2025-10-02 12:54:56.539 2 DEBUG oslo_concurrency.lockutils [None req-0ba715e5-c6e2-4eb7-83c8-14c690050890 6785ffe5d6554514b4ed9fd47665eca0 6a442bc513e14406b73e96e70396e6c3 - - default default] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:56 np0005465987 nova_compute[230713]: 2025-10-02 12:54:56.540 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 5.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:56 np0005465987 nova_compute[230713]: 2025-10-02 12:54:56.540 2 INFO nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Oct  2 08:54:56 np0005465987 nova_compute[230713]: 2025-10-02 12:54:56.541 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "f5646848-ae0c-448c-9912-d5ee5734bd1e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:56 np0005465987 podman[301263]: 2025-10-02 12:54:56.845475397 +0000 UTC m=+0.056417949 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:54:56 np0005465987 podman[301264]: 2025-10-02 12:54:56.85470949 +0000 UTC m=+0.063034000 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:54:56 np0005465987 podman[301262]: 2025-10-02 12:54:56.863553023 +0000 UTC m=+0.078445474 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Oct  2 08:54:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:56.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:57 np0005465987 nova_compute[230713]: 2025-10-02 12:54:57.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:57 np0005465987 nova_compute[230713]: 2025-10-02 12:54:57.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:57 np0005465987 nova_compute[230713]: 2025-10-02 12:54:57.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:57.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:58.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:54:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1507127177' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:54:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:54:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1507127177' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:54:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:54:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:59.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e385 e385: 3 total, 3 up, 3 in
Oct  2 08:55:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:00 np0005465987 nova_compute[230713]: 2025-10-02 12:55:00.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:55:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:00.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:55:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:01.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:55:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:55:02 np0005465987 nova_compute[230713]: 2025-10-02 12:55:02.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:02.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:03.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:03 np0005465987 nova_compute[230713]: 2025-10-02 12:55:03.568 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409688.5662072, f5646848-ae0c-448c-9912-d5ee5734bd1e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:03 np0005465987 nova_compute[230713]: 2025-10-02 12:55:03.568 2 INFO nova.compute.manager [-] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:55:03 np0005465987 nova_compute[230713]: 2025-10-02 12:55:03.642 2 DEBUG nova.compute.manager [None req-88ddac01-b527-45a3-b00d-36db824a7e7f - - - - - -] [instance: f5646848-ae0c-448c-9912-d5ee5734bd1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:04.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:05.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:05 np0005465987 nova_compute[230713]: 2025-10-02 12:55:05.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e386 e386: 3 total, 3 up, 3 in
Oct  2 08:55:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:06.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:07 np0005465987 nova_compute[230713]: 2025-10-02 12:55:07.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:55:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:07.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:55:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:08.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e387 e387: 3 total, 3 up, 3 in
Oct  2 08:55:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:09.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:10 np0005465987 nova_compute[230713]: 2025-10-02 12:55:10.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:10.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:11.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:12 np0005465987 nova_compute[230713]: 2025-10-02 12:55:12.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e388 e388: 3 total, 3 up, 3 in
Oct  2 08:55:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:12.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:13.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e389 e389: 3 total, 3 up, 3 in
Oct  2 08:55:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:14.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:15.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:15 np0005465987 nova_compute[230713]: 2025-10-02 12:55:15.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:16 np0005465987 nova_compute[230713]: 2025-10-02 12:55:16.136 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:16 np0005465987 nova_compute[230713]: 2025-10-02 12:55:16.137 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:16 np0005465987 nova_compute[230713]: 2025-10-02 12:55:16.567 2 DEBUG nova.compute.manager [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:55:16 np0005465987 nova_compute[230713]: 2025-10-02 12:55:16.851 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:16 np0005465987 nova_compute[230713]: 2025-10-02 12:55:16.852 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:16 np0005465987 nova_compute[230713]: 2025-10-02 12:55:16.857 2 DEBUG nova.virt.hardware [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:55:16 np0005465987 nova_compute[230713]: 2025-10-02 12:55:16.858 2 INFO nova.compute.claims [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:55:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:16.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:17 np0005465987 nova_compute[230713]: 2025-10-02 12:55:17.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:17 np0005465987 nova_compute[230713]: 2025-10-02 12:55:17.083 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:17.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:55:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/91340182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:55:17 np0005465987 nova_compute[230713]: 2025-10-02 12:55:17.543 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:17 np0005465987 nova_compute[230713]: 2025-10-02 12:55:17.549 2 DEBUG nova.compute.provider_tree [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:55:17 np0005465987 nova_compute[230713]: 2025-10-02 12:55:17.585 2 DEBUG nova.scheduler.client.report [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:55:17 np0005465987 nova_compute[230713]: 2025-10-02 12:55:17.652 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:17 np0005465987 nova_compute[230713]: 2025-10-02 12:55:17.653 2 DEBUG nova.compute.manager [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:55:17 np0005465987 nova_compute[230713]: 2025-10-02 12:55:17.735 2 DEBUG nova.compute.manager [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:55:17 np0005465987 nova_compute[230713]: 2025-10-02 12:55:17.735 2 DEBUG nova.network.neutron [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:55:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:55:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/87200434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:55:18 np0005465987 nova_compute[230713]: 2025-10-02 12:55:18.305 2 DEBUG nova.policy [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb366465e6154871b8a53c9f500105ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:55:18 np0005465987 nova_compute[230713]: 2025-10-02 12:55:18.357 2 INFO nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:55:18 np0005465987 nova_compute[230713]: 2025-10-02 12:55:18.532 2 DEBUG nova.compute.manager [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:55:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:55:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:18.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.418 2 DEBUG nova.compute.manager [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.423 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.424 2 INFO nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Creating image(s)#033[00m
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.451 2 DEBUG nova.storage.rbd_utils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 e390: 3 total, 3 up, 3 in
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.477 2 DEBUG nova.storage.rbd_utils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.502 2 DEBUG nova.storage.rbd_utils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.505 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:19.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.543 2 DEBUG nova.network.neutron [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Successfully created port: 76ca6742-e0ea-45a7-bf82-750d5106e5b8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.571 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.572 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.573 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.573 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.598 2 DEBUG nova.storage.rbd_utils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.601 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:19 np0005465987 nova_compute[230713]: 2025-10-02 12:55:19.955 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:20 np0005465987 nova_compute[230713]: 2025-10-02 12:55:20.035 2 DEBUG nova.storage.rbd_utils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] resizing rbd image 2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:55:20 np0005465987 nova_compute[230713]: 2025-10-02 12:55:20.161 2 DEBUG nova.objects.instance [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'migration_context' on Instance uuid 2dff32de-ac4f-401a-9e6a-9446d4a0faaa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:20 np0005465987 nova_compute[230713]: 2025-10-02 12:55:20.512 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:55:20 np0005465987 nova_compute[230713]: 2025-10-02 12:55:20.512 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Ensure instance console log exists: /var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:55:20 np0005465987 nova_compute[230713]: 2025-10-02 12:55:20.513 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:20 np0005465987 nova_compute[230713]: 2025-10-02 12:55:20.513 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:20 np0005465987 nova_compute[230713]: 2025-10-02 12:55:20.513 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:20 np0005465987 nova_compute[230713]: 2025-10-02 12:55:20.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:21.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:55:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:21.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:55:21 np0005465987 nova_compute[230713]: 2025-10-02 12:55:21.760 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:21 np0005465987 nova_compute[230713]: 2025-10-02 12:55:21.761 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:55:21 np0005465987 nova_compute[230713]: 2025-10-02 12:55:21.761 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:55:21 np0005465987 nova_compute[230713]: 2025-10-02 12:55:21.877 2 DEBUG nova.network.neutron [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Successfully updated port: 76ca6742-e0ea-45a7-bf82-750d5106e5b8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:55:22 np0005465987 nova_compute[230713]: 2025-10-02 12:55:22.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:22 np0005465987 nova_compute[230713]: 2025-10-02 12:55:22.216 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:55:22 np0005465987 nova_compute[230713]: 2025-10-02 12:55:22.216 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:55:22 np0005465987 nova_compute[230713]: 2025-10-02 12:55:22.217 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:22 np0005465987 nova_compute[230713]: 2025-10-02 12:55:22.463 2 DEBUG nova.compute.manager [req-53161992-e9a7-4939-88df-582e89922eff req-ce4a9da2-4691-45f1-a851-60d26ff839fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-changed-76ca6742-e0ea-45a7-bf82-750d5106e5b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:22 np0005465987 nova_compute[230713]: 2025-10-02 12:55:22.463 2 DEBUG nova.compute.manager [req-53161992-e9a7-4939-88df-582e89922eff req-ce4a9da2-4691-45f1-a851-60d26ff839fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Refreshing instance network info cache due to event network-changed-76ca6742-e0ea-45a7-bf82-750d5106e5b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:55:22 np0005465987 nova_compute[230713]: 2025-10-02 12:55:22.464 2 DEBUG oslo_concurrency.lockutils [req-53161992-e9a7-4939-88df-582e89922eff req-ce4a9da2-4691-45f1-a851-60d26ff839fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:55:22 np0005465987 nova_compute[230713]: 2025-10-02 12:55:22.464 2 DEBUG oslo_concurrency.lockutils [req-53161992-e9a7-4939-88df-582e89922eff req-ce4a9da2-4691-45f1-a851-60d26ff839fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:55:22 np0005465987 nova_compute[230713]: 2025-10-02 12:55:22.464 2 DEBUG nova.network.neutron [req-53161992-e9a7-4939-88df-582e89922eff req-ce4a9da2-4691-45f1-a851-60d26ff839fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Refreshing network info cache for port 76ca6742-e0ea-45a7-bf82-750d5106e5b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:55:22 np0005465987 nova_compute[230713]: 2025-10-02 12:55:22.674 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:55:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:23.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:23 np0005465987 nova_compute[230713]: 2025-10-02 12:55:23.121 2 DEBUG nova.network.neutron [req-53161992-e9a7-4939-88df-582e89922eff req-ce4a9da2-4691-45f1-a851-60d26ff839fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:55:23 np0005465987 nova_compute[230713]: 2025-10-02 12:55:23.414 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:23 np0005465987 nova_compute[230713]: 2025-10-02 12:55:23.435 2 DEBUG nova.network.neutron [req-53161992-e9a7-4939-88df-582e89922eff req-ce4a9da2-4691-45f1-a851-60d26ff839fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:23 np0005465987 nova_compute[230713]: 2025-10-02 12:55:23.503 2 DEBUG oslo_concurrency.lockutils [req-53161992-e9a7-4939-88df-582e89922eff req-ce4a9da2-4691-45f1-a851-60d26ff839fa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:55:23 np0005465987 nova_compute[230713]: 2025-10-02 12:55:23.503 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquired lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:55:23 np0005465987 nova_compute[230713]: 2025-10-02 12:55:23.504 2 DEBUG nova.network.neutron [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:55:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:23.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:23 np0005465987 nova_compute[230713]: 2025-10-02 12:55:23.730 2 DEBUG nova.network.neutron [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:55:24 np0005465987 podman[301567]: 2025-10-02 12:55:24.861847278 +0000 UTC m=+0.072543822 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:55:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:25.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:25.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:25 np0005465987 nova_compute[230713]: 2025-10-02 12:55:25.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.334 2 DEBUG nova.network.neutron [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updating instance_info_cache with network_info: [{"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.460 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Releasing lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.460 2 DEBUG nova.compute.manager [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Instance network_info: |[{"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.463 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Start _get_guest_xml network_info=[{"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.467 2 WARNING nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.471 2 DEBUG nova.virt.libvirt.host [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.472 2 DEBUG nova.virt.libvirt.host [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.475 2 DEBUG nova.virt.libvirt.host [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.475 2 DEBUG nova.virt.libvirt.host [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.476 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.477 2 DEBUG nova.virt.hardware [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.477 2 DEBUG nova.virt.hardware [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.478 2 DEBUG nova.virt.hardware [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.478 2 DEBUG nova.virt.hardware [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.478 2 DEBUG nova.virt.hardware [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.478 2 DEBUG nova.virt.hardware [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.478 2 DEBUG nova.virt.hardware [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.479 2 DEBUG nova.virt.hardware [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.479 2 DEBUG nova.virt.hardware [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.479 2 DEBUG nova.virt.hardware [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.479 2 DEBUG nova.virt.hardware [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.482 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:55:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1688538003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.936 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.960 2 DEBUG nova.storage.rbd_utils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:26 np0005465987 nova_compute[230713]: 2025-10-02 12:55:26.963 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:27.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:55:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2586245211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.395 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.397 2 DEBUG nova.virt.libvirt.vif [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1692291686',display_name='tempest-TestNetworkBasicOps-server-1692291686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1692291686',id=186,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZWIx9o6kvPe+g4xv7jAf5RmegZo+XaIBjQcv6mFDdVE2Gj2eWqFYANF8qBhZ2trYTB0D6p6t6i6if67qDSCR68FKh99aOZV/40OkaYK0vaRDFLVdlWm6tj/SR2I1Vx6w==',key_name='tempest-TestNetworkBasicOps-590509202',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-uec0sg7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:18Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=2dff32de-ac4f-401a-9e6a-9446d4a0faaa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.397 2 DEBUG nova.network.os_vif_util [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.398 2 DEBUG nova.network.os_vif_util [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:61:8f,bridge_name='br-int',has_traffic_filtering=True,id=76ca6742-e0ea-45a7-bf82-750d5106e5b8,network=Network(67d7f48d-b1d1-47d6-8eed-15e85a093f32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ca6742-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.399 2 DEBUG nova.objects.instance [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'pci_devices' on Instance uuid 2dff32de-ac4f-401a-9e6a-9446d4a0faaa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:27.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.547 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  <uuid>2dff32de-ac4f-401a-9e6a-9446d4a0faaa</uuid>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  <name>instance-000000ba</name>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestNetworkBasicOps-server-1692291686</nova:name>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:55:26</nova:creationTime>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <nova:port uuid="76ca6742-e0ea-45a7-bf82-750d5106e5b8">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <entry name="serial">2dff32de-ac4f-401a-9e6a-9446d4a0faaa</entry>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <entry name="uuid">2dff32de-ac4f-401a-9e6a-9446d4a0faaa</entry>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk.config">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:95:61:8f"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <target dev="tap76ca6742-e0"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/console.log" append="off"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:55:27 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:55:27 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:55:27 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:55:27 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.549 2 DEBUG nova.compute.manager [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Preparing to wait for external event network-vif-plugged-76ca6742-e0ea-45a7-bf82-750d5106e5b8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.549 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.549 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.549 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.550 2 DEBUG nova.virt.libvirt.vif [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1692291686',display_name='tempest-TestNetworkBasicOps-server-1692291686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1692291686',id=186,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZWIx9o6kvPe+g4xv7jAf5RmegZo+XaIBjQcv6mFDdVE2Gj2eWqFYANF8qBhZ2trYTB0D6p6t6i6if67qDSCR68FKh99aOZV/40OkaYK0vaRDFLVdlWm6tj/SR2I1Vx6w==',key_name='tempest-TestNetworkBasicOps-590509202',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-uec0sg7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:18Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=2dff32de-ac4f-401a-9e6a-9446d4a0faaa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.550 2 DEBUG nova.network.os_vif_util [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.551 2 DEBUG nova.network.os_vif_util [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:61:8f,bridge_name='br-int',has_traffic_filtering=True,id=76ca6742-e0ea-45a7-bf82-750d5106e5b8,network=Network(67d7f48d-b1d1-47d6-8eed-15e85a093f32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ca6742-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.552 2 DEBUG os_vif [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:61:8f,bridge_name='br-int',has_traffic_filtering=True,id=76ca6742-e0ea-45a7-bf82-750d5106e5b8,network=Network(67d7f48d-b1d1-47d6-8eed-15e85a093f32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ca6742-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.553 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.553 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.555 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76ca6742-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.556 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76ca6742-e0, col_values=(('external_ids', {'iface-id': '76ca6742-e0ea-45a7-bf82-750d5106e5b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:61:8f', 'vm-uuid': '2dff32de-ac4f-401a-9e6a-9446d4a0faaa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:27 np0005465987 NetworkManager[44910]: <info>  [1759409727.5585] manager: (tap76ca6742-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.563 2 INFO os_vif [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:61:8f,bridge_name='br-int',has_traffic_filtering=True,id=76ca6742-e0ea-45a7-bf82-750d5106e5b8,network=Network(67d7f48d-b1d1-47d6-8eed-15e85a093f32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ca6742-e0')#033[00m
Oct  2 08:55:27 np0005465987 podman[301653]: 2025-10-02 12:55:27.651407869 +0000 UTC m=+0.050094315 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:55:27 np0005465987 podman[301652]: 2025-10-02 12:55:27.685138475 +0000 UTC m=+0.083450881 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:55:27 np0005465987 podman[301654]: 2025-10-02 12:55:27.685224527 +0000 UTC m=+0.081009364 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.723 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.723 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.724 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No VIF found with MAC fa:16:3e:95:61:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.724 2 INFO nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Using config drive#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.746 2 DEBUG nova.storage.rbd_utils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:27.982 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:27.982 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:27.982 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.998 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.998 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.999 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.999 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:55:27 np0005465987 nova_compute[230713]: 2025-10-02 12:55:27.999 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.197 2 INFO nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Creating config drive at /var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/disk.config#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.205 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5nwngta2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.341 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5nwngta2" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.366 2 DEBUG nova.storage.rbd_utils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.370 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/disk.config 2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:55:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/264599544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.442 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.514 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.515 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.647 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.649 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4289MB free_disk=20.901065826416016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.649 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.650 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.749 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 2dff32de-ac4f-401a-9e6a-9446d4a0faaa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.750 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.750 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.769 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.802 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.802 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.820 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.845 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:55:28 np0005465987 nova_compute[230713]: 2025-10-02 12:55:28.878 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:29.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:55:29 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4181416561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:55:29 np0005465987 nova_compute[230713]: 2025-10-02 12:55:29.293 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:29 np0005465987 nova_compute[230713]: 2025-10-02 12:55:29.299 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:55:29 np0005465987 nova_compute[230713]: 2025-10-02 12:55:29.362 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:55:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:55:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:29.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:55:29 np0005465987 nova_compute[230713]: 2025-10-02 12:55:29.528 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:55:29 np0005465987 nova_compute[230713]: 2025-10-02 12:55:29.528 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:30 np0005465987 nova_compute[230713]: 2025-10-02 12:55:30.359 2 DEBUG oslo_concurrency.processutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/disk.config 2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.990s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:30 np0005465987 nova_compute[230713]: 2025-10-02 12:55:30.360 2 INFO nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Deleting local config drive /var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/disk.config because it was imported into RBD.#033[00m
Oct  2 08:55:30 np0005465987 kernel: tap76ca6742-e0: entered promiscuous mode
Oct  2 08:55:30 np0005465987 NetworkManager[44910]: <info>  [1759409730.4071] manager: (tap76ca6742-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/334)
Oct  2 08:55:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:55:30Z|00730|binding|INFO|Claiming lport 76ca6742-e0ea-45a7-bf82-750d5106e5b8 for this chassis.
Oct  2 08:55:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:55:30Z|00731|binding|INFO|76ca6742-e0ea-45a7-bf82-750d5106e5b8: Claiming fa:16:3e:95:61:8f 10.100.0.6
Oct  2 08:55:30 np0005465987 nova_compute[230713]: 2025-10-02 12:55:30.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005465987 systemd-machined[188335]: New machine qemu-85-instance-000000ba.
Oct  2 08:55:30 np0005465987 systemd-udevd[301830]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:55:30 np0005465987 NetworkManager[44910]: <info>  [1759409730.4536] device (tap76ca6742-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:55:30 np0005465987 NetworkManager[44910]: <info>  [1759409730.4545] device (tap76ca6742-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:55:30 np0005465987 nova_compute[230713]: 2025-10-02 12:55:30.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005465987 systemd[1]: Started Virtual Machine qemu-85-instance-000000ba.
Oct  2 08:55:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:55:30Z|00732|binding|INFO|Setting lport 76ca6742-e0ea-45a7-bf82-750d5106e5b8 ovn-installed in OVS
Oct  2 08:55:30 np0005465987 nova_compute[230713]: 2025-10-02 12:55:30.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005465987 nova_compute[230713]: 2025-10-02 12:55:30.528 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:30 np0005465987 nova_compute[230713]: 2025-10-02 12:55:30.528 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:55:30Z|00733|binding|INFO|Setting lport 76ca6742-e0ea-45a7-bf82-750d5106e5b8 up in Southbound
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.538 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:61:8f 10.100.0.6'], port_security=['fa:16:3e:95:61:8f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2dff32de-ac4f-401a-9e6a-9446d4a0faaa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67d7f48d-b1d1-47d6-8eed-15e85a093f32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cdfdfec-43b4-428e-8545-edd01c9392fd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa8b8d0e-a3e0-4fe3-a765-c419014cae9f, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=76ca6742-e0ea-45a7-bf82-750d5106e5b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.540 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 76ca6742-e0ea-45a7-bf82-750d5106e5b8 in datapath 67d7f48d-b1d1-47d6-8eed-15e85a093f32 bound to our chassis#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.541 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67d7f48d-b1d1-47d6-8eed-15e85a093f32#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.553 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5389feee-5b9f-47e4-bc7d-e111ddbb99eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.553 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67d7f48d-b1 in ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.555 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67d7f48d-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.556 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1c762816-2a68-408b-8c1b-4304696f0507]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.556 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e933994a-a665-4acb-80f5-438d695e083b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.568 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[fa23d350-1e49-4c5f-9477-b607d21680b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.593 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e3df3a83-cc43-41db-9ecd-67fe6556e1d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.626 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[91a192ec-3894-47e0-be61-0801c80f7a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 systemd-udevd[301832]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:55:30 np0005465987 NetworkManager[44910]: <info>  [1759409730.6320] manager: (tap67d7f48d-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/335)
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.633 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3f1166-78eb-47d7-8993-776d593202e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.671 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b97594-4e4a-4327-9cca-7b8371e24192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.673 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a8da6d02-cd3f-4c99-a1db-84a7c37d0b9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 NetworkManager[44910]: <info>  [1759409730.6952] device (tap67d7f48d-b0): carrier: link connected
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.702 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[c8b9e885-3663-4b96-b1fc-ed269f33b4eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.717 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dc31e593-0fc6-4303-bb8c-28ba632e6ae0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67d7f48d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:4c:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779926, 'reachable_time': 30184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301863, 'error': None, 'target': 'ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.731 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c640fa5d-b706-473a-b8a3-47d1cb9e0845]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:4c3b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779926, 'tstamp': 779926}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301864, 'error': None, 'target': 'ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.750 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8421d045-cc3d-4581-b73f-9f7b1c694257]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67d7f48d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:4c:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779926, 'reachable_time': 30184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301865, 'error': None, 'target': 'ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.783 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[14bdc41c-4c66-4091-ab49-7979c34fd731]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.840 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3daefbf0-ca1b-4c23-8fcf-f9530a5734f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.843 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67d7f48d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.843 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.844 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67d7f48d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:30 np0005465987 nova_compute[230713]: 2025-10-02 12:55:30.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005465987 kernel: tap67d7f48d-b0: entered promiscuous mode
Oct  2 08:55:30 np0005465987 NetworkManager[44910]: <info>  [1759409730.8480] manager: (tap67d7f48d-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.856 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67d7f48d-b0, col_values=(('external_ids', {'iface-id': '834e2700-8bec-46ce-90f8-a8053806269a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:30 np0005465987 nova_compute[230713]: 2025-10-02 12:55:30.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:55:30Z|00734|binding|INFO|Releasing lport 834e2700-8bec-46ce-90f8-a8053806269a from this chassis (sb_readonly=0)
Oct  2 08:55:30 np0005465987 nova_compute[230713]: 2025-10-02 12:55:30.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.863 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67d7f48d-b1d1-47d6-8eed-15e85a093f32.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67d7f48d-b1d1-47d6-8eed-15e85a093f32.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:55:30 np0005465987 nova_compute[230713]: 2025-10-02 12:55:30.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.873 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4271cf55-8f54-4d98-ac9b-2824a0a91613]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.875 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-67d7f48d-b1d1-47d6-8eed-15e85a093f32
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/67d7f48d-b1d1-47d6-8eed-15e85a093f32.pid.haproxy
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 67d7f48d-b1d1-47d6-8eed-15e85a093f32
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:55:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:30.877 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32', 'env', 'PROCESS_TAG=haproxy-67d7f48d-b1d1-47d6-8eed-15e85a093f32', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67d7f48d-b1d1-47d6-8eed-15e85a093f32.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:55:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:31.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:31 np0005465987 podman[301914]: 2025-10-02 12:55:31.282065439 +0000 UTC m=+0.042738004 container create c0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:55:31 np0005465987 systemd[1]: Started libpod-conmon-c0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5.scope.
Oct  2 08:55:31 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:55:31 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1a612d2b278e64e5aebe07682c069ac2862420b06611b845774ad2a62d5ee66/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:55:31 np0005465987 podman[301914]: 2025-10-02 12:55:31.259224402 +0000 UTC m=+0.019896987 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:55:31 np0005465987 podman[301914]: 2025-10-02 12:55:31.362551247 +0000 UTC m=+0.123223832 container init c0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:55:31 np0005465987 podman[301914]: 2025-10-02 12:55:31.367317768 +0000 UTC m=+0.127990333 container start c0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:55:31 np0005465987 neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32[301929]: [NOTICE]   (301933) : New worker (301935) forked
Oct  2 08:55:31 np0005465987 neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32[301929]: [NOTICE]   (301933) : Loading success.
Oct  2 08:55:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:31.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:31 np0005465987 nova_compute[230713]: 2025-10-02 12:55:31.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.255 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409732.2548459, 2dff32de-ac4f-401a-9e6a-9446d4a0faaa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.256 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] VM Started (Lifecycle Event)#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.435 2 DEBUG nova.compute.manager [req-9157bed6-16bb-45d2-b50b-e7ddc468bff9 req-ea042830-ee86-4a33-8e0f-e9d4acd4ae98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-vif-plugged-76ca6742-e0ea-45a7-bf82-750d5106e5b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.436 2 DEBUG oslo_concurrency.lockutils [req-9157bed6-16bb-45d2-b50b-e7ddc468bff9 req-ea042830-ee86-4a33-8e0f-e9d4acd4ae98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.436 2 DEBUG oslo_concurrency.lockutils [req-9157bed6-16bb-45d2-b50b-e7ddc468bff9 req-ea042830-ee86-4a33-8e0f-e9d4acd4ae98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.437 2 DEBUG oslo_concurrency.lockutils [req-9157bed6-16bb-45d2-b50b-e7ddc468bff9 req-ea042830-ee86-4a33-8e0f-e9d4acd4ae98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.437 2 DEBUG nova.compute.manager [req-9157bed6-16bb-45d2-b50b-e7ddc468bff9 req-ea042830-ee86-4a33-8e0f-e9d4acd4ae98 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Processing event network-vif-plugged-76ca6742-e0ea-45a7-bf82-750d5106e5b8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.438 2 DEBUG nova.compute.manager [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.442 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.446 2 INFO nova.virt.libvirt.driver [-] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Instance spawned successfully.#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.447 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.548 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.554 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.578 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.578 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.578 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.579 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.579 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.579 2 DEBUG nova.virt.libvirt.driver [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.832 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.833 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409732.2550354, 2dff32de-ac4f-401a-9e6a-9446d4a0faaa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.833 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.898 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.902 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409732.4416454, 2dff32de-ac4f-401a-9e6a-9446d4a0faaa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.902 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.934 2 INFO nova.compute.manager [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Took 13.51 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.934 2 DEBUG nova.compute.manager [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.951 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:32 np0005465987 nova_compute[230713]: 2025-10-02 12:55:32.955 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:55:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:33.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:33 np0005465987 nova_compute[230713]: 2025-10-02 12:55:33.162 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:55:33 np0005465987 nova_compute[230713]: 2025-10-02 12:55:33.254 2 INFO nova.compute.manager [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Took 16.43 seconds to build instance.#033[00m
Oct  2 08:55:33 np0005465987 nova_compute[230713]: 2025-10-02 12:55:33.479 2 DEBUG oslo_concurrency.lockutils [None req-5594f278-353c-4300-8f02-63261858caed fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:55:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:33.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:55:34 np0005465987 nova_compute[230713]: 2025-10-02 12:55:34.748 2 DEBUG nova.compute.manager [req-74f64ad5-a737-4ef9-8e08-cf6e2e4b7e57 req-8e8b2e0a-db6f-4858-9cda-f5a727b28917 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-vif-plugged-76ca6742-e0ea-45a7-bf82-750d5106e5b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:34 np0005465987 nova_compute[230713]: 2025-10-02 12:55:34.749 2 DEBUG oslo_concurrency.lockutils [req-74f64ad5-a737-4ef9-8e08-cf6e2e4b7e57 req-8e8b2e0a-db6f-4858-9cda-f5a727b28917 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:34 np0005465987 nova_compute[230713]: 2025-10-02 12:55:34.749 2 DEBUG oslo_concurrency.lockutils [req-74f64ad5-a737-4ef9-8e08-cf6e2e4b7e57 req-8e8b2e0a-db6f-4858-9cda-f5a727b28917 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:34 np0005465987 nova_compute[230713]: 2025-10-02 12:55:34.750 2 DEBUG oslo_concurrency.lockutils [req-74f64ad5-a737-4ef9-8e08-cf6e2e4b7e57 req-8e8b2e0a-db6f-4858-9cda-f5a727b28917 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:34 np0005465987 nova_compute[230713]: 2025-10-02 12:55:34.750 2 DEBUG nova.compute.manager [req-74f64ad5-a737-4ef9-8e08-cf6e2e4b7e57 req-8e8b2e0a-db6f-4858-9cda-f5a727b28917 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] No waiting events found dispatching network-vif-plugged-76ca6742-e0ea-45a7-bf82-750d5106e5b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:34 np0005465987 nova_compute[230713]: 2025-10-02 12:55:34.750 2 WARNING nova.compute.manager [req-74f64ad5-a737-4ef9-8e08-cf6e2e4b7e57 req-8e8b2e0a-db6f-4858-9cda-f5a727b28917 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received unexpected event network-vif-plugged-76ca6742-e0ea-45a7-bf82-750d5106e5b8 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:55:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:35.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:35.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:35 np0005465987 nova_compute[230713]: 2025-10-02 12:55:35.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:35 np0005465987 nova_compute[230713]: 2025-10-02 12:55:35.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:55:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:37.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:37 np0005465987 nova_compute[230713]: 2025-10-02 12:55:37.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:55:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:37.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:55:37 np0005465987 nova_compute[230713]: 2025-10-02 12:55:37.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:39.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:55:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:39.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:55:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:41.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:41 np0005465987 NetworkManager[44910]: <info>  [1759409741.0934] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Oct  2 08:55:41 np0005465987 NetworkManager[44910]: <info>  [1759409741.0951] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Oct  2 08:55:41 np0005465987 nova_compute[230713]: 2025-10-02 12:55:41.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:41 np0005465987 nova_compute[230713]: 2025-10-02 12:55:41.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:41 np0005465987 ovn_controller[129172]: 2025-10-02T12:55:41Z|00735|binding|INFO|Releasing lport 834e2700-8bec-46ce-90f8-a8053806269a from this chassis (sb_readonly=0)
Oct  2 08:55:41 np0005465987 nova_compute[230713]: 2025-10-02 12:55:41.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:41.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:42 np0005465987 nova_compute[230713]: 2025-10-02 12:55:42.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:42 np0005465987 nova_compute[230713]: 2025-10-02 12:55:42.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:42.335 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:55:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:42.336 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:55:42 np0005465987 nova_compute[230713]: 2025-10-02 12:55:42.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:42 np0005465987 nova_compute[230713]: 2025-10-02 12:55:42.762 2 DEBUG nova.compute.manager [req-c79b86ef-7b6a-41c6-814b-eaba7a0b7fc8 req-983acea2-738b-4315-bf75-cdca236080c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-changed-76ca6742-e0ea-45a7-bf82-750d5106e5b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:42 np0005465987 nova_compute[230713]: 2025-10-02 12:55:42.762 2 DEBUG nova.compute.manager [req-c79b86ef-7b6a-41c6-814b-eaba7a0b7fc8 req-983acea2-738b-4315-bf75-cdca236080c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Refreshing instance network info cache due to event network-changed-76ca6742-e0ea-45a7-bf82-750d5106e5b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:55:42 np0005465987 nova_compute[230713]: 2025-10-02 12:55:42.763 2 DEBUG oslo_concurrency.lockutils [req-c79b86ef-7b6a-41c6-814b-eaba7a0b7fc8 req-983acea2-738b-4315-bf75-cdca236080c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:55:42 np0005465987 nova_compute[230713]: 2025-10-02 12:55:42.763 2 DEBUG oslo_concurrency.lockutils [req-c79b86ef-7b6a-41c6-814b-eaba7a0b7fc8 req-983acea2-738b-4315-bf75-cdca236080c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:55:42 np0005465987 nova_compute[230713]: 2025-10-02 12:55:42.763 2 DEBUG nova.network.neutron [req-c79b86ef-7b6a-41c6-814b-eaba7a0b7fc8 req-983acea2-738b-4315-bf75-cdca236080c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Refreshing network info cache for port 76ca6742-e0ea-45a7-bf82-750d5106e5b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:55:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:43.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:55:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:43.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:55:44 np0005465987 nova_compute[230713]: 2025-10-02 12:55:44.256 2 DEBUG nova.network.neutron [req-c79b86ef-7b6a-41c6-814b-eaba7a0b7fc8 req-983acea2-738b-4315-bf75-cdca236080c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updated VIF entry in instance network info cache for port 76ca6742-e0ea-45a7-bf82-750d5106e5b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:55:44 np0005465987 nova_compute[230713]: 2025-10-02 12:55:44.257 2 DEBUG nova.network.neutron [req-c79b86ef-7b6a-41c6-814b-eaba7a0b7fc8 req-983acea2-738b-4315-bf75-cdca236080c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updating instance_info_cache with network_info: [{"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:44 np0005465987 nova_compute[230713]: 2025-10-02 12:55:44.447 2 DEBUG oslo_concurrency.lockutils [req-c79b86ef-7b6a-41c6-814b-eaba7a0b7fc8 req-983acea2-738b-4315-bf75-cdca236080c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:55:44 np0005465987 nova_compute[230713]: 2025-10-02 12:55:44.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:45.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:45.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:55:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:47.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:55:47 np0005465987 nova_compute[230713]: 2025-10-02 12:55:47.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:47 np0005465987 ovn_controller[129172]: 2025-10-02T12:55:47Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:61:8f 10.100.0.6
Oct  2 08:55:47 np0005465987 ovn_controller[129172]: 2025-10-02T12:55:47Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:61:8f 10.100.0.6
Oct  2 08:55:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:47.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:47 np0005465987 nova_compute[230713]: 2025-10-02 12:55:47.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:49.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:49.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:55:50.338 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:51.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:51.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:52 np0005465987 nova_compute[230713]: 2025-10-02 12:55:52.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:52 np0005465987 nova_compute[230713]: 2025-10-02 12:55:52.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:53.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:55:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:53.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:55:53 np0005465987 nova_compute[230713]: 2025-10-02 12:55:53.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:55.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:55 np0005465987 nova_compute[230713]: 2025-10-02 12:55:55.103 2 INFO nova.compute.manager [None req-81428125-1932-49f5-82a1-d90051ffe585 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Get console output#033[00m
Oct  2 08:55:55 np0005465987 nova_compute[230713]: 2025-10-02 12:55:55.108 14392 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:55:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:55.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:55 np0005465987 podman[301969]: 2025-10-02 12:55:55.852554147 +0000 UTC m=+0.057305953 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 08:55:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:57.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:57 np0005465987 nova_compute[230713]: 2025-10-02 12:55:57.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:57 np0005465987 nova_compute[230713]: 2025-10-02 12:55:57.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:57.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:57 np0005465987 podman[301991]: 2025-10-02 12:55:57.848822342 +0000 UTC m=+0.060203833 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 08:55:57 np0005465987 podman[301989]: 2025-10-02 12:55:57.86183655 +0000 UTC m=+0.081321612 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:55:57 np0005465987 podman[301990]: 2025-10-02 12:55:57.867927177 +0000 UTC m=+0.083008659 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:55:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:55:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:59.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:55:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:55:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:59.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:01.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:01.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:01 np0005465987 nova_compute[230713]: 2025-10-02 12:56:01.815 2 DEBUG oslo_concurrency.lockutils [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "interface-2dff32de-ac4f-401a-9e6a-9446d4a0faaa-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:01 np0005465987 nova_compute[230713]: 2025-10-02 12:56:01.816 2 DEBUG oslo_concurrency.lockutils [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "interface-2dff32de-ac4f-401a-9e6a-9446d4a0faaa-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:01 np0005465987 nova_compute[230713]: 2025-10-02 12:56:01.816 2 DEBUG nova.objects.instance [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'flavor' on Instance uuid 2dff32de-ac4f-401a-9e6a-9446d4a0faaa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:56:02 np0005465987 nova_compute[230713]: 2025-10-02 12:56:02.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:56:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:56:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:56:02 np0005465987 nova_compute[230713]: 2025-10-02 12:56:02.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:02 np0005465987 nova_compute[230713]: 2025-10-02 12:56:02.723 2 DEBUG nova.objects.instance [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'pci_requests' on Instance uuid 2dff32de-ac4f-401a-9e6a-9446d4a0faaa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:56:02 np0005465987 nova_compute[230713]: 2025-10-02 12:56:02.900 2 DEBUG nova.network.neutron [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:56:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:03.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:03 np0005465987 nova_compute[230713]: 2025-10-02 12:56:03.190 2 DEBUG nova.policy [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb366465e6154871b8a53c9f500105ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:56:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:03.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:05 np0005465987 nova_compute[230713]: 2025-10-02 12:56:05.049 2 DEBUG nova.network.neutron [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Successfully created port: 4a77b29a-85b0-4bc7-b38c-8915de0b74d3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:56:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:05.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:05.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:07.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:07 np0005465987 nova_compute[230713]: 2025-10-02 12:56:07.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:07 np0005465987 nova_compute[230713]: 2025-10-02 12:56:07.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:07.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:09.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:56:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:56:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:09.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:11.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:11 np0005465987 nova_compute[230713]: 2025-10-02 12:56:11.164 2 DEBUG nova.network.neutron [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Successfully updated port: 4a77b29a-85b0-4bc7-b38c-8915de0b74d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:56:11 np0005465987 nova_compute[230713]: 2025-10-02 12:56:11.206 2 DEBUG oslo_concurrency.lockutils [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:11 np0005465987 nova_compute[230713]: 2025-10-02 12:56:11.207 2 DEBUG oslo_concurrency.lockutils [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquired lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:11 np0005465987 nova_compute[230713]: 2025-10-02 12:56:11.207 2 DEBUG nova.network.neutron [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:56:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:11.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:11 np0005465987 nova_compute[230713]: 2025-10-02 12:56:11.725 2 DEBUG nova.compute.manager [req-336abcd9-90ba-41c9-9961-06caff93be7c req-3a15def7-a3ec-4df6-b70c-7cd1d5cb18ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-changed-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:11 np0005465987 nova_compute[230713]: 2025-10-02 12:56:11.725 2 DEBUG nova.compute.manager [req-336abcd9-90ba-41c9-9961-06caff93be7c req-3a15def7-a3ec-4df6-b70c-7cd1d5cb18ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Refreshing instance network info cache due to event network-changed-4a77b29a-85b0-4bc7-b38c-8915de0b74d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:56:11 np0005465987 nova_compute[230713]: 2025-10-02 12:56:11.726 2 DEBUG oslo_concurrency.lockutils [req-336abcd9-90ba-41c9-9961-06caff93be7c req-3a15def7-a3ec-4df6-b70c-7cd1d5cb18ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:12 np0005465987 nova_compute[230713]: 2025-10-02 12:56:12.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:12 np0005465987 nova_compute[230713]: 2025-10-02 12:56:12.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:13.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:13.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:15.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:15.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:17.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:17 np0005465987 nova_compute[230713]: 2025-10-02 12:56:17.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:17 np0005465987 nova_compute[230713]: 2025-10-02 12:56:17.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:17.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:17 np0005465987 nova_compute[230713]: 2025-10-02 12:56:17.993 2 DEBUG nova.network.neutron [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updating instance_info_cache with network_info: [{"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.027 2 DEBUG oslo_concurrency.lockutils [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Releasing lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.027 2 DEBUG oslo_concurrency.lockutils [req-336abcd9-90ba-41c9-9961-06caff93be7c req-3a15def7-a3ec-4df6-b70c-7cd1d5cb18ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.028 2 DEBUG nova.network.neutron [req-336abcd9-90ba-41c9-9961-06caff93be7c req-3a15def7-a3ec-4df6-b70c-7cd1d5cb18ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Refreshing network info cache for port 4a77b29a-85b0-4bc7-b38c-8915de0b74d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.030 2 DEBUG nova.virt.libvirt.vif [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1692291686',display_name='tempest-TestNetworkBasicOps-server-1692291686',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1692291686',id=186,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZWIx9o6kvPe+g4xv7jAf5RmegZo+XaIBjQcv6mFDdVE2Gj2eWqFYANF8qBhZ2trYTB0D6p6t6i6if67qDSCR68FKh99aOZV/40OkaYK0vaRDFLVdlWm6tj/SR2I1Vx6w==',key_name='tempest-TestNetworkBasicOps-590509202',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-uec0sg7u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:33Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=2dff32de-ac4f-401a-9e6a-9446d4a0faaa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.030 2 DEBUG nova.network.os_vif_util [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.031 2 DEBUG nova.network.os_vif_util [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:71:42,bridge_name='br-int',has_traffic_filtering=True,id=4a77b29a-85b0-4bc7-b38c-8915de0b74d3,network=Network(48b23f60-a626-4a95-b154-a764454c451b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a77b29a-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.031 2 DEBUG os_vif [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:71:42,bridge_name='br-int',has_traffic_filtering=True,id=4a77b29a-85b0-4bc7-b38c-8915de0b74d3,network=Network(48b23f60-a626-4a95-b154-a764454c451b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a77b29a-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a77b29a-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a77b29a-85, col_values=(('external_ids', {'iface-id': '4a77b29a-85b0-4bc7-b38c-8915de0b74d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:71:42', 'vm-uuid': '2dff32de-ac4f-401a-9e6a-9446d4a0faaa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005465987 NetworkManager[44910]: <info>  [1759409779.0375] manager: (tap4a77b29a-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.047 2 INFO os_vif [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:71:42,bridge_name='br-int',has_traffic_filtering=True,id=4a77b29a-85b0-4bc7-b38c-8915de0b74d3,network=Network(48b23f60-a626-4a95-b154-a764454c451b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a77b29a-85')#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.048 2 DEBUG nova.virt.libvirt.vif [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1692291686',display_name='tempest-TestNetworkBasicOps-server-1692291686',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1692291686',id=186,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZWIx9o6kvPe+g4xv7jAf5RmegZo+XaIBjQcv6mFDdVE2Gj2eWqFYANF8qBhZ2trYTB0D6p6t6i6if67qDSCR68FKh99aOZV/40OkaYK0vaRDFLVdlWm6tj/SR2I1Vx6w==',key_name='tempest-TestNetworkBasicOps-590509202',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-uec0sg7u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:33Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=2dff32de-ac4f-401a-9e6a-9446d4a0faaa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.048 2 DEBUG nova.network.os_vif_util [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.049 2 DEBUG nova.network.os_vif_util [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:71:42,bridge_name='br-int',has_traffic_filtering=True,id=4a77b29a-85b0-4bc7-b38c-8915de0b74d3,network=Network(48b23f60-a626-4a95-b154-a764454c451b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a77b29a-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.052 2 DEBUG nova.virt.libvirt.guest [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] attach device xml: <interface type="ethernet">
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:46:71:42"/>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  <target dev="tap4a77b29a-85"/>
Oct  2 08:56:19 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:56:19 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:56:19 np0005465987 kernel: tap4a77b29a-85: entered promiscuous mode
Oct  2 08:56:19 np0005465987 NetworkManager[44910]: <info>  [1759409779.0688] manager: (tap4a77b29a-85): new Tun device (/org/freedesktop/NetworkManager/Devices/340)
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:56:19Z|00736|binding|INFO|Claiming lport 4a77b29a-85b0-4bc7-b38c-8915de0b74d3 for this chassis.
Oct  2 08:56:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:56:19Z|00737|binding|INFO|4a77b29a-85b0-4bc7-b38c-8915de0b74d3: Claiming fa:16:3e:46:71:42 10.100.0.21
Oct  2 08:56:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:19.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:19 np0005465987 systemd-udevd[302240]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:56:19 np0005465987 NetworkManager[44910]: <info>  [1759409779.1177] device (tap4a77b29a-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:56:19 np0005465987 NetworkManager[44910]: <info>  [1759409779.1190] device (tap4a77b29a-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:56:19Z|00738|binding|INFO|Setting lport 4a77b29a-85b0-4bc7-b38c-8915de0b74d3 ovn-installed in OVS
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:56:19Z|00739|binding|INFO|Setting lport 4a77b29a-85b0-4bc7-b38c-8915de0b74d3 up in Southbound
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.229 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:71:42 10.100.0.21'], port_security=['fa:16:3e:46:71:42 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': '2dff32de-ac4f-401a-9e6a-9446d4a0faaa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48b23f60-a626-4a95-b154-a764454c451b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c114e1a6-21d7-49a2-a13f-595584b99547', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3dbfe445-7f25-42ca-8688-0a8d6c43ed3f, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=4a77b29a-85b0-4bc7-b38c-8915de0b74d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.230 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 4a77b29a-85b0-4bc7-b38c-8915de0b74d3 in datapath 48b23f60-a626-4a95-b154-a764454c451b bound to our chassis#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.231 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48b23f60-a626-4a95-b154-a764454c451b#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.242 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ea19e385-d70b-490e-a1ec-c1e04ee718cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.243 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48b23f60-a1 in ovnmeta-48b23f60-a626-4a95-b154-a764454c451b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.245 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48b23f60-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.245 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ec1470d7-9e96-4866-b8b0-5ad08f3c0a8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.246 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[86695c60-04a2-4123-8055-8b80e374cad8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.257 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[7e160823-3f8b-4f3b-9f40-21a792c0cf6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.270 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ba4d78-2ec7-4fa2-92e1-be0f491cc3fe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.299 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef597df-6985-49e3-bd48-1b2c7136f794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.306 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f0e7c8-ff98-4fac-8c59-ae72eb9cf081]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 NetworkManager[44910]: <info>  [1759409779.3072] manager: (tap48b23f60-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/341)
Oct  2 08:56:19 np0005465987 systemd-udevd[302242]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.310 2 DEBUG nova.virt.libvirt.driver [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.312 2 DEBUG nova.virt.libvirt.driver [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.312 2 DEBUG nova.virt.libvirt.driver [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No VIF found with MAC fa:16:3e:95:61:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.312 2 DEBUG nova.virt.libvirt.driver [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No VIF found with MAC fa:16:3e:46:71:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.346 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a01ccb3b-3499-4c45-aef5-731fe5f856af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.348 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5203443a-b42a-4bfa-bc83-e012d6a3355d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.359 2 DEBUG nova.virt.libvirt.guest [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  <nova:name>tempest-TestNetworkBasicOps-server-1692291686</nova:name>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:56:19</nova:creationTime>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:56:19 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:    <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:    <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:    <nova:port uuid="76ca6742-e0ea-45a7-bf82-750d5106e5b8">
Oct  2 08:56:19 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:    <nova:port uuid="4a77b29a-85b0-4bc7-b38c-8915de0b74d3">
Oct  2 08:56:19 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:56:19 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:56:19 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:56:19 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:56:19 np0005465987 NetworkManager[44910]: <info>  [1759409779.3703] device (tap48b23f60-a0): carrier: link connected
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.377 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[4672b42d-f74d-4b34-a97d-5c51b42a71a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.393 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[11d36aa3-913c-4bc6-917a-5b3b223ad99d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48b23f60-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:da:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784794, 'reachable_time': 33381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302269, 'error': None, 'target': 'ovnmeta-48b23f60-a626-4a95-b154-a764454c451b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.408 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a92ba6-5885-4823-b801-a1e1acc7f099]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe39:da6c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 784794, 'tstamp': 784794}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302270, 'error': None, 'target': 'ovnmeta-48b23f60-a626-4a95-b154-a764454c451b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.424 2 DEBUG oslo_concurrency.lockutils [None req-5f14f8be-f82a-4664-a0fa-58c3e9449aeb fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "interface-2dff32de-ac4f-401a-9e6a-9446d4a0faaa-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 17.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.424 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[13f23893-31e1-43d8-976b-6f75b0e8f748]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48b23f60-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:da:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784794, 'reachable_time': 33381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302271, 'error': None, 'target': 'ovnmeta-48b23f60-a626-4a95-b154-a764454c451b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.453 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c46a803e-2788-47d0-b1a5-3eb1653efd5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.506 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[31faf97a-3fe5-4052-8b43-6abdae3e35ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.508 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48b23f60-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.508 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.508 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48b23f60-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:19 np0005465987 kernel: tap48b23f60-a0: entered promiscuous mode
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005465987 NetworkManager[44910]: <info>  [1759409779.5116] manager: (tap48b23f60-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.515 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48b23f60-a0, col_values=(('external_ids', {'iface-id': '72e5faac-9d73-42b2-89ed-1f386e556cc7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005465987 ovn_controller[129172]: 2025-10-02T12:56:19Z|00740|binding|INFO|Releasing lport 72e5faac-9d73-42b2-89ed-1f386e556cc7 from this chassis (sb_readonly=0)
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.520 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48b23f60-a626-4a95-b154-a764454c451b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48b23f60-a626-4a95-b154-a764454c451b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.522 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7eb2a1-a43a-4f7d-8267-969ac82b3e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.523 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-48b23f60-a626-4a95-b154-a764454c451b
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/48b23f60-a626-4a95-b154-a764454c451b.pid.haproxy
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 48b23f60-a626-4a95-b154-a764454c451b
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:56:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:19.523 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48b23f60-a626-4a95-b154-a764454c451b', 'env', 'PROCESS_TAG=haproxy-48b23f60-a626-4a95-b154-a764454c451b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48b23f60-a626-4a95-b154-a764454c451b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:19.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.809 2 DEBUG nova.compute.manager [req-298d6b5a-1559-4715-9dec-bc5fa78471db req-9e4621e7-4f9c-4297-b21e-95d092288961 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-vif-plugged-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.810 2 DEBUG oslo_concurrency.lockutils [req-298d6b5a-1559-4715-9dec-bc5fa78471db req-9e4621e7-4f9c-4297-b21e-95d092288961 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.810 2 DEBUG oslo_concurrency.lockutils [req-298d6b5a-1559-4715-9dec-bc5fa78471db req-9e4621e7-4f9c-4297-b21e-95d092288961 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.811 2 DEBUG oslo_concurrency.lockutils [req-298d6b5a-1559-4715-9dec-bc5fa78471db req-9e4621e7-4f9c-4297-b21e-95d092288961 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.811 2 DEBUG nova.compute.manager [req-298d6b5a-1559-4715-9dec-bc5fa78471db req-9e4621e7-4f9c-4297-b21e-95d092288961 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] No waiting events found dispatching network-vif-plugged-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:56:19 np0005465987 nova_compute[230713]: 2025-10-02 12:56:19.812 2 WARNING nova.compute.manager [req-298d6b5a-1559-4715-9dec-bc5fa78471db req-9e4621e7-4f9c-4297-b21e-95d092288961 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received unexpected event network-vif-plugged-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:56:19 np0005465987 podman[302303]: 2025-10-02 12:56:19.960690168 +0000 UTC m=+0.053895279 container create ac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:56:20 np0005465987 systemd[1]: Started libpod-conmon-ac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac.scope.
Oct  2 08:56:20 np0005465987 podman[302303]: 2025-10-02 12:56:19.935746334 +0000 UTC m=+0.028951355 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:56:20 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:56:20 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c6737b21010c1116c286191d6117131a26dcb192e31944fe995753afa08193/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:56:20 np0005465987 podman[302303]: 2025-10-02 12:56:20.053929276 +0000 UTC m=+0.147134357 container init ac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:56:20 np0005465987 podman[302303]: 2025-10-02 12:56:20.065518225 +0000 UTC m=+0.158723246 container start ac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:56:20 np0005465987 neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b[302318]: [NOTICE]   (302322) : New worker (302324) forked
Oct  2 08:56:20 np0005465987 neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b[302318]: [NOTICE]   (302322) : Loading success.
Oct  2 08:56:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:56:21Z|00741|binding|INFO|Releasing lport 72e5faac-9d73-42b2-89ed-1f386e556cc7 from this chassis (sb_readonly=0)
Oct  2 08:56:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:56:21Z|00742|binding|INFO|Releasing lport 834e2700-8bec-46ce-90f8-a8053806269a from this chassis (sb_readonly=0)
Oct  2 08:56:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:21.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:21 np0005465987 nova_compute[230713]: 2025-10-02 12:56:21.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:21 np0005465987 nova_compute[230713]: 2025-10-02 12:56:21.386 2 DEBUG nova.network.neutron [req-336abcd9-90ba-41c9-9961-06caff93be7c req-3a15def7-a3ec-4df6-b70c-7cd1d5cb18ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updated VIF entry in instance network info cache for port 4a77b29a-85b0-4bc7-b38c-8915de0b74d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:56:21 np0005465987 nova_compute[230713]: 2025-10-02 12:56:21.387 2 DEBUG nova.network.neutron [req-336abcd9-90ba-41c9-9961-06caff93be7c req-3a15def7-a3ec-4df6-b70c-7cd1d5cb18ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updating instance_info_cache with network_info: [{"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:21 np0005465987 nova_compute[230713]: 2025-10-02 12:56:21.468 2 DEBUG oslo_concurrency.lockutils [req-336abcd9-90ba-41c9-9961-06caff93be7c req-3a15def7-a3ec-4df6-b70c-7cd1d5cb18ed d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:21.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:21 np0005465987 nova_compute[230713]: 2025-10-02 12:56:21.968 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:21 np0005465987 nova_compute[230713]: 2025-10-02 12:56:21.971 2 DEBUG nova.compute.manager [req-51d541af-f1b0-41a9-86dd-b227c7886bef req-32ffeaaf-85ae-45f3-a1a2-ad3bc71ef963 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-vif-plugged-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:21 np0005465987 nova_compute[230713]: 2025-10-02 12:56:21.971 2 DEBUG oslo_concurrency.lockutils [req-51d541af-f1b0-41a9-86dd-b227c7886bef req-32ffeaaf-85ae-45f3-a1a2-ad3bc71ef963 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:21 np0005465987 nova_compute[230713]: 2025-10-02 12:56:21.972 2 DEBUG oslo_concurrency.lockutils [req-51d541af-f1b0-41a9-86dd-b227c7886bef req-32ffeaaf-85ae-45f3-a1a2-ad3bc71ef963 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:21 np0005465987 nova_compute[230713]: 2025-10-02 12:56:21.972 2 DEBUG oslo_concurrency.lockutils [req-51d541af-f1b0-41a9-86dd-b227c7886bef req-32ffeaaf-85ae-45f3-a1a2-ad3bc71ef963 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:21 np0005465987 nova_compute[230713]: 2025-10-02 12:56:21.973 2 DEBUG nova.compute.manager [req-51d541af-f1b0-41a9-86dd-b227c7886bef req-32ffeaaf-85ae-45f3-a1a2-ad3bc71ef963 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] No waiting events found dispatching network-vif-plugged-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:56:21 np0005465987 nova_compute[230713]: 2025-10-02 12:56:21.973 2 WARNING nova.compute.manager [req-51d541af-f1b0-41a9-86dd-b227c7886bef req-32ffeaaf-85ae-45f3-a1a2-ad3bc71ef963 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received unexpected event network-vif-plugged-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:56:22 np0005465987 nova_compute[230713]: 2025-10-02 12:56:22.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:56:22Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:71:42 10.100.0.21
Oct  2 08:56:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:56:22Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:71:42 10.100.0.21
Oct  2 08:56:22 np0005465987 nova_compute[230713]: 2025-10-02 12:56:22.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:22 np0005465987 nova_compute[230713]: 2025-10-02 12:56:22.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:22 np0005465987 nova_compute[230713]: 2025-10-02 12:56:22.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:56:22 np0005465987 nova_compute[230713]: 2025-10-02 12:56:22.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:56:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:23.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:23 np0005465987 nova_compute[230713]: 2025-10-02 12:56:23.604 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:23 np0005465987 nova_compute[230713]: 2025-10-02 12:56:23.604 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:23 np0005465987 nova_compute[230713]: 2025-10-02 12:56:23.604 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:56:23 np0005465987 nova_compute[230713]: 2025-10-02 12:56:23.604 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2dff32de-ac4f-401a-9e6a-9446d4a0faaa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:56:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:23.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:24 np0005465987 nova_compute[230713]: 2025-10-02 12:56:24.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:25.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:25.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:26 np0005465987 podman[302334]: 2025-10-02 12:56:26.874219537 +0000 UTC m=+0.080062328 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:56:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:27.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:27 np0005465987 nova_compute[230713]: 2025-10-02 12:56:27.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:27.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:27.983 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:27.983 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:27.984 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:28 np0005465987 podman[302354]: 2025-10-02 12:56:28.861513794 +0000 UTC m=+0.067735599 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:56:28 np0005465987 podman[302355]: 2025-10-02 12:56:28.875129978 +0000 UTC m=+0.080914281 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct  2 08:56:28 np0005465987 podman[302353]: 2025-10-02 12:56:28.909372598 +0000 UTC m=+0.119128640 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:56:29 np0005465987 nova_compute[230713]: 2025-10-02 12:56:29.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:29.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:29.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:30 np0005465987 nova_compute[230713]: 2025-10-02 12:56:30.555 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updating instance_info_cache with network_info: [{"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:30 np0005465987 nova_compute[230713]: 2025-10-02 12:56:30.602 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:30 np0005465987 nova_compute[230713]: 2025-10-02 12:56:30.603 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:56:30 np0005465987 nova_compute[230713]: 2025-10-02 12:56:30.603 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:30 np0005465987 nova_compute[230713]: 2025-10-02 12:56:30.604 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:30 np0005465987 nova_compute[230713]: 2025-10-02 12:56:30.604 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:30 np0005465987 nova_compute[230713]: 2025-10-02 12:56:30.605 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:30 np0005465987 nova_compute[230713]: 2025-10-02 12:56:30.669 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:30 np0005465987 nova_compute[230713]: 2025-10-02 12:56:30.669 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:30 np0005465987 nova_compute[230713]: 2025-10-02 12:56:30.670 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:30 np0005465987 nova_compute[230713]: 2025-10-02 12:56:30.670 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:56:30 np0005465987 nova_compute[230713]: 2025-10-02 12:56:30.670 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:31.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:56:31 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3045358306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:56:31 np0005465987 nova_compute[230713]: 2025-10-02 12:56:31.122 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:31 np0005465987 nova_compute[230713]: 2025-10-02 12:56:31.223 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:56:31 np0005465987 nova_compute[230713]: 2025-10-02 12:56:31.224 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:56:31 np0005465987 nova_compute[230713]: 2025-10-02 12:56:31.388 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:56:31 np0005465987 nova_compute[230713]: 2025-10-02 12:56:31.390 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4159MB free_disk=20.921737670898438GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:56:31 np0005465987 nova_compute[230713]: 2025-10-02 12:56:31.390 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:31 np0005465987 nova_compute[230713]: 2025-10-02 12:56:31.390 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:31.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:31 np0005465987 nova_compute[230713]: 2025-10-02 12:56:31.685 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 2dff32de-ac4f-401a-9e6a-9446d4a0faaa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:56:31 np0005465987 nova_compute[230713]: 2025-10-02 12:56:31.686 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:56:31 np0005465987 nova_compute[230713]: 2025-10-02 12:56:31.686 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:56:31 np0005465987 nova_compute[230713]: 2025-10-02 12:56:31.915 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:32 np0005465987 nova_compute[230713]: 2025-10-02 12:56:32.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:56:32 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3892657803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:56:32 np0005465987 nova_compute[230713]: 2025-10-02 12:56:32.407 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:32 np0005465987 nova_compute[230713]: 2025-10-02 12:56:32.415 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:56:32 np0005465987 nova_compute[230713]: 2025-10-02 12:56:32.430 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:56:32 np0005465987 nova_compute[230713]: 2025-10-02 12:56:32.450 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:56:32 np0005465987 nova_compute[230713]: 2025-10-02 12:56:32.451 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:33.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:33.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:34 np0005465987 nova_compute[230713]: 2025-10-02 12:56:34.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:35.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:35.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:35 np0005465987 nova_compute[230713]: 2025-10-02 12:56:35.813 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:37.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:37 np0005465987 nova_compute[230713]: 2025-10-02 12:56:37.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:37.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:37 np0005465987 nova_compute[230713]: 2025-10-02 12:56:37.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:37 np0005465987 nova_compute[230713]: 2025-10-02 12:56:37.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:56:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:39.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:39 np0005465987 nova_compute[230713]: 2025-10-02 12:56:39.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.563422) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409799563493, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1535, "num_deletes": 263, "total_data_size": 3321189, "memory_usage": 3358016, "flush_reason": "Manual Compaction"}
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409799575680, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 2178036, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68803, "largest_seqno": 70333, "table_properties": {"data_size": 2171518, "index_size": 3652, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14285, "raw_average_key_size": 20, "raw_value_size": 2158198, "raw_average_value_size": 3056, "num_data_blocks": 160, "num_entries": 706, "num_filter_entries": 706, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409684, "oldest_key_time": 1759409684, "file_creation_time": 1759409799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 12338 microseconds, and 6279 cpu microseconds.
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.575768) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 2178036 bytes OK
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.575789) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.577500) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.577521) EVENT_LOG_v1 {"time_micros": 1759409799577515, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.577539) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 3314033, prev total WAL file size 3314033, number of live WAL files 2.
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.578675) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353036' seq:72057594037927935, type:22 .. '6C6F676D0032373630' seq:0, type:0; will stop at (end)
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(2126KB)], [138(11MB)]
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409799578755, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 14573772, "oldest_snapshot_seqno": -1}
Oct  2 08:56:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:39.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 9283 keys, 14414282 bytes, temperature: kUnknown
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409799646223, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 14414282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14350818, "index_size": 39159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23237, "raw_key_size": 243950, "raw_average_key_size": 26, "raw_value_size": 14184452, "raw_average_value_size": 1528, "num_data_blocks": 1509, "num_entries": 9283, "num_filter_entries": 9283, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759409799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.646458) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 14414282 bytes
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.647733) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.7 rd, 213.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 11.8 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(13.3) write-amplify(6.6) OK, records in: 9825, records dropped: 542 output_compression: NoCompression
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.647748) EVENT_LOG_v1 {"time_micros": 1759409799647740, "job": 88, "event": "compaction_finished", "compaction_time_micros": 67552, "compaction_time_cpu_micros": 31976, "output_level": 6, "num_output_files": 1, "total_output_size": 14414282, "num_input_records": 9825, "num_output_records": 9283, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409799648120, "job": 88, "event": "table_file_deletion", "file_number": 140}
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409799650116, "job": 88, "event": "table_file_deletion", "file_number": 138}
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.578533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.650163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.650169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.650171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.650172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:39 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:56:39.650174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:40 np0005465987 nova_compute[230713]: 2025-10-02 12:56:40.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:41.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:41.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:42 np0005465987 nova_compute[230713]: 2025-10-02 12:56:42.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:43.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:43.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:44 np0005465987 nova_compute[230713]: 2025-10-02 12:56:44.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:45.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:45.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:47.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:47 np0005465987 nova_compute[230713]: 2025-10-02 12:56:47.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:47 np0005465987 ovn_controller[129172]: 2025-10-02T12:56:47Z|00743|binding|INFO|Releasing lport 72e5faac-9d73-42b2-89ed-1f386e556cc7 from this chassis (sb_readonly=0)
Oct  2 08:56:47 np0005465987 ovn_controller[129172]: 2025-10-02T12:56:47Z|00744|binding|INFO|Releasing lport 834e2700-8bec-46ce-90f8-a8053806269a from this chassis (sb_readonly=0)
Oct  2 08:56:47 np0005465987 nova_compute[230713]: 2025-10-02 12:56:47.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:47.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:49 np0005465987 nova_compute[230713]: 2025-10-02 12:56:49.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:49.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:49.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:49 np0005465987 nova_compute[230713]: 2025-10-02 12:56:49.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:50 np0005465987 nova_compute[230713]: 2025-10-02 12:56:50.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:50.691 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:56:50 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:50.693 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:56:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:51.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:51.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:52 np0005465987 nova_compute[230713]: 2025-10-02 12:56:52.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:52 np0005465987 nova_compute[230713]: 2025-10-02 12:56:52.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:53.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:53.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:54 np0005465987 nova_compute[230713]: 2025-10-02 12:56:54.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:55.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:56:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1893935533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:56:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:56:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1893935533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:56:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:55.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:56 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:56:56.694 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:56:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:57.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:56:57 np0005465987 nova_compute[230713]: 2025-10-02 12:56:57.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:56:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:57.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:56:57 np0005465987 podman[302463]: 2025-10-02 12:56:57.843456476 +0000 UTC m=+0.057149988 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:56:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:59.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:59 np0005465987 nova_compute[230713]: 2025-10-02 12:56:59.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:56:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:59.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:59 np0005465987 podman[302483]: 2025-10-02 12:56:59.865201702 +0000 UTC m=+0.067835902 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:56:59 np0005465987 podman[302482]: 2025-10-02 12:56:59.865191192 +0000 UTC m=+0.080084059 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:56:59 np0005465987 podman[302484]: 2025-10-02 12:56:59.878004833 +0000 UTC m=+0.084090949 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:57:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:01.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:01.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:02 np0005465987 nova_compute[230713]: 2025-10-02 12:57:02.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:03.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:03 np0005465987 nova_compute[230713]: 2025-10-02 12:57:03.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:04 np0005465987 nova_compute[230713]: 2025-10-02 12:57:04.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:04 np0005465987 nova_compute[230713]: 2025-10-02 12:57:04.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:04 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Oct  2 08:57:04 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:04.711033) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:57:04 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Oct  2 08:57:04 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409824711072, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 474, "num_deletes": 250, "total_data_size": 616470, "memory_usage": 625328, "flush_reason": "Manual Compaction"}
Oct  2 08:57:04 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Oct  2 08:57:04 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409824899422, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 313927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70338, "largest_seqno": 70807, "table_properties": {"data_size": 311514, "index_size": 512, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6595, "raw_average_key_size": 20, "raw_value_size": 306608, "raw_average_value_size": 946, "num_data_blocks": 23, "num_entries": 324, "num_filter_entries": 324, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409799, "oldest_key_time": 1759409799, "file_creation_time": 1759409824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:57:04 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 188432 microseconds, and 1728 cpu microseconds.
Oct  2 08:57:04 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:04.899460) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 313927 bytes OK
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:04.899484) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:05.052509) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:05.052540) EVENT_LOG_v1 {"time_micros": 1759409825052534, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:05.052560) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 613610, prev total WAL file size 660037, number of live WAL files 2.
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:05.053191) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323630' seq:72057594037927935, type:22 .. '6D6772737461740032353131' seq:0, type:0; will stop at (end)
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(306KB)], [141(13MB)]
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409825053254, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 14728209, "oldest_snapshot_seqno": -1}
Oct  2 08:57:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:05.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 9104 keys, 10985214 bytes, temperature: kUnknown
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409825425844, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 10985214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10927566, "index_size": 33791, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 240421, "raw_average_key_size": 26, "raw_value_size": 10768894, "raw_average_value_size": 1182, "num_data_blocks": 1286, "num_entries": 9104, "num_filter_entries": 9104, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759409825, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:05.426334) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 10985214 bytes
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:05.428569) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 39.5 rd, 29.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.7 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(81.9) write-amplify(35.0) OK, records in: 9607, records dropped: 503 output_compression: NoCompression
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:05.428608) EVENT_LOG_v1 {"time_micros": 1759409825428589, "job": 90, "event": "compaction_finished", "compaction_time_micros": 372732, "compaction_time_cpu_micros": 28183, "output_level": 6, "num_output_files": 1, "total_output_size": 10985214, "num_input_records": 9607, "num_output_records": 9104, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409825429047, "job": 90, "event": "table_file_deletion", "file_number": 143}
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409825436514, "job": 90, "event": "table_file_deletion", "file_number": 141}
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:05.053006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:05.436603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:05.436610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:05.436612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:05.436614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:57:05.436615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:57:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:05.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:07 np0005465987 nova_compute[230713]: 2025-10-02 12:57:07.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:07.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:07.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:09.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:09 np0005465987 nova_compute[230713]: 2025-10-02 12:57:09.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:09.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:57:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:57:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:57:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:57:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:11.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:11 np0005465987 nova_compute[230713]: 2025-10-02 12:57:11.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:57:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:57:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:57:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:11.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:11 np0005465987 nova_compute[230713]: 2025-10-02 12:57:11.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:12 np0005465987 nova_compute[230713]: 2025-10-02 12:57:12.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:13.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:13.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:14 np0005465987 nova_compute[230713]: 2025-10-02 12:57:14.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:15.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:15.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:17 np0005465987 nova_compute[230713]: 2025-10-02 12:57:17.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:17.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:17.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:57:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:57:17 np0005465987 nova_compute[230713]: 2025-10-02 12:57:17.792 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "3f92060f-3dad-43b7-96d0-802c353d9e47" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:17 np0005465987 nova_compute[230713]: 2025-10-02 12:57:17.793 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:17 np0005465987 nova_compute[230713]: 2025-10-02 12:57:17.812 2 DEBUG nova.compute.manager [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:57:17 np0005465987 nova_compute[230713]: 2025-10-02 12:57:17.946 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:17 np0005465987 nova_compute[230713]: 2025-10-02 12:57:17.946 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:17 np0005465987 nova_compute[230713]: 2025-10-02 12:57:17.952 2 DEBUG nova.virt.hardware [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:57:17 np0005465987 nova_compute[230713]: 2025-10-02 12:57:17.952 2 INFO nova.compute.claims [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.091 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:57:18 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/359895377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.578 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.584 2 DEBUG nova.compute.provider_tree [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.615 2 DEBUG nova.scheduler.client.report [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.669 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.670 2 DEBUG nova.compute.manager [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.738 2 DEBUG nova.compute.manager [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.738 2 DEBUG nova.network.neutron [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.762 2 INFO nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.787 2 DEBUG nova.compute.manager [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.921 2 DEBUG nova.compute.manager [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.922 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.922 2 INFO nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Creating image(s)#033[00m
Oct  2 08:57:18 np0005465987 nova_compute[230713]: 2025-10-02 12:57:18.962 2 DEBUG nova.storage.rbd_utils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 3f92060f-3dad-43b7-96d0-802c353d9e47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.000 2 DEBUG nova.storage.rbd_utils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 3f92060f-3dad-43b7-96d0-802c353d9e47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.030 2 DEBUG nova.storage.rbd_utils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 3f92060f-3dad-43b7-96d0-802c353d9e47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.034 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.098 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.099 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.100 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.100 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.137 2 DEBUG nova.storage.rbd_utils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 3f92060f-3dad-43b7-96d0-802c353d9e47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.141 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 3f92060f-3dad-43b7-96d0-802c353d9e47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000054s ======
Oct  2 08:57:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:19.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.542 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 3f92060f-3dad-43b7-96d0-802c353d9e47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.609 2 DEBUG nova.storage.rbd_utils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] resizing rbd image 3f92060f-3dad-43b7-96d0-802c353d9e47_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:57:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:19.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.708 2 DEBUG nova.objects.instance [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lazy-loading 'migration_context' on Instance uuid 3f92060f-3dad-43b7-96d0-802c353d9e47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.724 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.724 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Ensure instance console log exists: /var/lib/nova/instances/3f92060f-3dad-43b7-96d0-802c353d9e47/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.725 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.725 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:19 np0005465987 nova_compute[230713]: 2025-10-02 12:57:19.725 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:21.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:21.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:21 np0005465987 nova_compute[230713]: 2025-10-02 12:57:21.814 2 DEBUG nova.network.neutron [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Successfully created port: df2532e0-88f2-438c-88b9-08f163fc19c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:57:22 np0005465987 nova_compute[230713]: 2025-10-02 12:57:22.125 2 DEBUG nova.compute.manager [req-371db995-e8e5-449e-af6e-5f3eec425629 req-49b3de9c-73e4-46c2-8b9b-2ec08ae009cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-changed-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:22 np0005465987 nova_compute[230713]: 2025-10-02 12:57:22.125 2 DEBUG nova.compute.manager [req-371db995-e8e5-449e-af6e-5f3eec425629 req-49b3de9c-73e4-46c2-8b9b-2ec08ae009cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Refreshing instance network info cache due to event network-changed-4a77b29a-85b0-4bc7-b38c-8915de0b74d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:57:22 np0005465987 nova_compute[230713]: 2025-10-02 12:57:22.126 2 DEBUG oslo_concurrency.lockutils [req-371db995-e8e5-449e-af6e-5f3eec425629 req-49b3de9c-73e4-46c2-8b9b-2ec08ae009cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:57:22 np0005465987 nova_compute[230713]: 2025-10-02 12:57:22.126 2 DEBUG oslo_concurrency.lockutils [req-371db995-e8e5-449e-af6e-5f3eec425629 req-49b3de9c-73e4-46c2-8b9b-2ec08ae009cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:57:22 np0005465987 nova_compute[230713]: 2025-10-02 12:57:22.126 2 DEBUG nova.network.neutron [req-371db995-e8e5-449e-af6e-5f3eec425629 req-49b3de9c-73e4-46c2-8b9b-2ec08ae009cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Refreshing network info cache for port 4a77b29a-85b0-4bc7-b38c-8915de0b74d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:57:22 np0005465987 nova_compute[230713]: 2025-10-02 12:57:22.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:22 np0005465987 nova_compute[230713]: 2025-10-02 12:57:22.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:22 np0005465987 nova_compute[230713]: 2025-10-02 12:57:22.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:57:22 np0005465987 nova_compute[230713]: 2025-10-02 12:57:22.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:57:22 np0005465987 nova_compute[230713]: 2025-10-02 12:57:22.993 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:57:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:23.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:23 np0005465987 nova_compute[230713]: 2025-10-02 12:57:23.231 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:57:23 np0005465987 nova_compute[230713]: 2025-10-02 12:57:23.504 2 DEBUG nova.network.neutron [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Successfully updated port: df2532e0-88f2-438c-88b9-08f163fc19c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:57:23 np0005465987 nova_compute[230713]: 2025-10-02 12:57:23.527 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "refresh_cache-3f92060f-3dad-43b7-96d0-802c353d9e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:57:23 np0005465987 nova_compute[230713]: 2025-10-02 12:57:23.527 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquired lock "refresh_cache-3f92060f-3dad-43b7-96d0-802c353d9e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:57:23 np0005465987 nova_compute[230713]: 2025-10-02 12:57:23.527 2 DEBUG nova.network.neutron [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:57:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:23.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:24 np0005465987 nova_compute[230713]: 2025-10-02 12:57:24.052 2 DEBUG nova.network.neutron [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:57:24 np0005465987 nova_compute[230713]: 2025-10-02 12:57:24.063 2 DEBUG nova.network.neutron [req-371db995-e8e5-449e-af6e-5f3eec425629 req-49b3de9c-73e4-46c2-8b9b-2ec08ae009cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updated VIF entry in instance network info cache for port 4a77b29a-85b0-4bc7-b38c-8915de0b74d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:57:24 np0005465987 nova_compute[230713]: 2025-10-02 12:57:24.063 2 DEBUG nova.network.neutron [req-371db995-e8e5-449e-af6e-5f3eec425629 req-49b3de9c-73e4-46c2-8b9b-2ec08ae009cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updating instance_info_cache with network_info: [{"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:57:24 np0005465987 nova_compute[230713]: 2025-10-02 12:57:24.088 2 DEBUG oslo_concurrency.lockutils [req-371db995-e8e5-449e-af6e-5f3eec425629 req-49b3de9c-73e4-46c2-8b9b-2ec08ae009cf d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:57:24 np0005465987 nova_compute[230713]: 2025-10-02 12:57:24.089 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:57:24 np0005465987 nova_compute[230713]: 2025-10-02 12:57:24.089 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:57:24 np0005465987 nova_compute[230713]: 2025-10-02 12:57:24.090 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2dff32de-ac4f-401a-9e6a-9446d4a0faaa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:24 np0005465987 nova_compute[230713]: 2025-10-02 12:57:24.129 2 DEBUG nova.compute.manager [req-1c0d444d-4657-46f4-ac21-aae850b46875 req-e3e28c89-624e-4e5a-954a-a3f372359209 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Received event network-changed-df2532e0-88f2-438c-88b9-08f163fc19c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:24 np0005465987 nova_compute[230713]: 2025-10-02 12:57:24.129 2 DEBUG nova.compute.manager [req-1c0d444d-4657-46f4-ac21-aae850b46875 req-e3e28c89-624e-4e5a-954a-a3f372359209 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Refreshing instance network info cache due to event network-changed-df2532e0-88f2-438c-88b9-08f163fc19c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:57:24 np0005465987 nova_compute[230713]: 2025-10-02 12:57:24.129 2 DEBUG oslo_concurrency.lockutils [req-1c0d444d-4657-46f4-ac21-aae850b46875 req-e3e28c89-624e-4e5a-954a-a3f372359209 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-3f92060f-3dad-43b7-96d0-802c353d9e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:57:24 np0005465987 nova_compute[230713]: 2025-10-02 12:57:24.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:25.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:25.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.759 2 DEBUG nova.network.neutron [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Updating instance_info_cache with network_info: [{"id": "df2532e0-88f2-438c-88b9-08f163fc19c8", "address": "fa:16:3e:aa:ba:7b", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf2532e0-88", "ovs_interfaceid": "df2532e0-88f2-438c-88b9-08f163fc19c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.826 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Releasing lock "refresh_cache-3f92060f-3dad-43b7-96d0-802c353d9e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.826 2 DEBUG nova.compute.manager [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Instance network_info: |[{"id": "df2532e0-88f2-438c-88b9-08f163fc19c8", "address": "fa:16:3e:aa:ba:7b", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf2532e0-88", "ovs_interfaceid": "df2532e0-88f2-438c-88b9-08f163fc19c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.826 2 DEBUG oslo_concurrency.lockutils [req-1c0d444d-4657-46f4-ac21-aae850b46875 req-e3e28c89-624e-4e5a-954a-a3f372359209 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-3f92060f-3dad-43b7-96d0-802c353d9e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.827 2 DEBUG nova.network.neutron [req-1c0d444d-4657-46f4-ac21-aae850b46875 req-e3e28c89-624e-4e5a-954a-a3f372359209 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Refreshing network info cache for port df2532e0-88f2-438c-88b9-08f163fc19c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.829 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Start _get_guest_xml network_info=[{"id": "df2532e0-88f2-438c-88b9-08f163fc19c8", "address": "fa:16:3e:aa:ba:7b", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf2532e0-88", "ovs_interfaceid": "df2532e0-88f2-438c-88b9-08f163fc19c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.834 2 WARNING nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.838 2 DEBUG nova.virt.libvirt.host [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.838 2 DEBUG nova.virt.libvirt.host [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.840 2 DEBUG nova.virt.libvirt.host [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.841 2 DEBUG nova.virt.libvirt.host [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.842 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.842 2 DEBUG nova.virt.hardware [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.843 2 DEBUG nova.virt.hardware [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.843 2 DEBUG nova.virt.hardware [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.843 2 DEBUG nova.virt.hardware [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.843 2 DEBUG nova.virt.hardware [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.844 2 DEBUG nova.virt.hardware [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.844 2 DEBUG nova.virt.hardware [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.844 2 DEBUG nova.virt.hardware [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.844 2 DEBUG nova.virt.hardware [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.845 2 DEBUG nova.virt.hardware [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.845 2 DEBUG nova.virt.hardware [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:57:25 np0005465987 nova_compute[230713]: 2025-10-02 12:57:25.847 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:57:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2776539741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.284 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.321 2 DEBUG nova.storage.rbd_utils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 3f92060f-3dad-43b7-96d0-802c353d9e47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.325 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:57:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1647208712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.729 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.731 2 DEBUG nova.virt.libvirt.vif [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:57:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-796727322',display_name='tempest-TestServerMultinode-server-796727322',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-796727322',id=189,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8668725b86704fdcacbb467738b51154',ramdisk_id='',reservation_id='r-7dnz4pyu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1785572191',owner_user_name='tempest-TestServerMultinode-1785572191-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:18Z,user_data=None,user_id='de066041e985417da95924c04915bd11',uuid=3f92060f-3dad-43b7-96d0-802c353d9e47,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df2532e0-88f2-438c-88b9-08f163fc19c8", "address": "fa:16:3e:aa:ba:7b", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf2532e0-88", "ovs_interfaceid": "df2532e0-88f2-438c-88b9-08f163fc19c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.731 2 DEBUG nova.network.os_vif_util [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Converting VIF {"id": "df2532e0-88f2-438c-88b9-08f163fc19c8", "address": "fa:16:3e:aa:ba:7b", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf2532e0-88", "ovs_interfaceid": "df2532e0-88f2-438c-88b9-08f163fc19c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.732 2 DEBUG nova.network.os_vif_util [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ba:7b,bridge_name='br-int',has_traffic_filtering=True,id=df2532e0-88f2-438c-88b9-08f163fc19c8,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf2532e0-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.733 2 DEBUG nova.objects.instance [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f92060f-3dad-43b7-96d0-802c353d9e47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.760 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  <uuid>3f92060f-3dad-43b7-96d0-802c353d9e47</uuid>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  <name>instance-000000bd</name>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestServerMultinode-server-796727322</nova:name>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:57:25</nova:creationTime>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <nova:user uuid="de066041e985417da95924c04915bd11">tempest-TestServerMultinode-1785572191-project-admin</nova:user>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <nova:project uuid="8668725b86704fdcacbb467738b51154">tempest-TestServerMultinode-1785572191</nova:project>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <nova:port uuid="df2532e0-88f2-438c-88b9-08f163fc19c8">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <entry name="serial">3f92060f-3dad-43b7-96d0-802c353d9e47</entry>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <entry name="uuid">3f92060f-3dad-43b7-96d0-802c353d9e47</entry>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/3f92060f-3dad-43b7-96d0-802c353d9e47_disk">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/3f92060f-3dad-43b7-96d0-802c353d9e47_disk.config">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:aa:ba:7b"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <target dev="tapdf2532e0-88"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/3f92060f-3dad-43b7-96d0-802c353d9e47/console.log" append="off"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:57:26 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:57:26 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:57:26 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:57:26 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.761 2 DEBUG nova.compute.manager [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Preparing to wait for external event network-vif-plugged-df2532e0-88f2-438c-88b9-08f163fc19c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.761 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.761 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.762 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.762 2 DEBUG nova.virt.libvirt.vif [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:57:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-796727322',display_name='tempest-TestServerMultinode-server-796727322',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-796727322',id=189,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8668725b86704fdcacbb467738b51154',ramdisk_id='',reservation_id='r-7dnz4pyu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1785572191',owner_user_name='tempest-TestServerMultinode-1785572191-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:57:18Z,user_data=None,user_id='de066041e985417da95924c04915bd11',uuid=3f92060f-3dad-43b7-96d0-802c353d9e47,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df2532e0-88f2-438c-88b9-08f163fc19c8", "address": "fa:16:3e:aa:ba:7b", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf2532e0-88", "ovs_interfaceid": "df2532e0-88f2-438c-88b9-08f163fc19c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.763 2 DEBUG nova.network.os_vif_util [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Converting VIF {"id": "df2532e0-88f2-438c-88b9-08f163fc19c8", "address": "fa:16:3e:aa:ba:7b", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf2532e0-88", "ovs_interfaceid": "df2532e0-88f2-438c-88b9-08f163fc19c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.763 2 DEBUG nova.network.os_vif_util [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ba:7b,bridge_name='br-int',has_traffic_filtering=True,id=df2532e0-88f2-438c-88b9-08f163fc19c8,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf2532e0-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.763 2 DEBUG os_vif [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ba:7b,bridge_name='br-int',has_traffic_filtering=True,id=df2532e0-88f2-438c-88b9-08f163fc19c8,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf2532e0-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.765 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.769 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf2532e0-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.770 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf2532e0-88, col_values=(('external_ids', {'iface-id': 'df2532e0-88f2-438c-88b9-08f163fc19c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:ba:7b', 'vm-uuid': '3f92060f-3dad-43b7-96d0-802c353d9e47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:26 np0005465987 NetworkManager[44910]: <info>  [1759409846.8166] manager: (tapdf2532e0-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.822 2 INFO os_vif [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ba:7b,bridge_name='br-int',has_traffic_filtering=True,id=df2532e0-88f2-438c-88b9-08f163fc19c8,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf2532e0-88')#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.912 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.913 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.913 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] No VIF found with MAC fa:16:3e:aa:ba:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.913 2 INFO nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Using config drive#033[00m
Oct  2 08:57:26 np0005465987 nova_compute[230713]: 2025-10-02 12:57:26.937 2 DEBUG nova.storage.rbd_utils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 3f92060f-3dad-43b7-96d0-802c353d9e47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:27 np0005465987 nova_compute[230713]: 2025-10-02 12:57:27.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:27.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:27.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:27 np0005465987 nova_compute[230713]: 2025-10-02 12:57:27.838 2 INFO nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Creating config drive at /var/lib/nova/instances/3f92060f-3dad-43b7-96d0-802c353d9e47/disk.config#033[00m
Oct  2 08:57:27 np0005465987 nova_compute[230713]: 2025-10-02 12:57:27.843 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f92060f-3dad-43b7-96d0-802c353d9e47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjakwpryt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:27 np0005465987 nova_compute[230713]: 2025-10-02 12:57:27.978 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f92060f-3dad-43b7-96d0-802c353d9e47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjakwpryt" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:27.984 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:27.985 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:27.985 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:28 np0005465987 nova_compute[230713]: 2025-10-02 12:57:28.008 2 DEBUG nova.storage.rbd_utils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] rbd image 3f92060f-3dad-43b7-96d0-802c353d9e47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:28 np0005465987 nova_compute[230713]: 2025-10-02 12:57:28.011 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f92060f-3dad-43b7-96d0-802c353d9e47/disk.config 3f92060f-3dad-43b7-96d0-802c353d9e47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:28 np0005465987 nova_compute[230713]: 2025-10-02 12:57:28.328 2 DEBUG nova.network.neutron [req-1c0d444d-4657-46f4-ac21-aae850b46875 req-e3e28c89-624e-4e5a-954a-a3f372359209 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Updated VIF entry in instance network info cache for port df2532e0-88f2-438c-88b9-08f163fc19c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:57:28 np0005465987 nova_compute[230713]: 2025-10-02 12:57:28.329 2 DEBUG nova.network.neutron [req-1c0d444d-4657-46f4-ac21-aae850b46875 req-e3e28c89-624e-4e5a-954a-a3f372359209 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Updating instance_info_cache with network_info: [{"id": "df2532e0-88f2-438c-88b9-08f163fc19c8", "address": "fa:16:3e:aa:ba:7b", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf2532e0-88", "ovs_interfaceid": "df2532e0-88f2-438c-88b9-08f163fc19c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:57:28 np0005465987 nova_compute[230713]: 2025-10-02 12:57:28.368 2 DEBUG oslo_concurrency.lockutils [req-1c0d444d-4657-46f4-ac21-aae850b46875 req-e3e28c89-624e-4e5a-954a-a3f372359209 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-3f92060f-3dad-43b7-96d0-802c353d9e47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:57:28 np0005465987 podman[303159]: 2025-10-02 12:57:28.834495578 +0000 UTC m=+0.052281166 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:57:28 np0005465987 nova_compute[230713]: 2025-10-02 12:57:28.905 2 DEBUG oslo_concurrency.processutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f92060f-3dad-43b7-96d0-802c353d9e47/disk.config 3f92060f-3dad-43b7-96d0-802c353d9e47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.894s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:28 np0005465987 nova_compute[230713]: 2025-10-02 12:57:28.906 2 INFO nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Deleting local config drive /var/lib/nova/instances/3f92060f-3dad-43b7-96d0-802c353d9e47/disk.config because it was imported into RBD.#033[00m
Oct  2 08:57:28 np0005465987 NetworkManager[44910]: <info>  [1759409848.9579] manager: (tapdf2532e0-88): new Tun device (/org/freedesktop/NetworkManager/Devices/344)
Oct  2 08:57:28 np0005465987 kernel: tapdf2532e0-88: entered promiscuous mode
Oct  2 08:57:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:57:28Z|00745|binding|INFO|Claiming lport df2532e0-88f2-438c-88b9-08f163fc19c8 for this chassis.
Oct  2 08:57:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:57:28Z|00746|binding|INFO|df2532e0-88f2-438c-88b9-08f163fc19c8: Claiming fa:16:3e:aa:ba:7b 10.100.0.14
Oct  2 08:57:28 np0005465987 nova_compute[230713]: 2025-10-02 12:57:28.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:57:28Z|00747|binding|INFO|Setting lport df2532e0-88f2-438c-88b9-08f163fc19c8 ovn-installed in OVS
Oct  2 08:57:28 np0005465987 nova_compute[230713]: 2025-10-02 12:57:28.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:28 np0005465987 nova_compute[230713]: 2025-10-02 12:57:28.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:28 np0005465987 ovn_controller[129172]: 2025-10-02T12:57:28Z|00748|binding|INFO|Setting lport df2532e0-88f2-438c-88b9-08f163fc19c8 up in Southbound
Oct  2 08:57:28 np0005465987 systemd-udevd[303191]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:57:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:28.986 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ba:7b 10.100.0.14'], port_security=['fa:16:3e:aa:ba:7b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3f92060f-3dad-43b7-96d0-802c353d9e47', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8668725b86704fdcacbb467738b51154', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9426c89d-e30e-4342-a8bd-1975c70a0c71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e55754b-f304-4904-b3bf-7f80f94cdc02, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=df2532e0-88f2-438c-88b9-08f163fc19c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:57:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:28.987 138325 INFO neutron.agent.ovn.metadata.agent [-] Port df2532e0-88f2-438c-88b9-08f163fc19c8 in datapath 9b3e5364-0567-4be5-b771-728ed7dd0ab7 bound to our chassis#033[00m
Oct  2 08:57:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:28.989 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9b3e5364-0567-4be5-b771-728ed7dd0ab7#033[00m
Oct  2 08:57:29 np0005465987 NetworkManager[44910]: <info>  [1759409849.0017] device (tapdf2532e0-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:57:29 np0005465987 NetworkManager[44910]: <info>  [1759409849.0025] device (tapdf2532e0-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.001 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d63d1a-b09f-46b8-94bf-441c8eace7f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.002 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9b3e5364-01 in ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.004 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9b3e5364-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.004 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7df92de7-b9be-4d3c-a304-37b324caf36f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.006 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[726abc92-2bf3-43f6-88ac-e1a82a0e9a24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 systemd-machined[188335]: New machine qemu-86-instance-000000bd.
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.015 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[54074bc2-e5aa-45d3-9750-93b98d522713]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 systemd[1]: Started Virtual Machine qemu-86-instance-000000bd.
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.039 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa90d55-3d66-4ccc-81c8-6863dfdbd12e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.067 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[3054834f-0861-400b-9ad9-de414414a633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.073 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d51f9a-64e3-4749-8aff-2ca065218f17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 systemd-udevd[303195]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:57:29 np0005465987 NetworkManager[44910]: <info>  [1759409849.0761] manager: (tap9b3e5364-00): new Veth device (/org/freedesktop/NetworkManager/Devices/345)
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.108 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9053a2f8-83c2-4463-b1bb-c52081f25e6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.111 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a56f2c27-3cde-45df-826a-b9ee1d863aff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 NetworkManager[44910]: <info>  [1759409849.1306] device (tap9b3e5364-00): carrier: link connected
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.134 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fd13eb-dbc7-4593-b595-1ebb3b564be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.149 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac9ddd6-d626-4b24-910a-18f28bee3cf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b3e5364-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:74:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791770, 'reachable_time': 36341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303225, 'error': None, 'target': 'ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.164 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2697b2-b4c2-400e-91ab-e5575813dad2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef6:7417'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 791770, 'tstamp': 791770}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303226, 'error': None, 'target': 'ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.179 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f335e902-605d-4955-8861-9938c2b4d156]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b3e5364-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:74:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791770, 'reachable_time': 36341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303227, 'error': None, 'target': 'ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.204 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d7cc0f33-8cbe-4fc9-af01-54271f7cdf77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:29.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.272 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[eb57daef-6216-40d1-82bd-13b1cee3a8eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.273 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b3e5364-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.273 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.273 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b3e5364-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:29 np0005465987 kernel: tap9b3e5364-00: entered promiscuous mode
Oct  2 08:57:29 np0005465987 NetworkManager[44910]: <info>  [1759409849.2945] manager: (tap9b3e5364-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.297 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9b3e5364-00, col_values=(('external_ids', {'iface-id': '6e4127f6-98a7-4fa7-9ba7-1b632af1bcf6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:57:29Z|00749|binding|INFO|Releasing lport 6e4127f6-98a7-4fa7-9ba7-1b632af1bcf6 from this chassis (sb_readonly=0)
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.314 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9b3e5364-0567-4be5-b771-728ed7dd0ab7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9b3e5364-0567-4be5-b771-728ed7dd0ab7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.315 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f8869d59-8309-43eb-99cc-a8716d82ae71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.315 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-9b3e5364-0567-4be5-b771-728ed7dd0ab7
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/9b3e5364-0567-4be5-b771-728ed7dd0ab7.pid.haproxy
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 9b3e5364-0567-4be5-b771-728ed7dd0ab7
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:57:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:29.316 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'env', 'PROCESS_TAG=haproxy-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9b3e5364-0567-4be5-b771-728ed7dd0ab7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.423 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updating instance_info_cache with network_info: [{"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.468 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.469 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.470 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.470 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.528 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.529 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.529 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.529 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.530 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:29 np0005465987 podman[303302]: 2025-10-02 12:57:29.679117483 +0000 UTC m=+0.060876951 container create 3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:57:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:57:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:29.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:57:29 np0005465987 systemd[1]: Started libpod-conmon-3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1.scope.
Oct  2 08:57:29 np0005465987 podman[303302]: 2025-10-02 12:57:29.638948781 +0000 UTC m=+0.020708249 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:57:29 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:57:29 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/119613016e1040b70a289182f1f0ea23c6fa2bd9a2aefb5a07dd3e5955545c13/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:57:29 np0005465987 podman[303302]: 2025-10-02 12:57:29.779059145 +0000 UTC m=+0.160818623 container init 3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:57:29 np0005465987 podman[303302]: 2025-10-02 12:57:29.784651269 +0000 UTC m=+0.166410727 container start 3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:57:29 np0005465987 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[303336]: [NOTICE]   (303340) : New worker (303342) forked
Oct  2 08:57:29 np0005465987 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[303336]: [NOTICE]   (303340) : Loading success.
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.941 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409849.9409935, 3f92060f-3dad-43b7-96d0-802c353d9e47 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.942 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] VM Started (Lifecycle Event)#033[00m
Oct  2 08:57:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:57:29 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2604117681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.983 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:29 np0005465987 nova_compute[230713]: 2025-10-02 12:57:29.995 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.000 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409849.942276, 3f92060f-3dad-43b7-96d0-802c353d9e47 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.000 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.046 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.049 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.075 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:57:30 np0005465987 podman[303355]: 2025-10-02 12:57:30.09668323 +0000 UTC m=+0.059478033 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.103 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.104 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.109 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.110 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:57:30 np0005465987 podman[303356]: 2025-10-02 12:57:30.13056604 +0000 UTC m=+0.090640388 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:57:30 np0005465987 podman[303354]: 2025-10-02 12:57:30.142697903 +0000 UTC m=+0.107295786 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.173 2 DEBUG nova.compute.manager [req-452eaf5a-1c63-464e-a7ab-1bd687bc27ad req-1e50560a-110f-4238-b859-cae572737019 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Received event network-vif-plugged-df2532e0-88f2-438c-88b9-08f163fc19c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.174 2 DEBUG oslo_concurrency.lockutils [req-452eaf5a-1c63-464e-a7ab-1bd687bc27ad req-1e50560a-110f-4238-b859-cae572737019 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.174 2 DEBUG oslo_concurrency.lockutils [req-452eaf5a-1c63-464e-a7ab-1bd687bc27ad req-1e50560a-110f-4238-b859-cae572737019 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.174 2 DEBUG oslo_concurrency.lockutils [req-452eaf5a-1c63-464e-a7ab-1bd687bc27ad req-1e50560a-110f-4238-b859-cae572737019 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.174 2 DEBUG nova.compute.manager [req-452eaf5a-1c63-464e-a7ab-1bd687bc27ad req-1e50560a-110f-4238-b859-cae572737019 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Processing event network-vif-plugged-df2532e0-88f2-438c-88b9-08f163fc19c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.175 2 DEBUG nova.compute.manager [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.178 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409850.1776683, 3f92060f-3dad-43b7-96d0-802c353d9e47 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.178 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.180 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.183 2 INFO nova.virt.libvirt.driver [-] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Instance spawned successfully.#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.183 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.238 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.244 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.245 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.245 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.246 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.247 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.247 2 DEBUG nova.virt.libvirt.driver [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.250 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.303 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.316 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.317 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4032MB free_disk=20.876270294189453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.318 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.318 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.396 2 INFO nova.compute.manager [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Took 11.47 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.397 2 DEBUG nova.compute.manager [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.450 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 2dff32de-ac4f-401a-9e6a-9446d4a0faaa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.450 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 3f92060f-3dad-43b7-96d0-802c353d9e47 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.451 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.451 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.529 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.561 2 INFO nova.compute.manager [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Took 12.65 seconds to build instance.#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.589 2 DEBUG oslo_concurrency.lockutils [None req-011f7b32-669a-4c5b-9135-0ca6d4928a1e de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:57:30 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2108199638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.978 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:30 np0005465987 nova_compute[230713]: 2025-10-02 12:57:30.982 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:57:31 np0005465987 nova_compute[230713]: 2025-10-02 12:57:31.095 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:57:31 np0005465987 nova_compute[230713]: 2025-10-02 12:57:31.154 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:57:31 np0005465987 nova_compute[230713]: 2025-10-02 12:57:31.154 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:31.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:31 np0005465987 nova_compute[230713]: 2025-10-02 12:57:31.649 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:31 np0005465987 nova_compute[230713]: 2025-10-02 12:57:31.650 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:31 np0005465987 nova_compute[230713]: 2025-10-02 12:57:31.650 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:31 np0005465987 nova_compute[230713]: 2025-10-02 12:57:31.650 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:31.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:31 np0005465987 nova_compute[230713]: 2025-10-02 12:57:31.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:32 np0005465987 nova_compute[230713]: 2025-10-02 12:57:32.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:32 np0005465987 nova_compute[230713]: 2025-10-02 12:57:32.306 2 DEBUG nova.compute.manager [req-d51f4f2f-1bdf-4eda-9da6-a75f007809f4 req-7380e229-5f80-4718-a031-d03aa225bb4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Received event network-vif-plugged-df2532e0-88f2-438c-88b9-08f163fc19c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:32 np0005465987 nova_compute[230713]: 2025-10-02 12:57:32.306 2 DEBUG oslo_concurrency.lockutils [req-d51f4f2f-1bdf-4eda-9da6-a75f007809f4 req-7380e229-5f80-4718-a031-d03aa225bb4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:32 np0005465987 nova_compute[230713]: 2025-10-02 12:57:32.307 2 DEBUG oslo_concurrency.lockutils [req-d51f4f2f-1bdf-4eda-9da6-a75f007809f4 req-7380e229-5f80-4718-a031-d03aa225bb4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:32 np0005465987 nova_compute[230713]: 2025-10-02 12:57:32.307 2 DEBUG oslo_concurrency.lockutils [req-d51f4f2f-1bdf-4eda-9da6-a75f007809f4 req-7380e229-5f80-4718-a031-d03aa225bb4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:32 np0005465987 nova_compute[230713]: 2025-10-02 12:57:32.307 2 DEBUG nova.compute.manager [req-d51f4f2f-1bdf-4eda-9da6-a75f007809f4 req-7380e229-5f80-4718-a031-d03aa225bb4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] No waiting events found dispatching network-vif-plugged-df2532e0-88f2-438c-88b9-08f163fc19c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:32 np0005465987 nova_compute[230713]: 2025-10-02 12:57:32.307 2 WARNING nova.compute.manager [req-d51f4f2f-1bdf-4eda-9da6-a75f007809f4 req-7380e229-5f80-4718-a031-d03aa225bb4f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Received unexpected event network-vif-plugged-df2532e0-88f2-438c-88b9-08f163fc19c8 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:57:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:33.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:33.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:33 np0005465987 nova_compute[230713]: 2025-10-02 12:57:33.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:35.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:35.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:36 np0005465987 nova_compute[230713]: 2025-10-02 12:57:36.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:37 np0005465987 nova_compute[230713]: 2025-10-02 12:57:37.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:37.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:37.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:38 np0005465987 nova_compute[230713]: 2025-10-02 12:57:38.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:38 np0005465987 nova_compute[230713]: 2025-10-02 12:57:38.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:57:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:39.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:39.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:41.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:41.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:41 np0005465987 nova_compute[230713]: 2025-10-02 12:57:41.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:42 np0005465987 nova_compute[230713]: 2025-10-02 12:57:42.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:57:42Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:ba:7b 10.100.0.14
Oct  2 08:57:42 np0005465987 ovn_controller[129172]: 2025-10-02T12:57:42Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:ba:7b 10.100.0.14
Oct  2 08:57:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:43.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:57:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:43.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:57:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:45.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:45.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:46 np0005465987 nova_compute[230713]: 2025-10-02 12:57:46.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:47 np0005465987 nova_compute[230713]: 2025-10-02 12:57:47.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:47.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:47.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:49.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:49.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:51.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:57:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:51.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:57:51 np0005465987 nova_compute[230713]: 2025-10-02 12:57:51.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:52 np0005465987 nova_compute[230713]: 2025-10-02 12:57:52.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:57:52 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/688148475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:57:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:53.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.408 2 DEBUG oslo_concurrency.lockutils [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "3f92060f-3dad-43b7-96d0-802c353d9e47" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.409 2 DEBUG oslo_concurrency.lockutils [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.409 2 DEBUG oslo_concurrency.lockutils [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.409 2 DEBUG oslo_concurrency.lockutils [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.409 2 DEBUG oslo_concurrency.lockutils [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.410 2 INFO nova.compute.manager [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Terminating instance#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.411 2 DEBUG nova.compute.manager [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:57:53 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Oct  2 08:57:53 np0005465987 kernel: tapdf2532e0-88 (unregistering): left promiscuous mode
Oct  2 08:57:53 np0005465987 NetworkManager[44910]: <info>  [1759409873.4602] device (tapdf2532e0-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:57:53 np0005465987 ovn_controller[129172]: 2025-10-02T12:57:53Z|00750|binding|INFO|Releasing lport df2532e0-88f2-438c-88b9-08f163fc19c8 from this chassis (sb_readonly=0)
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:53 np0005465987 ovn_controller[129172]: 2025-10-02T12:57:53Z|00751|binding|INFO|Setting lport df2532e0-88f2-438c-88b9-08f163fc19c8 down in Southbound
Oct  2 08:57:53 np0005465987 ovn_controller[129172]: 2025-10-02T12:57:53Z|00752|binding|INFO|Removing iface tapdf2532e0-88 ovn-installed in OVS
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.478 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ba:7b 10.100.0.14'], port_security=['fa:16:3e:aa:ba:7b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3f92060f-3dad-43b7-96d0-802c353d9e47', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8668725b86704fdcacbb467738b51154', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9426c89d-e30e-4342-a8bd-1975c70a0c71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e55754b-f304-4904-b3bf-7f80f94cdc02, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=df2532e0-88f2-438c-88b9-08f163fc19c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.480 138325 INFO neutron.agent.ovn.metadata.agent [-] Port df2532e0-88f2-438c-88b9-08f163fc19c8 in datapath 9b3e5364-0567-4be5-b771-728ed7dd0ab7 unbound from our chassis#033[00m
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.481 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9b3e5364-0567-4be5-b771-728ed7dd0ab7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.482 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d280d50c-42d9-4131-ba92-b15206eb463a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.483 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7 namespace which is not needed anymore#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:53 np0005465987 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000bd.scope: Deactivated successfully.
Oct  2 08:57:53 np0005465987 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000bd.scope: Consumed 14.065s CPU time.
Oct  2 08:57:53 np0005465987 systemd-machined[188335]: Machine qemu-86-instance-000000bd terminated.
Oct  2 08:57:53 np0005465987 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[303336]: [NOTICE]   (303340) : haproxy version is 2.8.14-c23fe91
Oct  2 08:57:53 np0005465987 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[303336]: [NOTICE]   (303340) : path to executable is /usr/sbin/haproxy
Oct  2 08:57:53 np0005465987 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[303336]: [WARNING]  (303340) : Exiting Master process...
Oct  2 08:57:53 np0005465987 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[303336]: [ALERT]    (303340) : Current worker (303342) exited with code 143 (Terminated)
Oct  2 08:57:53 np0005465987 neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7[303336]: [WARNING]  (303340) : All workers exited. Exiting... (0)
Oct  2 08:57:53 np0005465987 systemd[1]: libpod-3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1.scope: Deactivated successfully.
Oct  2 08:57:53 np0005465987 podman[303457]: 2025-10-02 12:57:53.608340127 +0000 UTC m=+0.043164795 container died 3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:57:53 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1-userdata-shm.mount: Deactivated successfully.
Oct  2 08:57:53 np0005465987 systemd[1]: var-lib-containers-storage-overlay-119613016e1040b70a289182f1f0ea23c6fa2bd9a2aefb5a07dd3e5955545c13-merged.mount: Deactivated successfully.
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.647 2 INFO nova.virt.libvirt.driver [-] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Instance destroyed successfully.#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.648 2 DEBUG nova.objects.instance [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lazy-loading 'resources' on Instance uuid 3f92060f-3dad-43b7-96d0-802c353d9e47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:53 np0005465987 podman[303457]: 2025-10-02 12:57:53.659985005 +0000 UTC m=+0.094809693 container cleanup 3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:57:53 np0005465987 systemd[1]: libpod-conmon-3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1.scope: Deactivated successfully.
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.678 2 DEBUG nova.virt.libvirt.vif [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:57:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-796727322',display_name='tempest-TestServerMultinode-server-796727322',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-796727322',id=189,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8668725b86704fdcacbb467738b51154',ramdisk_id='',reservation_id='r-7dnz4pyu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_dis
k='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1785572191',owner_user_name='tempest-TestServerMultinode-1785572191-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:57:30Z,user_data=None,user_id='de066041e985417da95924c04915bd11',uuid=3f92060f-3dad-43b7-96d0-802c353d9e47,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df2532e0-88f2-438c-88b9-08f163fc19c8", "address": "fa:16:3e:aa:ba:7b", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf2532e0-88", "ovs_interfaceid": "df2532e0-88f2-438c-88b9-08f163fc19c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.678 2 DEBUG nova.network.os_vif_util [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Converting VIF {"id": "df2532e0-88f2-438c-88b9-08f163fc19c8", "address": "fa:16:3e:aa:ba:7b", "network": {"id": "9b3e5364-0567-4be5-b771-728ed7dd0ab7", "bridge": "br-int", "label": "tempest-TestServerMultinode-1015877619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e08031c685814456aa6c43bcc8f98574", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf2532e0-88", "ovs_interfaceid": "df2532e0-88f2-438c-88b9-08f163fc19c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.679 2 DEBUG nova.network.os_vif_util [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ba:7b,bridge_name='br-int',has_traffic_filtering=True,id=df2532e0-88f2-438c-88b9-08f163fc19c8,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf2532e0-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.679 2 DEBUG os_vif [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ba:7b,bridge_name='br-int',has_traffic_filtering=True,id=df2532e0-88f2-438c-88b9-08f163fc19c8,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf2532e0-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf2532e0-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.687 2 INFO os_vif [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ba:7b,bridge_name='br-int',has_traffic_filtering=True,id=df2532e0-88f2-438c-88b9-08f163fc19c8,network=Network(9b3e5364-0567-4be5-b771-728ed7dd0ab7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf2532e0-88')#033[00m
Oct  2 08:57:53 np0005465987 podman[303498]: 2025-10-02 12:57:53.722628943 +0000 UTC m=+0.039531005 container remove 3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.728 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3cbf18-3fac-4f4e-8ca5-894d9b2f0f35]: (4, ('Thu Oct  2 12:57:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7 (3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1)\n3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1\nThu Oct  2 12:57:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7 (3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1)\n3f048e736deb8b6f4692aae9f69586e90397770e19ea838ac0b176799ed61ce1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.730 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5a1d1277-a130-43c4-9b96-e5e100c8c900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.730 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b3e5364-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:53 np0005465987 kernel: tap9b3e5364-00: left promiscuous mode
Oct  2 08:57:53 np0005465987 nova_compute[230713]: 2025-10-02 12:57:53.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.748 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cfcd27f2-0024-43d9-a6f9-1a76b79da5cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:53.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.774 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[13958ec0-390e-42a0-b079-8c3070629167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.775 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[41b35e9e-f185-4980-ba0c-0355fbe70a76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.790 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6c27c1fe-21fa-4ac5-9ab1-bf5e58a047f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791763, 'reachable_time': 29572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303531, 'error': None, 'target': 'ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:53 np0005465987 systemd[1]: run-netns-ovnmeta\x2d9b3e5364\x2d0567\x2d4be5\x2db771\x2d728ed7dd0ab7.mount: Deactivated successfully.
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.794 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9b3e5364-0567-4be5-b771-728ed7dd0ab7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:57:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:53.795 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf2083d-ed31-4b4c-8305-a39ffe2e8d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:54 np0005465987 nova_compute[230713]: 2025-10-02 12:57:54.535 2 INFO nova.virt.libvirt.driver [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Deleting instance files /var/lib/nova/instances/3f92060f-3dad-43b7-96d0-802c353d9e47_del#033[00m
Oct  2 08:57:54 np0005465987 nova_compute[230713]: 2025-10-02 12:57:54.536 2 INFO nova.virt.libvirt.driver [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Deletion of /var/lib/nova/instances/3f92060f-3dad-43b7-96d0-802c353d9e47_del complete#033[00m
Oct  2 08:57:54 np0005465987 nova_compute[230713]: 2025-10-02 12:57:54.607 2 INFO nova.compute.manager [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Took 1.20 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:57:54 np0005465987 nova_compute[230713]: 2025-10-02 12:57:54.608 2 DEBUG oslo.service.loopingcall [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:57:54 np0005465987 nova_compute[230713]: 2025-10-02 12:57:54.608 2 DEBUG nova.compute.manager [-] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:57:54 np0005465987 nova_compute[230713]: 2025-10-02 12:57:54.608 2 DEBUG nova.network.neutron [-] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:57:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:55.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:55 np0005465987 nova_compute[230713]: 2025-10-02 12:57:55.483 2 DEBUG nova.compute.manager [req-8a8e0f4a-b2a1-4003-9de2-d0dc3fd1aeed req-e8e9b1e5-ec69-47c8-9c53-415b53a6bc55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Received event network-vif-unplugged-df2532e0-88f2-438c-88b9-08f163fc19c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:55 np0005465987 nova_compute[230713]: 2025-10-02 12:57:55.484 2 DEBUG oslo_concurrency.lockutils [req-8a8e0f4a-b2a1-4003-9de2-d0dc3fd1aeed req-e8e9b1e5-ec69-47c8-9c53-415b53a6bc55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:55 np0005465987 nova_compute[230713]: 2025-10-02 12:57:55.484 2 DEBUG oslo_concurrency.lockutils [req-8a8e0f4a-b2a1-4003-9de2-d0dc3fd1aeed req-e8e9b1e5-ec69-47c8-9c53-415b53a6bc55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:55 np0005465987 nova_compute[230713]: 2025-10-02 12:57:55.485 2 DEBUG oslo_concurrency.lockutils [req-8a8e0f4a-b2a1-4003-9de2-d0dc3fd1aeed req-e8e9b1e5-ec69-47c8-9c53-415b53a6bc55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:55 np0005465987 nova_compute[230713]: 2025-10-02 12:57:55.486 2 DEBUG nova.compute.manager [req-8a8e0f4a-b2a1-4003-9de2-d0dc3fd1aeed req-e8e9b1e5-ec69-47c8-9c53-415b53a6bc55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] No waiting events found dispatching network-vif-unplugged-df2532e0-88f2-438c-88b9-08f163fc19c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:55 np0005465987 nova_compute[230713]: 2025-10-02 12:57:55.487 2 DEBUG nova.compute.manager [req-8a8e0f4a-b2a1-4003-9de2-d0dc3fd1aeed req-e8e9b1e5-ec69-47c8-9c53-415b53a6bc55 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Received event network-vif-unplugged-df2532e0-88f2-438c-88b9-08f163fc19c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:57:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:55.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:57 np0005465987 nova_compute[230713]: 2025-10-02 12:57:57.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:57:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:57.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:57:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:57.674 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:57:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:57:57.675 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:57:57 np0005465987 nova_compute[230713]: 2025-10-02 12:57:57.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:57 np0005465987 nova_compute[230713]: 2025-10-02 12:57:57.680 2 DEBUG nova.compute.manager [req-6b254f9a-4018-4c7e-b8d4-e0c1fd560abb req-8bd19386-06e3-4567-a049-059b0675413a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Received event network-vif-plugged-df2532e0-88f2-438c-88b9-08f163fc19c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:57 np0005465987 nova_compute[230713]: 2025-10-02 12:57:57.681 2 DEBUG oslo_concurrency.lockutils [req-6b254f9a-4018-4c7e-b8d4-e0c1fd560abb req-8bd19386-06e3-4567-a049-059b0675413a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:57 np0005465987 nova_compute[230713]: 2025-10-02 12:57:57.681 2 DEBUG oslo_concurrency.lockutils [req-6b254f9a-4018-4c7e-b8d4-e0c1fd560abb req-8bd19386-06e3-4567-a049-059b0675413a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:57 np0005465987 nova_compute[230713]: 2025-10-02 12:57:57.681 2 DEBUG oslo_concurrency.lockutils [req-6b254f9a-4018-4c7e-b8d4-e0c1fd560abb req-8bd19386-06e3-4567-a049-059b0675413a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:57 np0005465987 nova_compute[230713]: 2025-10-02 12:57:57.682 2 DEBUG nova.compute.manager [req-6b254f9a-4018-4c7e-b8d4-e0c1fd560abb req-8bd19386-06e3-4567-a049-059b0675413a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] No waiting events found dispatching network-vif-plugged-df2532e0-88f2-438c-88b9-08f163fc19c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:57 np0005465987 nova_compute[230713]: 2025-10-02 12:57:57.682 2 WARNING nova.compute.manager [req-6b254f9a-4018-4c7e-b8d4-e0c1fd560abb req-8bd19386-06e3-4567-a049-059b0675413a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Received unexpected event network-vif-plugged-df2532e0-88f2-438c-88b9-08f163fc19c8 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:57:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:57.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:57 np0005465987 nova_compute[230713]: 2025-10-02 12:57:57.795 2 DEBUG nova.network.neutron [-] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:57:57 np0005465987 nova_compute[230713]: 2025-10-02 12:57:57.831 2 INFO nova.compute.manager [-] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Took 3.22 seconds to deallocate network for instance.#033[00m
Oct  2 08:57:57 np0005465987 nova_compute[230713]: 2025-10-02 12:57:57.881 2 DEBUG oslo_concurrency.lockutils [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:57 np0005465987 nova_compute[230713]: 2025-10-02 12:57:57.882 2 DEBUG oslo_concurrency.lockutils [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:57 np0005465987 nova_compute[230713]: 2025-10-02 12:57:57.949 2 DEBUG oslo_concurrency.processutils [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:57:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2346521657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:57:58 np0005465987 nova_compute[230713]: 2025-10-02 12:57:58.364 2 DEBUG oslo_concurrency.processutils [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:58 np0005465987 nova_compute[230713]: 2025-10-02 12:57:58.370 2 DEBUG nova.compute.provider_tree [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:57:58 np0005465987 nova_compute[230713]: 2025-10-02 12:57:58.388 2 DEBUG nova.scheduler.client.report [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:57:58 np0005465987 nova_compute[230713]: 2025-10-02 12:57:58.413 2 DEBUG oslo_concurrency.lockutils [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:58 np0005465987 nova_compute[230713]: 2025-10-02 12:57:58.445 2 INFO nova.scheduler.client.report [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Deleted allocations for instance 3f92060f-3dad-43b7-96d0-802c353d9e47#033[00m
Oct  2 08:57:58 np0005465987 nova_compute[230713]: 2025-10-02 12:57:58.564 2 DEBUG oslo_concurrency.lockutils [None req-9a3fc6c2-2c94-477a-a52e-733f41313027 de066041e985417da95924c04915bd11 8668725b86704fdcacbb467738b51154 - - default default] Lock "3f92060f-3dad-43b7-96d0-802c353d9e47" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:58 np0005465987 nova_compute[230713]: 2025-10-02 12:57:58.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:57:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:59.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:57:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:57:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:59.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:59 np0005465987 nova_compute[230713]: 2025-10-02 12:57:59.826 2 DEBUG nova.compute.manager [req-0e7c571b-aba6-4050-828e-3c379b1e2853 req-555cd0c2-d0b4-4ed7-946b-405d4dd0b204 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Received event network-vif-deleted-df2532e0-88f2-438c-88b9-08f163fc19c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:59 np0005465987 podman[303555]: 2025-10-02 12:57:59.831047789 +0000 UTC m=+0.045384646 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct  2 08:58:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:00 np0005465987 podman[303576]: 2025-10-02 12:58:00.840929739 +0000 UTC m=+0.060864471 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:58:00 np0005465987 podman[303577]: 2025-10-02 12:58:00.867829208 +0000 UTC m=+0.085539459 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:58:00 np0005465987 podman[303575]: 2025-10-02 12:58:00.882739666 +0000 UTC m=+0.106887893 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Oct  2 08:58:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:01.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:01.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:02 np0005465987 nova_compute[230713]: 2025-10-02 12:58:02.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:03.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:03 np0005465987 nova_compute[230713]: 2025-10-02 12:58:03.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:03.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:04.678 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:58:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:05.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.626268) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409885626303, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 921, "num_deletes": 251, "total_data_size": 1773496, "memory_usage": 1792992, "flush_reason": "Manual Compaction"}
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409885633915, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 1158332, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70812, "largest_seqno": 71728, "table_properties": {"data_size": 1154084, "index_size": 1899, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9719, "raw_average_key_size": 19, "raw_value_size": 1145551, "raw_average_value_size": 2342, "num_data_blocks": 84, "num_entries": 489, "num_filter_entries": 489, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409824, "oldest_key_time": 1759409824, "file_creation_time": 1759409885, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 7676 microseconds, and 3387 cpu microseconds.
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.633944) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 1158332 bytes OK
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.633959) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.635470) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.635481) EVENT_LOG_v1 {"time_micros": 1759409885635478, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.635496) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1768850, prev total WAL file size 1768850, number of live WAL files 2.
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.636206) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(1131KB)], [144(10MB)]
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409885636274, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 12143546, "oldest_snapshot_seqno": -1}
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 9077 keys, 10251651 bytes, temperature: kUnknown
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409885696675, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 10251651, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10194914, "index_size": 32944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 240594, "raw_average_key_size": 26, "raw_value_size": 10037373, "raw_average_value_size": 1105, "num_data_blocks": 1244, "num_entries": 9077, "num_filter_entries": 9077, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759409885, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.696971) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10251651 bytes
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.697881) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.7 rd, 169.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.5 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(19.3) write-amplify(8.9) OK, records in: 9593, records dropped: 516 output_compression: NoCompression
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.697897) EVENT_LOG_v1 {"time_micros": 1759409885697890, "job": 92, "event": "compaction_finished", "compaction_time_micros": 60499, "compaction_time_cpu_micros": 33927, "output_level": 6, "num_output_files": 1, "total_output_size": 10251651, "num_input_records": 9593, "num_output_records": 9077, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409885698202, "job": 92, "event": "table_file_deletion", "file_number": 146}
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409885700285, "job": 92, "event": "table_file_deletion", "file_number": 144}
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.636080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.700423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.700428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.700430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.700431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:58:05 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-12:58:05.700432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:58:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:05.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:58:05Z|00753|binding|INFO|Releasing lport 72e5faac-9d73-42b2-89ed-1f386e556cc7 from this chassis (sb_readonly=0)
Oct  2 08:58:05 np0005465987 ovn_controller[129172]: 2025-10-02T12:58:05Z|00754|binding|INFO|Releasing lport 834e2700-8bec-46ce-90f8-a8053806269a from this chassis (sb_readonly=0)
Oct  2 08:58:05 np0005465987 nova_compute[230713]: 2025-10-02 12:58:05.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:07 np0005465987 nova_compute[230713]: 2025-10-02 12:58:07.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:58:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:07.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:58:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:07.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:08 np0005465987 nova_compute[230713]: 2025-10-02 12:58:08.644 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409873.6431065, 3f92060f-3dad-43b7-96d0-802c353d9e47 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:58:08 np0005465987 nova_compute[230713]: 2025-10-02 12:58:08.644 2 INFO nova.compute.manager [-] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:58:08 np0005465987 nova_compute[230713]: 2025-10-02 12:58:08.759 2 DEBUG nova.compute.manager [None req-a5557bb6-7ca9-43e2-b2a7-b43a68b9bf35 - - - - - -] [instance: 3f92060f-3dad-43b7-96d0-802c353d9e47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:08 np0005465987 nova_compute[230713]: 2025-10-02 12:58:08.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:09.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:09.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:11.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:11 np0005465987 ovn_controller[129172]: 2025-10-02T12:58:11Z|00755|binding|INFO|Releasing lport 72e5faac-9d73-42b2-89ed-1f386e556cc7 from this chassis (sb_readonly=0)
Oct  2 08:58:11 np0005465987 ovn_controller[129172]: 2025-10-02T12:58:11Z|00756|binding|INFO|Releasing lport 834e2700-8bec-46ce-90f8-a8053806269a from this chassis (sb_readonly=0)
Oct  2 08:58:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:11.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:11 np0005465987 nova_compute[230713]: 2025-10-02 12:58:11.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:12 np0005465987 nova_compute[230713]: 2025-10-02 12:58:12.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:58:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:13.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:58:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:13 np0005465987 nova_compute[230713]: 2025-10-02 12:58:13.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:13.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:58:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:15.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:58:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:58:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:15.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.706 2 DEBUG oslo_concurrency.lockutils [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "interface-2dff32de-ac4f-401a-9e6a-9446d4a0faaa-4a77b29a-85b0-4bc7-b38c-8915de0b74d3" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.707 2 DEBUG oslo_concurrency.lockutils [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "interface-2dff32de-ac4f-401a-9e6a-9446d4a0faaa-4a77b29a-85b0-4bc7-b38c-8915de0b74d3" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.723 2 DEBUG nova.objects.instance [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'flavor' on Instance uuid 2dff32de-ac4f-401a-9e6a-9446d4a0faaa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.742 2 DEBUG nova.virt.libvirt.vif [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1692291686',display_name='tempest-TestNetworkBasicOps-server-1692291686',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1692291686',id=186,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZWIx9o6kvPe+g4xv7jAf5RmegZo+XaIBjQcv6mFDdVE2Gj2eWqFYANF8qBhZ2trYTB0D6p6t6i6if67qDSCR68FKh99aOZV/40OkaYK0vaRDFLVdlWm6tj/SR2I1Vx6w==',key_name='tempest-TestNetworkBasicOps-590509202',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-uec0sg7u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:33Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=2dff32de-ac4f-401a-9e6a-9446d4a0faaa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.742 2 DEBUG nova.network.os_vif_util [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.743 2 DEBUG nova.network.os_vif_util [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:71:42,bridge_name='br-int',has_traffic_filtering=True,id=4a77b29a-85b0-4bc7-b38c-8915de0b74d3,network=Network(48b23f60-a626-4a95-b154-a764454c451b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a77b29a-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.747 2 DEBUG nova.virt.libvirt.guest [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:71:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4a77b29a-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.750 2 DEBUG nova.virt.libvirt.guest [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:71:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4a77b29a-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.752 2 DEBUG nova.virt.libvirt.driver [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Attempting to detach device tap4a77b29a-85 from instance 2dff32de-ac4f-401a-9e6a-9446d4a0faaa from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.752 2 DEBUG nova.virt.libvirt.guest [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:46:71:42"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <target dev="tap4a77b29a-85"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:58:16 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.758 2 DEBUG nova.virt.libvirt.guest [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:71:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4a77b29a-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.761 2 DEBUG nova.virt.libvirt.guest [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:46:71:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4a77b29a-85"/></interface>not found in domain: <domain type='kvm' id='85'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <name>instance-000000ba</name>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <uuid>2dff32de-ac4f-401a-9e6a-9446d4a0faaa</uuid>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:name>tempest-TestNetworkBasicOps-server-1692291686</nova:name>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:56:19</nova:creationTime>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:port uuid="76ca6742-e0ea-45a7-bf82-750d5106e5b8">
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:port uuid="4a77b29a-85b0-4bc7-b38c-8915de0b74d3">
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:58:16 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <entry name='serial'>2dff32de-ac4f-401a-9e6a-9446d4a0faaa</entry>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <entry name='uuid'>2dff32de-ac4f-401a-9e6a-9446d4a0faaa</entry>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk' index='2'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk.config' index='1'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:95:61:8f'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target dev='tap76ca6742-e0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:46:71:42'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target dev='tap4a77b29a-85'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='net1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/console.log' append='off'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/console.log' append='off'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c403,c714</label>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c403,c714</imagelabel>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:58:16 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:58:16 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.763 2 INFO nova.virt.libvirt.driver [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully detached device tap4a77b29a-85 from instance 2dff32de-ac4f-401a-9e6a-9446d4a0faaa from the persistent domain config.#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.763 2 DEBUG nova.virt.libvirt.driver [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] (1/8): Attempting to detach device tap4a77b29a-85 with device alias net1 from instance 2dff32de-ac4f-401a-9e6a-9446d4a0faaa from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.764 2 DEBUG nova.virt.libvirt.guest [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <mac address="fa:16:3e:46:71:42"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <model type="virtio"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <mtu size="1442"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <target dev="tap4a77b29a-85"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]: </interface>
Oct  2 08:58:16 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:58:16 np0005465987 kernel: tap4a77b29a-85 (unregistering): left promiscuous mode
Oct  2 08:58:16 np0005465987 NetworkManager[44910]: <info>  [1759409896.8587] device (tap4a77b29a-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:58:16 np0005465987 ovn_controller[129172]: 2025-10-02T12:58:16Z|00757|binding|INFO|Releasing lport 4a77b29a-85b0-4bc7-b38c-8915de0b74d3 from this chassis (sb_readonly=0)
Oct  2 08:58:16 np0005465987 ovn_controller[129172]: 2025-10-02T12:58:16Z|00758|binding|INFO|Setting lport 4a77b29a-85b0-4bc7-b38c-8915de0b74d3 down in Southbound
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:16 np0005465987 ovn_controller[129172]: 2025-10-02T12:58:16Z|00759|binding|INFO|Removing iface tap4a77b29a-85 ovn-installed in OVS
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:16.874 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:71:42 10.100.0.21', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': '2dff32de-ac4f-401a-9e6a-9446d4a0faaa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48b23f60-a626-4a95-b154-a764454c451b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3dbfe445-7f25-42ca-8688-0a8d6c43ed3f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=4a77b29a-85b0-4bc7-b38c-8915de0b74d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:58:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:16.876 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 4a77b29a-85b0-4bc7-b38c-8915de0b74d3 in datapath 48b23f60-a626-4a95-b154-a764454c451b unbound from our chassis#033[00m
Oct  2 08:58:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:16.877 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48b23f60-a626-4a95-b154-a764454c451b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:58:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:16.879 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[62da9446-00c6-4ca1-9069-a95df004adda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:16 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:16.879 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48b23f60-a626-4a95-b154-a764454c451b namespace which is not needed anymore#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.886 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Received event <DeviceRemovedEvent: 1759409896.883301, 2dff32de-ac4f-401a-9e6a-9446d4a0faaa => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.887 2 DEBUG nova.virt.libvirt.driver [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Start waiting for the detach event from libvirt for device tap4a77b29a-85 with device alias net1 for instance 2dff32de-ac4f-401a-9e6a-9446d4a0faaa _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.887 2 DEBUG nova.virt.libvirt.guest [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:71:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4a77b29a-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.891 2 DEBUG nova.virt.libvirt.guest [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:46:71:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4a77b29a-85"/></interface>not found in domain: <domain type='kvm' id='85'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <name>instance-000000ba</name>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <uuid>2dff32de-ac4f-401a-9e6a-9446d4a0faaa</uuid>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:name>tempest-TestNetworkBasicOps-server-1692291686</nova:name>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:56:19</nova:creationTime>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:port uuid="76ca6742-e0ea-45a7-bf82-750d5106e5b8">
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:port uuid="4a77b29a-85b0-4bc7-b38c-8915de0b74d3">
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:58:16 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <entry name='serial'>2dff32de-ac4f-401a-9e6a-9446d4a0faaa</entry>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <entry name='uuid'>2dff32de-ac4f-401a-9e6a-9446d4a0faaa</entry>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk' index='2'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk.config' index='1'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:95:61:8f'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target dev='tap76ca6742-e0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/console.log' append='off'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/console.log' append='off'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c403,c714</label>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c403,c714</imagelabel>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:58:16 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:58:16 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.892 2 INFO nova.virt.libvirt.driver [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully detached device tap4a77b29a-85 from instance 2dff32de-ac4f-401a-9e6a-9446d4a0faaa from the live domain config.#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.892 2 DEBUG nova.virt.libvirt.vif [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1692291686',display_name='tempest-TestNetworkBasicOps-server-1692291686',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1692291686',id=186,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZWIx9o6kvPe+g4xv7jAf5RmegZo+XaIBjQcv6mFDdVE2Gj2eWqFYANF8qBhZ2trYTB0D6p6t6i6if67qDSCR68FKh99aOZV/40OkaYK0vaRDFLVdlWm6tj/SR2I1Vx6w==',key_name='tempest-TestNetworkBasicOps-590509202',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-uec0sg7u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:33Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=2dff32de-ac4f-401a-9e6a-9446d4a0faaa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.892 2 DEBUG nova.network.os_vif_util [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.893 2 DEBUG nova.network.os_vif_util [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:71:42,bridge_name='br-int',has_traffic_filtering=True,id=4a77b29a-85b0-4bc7-b38c-8915de0b74d3,network=Network(48b23f60-a626-4a95-b154-a764454c451b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a77b29a-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.893 2 DEBUG os_vif [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:71:42,bridge_name='br-int',has_traffic_filtering=True,id=4a77b29a-85b0-4bc7-b38c-8915de0b74d3,network=Network(48b23f60-a626-4a95-b154-a764454c451b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a77b29a-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.895 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a77b29a-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.958 2 INFO os_vif [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:71:42,bridge_name='br-int',has_traffic_filtering=True,id=4a77b29a-85b0-4bc7-b38c-8915de0b74d3,network=Network(48b23f60-a626-4a95-b154-a764454c451b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a77b29a-85')#033[00m
Oct  2 08:58:16 np0005465987 nova_compute[230713]: 2025-10-02 12:58:16.958 2 DEBUG nova.virt.libvirt.guest [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:name>tempest-TestNetworkBasicOps-server-1692291686</nova:name>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:58:16</nova:creationTime>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    <nova:port uuid="76ca6742-e0ea-45a7-bf82-750d5106e5b8">
Oct  2 08:58:16 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:58:16 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:58:16 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:58:16 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:58:17 np0005465987 neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b[302318]: [NOTICE]   (302322) : haproxy version is 2.8.14-c23fe91
Oct  2 08:58:17 np0005465987 neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b[302318]: [NOTICE]   (302322) : path to executable is /usr/sbin/haproxy
Oct  2 08:58:17 np0005465987 neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b[302318]: [WARNING]  (302322) : Exiting Master process...
Oct  2 08:58:17 np0005465987 neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b[302318]: [ALERT]    (302322) : Current worker (302324) exited with code 143 (Terminated)
Oct  2 08:58:17 np0005465987 neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b[302318]: [WARNING]  (302322) : All workers exited. Exiting... (0)
Oct  2 08:58:17 np0005465987 systemd[1]: libpod-ac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac.scope: Deactivated successfully.
Oct  2 08:58:17 np0005465987 podman[303660]: 2025-10-02 12:58:17.03822021 +0000 UTC m=+0.048531192 container died ac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:58:17 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac-userdata-shm.mount: Deactivated successfully.
Oct  2 08:58:17 np0005465987 systemd[1]: var-lib-containers-storage-overlay-29c6737b21010c1116c286191d6117131a26dcb192e31944fe995753afa08193-merged.mount: Deactivated successfully.
Oct  2 08:58:17 np0005465987 podman[303660]: 2025-10-02 12:58:17.077866768 +0000 UTC m=+0.088177740 container cleanup ac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:58:17 np0005465987 systemd[1]: libpod-conmon-ac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac.scope: Deactivated successfully.
Oct  2 08:58:17 np0005465987 podman[303690]: 2025-10-02 12:58:17.134893163 +0000 UTC m=+0.037918801 container remove ac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:58:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:17.140 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b196e9-9ad6-4339-9f4f-ea534f669823]: (4, ('Thu Oct  2 12:58:16 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b (ac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac)\nac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac\nThu Oct  2 12:58:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48b23f60-a626-4a95-b154-a764454c451b (ac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac)\nac5a9c7378436d5b7c307380c1993b586320967db3b9b3838a856855ebd028ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:17.142 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0f2109-c108-47c9-a118-6aae35f93b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:17.142 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48b23f60-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:17 np0005465987 nova_compute[230713]: 2025-10-02 12:58:17.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:17 np0005465987 kernel: tap48b23f60-a0: left promiscuous mode
Oct  2 08:58:17 np0005465987 nova_compute[230713]: 2025-10-02 12:58:17.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:17.159 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[df48ea0f-c97a-47d7-83e6-35e7ac3ee9d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:17.187 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4791a38c-de05-4c49-abe7-2a964ceb6cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:17.188 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[42cd2049-de78-492d-a7b7-3174a1854882]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:17 np0005465987 nova_compute[230713]: 2025-10-02 12:58:17.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:17.202 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f480f044-4115-4104-a7a2-f554237d8d0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784787, 'reachable_time': 22677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303705, 'error': None, 'target': 'ovnmeta-48b23f60-a626-4a95-b154-a764454c451b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:17 np0005465987 systemd[1]: run-netns-ovnmeta\x2d48b23f60\x2da626\x2d4a95\x2db154\x2da764454c451b.mount: Deactivated successfully.
Oct  2 08:58:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:17.206 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48b23f60-a626-4a95-b154-a764454c451b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:58:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:17.206 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[6a05863a-31f8-43c6-9039-9bd55cd9a84f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:17.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:17 np0005465987 nova_compute[230713]: 2025-10-02 12:58:17.702 2 DEBUG nova.compute.manager [req-9ae018a8-df88-4ddf-9d3c-0c98f76e1061 req-8cbd78f8-11b4-434b-9b23-45cc6c913ece d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-vif-unplugged-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:17 np0005465987 nova_compute[230713]: 2025-10-02 12:58:17.704 2 DEBUG oslo_concurrency.lockutils [req-9ae018a8-df88-4ddf-9d3c-0c98f76e1061 req-8cbd78f8-11b4-434b-9b23-45cc6c913ece d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:17 np0005465987 nova_compute[230713]: 2025-10-02 12:58:17.704 2 DEBUG oslo_concurrency.lockutils [req-9ae018a8-df88-4ddf-9d3c-0c98f76e1061 req-8cbd78f8-11b4-434b-9b23-45cc6c913ece d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:17 np0005465987 nova_compute[230713]: 2025-10-02 12:58:17.704 2 DEBUG oslo_concurrency.lockutils [req-9ae018a8-df88-4ddf-9d3c-0c98f76e1061 req-8cbd78f8-11b4-434b-9b23-45cc6c913ece d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:17 np0005465987 nova_compute[230713]: 2025-10-02 12:58:17.704 2 DEBUG nova.compute.manager [req-9ae018a8-df88-4ddf-9d3c-0c98f76e1061 req-8cbd78f8-11b4-434b-9b23-45cc6c913ece d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] No waiting events found dispatching network-vif-unplugged-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:58:17 np0005465987 nova_compute[230713]: 2025-10-02 12:58:17.705 2 WARNING nova.compute.manager [req-9ae018a8-df88-4ddf-9d3c-0c98f76e1061 req-8cbd78f8-11b4-434b-9b23-45cc6c913ece d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received unexpected event network-vif-unplugged-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:58:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:17.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.015 2 DEBUG oslo_concurrency.lockutils [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.016 2 DEBUG oslo_concurrency.lockutils [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquired lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.016 2 DEBUG nova.network.neutron [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.082 2 DEBUG nova.compute.manager [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-vif-deleted-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.082 2 INFO nova.compute.manager [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Neutron deleted interface 4a77b29a-85b0-4bc7-b38c-8915de0b74d3; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.082 2 DEBUG nova.network.neutron [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updating instance_info_cache with network_info: [{"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.105 2 DEBUG nova.objects.instance [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lazy-loading 'system_metadata' on Instance uuid 2dff32de-ac4f-401a-9e6a-9446d4a0faaa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.133 2 DEBUG nova.objects.instance [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lazy-loading 'flavor' on Instance uuid 2dff32de-ac4f-401a-9e6a-9446d4a0faaa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.165 2 DEBUG nova.virt.libvirt.vif [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1692291686',display_name='tempest-TestNetworkBasicOps-server-1692291686',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1692291686',id=186,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZWIx9o6kvPe+g4xv7jAf5RmegZo+XaIBjQcv6mFDdVE2Gj2eWqFYANF8qBhZ2trYTB0D6p6t6i6if67qDSCR68FKh99aOZV/40OkaYK0vaRDFLVdlWm6tj/SR2I1Vx6w==',key_name='tempest-TestNetworkBasicOps-590509202',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-uec0sg7u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:33Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=2dff32de-ac4f-401a-9e6a-9446d4a0faaa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.166 2 DEBUG nova.network.os_vif_util [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converting VIF {"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.167 2 DEBUG nova.network.os_vif_util [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:71:42,bridge_name='br-int',has_traffic_filtering=True,id=4a77b29a-85b0-4bc7-b38c-8915de0b74d3,network=Network(48b23f60-a626-4a95-b154-a764454c451b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a77b29a-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.170 2 DEBUG nova.virt.libvirt.guest [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:71:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4a77b29a-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.174 2 DEBUG nova.virt.libvirt.guest [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:46:71:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4a77b29a-85"/></interface>not found in domain: <domain type='kvm' id='85'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <name>instance-000000ba</name>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <uuid>2dff32de-ac4f-401a-9e6a-9446d4a0faaa</uuid>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:name>tempest-TestNetworkBasicOps-server-1692291686</nova:name>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:58:16</nova:creationTime>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:port uuid="76ca6742-e0ea-45a7-bf82-750d5106e5b8">
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:58:18 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <entry name='serial'>2dff32de-ac4f-401a-9e6a-9446d4a0faaa</entry>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <entry name='uuid'>2dff32de-ac4f-401a-9e6a-9446d4a0faaa</entry>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk' index='2'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk.config' index='1'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:95:61:8f'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target dev='tap76ca6742-e0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/console.log' append='off'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/console.log' append='off'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c403,c714</label>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c403,c714</imagelabel>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:58:18 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:58:18 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.175 2 DEBUG nova.virt.libvirt.guest [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:71:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4a77b29a-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.179 2 DEBUG nova.virt.libvirt.guest [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:46:71:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4a77b29a-85"/></interface> not found in domain: <domain type='kvm' id='85'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <name>instance-000000ba</name>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <uuid>2dff32de-ac4f-401a-9e6a-9446d4a0faaa</uuid>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:name>tempest-TestNetworkBasicOps-server-1692291686</nova:name>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:58:16</nova:creationTime>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:port uuid="76ca6742-e0ea-45a7-bf82-750d5106e5b8">
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:58:18 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <memory unit='KiB'>131072</memory>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <resource>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <partition>/machine</partition>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </resource>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <sysinfo type='smbios'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <entry name='serial'>2dff32de-ac4f-401a-9e6a-9446d4a0faaa</entry>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <entry name='uuid'>2dff32de-ac4f-401a-9e6a-9446d4a0faaa</entry>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <boot dev='hd'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <smbios mode='sysinfo'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <vmcoreinfo state='on'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <feature policy='require' name='x2apic'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <feature policy='require' name='vme'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <clock offset='utc'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <timer name='hpet' present='no'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <on_reboot>restart</on_reboot>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <on_crash>destroy</on_crash>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <disk type='network' device='disk'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk' index='2'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target dev='vda' bus='virtio'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='virtio-disk0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <disk type='network' device='cdrom'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <auth username='openstack'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <secret type='ceph' uuid='fd4c5763-22d1-50ea-ad0b-96a3dc3040b2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <source protocol='rbd' name='vms/2dff32de-ac4f-401a-9e6a-9446d4a0faaa_disk.config' index='1'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target dev='sda' bus='sata'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <readonly/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='sata0-0-0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pcie.0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='1' port='0x10'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='2' port='0x11'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='3' port='0x12'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.3'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='4' port='0x13'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.4'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='5' port='0x14'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.5'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='6' port='0x15'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.6'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='7' port='0x16'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.7'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='8' port='0x17'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.8'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='9' port='0x18'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.9'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='10' port='0x19'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.10'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='11' port='0x1a'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.11'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='12' port='0x1b'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.12'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='13' port='0x1c'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.13'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='14' port='0x1d'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.14'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='15' port='0x1e'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.15'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='16' port='0x1f'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.16'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='17' port='0x20'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.17'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='18' port='0x21'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.18'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='19' port='0x22'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.19'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='20' port='0x23'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.20'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='21' port='0x24'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.21'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='22' port='0x25'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.22'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='23' port='0x26'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.23'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='24' port='0x27'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.24'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-root-port'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target chassis='25' port='0x28'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.25'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model name='pcie-pci-bridge'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='pci.26'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='usb'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <controller type='sata' index='0'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='ide'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </controller>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <interface type='ethernet'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <mac address='fa:16:3e:95:61:8f'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target dev='tap76ca6742-e0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model type='virtio'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <mtu size='1442'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='net0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <serial type='pty'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/console.log' append='off'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target type='isa-serial' port='0'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:        <model name='isa-serial'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      </target>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <source path='/dev/pts/0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <log file='/var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa/console.log' append='off'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <target type='serial' port='0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='serial0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </console>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <input type='tablet' bus='usb'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='input0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <input type='mouse' bus='ps2'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='input1'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <input type='keyboard' bus='ps2'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='input2'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </input>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <listen type='address' address='::0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </graphics>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <audio id='1' type='none'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='video0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <watchdog model='itco' action='reset'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='watchdog0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </watchdog>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <memballoon model='virtio'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <stats period='10'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='balloon0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <rng model='virtio'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <alias name='rng0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <label>system_u:system_r:svirt_t:s0:c403,c714</label>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c403,c714</imagelabel>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <label>+107:+107</label>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </seclabel>
Oct  2 08:58:18 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:58:18 np0005465987 nova_compute[230713]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.180 2 WARNING nova.virt.libvirt.driver [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Detaching interface fa:16:3e:46:71:42 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap4a77b29a-85' not found.
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.181 2 DEBUG nova.virt.libvirt.vif [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1692291686',display_name='tempest-TestNetworkBasicOps-server-1692291686',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1692291686',id=186,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZWIx9o6kvPe+g4xv7jAf5RmegZo+XaIBjQcv6mFDdVE2Gj2eWqFYANF8qBhZ2trYTB0D6p6t6i6if67qDSCR68FKh99aOZV/40OkaYK0vaRDFLVdlWm6tj/SR2I1Vx6w==',key_name='tempest-TestNetworkBasicOps-590509202',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-uec0sg7u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:33Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=2dff32de-ac4f-401a-9e6a-9446d4a0faaa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.181 2 DEBUG nova.network.os_vif_util [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converting VIF {"id": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "address": "fa:16:3e:46:71:42", "network": {"id": "48b23f60-a626-4a95-b154-a764454c451b", "bridge": "br-int", "label": "tempest-network-smoke--1549158146", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a77b29a-85", "ovs_interfaceid": "4a77b29a-85b0-4bc7-b38c-8915de0b74d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.182 2 DEBUG nova.network.os_vif_util [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:71:42,bridge_name='br-int',has_traffic_filtering=True,id=4a77b29a-85b0-4bc7-b38c-8915de0b74d3,network=Network(48b23f60-a626-4a95-b154-a764454c451b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a77b29a-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.182 2 DEBUG os_vif [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:71:42,bridge_name='br-int',has_traffic_filtering=True,id=4a77b29a-85b0-4bc7-b38c-8915de0b74d3,network=Network(48b23f60-a626-4a95-b154-a764454c451b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a77b29a-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a77b29a-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.187 2 INFO os_vif [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:71:42,bridge_name='br-int',has_traffic_filtering=True,id=4a77b29a-85b0-4bc7-b38c-8915de0b74d3,network=Network(48b23f60-a626-4a95-b154-a764454c451b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a77b29a-85')#033[00m
Oct  2 08:58:18 np0005465987 nova_compute[230713]: 2025-10-02 12:58:18.188 2 DEBUG nova.virt.libvirt.guest [req-3d19908e-939e-4f5f-b7ef-6e519afe2b34 req-2c46cb6b-87b2-477d-8ec8-4276dfc7f12b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:name>tempest-TestNetworkBasicOps-server-1692291686</nova:name>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:creationTime>2025-10-02 12:58:18</nova:creationTime>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:flavor name="m1.nano">
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:memory>128</nova:memory>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:disk>1</nova:disk>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:swap>0</nova:swap>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </nova:flavor>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:owner>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </nova:owner>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  <nova:ports>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    <nova:port uuid="76ca6742-e0ea-45a7-bf82-750d5106e5b8">
Oct  2 08:58:18 np0005465987 nova_compute[230713]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:    </nova:port>
Oct  2 08:58:18 np0005465987 nova_compute[230713]:  </nova:ports>
Oct  2 08:58:18 np0005465987 nova_compute[230713]: </nova:instance>
Oct  2 08:58:18 np0005465987 nova_compute[230713]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:58:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:58:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:58:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:58:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:58:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:58:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:58:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:58:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:58:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:19.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:19.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:19 np0005465987 nova_compute[230713]: 2025-10-02 12:58:19.847 2 DEBUG nova.compute.manager [req-4b8a8e75-73f5-4e71-8354-6817625ffbe9 req-3b1dc7e4-91a0-4b67-966b-1e4bc7bdfde7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-vif-plugged-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:19 np0005465987 nova_compute[230713]: 2025-10-02 12:58:19.847 2 DEBUG oslo_concurrency.lockutils [req-4b8a8e75-73f5-4e71-8354-6817625ffbe9 req-3b1dc7e4-91a0-4b67-966b-1e4bc7bdfde7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:19 np0005465987 nova_compute[230713]: 2025-10-02 12:58:19.848 2 DEBUG oslo_concurrency.lockutils [req-4b8a8e75-73f5-4e71-8354-6817625ffbe9 req-3b1dc7e4-91a0-4b67-966b-1e4bc7bdfde7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:19 np0005465987 nova_compute[230713]: 2025-10-02 12:58:19.848 2 DEBUG oslo_concurrency.lockutils [req-4b8a8e75-73f5-4e71-8354-6817625ffbe9 req-3b1dc7e4-91a0-4b67-966b-1e4bc7bdfde7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:19 np0005465987 nova_compute[230713]: 2025-10-02 12:58:19.848 2 DEBUG nova.compute.manager [req-4b8a8e75-73f5-4e71-8354-6817625ffbe9 req-3b1dc7e4-91a0-4b67-966b-1e4bc7bdfde7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] No waiting events found dispatching network-vif-plugged-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:58:19 np0005465987 nova_compute[230713]: 2025-10-02 12:58:19.848 2 WARNING nova.compute.manager [req-4b8a8e75-73f5-4e71-8354-6817625ffbe9 req-3b1dc7e4-91a0-4b67-966b-1e4bc7bdfde7 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received unexpected event network-vif-plugged-4a77b29a-85b0-4bc7-b38c-8915de0b74d3 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:58:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:20 np0005465987 nova_compute[230713]: 2025-10-02 12:58:20.866 2 INFO nova.network.neutron [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Port 4a77b29a-85b0-4bc7-b38c-8915de0b74d3 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  2 08:58:20 np0005465987 nova_compute[230713]: 2025-10-02 12:58:20.867 2 DEBUG nova.network.neutron [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updating instance_info_cache with network_info: [{"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:58:20 np0005465987 nova_compute[230713]: 2025-10-02 12:58:20.985 2 DEBUG oslo_concurrency.lockutils [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Releasing lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:58:21 np0005465987 nova_compute[230713]: 2025-10-02 12:58:21.242 2 DEBUG oslo_concurrency.lockutils [None req-0953c147-e696-4998-b72d-010d5ea63135 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "interface-2dff32de-ac4f-401a-9e6a-9446d4a0faaa-4a77b29a-85b0-4bc7-b38c-8915de0b74d3" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:58:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:21.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:58:21 np0005465987 ovn_controller[129172]: 2025-10-02T12:58:21Z|00760|binding|INFO|Releasing lport 834e2700-8bec-46ce-90f8-a8053806269a from this chassis (sb_readonly=0)
Oct  2 08:58:21 np0005465987 nova_compute[230713]: 2025-10-02 12:58:21.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:58:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:21.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:58:21 np0005465987 nova_compute[230713]: 2025-10-02 12:58:21.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.267 2 DEBUG nova.compute.manager [req-2a931250-ff89-4bea-9313-e70d0c8f60b8 req-17fba0b9-8fe3-4607-9c7d-892d1946d3f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-changed-76ca6742-e0ea-45a7-bf82-750d5106e5b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.267 2 DEBUG nova.compute.manager [req-2a931250-ff89-4bea-9313-e70d0c8f60b8 req-17fba0b9-8fe3-4607-9c7d-892d1946d3f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Refreshing instance network info cache due to event network-changed-76ca6742-e0ea-45a7-bf82-750d5106e5b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.267 2 DEBUG oslo_concurrency.lockutils [req-2a931250-ff89-4bea-9313-e70d0c8f60b8 req-17fba0b9-8fe3-4607-9c7d-892d1946d3f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.268 2 DEBUG oslo_concurrency.lockutils [req-2a931250-ff89-4bea-9313-e70d0c8f60b8 req-17fba0b9-8fe3-4607-9c7d-892d1946d3f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.268 2 DEBUG nova.network.neutron [req-2a931250-ff89-4bea-9313-e70d0c8f60b8 req-17fba0b9-8fe3-4607-9c7d-892d1946d3f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Refreshing network info cache for port 76ca6742-e0ea-45a7-bf82-750d5106e5b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.352 2 DEBUG oslo_concurrency.lockutils [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.353 2 DEBUG oslo_concurrency.lockutils [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.353 2 DEBUG oslo_concurrency.lockutils [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.353 2 DEBUG oslo_concurrency.lockutils [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.354 2 DEBUG oslo_concurrency.lockutils [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.355 2 INFO nova.compute.manager [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Terminating instance#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.355 2 DEBUG nova.compute.manager [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:58:22 np0005465987 kernel: tap76ca6742-e0 (unregistering): left promiscuous mode
Oct  2 08:58:22 np0005465987 NetworkManager[44910]: <info>  [1759409902.4184] device (tap76ca6742-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:58:22Z|00761|binding|INFO|Releasing lport 76ca6742-e0ea-45a7-bf82-750d5106e5b8 from this chassis (sb_readonly=0)
Oct  2 08:58:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:58:22Z|00762|binding|INFO|Setting lport 76ca6742-e0ea-45a7-bf82-750d5106e5b8 down in Southbound
Oct  2 08:58:22 np0005465987 ovn_controller[129172]: 2025-10-02T12:58:22Z|00763|binding|INFO|Removing iface tap76ca6742-e0 ovn-installed in OVS
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.439 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:61:8f 10.100.0.6'], port_security=['fa:16:3e:95:61:8f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2dff32de-ac4f-401a-9e6a-9446d4a0faaa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67d7f48d-b1d1-47d6-8eed-15e85a093f32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cdfdfec-43b4-428e-8545-edd01c9392fd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa8b8d0e-a3e0-4fe3-a765-c419014cae9f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=76ca6742-e0ea-45a7-bf82-750d5106e5b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.440 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 76ca6742-e0ea-45a7-bf82-750d5106e5b8 in datapath 67d7f48d-b1d1-47d6-8eed-15e85a093f32 unbound from our chassis#033[00m
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.442 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67d7f48d-b1d1-47d6-8eed-15e85a093f32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.443 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[15aecdfe-dbf7-4416-95e1-375b15333da7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.443 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32 namespace which is not needed anymore#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:22 np0005465987 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000ba.scope: Deactivated successfully.
Oct  2 08:58:22 np0005465987 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000ba.scope: Consumed 21.896s CPU time.
Oct  2 08:58:22 np0005465987 systemd-machined[188335]: Machine qemu-85-instance-000000ba terminated.
Oct  2 08:58:22 np0005465987 neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32[301929]: [NOTICE]   (301933) : haproxy version is 2.8.14-c23fe91
Oct  2 08:58:22 np0005465987 neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32[301929]: [NOTICE]   (301933) : path to executable is /usr/sbin/haproxy
Oct  2 08:58:22 np0005465987 neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32[301929]: [WARNING]  (301933) : Exiting Master process...
Oct  2 08:58:22 np0005465987 neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32[301929]: [WARNING]  (301933) : Exiting Master process...
Oct  2 08:58:22 np0005465987 neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32[301929]: [ALERT]    (301933) : Current worker (301935) exited with code 143 (Terminated)
Oct  2 08:58:22 np0005465987 neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32[301929]: [WARNING]  (301933) : All workers exited. Exiting... (0)
Oct  2 08:58:22 np0005465987 systemd[1]: libpod-c0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5.scope: Deactivated successfully.
Oct  2 08:58:22 np0005465987 podman[303863]: 2025-10-02 12:58:22.576471552 +0000 UTC m=+0.043691220 container died c0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.593 2 INFO nova.virt.libvirt.driver [-] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Instance destroyed successfully.#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.593 2 DEBUG nova.objects.instance [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'resources' on Instance uuid 2dff32de-ac4f-401a-9e6a-9446d4a0faaa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.615 2 DEBUG nova.virt.libvirt.vif [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1692291686',display_name='tempest-TestNetworkBasicOps-server-1692291686',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1692291686',id=186,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZWIx9o6kvPe+g4xv7jAf5RmegZo+XaIBjQcv6mFDdVE2Gj2eWqFYANF8qBhZ2trYTB0D6p6t6i6if67qDSCR68FKh99aOZV/40OkaYK0vaRDFLVdlWm6tj/SR2I1Vx6w==',key_name='tempest-TestNetworkBasicOps-590509202',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-uec0sg7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:33Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=2dff32de-ac4f-401a-9e6a-9446d4a0faaa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.616 2 DEBUG nova.network.os_vif_util [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.617 2 DEBUG nova.network.os_vif_util [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:95:61:8f,bridge_name='br-int',has_traffic_filtering=True,id=76ca6742-e0ea-45a7-bf82-750d5106e5b8,network=Network(67d7f48d-b1d1-47d6-8eed-15e85a093f32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ca6742-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.617 2 DEBUG os_vif [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:61:8f,bridge_name='br-int',has_traffic_filtering=True,id=76ca6742-e0ea-45a7-bf82-750d5106e5b8,network=Network(67d7f48d-b1d1-47d6-8eed-15e85a093f32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ca6742-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:58:22 np0005465987 systemd[1]: var-lib-containers-storage-overlay-a1a612d2b278e64e5aebe07682c069ac2862420b06611b845774ad2a62d5ee66-merged.mount: Deactivated successfully.
Oct  2 08:58:22 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5-userdata-shm.mount: Deactivated successfully.
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.620 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76ca6742-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.626 2 INFO os_vif [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:61:8f,bridge_name='br-int',has_traffic_filtering=True,id=76ca6742-e0ea-45a7-bf82-750d5106e5b8,network=Network(67d7f48d-b1d1-47d6-8eed-15e85a093f32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ca6742-e0')#033[00m
Oct  2 08:58:22 np0005465987 podman[303863]: 2025-10-02 12:58:22.627591104 +0000 UTC m=+0.094810762 container cleanup c0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 08:58:22 np0005465987 systemd[1]: libpod-conmon-c0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5.scope: Deactivated successfully.
Oct  2 08:58:22 np0005465987 podman[303914]: 2025-10-02 12:58:22.693559925 +0000 UTC m=+0.039117604 container remove c0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.699 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7beb3ab2-72e9-4279-89dc-8ec5f16eae21]: (4, ('Thu Oct  2 12:58:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32 (c0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5)\nc0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5\nThu Oct  2 12:58:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32 (c0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5)\nc0314f3ecf30c1baa79e6c8dbde537fdb8c567dd4ad1b3a327e9068daea1acc5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.701 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[73100fa2-e9a6-40bc-8259-1607ef4601b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.702 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67d7f48d-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:22 np0005465987 kernel: tap67d7f48d-b0: left promiscuous mode
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.720 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[aa74b968-99e6-47c1-8a1c-2c98028ecd43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.747 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2068f6-d7ec-4e7e-9e4e-005eff2c9b02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.748 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[53f624b5-8899-4f1f-b12f-479af40d61e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.764 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee060ad-27e9-4a26-8734-ffc47c72f358]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779919, 'reachable_time': 20652, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303937, 'error': None, 'target': 'ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:22 np0005465987 systemd[1]: run-netns-ovnmeta\x2d67d7f48d\x2db1d1\x2d47d6\x2d8eed\x2d15e85a093f32.mount: Deactivated successfully.
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.767 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67d7f48d-b1d1-47d6-8eed-15e85a093f32 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:58:22 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:22.767 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0f710b-7058-4114-878e-72991cb63e94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:22 np0005465987 nova_compute[230713]: 2025-10-02 12:58:22.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:23.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:23 np0005465987 nova_compute[230713]: 2025-10-02 12:58:23.670 2 DEBUG nova.compute.manager [req-d8208183-9f77-4760-9165-58aadc02141c req-8ee7655d-972c-4674-b6d7-c78732be3e6b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-vif-unplugged-76ca6742-e0ea-45a7-bf82-750d5106e5b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:23 np0005465987 nova_compute[230713]: 2025-10-02 12:58:23.671 2 DEBUG oslo_concurrency.lockutils [req-d8208183-9f77-4760-9165-58aadc02141c req-8ee7655d-972c-4674-b6d7-c78732be3e6b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:23 np0005465987 nova_compute[230713]: 2025-10-02 12:58:23.672 2 DEBUG oslo_concurrency.lockutils [req-d8208183-9f77-4760-9165-58aadc02141c req-8ee7655d-972c-4674-b6d7-c78732be3e6b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:23 np0005465987 nova_compute[230713]: 2025-10-02 12:58:23.672 2 DEBUG oslo_concurrency.lockutils [req-d8208183-9f77-4760-9165-58aadc02141c req-8ee7655d-972c-4674-b6d7-c78732be3e6b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:23 np0005465987 nova_compute[230713]: 2025-10-02 12:58:23.673 2 DEBUG nova.compute.manager [req-d8208183-9f77-4760-9165-58aadc02141c req-8ee7655d-972c-4674-b6d7-c78732be3e6b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] No waiting events found dispatching network-vif-unplugged-76ca6742-e0ea-45a7-bf82-750d5106e5b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:58:23 np0005465987 nova_compute[230713]: 2025-10-02 12:58:23.673 2 DEBUG nova.compute.manager [req-d8208183-9f77-4760-9165-58aadc02141c req-8ee7655d-972c-4674-b6d7-c78732be3e6b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-vif-unplugged-76ca6742-e0ea-45a7-bf82-750d5106e5b8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:58:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:23.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:24 np0005465987 nova_compute[230713]: 2025-10-02 12:58:24.682 2 DEBUG nova.network.neutron [req-2a931250-ff89-4bea-9313-e70d0c8f60b8 req-17fba0b9-8fe3-4607-9c7d-892d1946d3f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updated VIF entry in instance network info cache for port 76ca6742-e0ea-45a7-bf82-750d5106e5b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:58:24 np0005465987 nova_compute[230713]: 2025-10-02 12:58:24.683 2 DEBUG nova.network.neutron [req-2a931250-ff89-4bea-9313-e70d0c8f60b8 req-17fba0b9-8fe3-4607-9c7d-892d1946d3f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updating instance_info_cache with network_info: [{"id": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "address": "fa:16:3e:95:61:8f", "network": {"id": "67d7f48d-b1d1-47d6-8eed-15e85a093f32", "bridge": "br-int", "label": "tempest-network-smoke--1612171048", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ca6742-e0", "ovs_interfaceid": "76ca6742-e0ea-45a7-bf82-750d5106e5b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:58:24 np0005465987 nova_compute[230713]: 2025-10-02 12:58:24.703 2 DEBUG oslo_concurrency.lockutils [req-2a931250-ff89-4bea-9313-e70d0c8f60b8 req-17fba0b9-8fe3-4607-9c7d-892d1946d3f6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-2dff32de-ac4f-401a-9e6a-9446d4a0faaa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:58:24 np0005465987 nova_compute[230713]: 2025-10-02 12:58:24.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:24 np0005465987 nova_compute[230713]: 2025-10-02 12:58:24.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:58:24 np0005465987 nova_compute[230713]: 2025-10-02 12:58:24.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:58:24 np0005465987 nova_compute[230713]: 2025-10-02 12:58:24.988 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 08:58:24 np0005465987 nova_compute[230713]: 2025-10-02 12:58:24.989 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.103 2 INFO nova.virt.libvirt.driver [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Deleting instance files /var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa_del#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.104 2 INFO nova.virt.libvirt.driver [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Deletion of /var/lib/nova/instances/2dff32de-ac4f-401a-9e6a-9446d4a0faaa_del complete#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.151 2 INFO nova.compute.manager [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Took 2.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.152 2 DEBUG oslo.service.loopingcall [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.152 2 DEBUG nova.compute.manager [-] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.152 2 DEBUG nova.network.neutron [-] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:58:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:58:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:25.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:58:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.759 2 DEBUG nova.compute.manager [req-0c2106fd-83fd-415a-9617-7b1b75afe4e0 req-a660e4fe-cb16-40e7-8883-98ad9467ec06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-vif-plugged-76ca6742-e0ea-45a7-bf82-750d5106e5b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.760 2 DEBUG oslo_concurrency.lockutils [req-0c2106fd-83fd-415a-9617-7b1b75afe4e0 req-a660e4fe-cb16-40e7-8883-98ad9467ec06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.760 2 DEBUG oslo_concurrency.lockutils [req-0c2106fd-83fd-415a-9617-7b1b75afe4e0 req-a660e4fe-cb16-40e7-8883-98ad9467ec06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.760 2 DEBUG oslo_concurrency.lockutils [req-0c2106fd-83fd-415a-9617-7b1b75afe4e0 req-a660e4fe-cb16-40e7-8883-98ad9467ec06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.760 2 DEBUG nova.compute.manager [req-0c2106fd-83fd-415a-9617-7b1b75afe4e0 req-a660e4fe-cb16-40e7-8883-98ad9467ec06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] No waiting events found dispatching network-vif-plugged-76ca6742-e0ea-45a7-bf82-750d5106e5b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.761 2 WARNING nova.compute.manager [req-0c2106fd-83fd-415a-9617-7b1b75afe4e0 req-a660e4fe-cb16-40e7-8883-98ad9467ec06 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received unexpected event network-vif-plugged-76ca6742-e0ea-45a7-bf82-750d5106e5b8 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.798 2 DEBUG nova.network.neutron [-] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:58:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:58:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:25.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.820 2 INFO nova.compute.manager [-] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Took 0.67 seconds to deallocate network for instance.#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.863 2 DEBUG oslo_concurrency.lockutils [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.864 2 DEBUG oslo_concurrency.lockutils [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.876 2 DEBUG nova.compute.manager [req-0788dc87-579c-4d02-b8f8-57de4dbc9eac req-77823adf-d48b-4d89-8666-99c139b8367e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Received event network-vif-deleted-76ca6742-e0ea-45a7-bf82-750d5106e5b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.909 2 DEBUG oslo_concurrency.processutils [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:25 np0005465987 nova_compute[230713]: 2025-10-02 12:58:25.982 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:58:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3640934741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:58:26 np0005465987 nova_compute[230713]: 2025-10-02 12:58:26.367 2 DEBUG oslo_concurrency.processutils [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:26 np0005465987 nova_compute[230713]: 2025-10-02 12:58:26.375 2 DEBUG nova.compute.provider_tree [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:58:26 np0005465987 nova_compute[230713]: 2025-10-02 12:58:26.416 2 DEBUG nova.scheduler.client.report [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:58:26 np0005465987 nova_compute[230713]: 2025-10-02 12:58:26.485 2 DEBUG oslo_concurrency.lockutils [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:26 np0005465987 nova_compute[230713]: 2025-10-02 12:58:26.580 2 INFO nova.scheduler.client.report [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Deleted allocations for instance 2dff32de-ac4f-401a-9e6a-9446d4a0faaa#033[00m
Oct  2 08:58:26 np0005465987 nova_compute[230713]: 2025-10-02 12:58:26.951 2 DEBUG oslo_concurrency.lockutils [None req-c9d0d79e-6a0f-4c01-9147-dc01b14ec2a6 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "2dff32de-ac4f-401a-9e6a-9446d4a0faaa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:27 np0005465987 nova_compute[230713]: 2025-10-02 12:58:27.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:58:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:27.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:58:27 np0005465987 nova_compute[230713]: 2025-10-02 12:58:27.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:27.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:27.985 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:27.986 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:58:27.986 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:58:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:58:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:29.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:29 np0005465987 nova_compute[230713]: 2025-10-02 12:58:29.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:58:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:29.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:58:29 np0005465987 nova_compute[230713]: 2025-10-02 12:58:29.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:29 np0005465987 nova_compute[230713]: 2025-10-02 12:58:29.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:29 np0005465987 nova_compute[230713]: 2025-10-02 12:58:29.998 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:30 np0005465987 nova_compute[230713]: 2025-10-02 12:58:29.998 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:30 np0005465987 nova_compute[230713]: 2025-10-02 12:58:29.998 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:30 np0005465987 nova_compute[230713]: 2025-10-02 12:58:29.999 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:58:30 np0005465987 nova_compute[230713]: 2025-10-02 12:58:29.999 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:58:30 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/697969975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:58:30 np0005465987 nova_compute[230713]: 2025-10-02 12:58:30.447 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:30 np0005465987 podman[304035]: 2025-10-02 12:58:30.553723966 +0000 UTC m=+0.064911702 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct  2 08:58:30 np0005465987 nova_compute[230713]: 2025-10-02 12:58:30.639 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:58:30 np0005465987 nova_compute[230713]: 2025-10-02 12:58:30.641 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4312MB free_disk=20.94265365600586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:58:30 np0005465987 nova_compute[230713]: 2025-10-02 12:58:30.641 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:30 np0005465987 nova_compute[230713]: 2025-10-02 12:58:30.641 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:30 np0005465987 nova_compute[230713]: 2025-10-02 12:58:30.697 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:58:30 np0005465987 nova_compute[230713]: 2025-10-02 12:58:30.698 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:58:30 np0005465987 nova_compute[230713]: 2025-10-02 12:58:30.712 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:58:31 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/394349639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:58:31 np0005465987 nova_compute[230713]: 2025-10-02 12:58:31.124 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:31 np0005465987 nova_compute[230713]: 2025-10-02 12:58:31.130 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:58:31 np0005465987 nova_compute[230713]: 2025-10-02 12:58:31.154 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:58:31 np0005465987 nova_compute[230713]: 2025-10-02 12:58:31.194 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:58:31 np0005465987 nova_compute[230713]: 2025-10-02 12:58:31.194 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:31.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:31 np0005465987 nova_compute[230713]: 2025-10-02 12:58:31.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:31 np0005465987 nova_compute[230713]: 2025-10-02 12:58:31.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:31.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:31 np0005465987 podman[304077]: 2025-10-02 12:58:31.858378314 +0000 UTC m=+0.057021525 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct  2 08:58:31 np0005465987 podman[304075]: 2025-10-02 12:58:31.890388723 +0000 UTC m=+0.090638439 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:58:31 np0005465987 podman[304078]: 2025-10-02 12:58:31.890387993 +0000 UTC m=+0.085696463 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:58:32 np0005465987 nova_compute[230713]: 2025-10-02 12:58:32.195 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:32 np0005465987 nova_compute[230713]: 2025-10-02 12:58:32.196 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:32 np0005465987 nova_compute[230713]: 2025-10-02 12:58:32.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:32 np0005465987 nova_compute[230713]: 2025-10-02 12:58:32.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:58:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:33.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:58:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:33.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:34 np0005465987 nova_compute[230713]: 2025-10-02 12:58:34.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:35.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:35.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:37 np0005465987 nova_compute[230713]: 2025-10-02 12:58:37.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:37.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:37 np0005465987 nova_compute[230713]: 2025-10-02 12:58:37.591 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409902.59073, 2dff32de-ac4f-401a-9e6a-9446d4a0faaa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:58:37 np0005465987 nova_compute[230713]: 2025-10-02 12:58:37.591 2 INFO nova.compute.manager [-] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:58:37 np0005465987 nova_compute[230713]: 2025-10-02 12:58:37.620 2 DEBUG nova.compute.manager [None req-dc32bdab-7c1c-440f-8b45-c7636e0716e8 - - - - - -] [instance: 2dff32de-ac4f-401a-9e6a-9446d4a0faaa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:37 np0005465987 nova_compute[230713]: 2025-10-02 12:58:37.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:37.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:38 np0005465987 nova_compute[230713]: 2025-10-02 12:58:38.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:38 np0005465987 nova_compute[230713]: 2025-10-02 12:58:38.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:58:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:58:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:39.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:58:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:39.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:41.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:41.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:42 np0005465987 nova_compute[230713]: 2025-10-02 12:58:42.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:42 np0005465987 nova_compute[230713]: 2025-10-02 12:58:42.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:43.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:43.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:45.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:45.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:47 np0005465987 nova_compute[230713]: 2025-10-02 12:58:47.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:47.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:47 np0005465987 nova_compute[230713]: 2025-10-02 12:58:47.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:47.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:49.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:49.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:50 np0005465987 nova_compute[230713]: 2025-10-02 12:58:50.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:51.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:51.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:52 np0005465987 nova_compute[230713]: 2025-10-02 12:58:52.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:52 np0005465987 nova_compute[230713]: 2025-10-02 12:58:52.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:58:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:53.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:58:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:53.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:55.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:58:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:55.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:58:57 np0005465987 nova_compute[230713]: 2025-10-02 12:58:57.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:57.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:57 np0005465987 nova_compute[230713]: 2025-10-02 12:58:57.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:57.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:59.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:58:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:59.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:00 np0005465987 podman[304143]: 2025-10-02 12:59:00.841353863 +0000 UTC m=+0.059206385 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:59:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:59:01 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/246538470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:59:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:01.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e391 e391: 3 total, 3 up, 3 in
Oct  2 08:59:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:59:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:01.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:59:02 np0005465987 nova_compute[230713]: 2025-10-02 12:59:02.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:02 np0005465987 nova_compute[230713]: 2025-10-02 12:59:02.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:02 np0005465987 podman[304163]: 2025-10-02 12:59:02.841952976 +0000 UTC m=+0.055744880 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:59:02 np0005465987 podman[304164]: 2025-10-02 12:59:02.842873922 +0000 UTC m=+0.052818380 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:59:02 np0005465987 podman[304162]: 2025-10-02 12:59:02.865818792 +0000 UTC m=+0.084637453 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:59:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e392 e392: 3 total, 3 up, 3 in
Oct  2 08:59:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:03.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:03.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e393 e393: 3 total, 3 up, 3 in
Oct  2 08:59:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e394 e394: 3 total, 3 up, 3 in
Oct  2 08:59:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:05.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:05.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:07 np0005465987 nova_compute[230713]: 2025-10-02 12:59:07.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:07.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:07 np0005465987 nova_compute[230713]: 2025-10-02 12:59:07.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:07.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:09.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e395 e395: 3 total, 3 up, 3 in
Oct  2 08:59:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e396 e396: 3 total, 3 up, 3 in
Oct  2 08:59:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:09.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:11.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:11.729 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:59:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:11.730 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:59:11 np0005465987 nova_compute[230713]: 2025-10-02 12:59:11.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:11.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:12 np0005465987 nova_compute[230713]: 2025-10-02 12:59:12.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:12 np0005465987 nova_compute[230713]: 2025-10-02 12:59:12.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:12 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:12.732 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:13.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:13.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:15.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:15.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:16 np0005465987 ovn_controller[129172]: 2025-10-02T12:59:16Z|00764|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Oct  2 08:59:17 np0005465987 nova_compute[230713]: 2025-10-02 12:59:17.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:17.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:17 np0005465987 nova_compute[230713]: 2025-10-02 12:59:17.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:17.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:19.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:19.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:21 np0005465987 nova_compute[230713]: 2025-10-02 12:59:21.103 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Acquiring lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:21 np0005465987 nova_compute[230713]: 2025-10-02 12:59:21.104 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:21 np0005465987 nova_compute[230713]: 2025-10-02 12:59:21.130 2 DEBUG nova.compute.manager [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:59:21 np0005465987 nova_compute[230713]: 2025-10-02 12:59:21.236 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:21 np0005465987 nova_compute[230713]: 2025-10-02 12:59:21.237 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:21 np0005465987 nova_compute[230713]: 2025-10-02 12:59:21.243 2 DEBUG nova.virt.hardware [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:59:21 np0005465987 nova_compute[230713]: 2025-10-02 12:59:21.244 2 INFO nova.compute.claims [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:59:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:21.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:21 np0005465987 nova_compute[230713]: 2025-10-02 12:59:21.414 2 DEBUG oslo_concurrency.processutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:59:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3109007842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:59:21 np0005465987 nova_compute[230713]: 2025-10-02 12:59:21.901 2 DEBUG oslo_concurrency.processutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:21.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:21 np0005465987 nova_compute[230713]: 2025-10-02 12:59:21.909 2 DEBUG nova.compute.provider_tree [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.097 2 DEBUG nova.scheduler.client.report [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.149 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.150 2 DEBUG nova.compute.manager [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.215 2 DEBUG nova.compute.manager [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.216 2 DEBUG nova.network.neutron [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.238 2 INFO nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.272 2 DEBUG nova.compute.manager [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.347 2 INFO nova.virt.block_device [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Booting with volume bf6c6619-4739-4799-81d0-3fa07140ece6 at /dev/vda#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.532 2 DEBUG nova.policy [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '729669e8835d4d95b5aae25c140e1f06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55e97477dd13448d9dafe69ec8614ea6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.535 2 DEBUG os_brick.utils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.536 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.547 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.547 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[162b0382-c3c9-4aac-9a00-37d78728c788]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.548 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.555 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.555 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[19f145cd-c081-4800-bed8-5c76838074d3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.556 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.564 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.565 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[d542d270-a56d-48f0-a188-8453047a5b78]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.567 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[59ae7728-f8de-4aba-a47b-154a34a959b6]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.567 2 DEBUG oslo_concurrency.processutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.593 2 DEBUG oslo_concurrency.processutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.596 2 DEBUG os_brick.initiator.connectors.lightos [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.596 2 DEBUG os_brick.initiator.connectors.lightos [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.597 2 DEBUG os_brick.initiator.connectors.lightos [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.597 2 DEBUG os_brick.utils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.598 2 DEBUG nova.virt.block_device [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Updating existing volume attachment record: 6a4cd3c6-2522-4cf8-890c-a4cbc4b75cf5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:59:22 np0005465987 nova_compute[230713]: 2025-10-02 12:59:22.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:23.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:23 np0005465987 nova_compute[230713]: 2025-10-02 12:59:23.782 2 DEBUG nova.network.neutron [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Successfully created port: 1be9a48c-b01f-4d44-b98d-a0fc8e159efd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:59:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:23.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:23 np0005465987 nova_compute[230713]: 2025-10-02 12:59:23.959 2 DEBUG nova.compute.manager [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:59:23 np0005465987 nova_compute[230713]: 2025-10-02 12:59:23.960 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:59:23 np0005465987 nova_compute[230713]: 2025-10-02 12:59:23.960 2 INFO nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Creating image(s)#033[00m
Oct  2 08:59:23 np0005465987 nova_compute[230713]: 2025-10-02 12:59:23.961 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:59:23 np0005465987 nova_compute[230713]: 2025-10-02 12:59:23.961 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Ensure instance console log exists: /var/lib/nova/instances/af08b6ce-23c3-4ccf-846b-c8c68da5e4e9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:59:23 np0005465987 nova_compute[230713]: 2025-10-02 12:59:23.961 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:23 np0005465987 nova_compute[230713]: 2025-10-02 12:59:23.962 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:23 np0005465987 nova_compute[230713]: 2025-10-02 12:59:23.962 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:24 np0005465987 nova_compute[230713]: 2025-10-02 12:59:24.959 2 DEBUG nova.network.neutron [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Successfully updated port: 1be9a48c-b01f-4d44-b98d-a0fc8e159efd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:59:24 np0005465987 nova_compute[230713]: 2025-10-02 12:59:24.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:25.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:25.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:25 np0005465987 nova_compute[230713]: 2025-10-02 12:59:25.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:25 np0005465987 nova_compute[230713]: 2025-10-02 12:59:25.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:59:25 np0005465987 nova_compute[230713]: 2025-10-02 12:59:25.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:59:25 np0005465987 nova_compute[230713]: 2025-10-02 12:59:25.975 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Acquiring lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:59:25 np0005465987 nova_compute[230713]: 2025-10-02 12:59:25.976 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Acquired lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:59:25 np0005465987 nova_compute[230713]: 2025-10-02 12:59:25.976 2 DEBUG nova.network.neutron [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:59:26 np0005465987 nova_compute[230713]: 2025-10-02 12:59:26.358 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:59:26 np0005465987 nova_compute[230713]: 2025-10-02 12:59:26.358 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:59:26 np0005465987 nova_compute[230713]: 2025-10-02 12:59:26.359 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:26 np0005465987 nova_compute[230713]: 2025-10-02 12:59:26.359 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:59:26 np0005465987 nova_compute[230713]: 2025-10-02 12:59:26.558 2 DEBUG nova.compute.manager [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Received event network-changed-1be9a48c-b01f-4d44-b98d-a0fc8e159efd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:26 np0005465987 nova_compute[230713]: 2025-10-02 12:59:26.558 2 DEBUG nova.compute.manager [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Refreshing instance network info cache due to event network-changed-1be9a48c-b01f-4d44-b98d-a0fc8e159efd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:59:26 np0005465987 nova_compute[230713]: 2025-10-02 12:59:26.559 2 DEBUG oslo_concurrency.lockutils [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:59:26 np0005465987 nova_compute[230713]: 2025-10-02 12:59:26.624 2 DEBUG nova.network.neutron [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:59:27 np0005465987 nova_compute[230713]: 2025-10-02 12:59:27.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:27 np0005465987 nova_compute[230713]: 2025-10-02 12:59:27.378 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:27.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:27 np0005465987 nova_compute[230713]: 2025-10-02 12:59:27.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:27.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:27.986 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:27.987 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:27.987 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.299 2 DEBUG nova.network.neutron [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Updating instance_info_cache with network_info: [{"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.371 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Releasing lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.371 2 DEBUG nova.compute.manager [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Instance network_info: |[{"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.372 2 DEBUG oslo_concurrency.lockutils [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.372 2 DEBUG nova.network.neutron [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Refreshing network info cache for port 1be9a48c-b01f-4d44-b98d-a0fc8e159efd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.375 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Start _get_guest_xml network_info=[{"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '6a4cd3c6-2522-4cf8-890c-a4cbc4b75cf5', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bf6c6619-4739-4799-81d0-3fa07140ece6', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bf6c6619-4739-4799-81d0-3fa07140ece6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'af08b6ce-23c3-4ccf-846b-c8c68da5e4e9', 'attached_at': '', 'detached_at': '', 'volume_id': 'bf6c6619-4739-4799-81d0-3fa07140ece6', 'serial': 'bf6c6619-4739-4799-81d0-3fa07140ece6'}, 'guest_format': None, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.380 2 WARNING nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.385 2 DEBUG nova.virt.libvirt.host [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.385 2 DEBUG nova.virt.libvirt.host [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.389 2 DEBUG nova.virt.libvirt.host [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.389 2 DEBUG nova.virt.libvirt.host [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.390 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.391 2 DEBUG nova.virt.hardware [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.391 2 DEBUG nova.virt.hardware [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.391 2 DEBUG nova.virt.hardware [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.391 2 DEBUG nova.virt.hardware [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.392 2 DEBUG nova.virt.hardware [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.392 2 DEBUG nova.virt.hardware [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.392 2 DEBUG nova.virt.hardware [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.392 2 DEBUG nova.virt.hardware [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.392 2 DEBUG nova.virt.hardware [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.393 2 DEBUG nova.virt.hardware [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.393 2 DEBUG nova.virt.hardware [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.420 2 DEBUG nova.storage.rbd_utils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] rbd image af08b6ce-23c3-4ccf-846b-c8c68da5e4e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.423 2 DEBUG oslo_concurrency.processutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:59:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3167330298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.887 2 DEBUG oslo_concurrency.processutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.961 2 DEBUG nova.virt.libvirt.vif [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:59:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1029505625',display_name='tempest-TestVolumeBackupRestore-server-1029505625',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1029505625',id=194,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDl51NXHF/HxwJ4jMeoLXErEUoSGAeFB47uZ/t3bK+Ssp0Nn+DcjgOAl2m4exn3Yc7fFerv2a/iFdsxTBhF3/df1jP+dAerL8c78Ulh1ONSCW5N4b+6FVZfs4qD5KXmJqg==',key_name='tempest-TestVolumeBackupRestore-384018385',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55e97477dd13448d9dafe69ec8614ea6',ramdisk_id='',reservation_id='r-axbzyxue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-2015194200',owner_user_name='tempest-TestVolumeBackupRestore-2015194200-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:59:22Z,user_data=None,user_id='729669e8835d4d95b5aae25c140e1f06',uuid=af08b6ce-23c3-4ccf-846b-c8c68da5e4e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.961 2 DEBUG nova.network.os_vif_util [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Converting VIF {"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.962 2 DEBUG nova.network.os_vif_util [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:a7:79,bridge_name='br-int',has_traffic_filtering=True,id=1be9a48c-b01f-4d44-b98d-a0fc8e159efd,network=Network(12c7272c-36a0-40ff-9024-81c013bfef24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be9a48c-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:59:28 np0005465987 nova_compute[230713]: 2025-10-02 12:59:28.963 2 DEBUG nova.objects.instance [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lazy-loading 'pci_devices' on Instance uuid af08b6ce-23c3-4ccf-846b-c8c68da5e4e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.003 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  <uuid>af08b6ce-23c3-4ccf-846b-c8c68da5e4e9</uuid>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  <name>instance-000000c2</name>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestVolumeBackupRestore-server-1029505625</nova:name>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 12:59:28</nova:creationTime>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <nova:user uuid="729669e8835d4d95b5aae25c140e1f06">tempest-TestVolumeBackupRestore-2015194200-project-member</nova:user>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <nova:project uuid="55e97477dd13448d9dafe69ec8614ea6">tempest-TestVolumeBackupRestore-2015194200</nova:project>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <nova:port uuid="1be9a48c-b01f-4d44-b98d-a0fc8e159efd">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <system>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <entry name="serial">af08b6ce-23c3-4ccf-846b-c8c68da5e4e9</entry>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <entry name="uuid">af08b6ce-23c3-4ccf-846b-c8c68da5e4e9</entry>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    </system>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  <os>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  </os>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  <features>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  </features>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  </clock>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  <devices>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/af08b6ce-23c3-4ccf-846b-c8c68da5e4e9_disk.config">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="volumes/volume-bf6c6619-4739-4799-81d0-3fa07140ece6">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      </source>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      </auth>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <serial>bf6c6619-4739-4799-81d0-3fa07140ece6</serial>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    </disk>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:d2:a7:79"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <target dev="tap1be9a48c-b0"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    </interface>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/af08b6ce-23c3-4ccf-846b-c8c68da5e4e9/console.log" append="off"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    </serial>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <video>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    </video>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    </rng>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 08:59:29 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 08:59:29 np0005465987 nova_compute[230713]:  </devices>
Oct  2 08:59:29 np0005465987 nova_compute[230713]: </domain>
Oct  2 08:59:29 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.004 2 DEBUG nova.compute.manager [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Preparing to wait for external event network-vif-plugged-1be9a48c-b01f-4d44-b98d-a0fc8e159efd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.004 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Acquiring lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.005 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.005 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.005 2 DEBUG nova.virt.libvirt.vif [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:59:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1029505625',display_name='tempest-TestVolumeBackupRestore-server-1029505625',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1029505625',id=194,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDl51NXHF/HxwJ4jMeoLXErEUoSGAeFB47uZ/t3bK+Ssp0Nn+DcjgOAl2m4exn3Yc7fFerv2a/iFdsxTBhF3/df1jP+dAerL8c78Ulh1ONSCW5N4b+6FVZfs4qD5KXmJqg==',key_name='tempest-TestVolumeBackupRestore-384018385',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55e97477dd13448d9dafe69ec8614ea6',ramdisk_id='',reservation_id='r-axbzyxue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-2015194200',owner_user_name='tempest-TestVolumeBackupRestore-2015194200-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:59:22Z,user_data=None,user_id='729669e8835d4d95b5aae25c140e1f06',uuid=af08b6ce-23c3-4ccf-846b-c8c68da5e4e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.006 2 DEBUG nova.network.os_vif_util [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Converting VIF {"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.006 2 DEBUG nova.network.os_vif_util [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:a7:79,bridge_name='br-int',has_traffic_filtering=True,id=1be9a48c-b01f-4d44-b98d-a0fc8e159efd,network=Network(12c7272c-36a0-40ff-9024-81c013bfef24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be9a48c-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.006 2 DEBUG os_vif [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:a7:79,bridge_name='br-int',has_traffic_filtering=True,id=1be9a48c-b01f-4d44-b98d-a0fc8e159efd,network=Network(12c7272c-36a0-40ff-9024-81c013bfef24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be9a48c-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.007 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.008 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.011 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1be9a48c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.011 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1be9a48c-b0, col_values=(('external_ids', {'iface-id': '1be9a48c-b01f-4d44-b98d-a0fc8e159efd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:a7:79', 'vm-uuid': 'af08b6ce-23c3-4ccf-846b-c8c68da5e4e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:29 np0005465987 NetworkManager[44910]: <info>  [1759409969.0138] manager: (tap1be9a48c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.021 2 INFO os_vif [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:a7:79,bridge_name='br-int',has_traffic_filtering=True,id=1be9a48c-b01f-4d44-b98d-a0fc8e159efd,network=Network(12c7272c-36a0-40ff-9024-81c013bfef24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be9a48c-b0')#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.101 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.102 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.102 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] No VIF found with MAC fa:16:3e:d2:a7:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.102 2 INFO nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Using config drive#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.124 2 DEBUG nova.storage.rbd_utils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] rbd image af08b6ce-23c3-4ccf-846b-c8c68da5e4e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:59:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:29.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.432 2 INFO nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Creating config drive at /var/lib/nova/instances/af08b6ce-23c3-4ccf-846b-c8c68da5e4e9/disk.config#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.439 2 DEBUG oslo_concurrency.processutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af08b6ce-23c3-4ccf-846b-c8c68da5e4e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2bg92qmi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:59:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:59:29 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.575 2 DEBUG oslo_concurrency.processutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af08b6ce-23c3-4ccf-846b-c8c68da5e4e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2bg92qmi" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.606 2 DEBUG nova.storage.rbd_utils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] rbd image af08b6ce-23c3-4ccf-846b-c8c68da5e4e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.610 2 DEBUG oslo_concurrency.processutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af08b6ce-23c3-4ccf-846b-c8c68da5e4e9/disk.config af08b6ce-23c3-4ccf-846b-c8c68da5e4e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.817 2 DEBUG oslo_concurrency.processutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af08b6ce-23c3-4ccf-846b-c8c68da5e4e9/disk.config af08b6ce-23c3-4ccf-846b-c8c68da5e4e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.818 2 INFO nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Deleting local config drive /var/lib/nova/instances/af08b6ce-23c3-4ccf-846b-c8c68da5e4e9/disk.config because it was imported into RBD.#033[00m
Oct  2 08:59:29 np0005465987 kernel: tap1be9a48c-b0: entered promiscuous mode
Oct  2 08:59:29 np0005465987 NetworkManager[44910]: <info>  [1759409969.8843] manager: (tap1be9a48c-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/348)
Oct  2 08:59:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:59:29Z|00765|binding|INFO|Claiming lport 1be9a48c-b01f-4d44-b98d-a0fc8e159efd for this chassis.
Oct  2 08:59:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:59:29Z|00766|binding|INFO|1be9a48c-b01f-4d44-b98d-a0fc8e159efd: Claiming fa:16:3e:d2:a7:79 10.100.0.4
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:29 np0005465987 systemd-udevd[304498]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:59:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:29.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:29 np0005465987 systemd-machined[188335]: New machine qemu-87-instance-000000c2.
Oct  2 08:59:29 np0005465987 NetworkManager[44910]: <info>  [1759409969.9341] device (tap1be9a48c-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:59:29 np0005465987 NetworkManager[44910]: <info>  [1759409969.9351] device (tap1be9a48c-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:59:29 np0005465987 systemd[1]: Started Virtual Machine qemu-87-instance-000000c2.
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:59:29Z|00767|binding|INFO|Setting lport 1be9a48c-b01f-4d44-b98d-a0fc8e159efd ovn-installed in OVS
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:29 np0005465987 nova_compute[230713]: 2025-10-02 12:59:29.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:29 np0005465987 ovn_controller[129172]: 2025-10-02T12:59:29Z|00768|binding|INFO|Setting lport 1be9a48c-b01f-4d44-b98d-a0fc8e159efd up in Southbound
Oct  2 08:59:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:29.985 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:a7:79 10.100.0.4'], port_security=['fa:16:3e:d2:a7:79 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'af08b6ce-23c3-4ccf-846b-c8c68da5e4e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12c7272c-36a0-40ff-9024-81c013bfef24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55e97477dd13448d9dafe69ec8614ea6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ed3589d-e92e-4396-972d-dd9089866bf2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17571240-5d78-4092-8ad7-ae611dfa7cf0, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=1be9a48c-b01f-4d44-b98d-a0fc8e159efd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:59:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:29.986 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 1be9a48c-b01f-4d44-b98d-a0fc8e159efd in datapath 12c7272c-36a0-40ff-9024-81c013bfef24 bound to our chassis#033[00m
Oct  2 08:59:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:29.987 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12c7272c-36a0-40ff-9024-81c013bfef24#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:29.999 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[68edb332-15e4-4783-adb8-b05bc68a9484]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:29.999 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap12c7272c-31 in ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.002 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap12c7272c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.002 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5fbca4-fdb6-4a90-a9c7-b823dd5035d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.002 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4f14ae62-2542-48c9-9d1c-73cb260a51e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.007 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.008 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.008 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.008 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.009 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.013 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[8969efd6-8d73-4861-a369-7dfb60c1f8f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.035 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6905e9c9-206e-4b73-a92a-651ed7ba9897]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.067 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[1c09583c-5b75-4271-b203-ba97834c852f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 NetworkManager[44910]: <info>  [1759409970.0789] manager: (tap12c7272c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/349)
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.078 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2760f9b7-1692-4e0a-8b30-be35ce7c7cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.115 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1f37c1-cb31-4279-802e-bc6c10098f43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.119 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[2590f59c-469b-4d9b-93eb-87e71e949453]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 NetworkManager[44910]: <info>  [1759409970.1595] device (tap12c7272c-30): carrier: link connected
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.167 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[14509b0a-46fe-40c8-a4fb-df98b9088949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.188 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7790d376-d829-42fe-9e74-2130951e0ab4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12c7272c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3d:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 226], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803873, 'reachable_time': 15093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304554, 'error': None, 'target': 'ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.209 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5fd2c7-7637-4828-97b7-96b2685941d9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:3d46'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803873, 'tstamp': 803873}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304555, 'error': None, 'target': 'ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.229 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7da309a3-2632-4fc9-8160-8d03146dfbe7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12c7272c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3d:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 226], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803873, 'reachable_time': 15093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304556, 'error': None, 'target': 'ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.270 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[108bab3d-1a5f-480e-9f55-d36c1414caaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.346 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[12b9ef46-88f2-42a0-87b5-b2a248fd0b94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.349 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12c7272c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.349 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.349 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12c7272c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:30 np0005465987 kernel: tap12c7272c-30: entered promiscuous mode
Oct  2 08:59:30 np0005465987 NetworkManager[44910]: <info>  [1759409970.3531] manager: (tap12c7272c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.357 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12c7272c-30, col_values=(('external_ids', {'iface-id': 'cbb475a5-e771-4ae9-876e-47a528425aa6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:30 np0005465987 ovn_controller[129172]: 2025-10-02T12:59:30Z|00769|binding|INFO|Releasing lport cbb475a5-e771-4ae9-876e-47a528425aa6 from this chassis (sb_readonly=0)
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.362 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/12c7272c-36a0-40ff-9024-81c013bfef24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/12c7272c-36a0-40ff-9024-81c013bfef24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.363 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[294ccca7-908f-49c3-8baf-6415b1733da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.364 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-12c7272c-36a0-40ff-9024-81c013bfef24
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/12c7272c-36a0-40ff-9024-81c013bfef24.pid.haproxy
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 12c7272c-36a0-40ff-9024-81c013bfef24
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:59:30 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:30.365 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24', 'env', 'PROCESS_TAG=haproxy-12c7272c-36a0-40ff-9024-81c013bfef24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/12c7272c-36a0-40ff-9024-81c013bfef24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.438 2 DEBUG nova.network.neutron [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Updated VIF entry in instance network info cache for port 1be9a48c-b01f-4d44-b98d-a0fc8e159efd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.439 2 DEBUG nova.network.neutron [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Updating instance_info_cache with network_info: [{"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:59:30 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1514215622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.459 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.474 2 DEBUG oslo_concurrency.lockutils [req-fb661569-beee-4c1a-a8ad-e6cdfe56b83e req-c20543d1-b588-43a8-b2b0-b921a6c610a9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.592 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.593 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.659 2 DEBUG nova.compute.manager [req-0028ee4c-3036-43fc-9be0-2ff8d39dadd0 req-d3fe7401-3f04-4bfd-901e-4b259e3dbaa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Received event network-vif-plugged-1be9a48c-b01f-4d44-b98d-a0fc8e159efd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.660 2 DEBUG oslo_concurrency.lockutils [req-0028ee4c-3036-43fc-9be0-2ff8d39dadd0 req-d3fe7401-3f04-4bfd-901e-4b259e3dbaa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.661 2 DEBUG oslo_concurrency.lockutils [req-0028ee4c-3036-43fc-9be0-2ff8d39dadd0 req-d3fe7401-3f04-4bfd-901e-4b259e3dbaa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.661 2 DEBUG oslo_concurrency.lockutils [req-0028ee4c-3036-43fc-9be0-2ff8d39dadd0 req-d3fe7401-3f04-4bfd-901e-4b259e3dbaa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.661 2 DEBUG nova.compute.manager [req-0028ee4c-3036-43fc-9be0-2ff8d39dadd0 req-d3fe7401-3f04-4bfd-901e-4b259e3dbaa3 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Processing event network-vif-plugged-1be9a48c-b01f-4d44-b98d-a0fc8e159efd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:59:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.779 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.780 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4291MB free_disk=20.988277435302734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.781 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.781 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:30 np0005465987 podman[304590]: 2025-10-02 12:59:30.78560649 +0000 UTC m=+0.082980067 container create a75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:59:30 np0005465987 systemd[1]: Started libpod-conmon-a75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660.scope.
Oct  2 08:59:30 np0005465987 podman[304590]: 2025-10-02 12:59:30.739398032 +0000 UTC m=+0.036771629 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:59:30 np0005465987 systemd[1]: Started libcrun container.
Oct  2 08:59:30 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffa5ad1371914147ac0d210535d153b38328fed24b2a9f45a90db239d2de5608/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:59:30 np0005465987 podman[304590]: 2025-10-02 12:59:30.870896651 +0000 UTC m=+0.168270228 container init a75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:59:30 np0005465987 podman[304590]: 2025-10-02 12:59:30.87746224 +0000 UTC m=+0.174835817 container start a75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.893 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance af08b6ce-23c3-4ccf-846b-c8c68da5e4e9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.893 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.894 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:59:30 np0005465987 neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24[304605]: [NOTICE]   (304619) : New worker (304621) forked
Oct  2 08:59:30 np0005465987 neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24[304605]: [NOTICE]   (304619) : Loading success.
Oct  2 08:59:30 np0005465987 podman[304608]: 2025-10-02 12:59:30.92556661 +0000 UTC m=+0.056712897 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 08:59:30 np0005465987 nova_compute[230713]: 2025-10-02 12:59:30.938 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:59:31 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2742360118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.353 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.359 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:59:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:31.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.460 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.500 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.500 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.853 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409971.8527277, af08b6ce-23c3-4ccf-846b-c8c68da5e4e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.853 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] VM Started (Lifecycle Event)#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.855 2 DEBUG nova.compute.manager [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.858 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.861 2 INFO nova.virt.libvirt.driver [-] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Instance spawned successfully.#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.861 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.894 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.900 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.902 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.902 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.903 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.903 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.903 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.904 2 DEBUG nova.virt.libvirt.driver [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:59:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:31.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.950 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.951 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409971.852832, af08b6ce-23c3-4ccf-846b-c8c68da5e4e9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.951 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.995 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.998 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759409971.8591464, af08b6ce-23c3-4ccf-846b-c8c68da5e4e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:59:31 np0005465987 nova_compute[230713]: 2025-10-02 12:59:31.999 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:59:32 np0005465987 nova_compute[230713]: 2025-10-02 12:59:32.011 2 INFO nova.compute.manager [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Took 8.05 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:59:32 np0005465987 nova_compute[230713]: 2025-10-02 12:59:32.011 2 DEBUG nova.compute.manager [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:32 np0005465987 nova_compute[230713]: 2025-10-02 12:59:32.071 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:32 np0005465987 nova_compute[230713]: 2025-10-02 12:59:32.074 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:59:32 np0005465987 nova_compute[230713]: 2025-10-02 12:59:32.116 2 INFO nova.compute.manager [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Took 10.91 seconds to build instance.#033[00m
Oct  2 08:59:32 np0005465987 nova_compute[230713]: 2025-10-02 12:59:32.169 2 DEBUG oslo_concurrency.lockutils [None req-7eda801f-4dfa-4496-9583-2446766578d5 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:32 np0005465987 nova_compute[230713]: 2025-10-02 12:59:32.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:32 np0005465987 nova_compute[230713]: 2025-10-02 12:59:32.833 2 DEBUG nova.compute.manager [req-c5ff7412-75bc-4039-89a7-3aa624beb697 req-aa3cf466-328a-426e-98bb-ce7c2f3e828e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Received event network-vif-plugged-1be9a48c-b01f-4d44-b98d-a0fc8e159efd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:32 np0005465987 nova_compute[230713]: 2025-10-02 12:59:32.833 2 DEBUG oslo_concurrency.lockutils [req-c5ff7412-75bc-4039-89a7-3aa624beb697 req-aa3cf466-328a-426e-98bb-ce7c2f3e828e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:32 np0005465987 nova_compute[230713]: 2025-10-02 12:59:32.834 2 DEBUG oslo_concurrency.lockutils [req-c5ff7412-75bc-4039-89a7-3aa624beb697 req-aa3cf466-328a-426e-98bb-ce7c2f3e828e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:32 np0005465987 nova_compute[230713]: 2025-10-02 12:59:32.835 2 DEBUG oslo_concurrency.lockutils [req-c5ff7412-75bc-4039-89a7-3aa624beb697 req-aa3cf466-328a-426e-98bb-ce7c2f3e828e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:32 np0005465987 nova_compute[230713]: 2025-10-02 12:59:32.835 2 DEBUG nova.compute.manager [req-c5ff7412-75bc-4039-89a7-3aa624beb697 req-aa3cf466-328a-426e-98bb-ce7c2f3e828e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] No waiting events found dispatching network-vif-plugged-1be9a48c-b01f-4d44-b98d-a0fc8e159efd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:32 np0005465987 nova_compute[230713]: 2025-10-02 12:59:32.836 2 WARNING nova.compute.manager [req-c5ff7412-75bc-4039-89a7-3aa624beb697 req-aa3cf466-328a-426e-98bb-ce7c2f3e828e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Received unexpected event network-vif-plugged-1be9a48c-b01f-4d44-b98d-a0fc8e159efd for instance with vm_state active and task_state None.#033[00m
Oct  2 08:59:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:33.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:33 np0005465987 podman[304702]: 2025-10-02 12:59:33.853480158 +0000 UTC m=+0.071156953 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:59:33 np0005465987 podman[304703]: 2025-10-02 12:59:33.879702998 +0000 UTC m=+0.093615490 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible)
Oct  2 08:59:33 np0005465987 podman[304701]: 2025-10-02 12:59:33.890271407 +0000 UTC m=+0.112626841 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:59:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:33.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:34 np0005465987 nova_compute[230713]: 2025-10-02 12:59:34.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:35.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:35.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:36 np0005465987 nova_compute[230713]: 2025-10-02 12:59:36.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:36 np0005465987 NetworkManager[44910]: <info>  [1759409976.9023] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Oct  2 08:59:36 np0005465987 NetworkManager[44910]: <info>  [1759409976.9036] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Oct  2 08:59:36 np0005465987 nova_compute[230713]: 2025-10-02 12:59:36.995 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:37 np0005465987 nova_compute[230713]: 2025-10-02 12:59:37.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:37 np0005465987 ovn_controller[129172]: 2025-10-02T12:59:37Z|00770|binding|INFO|Releasing lport cbb475a5-e771-4ae9-876e-47a528425aa6 from this chassis (sb_readonly=0)
Oct  2 08:59:37 np0005465987 nova_compute[230713]: 2025-10-02 12:59:37.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:37 np0005465987 nova_compute[230713]: 2025-10-02 12:59:37.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:59:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:37.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:59:37 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:59:37 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 08:59:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:37.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:39 np0005465987 nova_compute[230713]: 2025-10-02 12:59:39.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:39.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:39 np0005465987 nova_compute[230713]: 2025-10-02 12:59:39.661 2 DEBUG nova.compute.manager [req-6ab0f046-82f5-4bcc-94d6-eef4f6abe264 req-63b06231-4047-4abe-a03f-e84234104265 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Received event network-changed-1be9a48c-b01f-4d44-b98d-a0fc8e159efd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:39 np0005465987 nova_compute[230713]: 2025-10-02 12:59:39.662 2 DEBUG nova.compute.manager [req-6ab0f046-82f5-4bcc-94d6-eef4f6abe264 req-63b06231-4047-4abe-a03f-e84234104265 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Refreshing instance network info cache due to event network-changed-1be9a48c-b01f-4d44-b98d-a0fc8e159efd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:59:39 np0005465987 nova_compute[230713]: 2025-10-02 12:59:39.662 2 DEBUG oslo_concurrency.lockutils [req-6ab0f046-82f5-4bcc-94d6-eef4f6abe264 req-63b06231-4047-4abe-a03f-e84234104265 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:59:39 np0005465987 nova_compute[230713]: 2025-10-02 12:59:39.663 2 DEBUG oslo_concurrency.lockutils [req-6ab0f046-82f5-4bcc-94d6-eef4f6abe264 req-63b06231-4047-4abe-a03f-e84234104265 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:59:39 np0005465987 nova_compute[230713]: 2025-10-02 12:59:39.663 2 DEBUG nova.network.neutron [req-6ab0f046-82f5-4bcc-94d6-eef4f6abe264 req-63b06231-4047-4abe-a03f-e84234104265 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Refreshing network info cache for port 1be9a48c-b01f-4d44-b98d-a0fc8e159efd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:59:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:39.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:39 np0005465987 nova_compute[230713]: 2025-10-02 12:59:39.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:39 np0005465987 nova_compute[230713]: 2025-10-02 12:59:39.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:59:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:41 np0005465987 nova_compute[230713]: 2025-10-02 12:59:41.011 2 DEBUG nova.compute.manager [req-818249de-8049-440a-98fb-240485b5006e req-d6216118-55d3-4e19-9b6b-296350a36e1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Received event network-changed-1be9a48c-b01f-4d44-b98d-a0fc8e159efd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:41 np0005465987 nova_compute[230713]: 2025-10-02 12:59:41.011 2 DEBUG nova.compute.manager [req-818249de-8049-440a-98fb-240485b5006e req-d6216118-55d3-4e19-9b6b-296350a36e1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Refreshing instance network info cache due to event network-changed-1be9a48c-b01f-4d44-b98d-a0fc8e159efd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:59:41 np0005465987 nova_compute[230713]: 2025-10-02 12:59:41.011 2 DEBUG oslo_concurrency.lockutils [req-818249de-8049-440a-98fb-240485b5006e req-d6216118-55d3-4e19-9b6b-296350a36e1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:59:41 np0005465987 nova_compute[230713]: 2025-10-02 12:59:41.158 2 DEBUG nova.network.neutron [req-6ab0f046-82f5-4bcc-94d6-eef4f6abe264 req-63b06231-4047-4abe-a03f-e84234104265 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Updated VIF entry in instance network info cache for port 1be9a48c-b01f-4d44-b98d-a0fc8e159efd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:59:41 np0005465987 nova_compute[230713]: 2025-10-02 12:59:41.159 2 DEBUG nova.network.neutron [req-6ab0f046-82f5-4bcc-94d6-eef4f6abe264 req-63b06231-4047-4abe-a03f-e84234104265 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Updating instance_info_cache with network_info: [{"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:41 np0005465987 nova_compute[230713]: 2025-10-02 12:59:41.180 2 DEBUG oslo_concurrency.lockutils [req-6ab0f046-82f5-4bcc-94d6-eef4f6abe264 req-63b06231-4047-4abe-a03f-e84234104265 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:59:41 np0005465987 nova_compute[230713]: 2025-10-02 12:59:41.181 2 DEBUG oslo_concurrency.lockutils [req-818249de-8049-440a-98fb-240485b5006e req-d6216118-55d3-4e19-9b6b-296350a36e1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:59:41 np0005465987 nova_compute[230713]: 2025-10-02 12:59:41.181 2 DEBUG nova.network.neutron [req-818249de-8049-440a-98fb-240485b5006e req-d6216118-55d3-4e19-9b6b-296350a36e1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Refreshing network info cache for port 1be9a48c-b01f-4d44-b98d-a0fc8e159efd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:59:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:41.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 08:59:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:41.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 08:59:42 np0005465987 nova_compute[230713]: 2025-10-02 12:59:42.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:43 np0005465987 nova_compute[230713]: 2025-10-02 12:59:43.097 2 DEBUG nova.compute.manager [req-c94720b8-5478-43ad-9084-ee8e85c70460 req-583b3261-ba0d-43bb-9d42-9950e4eaa3b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Received event network-changed-1be9a48c-b01f-4d44-b98d-a0fc8e159efd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:43 np0005465987 nova_compute[230713]: 2025-10-02 12:59:43.098 2 DEBUG nova.compute.manager [req-c94720b8-5478-43ad-9084-ee8e85c70460 req-583b3261-ba0d-43bb-9d42-9950e4eaa3b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Refreshing instance network info cache due to event network-changed-1be9a48c-b01f-4d44-b98d-a0fc8e159efd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:59:43 np0005465987 nova_compute[230713]: 2025-10-02 12:59:43.098 2 DEBUG oslo_concurrency.lockutils [req-c94720b8-5478-43ad-9084-ee8e85c70460 req-583b3261-ba0d-43bb-9d42-9950e4eaa3b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:59:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:43.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:43.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:44 np0005465987 nova_compute[230713]: 2025-10-02 12:59:44.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:45.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:45 np0005465987 nova_compute[230713]: 2025-10-02 12:59:45.434 2 DEBUG nova.network.neutron [req-818249de-8049-440a-98fb-240485b5006e req-d6216118-55d3-4e19-9b6b-296350a36e1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Updated VIF entry in instance network info cache for port 1be9a48c-b01f-4d44-b98d-a0fc8e159efd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:59:45 np0005465987 nova_compute[230713]: 2025-10-02 12:59:45.435 2 DEBUG nova.network.neutron [req-818249de-8049-440a-98fb-240485b5006e req-d6216118-55d3-4e19-9b6b-296350a36e1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Updating instance_info_cache with network_info: [{"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:45 np0005465987 nova_compute[230713]: 2025-10-02 12:59:45.454 2 DEBUG oslo_concurrency.lockutils [req-818249de-8049-440a-98fb-240485b5006e req-d6216118-55d3-4e19-9b6b-296350a36e1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:59:45 np0005465987 nova_compute[230713]: 2025-10-02 12:59:45.455 2 DEBUG oslo_concurrency.lockutils [req-c94720b8-5478-43ad-9084-ee8e85c70460 req-583b3261-ba0d-43bb-9d42-9950e4eaa3b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:59:45 np0005465987 nova_compute[230713]: 2025-10-02 12:59:45.455 2 DEBUG nova.network.neutron [req-c94720b8-5478-43ad-9084-ee8e85c70460 req-583b3261-ba0d-43bb-9d42-9950e4eaa3b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Refreshing network info cache for port 1be9a48c-b01f-4d44-b98d-a0fc8e159efd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:59:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:59:45Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:a7:79 10.100.0.4
Oct  2 08:59:45 np0005465987 ovn_controller[129172]: 2025-10-02T12:59:45Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:a7:79 10.100.0.4
Oct  2 08:59:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:45.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:47 np0005465987 nova_compute[230713]: 2025-10-02 12:59:47.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:47.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:47 np0005465987 nova_compute[230713]: 2025-10-02 12:59:47.524 2 DEBUG nova.network.neutron [req-c94720b8-5478-43ad-9084-ee8e85c70460 req-583b3261-ba0d-43bb-9d42-9950e4eaa3b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Updated VIF entry in instance network info cache for port 1be9a48c-b01f-4d44-b98d-a0fc8e159efd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:59:47 np0005465987 nova_compute[230713]: 2025-10-02 12:59:47.524 2 DEBUG nova.network.neutron [req-c94720b8-5478-43ad-9084-ee8e85c70460 req-583b3261-ba0d-43bb-9d42-9950e4eaa3b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Updating instance_info_cache with network_info: [{"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:47 np0005465987 nova_compute[230713]: 2025-10-02 12:59:47.562 2 DEBUG oslo_concurrency.lockutils [req-c94720b8-5478-43ad-9084-ee8e85c70460 req-583b3261-ba0d-43bb-9d42-9950e4eaa3b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:59:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:47.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:49 np0005465987 nova_compute[230713]: 2025-10-02 12:59:49.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:49.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:49.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:51.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:51.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:51 np0005465987 nova_compute[230713]: 2025-10-02 12:59:51.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:51 np0005465987 nova_compute[230713]: 2025-10-02 12:59:51.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:59:51 np0005465987 nova_compute[230713]: 2025-10-02 12:59:51.987 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.148 2 DEBUG nova.compute.manager [req-da67bda1-9ad6-4333-8006-4a0d8dbe8985 req-16bf88b1-72b0-43cd-bee8-2df3301844af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Received event network-changed-1be9a48c-b01f-4d44-b98d-a0fc8e159efd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.149 2 DEBUG nova.compute.manager [req-da67bda1-9ad6-4333-8006-4a0d8dbe8985 req-16bf88b1-72b0-43cd-bee8-2df3301844af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Refreshing instance network info cache due to event network-changed-1be9a48c-b01f-4d44-b98d-a0fc8e159efd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.149 2 DEBUG oslo_concurrency.lockutils [req-da67bda1-9ad6-4333-8006-4a0d8dbe8985 req-16bf88b1-72b0-43cd-bee8-2df3301844af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.150 2 DEBUG oslo_concurrency.lockutils [req-da67bda1-9ad6-4333-8006-4a0d8dbe8985 req-16bf88b1-72b0-43cd-bee8-2df3301844af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.150 2 DEBUG nova.network.neutron [req-da67bda1-9ad6-4333-8006-4a0d8dbe8985 req-16bf88b1-72b0-43cd-bee8-2df3301844af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Refreshing network info cache for port 1be9a48c-b01f-4d44-b98d-a0fc8e159efd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.207 2 DEBUG oslo_concurrency.lockutils [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Acquiring lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.208 2 DEBUG oslo_concurrency.lockutils [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.208 2 DEBUG oslo_concurrency.lockutils [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Acquiring lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.208 2 DEBUG oslo_concurrency.lockutils [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.209 2 DEBUG oslo_concurrency.lockutils [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.209 2 INFO nova.compute.manager [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Terminating instance#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.210 2 DEBUG nova.compute.manager [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:59:52 np0005465987 kernel: tap1be9a48c-b0 (unregistering): left promiscuous mode
Oct  2 08:59:52 np0005465987 NetworkManager[44910]: <info>  [1759409992.2607] device (tap1be9a48c-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:59:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:59:52Z|00771|binding|INFO|Releasing lport 1be9a48c-b01f-4d44-b98d-a0fc8e159efd from this chassis (sb_readonly=0)
Oct  2 08:59:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:59:52Z|00772|binding|INFO|Setting lport 1be9a48c-b01f-4d44-b98d-a0fc8e159efd down in Southbound
Oct  2 08:59:52 np0005465987 ovn_controller[129172]: 2025-10-02T12:59:52Z|00773|binding|INFO|Removing iface tap1be9a48c-b0 ovn-installed in OVS
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.283 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:a7:79 10.100.0.4'], port_security=['fa:16:3e:d2:a7:79 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'af08b6ce-23c3-4ccf-846b-c8c68da5e4e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12c7272c-36a0-40ff-9024-81c013bfef24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55e97477dd13448d9dafe69ec8614ea6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ed3589d-e92e-4396-972d-dd9089866bf2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17571240-5d78-4092-8ad7-ae611dfa7cf0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=1be9a48c-b01f-4d44-b98d-a0fc8e159efd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.286 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 1be9a48c-b01f-4d44-b98d-a0fc8e159efd in datapath 12c7272c-36a0-40ff-9024-81c013bfef24 unbound from our chassis#033[00m
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.289 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12c7272c-36a0-40ff-9024-81c013bfef24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.290 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7dca5f-f13f-4875-b916-f49dec8d8bde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.292 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24 namespace which is not needed anymore#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:52 np0005465987 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000c2.scope: Deactivated successfully.
Oct  2 08:59:52 np0005465987 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000c2.scope: Consumed 15.201s CPU time.
Oct  2 08:59:52 np0005465987 systemd-machined[188335]: Machine qemu-87-instance-000000c2 terminated.
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.441 2 INFO nova.virt.libvirt.driver [-] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Instance destroyed successfully.#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.442 2 DEBUG nova.objects.instance [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lazy-loading 'resources' on Instance uuid af08b6ce-23c3-4ccf-846b-c8c68da5e4e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.456 2 DEBUG nova.virt.libvirt.vif [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:59:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1029505625',display_name='tempest-TestVolumeBackupRestore-server-1029505625',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1029505625',id=194,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDl51NXHF/HxwJ4jMeoLXErEUoSGAeFB47uZ/t3bK+Ssp0Nn+DcjgOAl2m4exn3Yc7fFerv2a/iFdsxTBhF3/df1jP+dAerL8c78Ulh1ONSCW5N4b+6FVZfs4qD5KXmJqg==',key_name='tempest-TestVolumeBackupRestore-384018385',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:59:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='55e97477dd13448d9dafe69ec8614ea6',ramdisk_id='',reservation_id='r-axbzyxue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-2015194200',owner_user_name='tempest-TestVolumeBackupRestore-2015194200-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:59:32Z,user_data=None,user_id='729669e8835d4d95b5aae25c140e1f06',uuid=af08b6ce-23c3-4ccf-846b-c8c68da5e4e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.456 2 DEBUG nova.network.os_vif_util [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Converting VIF {"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.457 2 DEBUG nova.network.os_vif_util [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:a7:79,bridge_name='br-int',has_traffic_filtering=True,id=1be9a48c-b01f-4d44-b98d-a0fc8e159efd,network=Network(12c7272c-36a0-40ff-9024-81c013bfef24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be9a48c-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.457 2 DEBUG os_vif [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:a7:79,bridge_name='br-int',has_traffic_filtering=True,id=1be9a48c-b01f-4d44-b98d-a0fc8e159efd,network=Network(12c7272c-36a0-40ff-9024-81c013bfef24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be9a48c-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1be9a48c-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.464 2 INFO os_vif [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:a7:79,bridge_name='br-int',has_traffic_filtering=True,id=1be9a48c-b01f-4d44-b98d-a0fc8e159efd,network=Network(12c7272c-36a0-40ff-9024-81c013bfef24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be9a48c-b0')#033[00m
Oct  2 08:59:52 np0005465987 neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24[304605]: [NOTICE]   (304619) : haproxy version is 2.8.14-c23fe91
Oct  2 08:59:52 np0005465987 neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24[304605]: [NOTICE]   (304619) : path to executable is /usr/sbin/haproxy
Oct  2 08:59:52 np0005465987 neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24[304605]: [WARNING]  (304619) : Exiting Master process...
Oct  2 08:59:52 np0005465987 neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24[304605]: [WARNING]  (304619) : Exiting Master process...
Oct  2 08:59:52 np0005465987 neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24[304605]: [ALERT]    (304619) : Current worker (304621) exited with code 143 (Terminated)
Oct  2 08:59:52 np0005465987 neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24[304605]: [WARNING]  (304619) : All workers exited. Exiting... (0)
Oct  2 08:59:52 np0005465987 systemd[1]: libpod-a75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660.scope: Deactivated successfully.
Oct  2 08:59:52 np0005465987 podman[304841]: 2025-10-02 12:59:52.48430885 +0000 UTC m=+0.050135436 container died a75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:59:52 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660-userdata-shm.mount: Deactivated successfully.
Oct  2 08:59:52 np0005465987 systemd[1]: var-lib-containers-storage-overlay-ffa5ad1371914147ac0d210535d153b38328fed24b2a9f45a90db239d2de5608-merged.mount: Deactivated successfully.
Oct  2 08:59:52 np0005465987 podman[304841]: 2025-10-02 12:59:52.530323173 +0000 UTC m=+0.096149749 container cleanup a75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:59:52 np0005465987 systemd[1]: libpod-conmon-a75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660.scope: Deactivated successfully.
Oct  2 08:59:52 np0005465987 podman[304899]: 2025-10-02 12:59:52.593551268 +0000 UTC m=+0.040653057 container remove a75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.598 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bf74d040-c314-4c88-90e6-ac43912dc562]: (4, ('Thu Oct  2 12:59:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24 (a75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660)\na75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660\nThu Oct  2 12:59:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24 (a75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660)\na75690cbd8d492ee380209e78111d46bba5911753151f7227b23e785d5e2b660\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.600 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c66996de-0113-4b83-8645-fdf5f79b9809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.601 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12c7272c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:52 np0005465987 kernel: tap12c7272c-30: left promiscuous mode
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.612 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d00a94-03f8-4ad4-b730-5d33f147566a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.649 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bc273f44-f588-440b-a2b3-b2c520ddedfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.650 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2df2c120-c7bf-4f2c-bf7a-5ecd489b105e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.670 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1587bf2e-539d-4b8a-a8c2-4ac6d44af0f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803863, 'reachable_time': 16289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304914, 'error': None, 'target': 'ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.674 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-12c7272c-36a0-40ff-9024-81c013bfef24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:59:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 12:59:52.674 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[ec28c91d-a050-4f41-bb06-a9775dfaa690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:52 np0005465987 systemd[1]: run-netns-ovnmeta\x2d12c7272c\x2d36a0\x2d40ff\x2d9024\x2d81c013bfef24.mount: Deactivated successfully.
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.703 2 INFO nova.virt.libvirt.driver [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Deleting instance files /var/lib/nova/instances/af08b6ce-23c3-4ccf-846b-c8c68da5e4e9_del#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.704 2 INFO nova.virt.libvirt.driver [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Deletion of /var/lib/nova/instances/af08b6ce-23c3-4ccf-846b-c8c68da5e4e9_del complete#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.779 2 INFO nova.compute.manager [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.780 2 DEBUG oslo.service.loopingcall [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.780 2 DEBUG nova.compute.manager [-] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:59:52 np0005465987 nova_compute[230713]: 2025-10-02 12:59:52.780 2 DEBUG nova.network.neutron [-] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:59:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:53.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:53 np0005465987 nova_compute[230713]: 2025-10-02 12:59:53.756 2 DEBUG nova.network.neutron [req-da67bda1-9ad6-4333-8006-4a0d8dbe8985 req-16bf88b1-72b0-43cd-bee8-2df3301844af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Updated VIF entry in instance network info cache for port 1be9a48c-b01f-4d44-b98d-a0fc8e159efd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:59:53 np0005465987 nova_compute[230713]: 2025-10-02 12:59:53.757 2 DEBUG nova.network.neutron [req-da67bda1-9ad6-4333-8006-4a0d8dbe8985 req-16bf88b1-72b0-43cd-bee8-2df3301844af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Updating instance_info_cache with network_info: [{"id": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "address": "fa:16:3e:d2:a7:79", "network": {"id": "12c7272c-36a0-40ff-9024-81c013bfef24", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-962426721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55e97477dd13448d9dafe69ec8614ea6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be9a48c-b0", "ovs_interfaceid": "1be9a48c-b01f-4d44-b98d-a0fc8e159efd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:53 np0005465987 nova_compute[230713]: 2025-10-02 12:59:53.782 2 DEBUG nova.network.neutron [-] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:53 np0005465987 nova_compute[230713]: 2025-10-02 12:59:53.785 2 DEBUG oslo_concurrency.lockutils [req-da67bda1-9ad6-4333-8006-4a0d8dbe8985 req-16bf88b1-72b0-43cd-bee8-2df3301844af d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:59:53 np0005465987 nova_compute[230713]: 2025-10-02 12:59:53.799 2 INFO nova.compute.manager [-] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Took 1.02 seconds to deallocate network for instance.#033[00m
Oct  2 08:59:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:53.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.068 2 INFO nova.compute.manager [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Took 0.27 seconds to detach 1 volumes for instance.#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.131 2 DEBUG oslo_concurrency.lockutils [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.132 2 DEBUG oslo_concurrency.lockutils [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.228 2 DEBUG oslo_concurrency.processutils [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.276 2 DEBUG nova.compute.manager [req-3888c7f3-8e2d-4b97-b156-977277aa198e req-4db8c629-241d-4d7f-96d8-7614d86623b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Received event network-vif-unplugged-1be9a48c-b01f-4d44-b98d-a0fc8e159efd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.277 2 DEBUG oslo_concurrency.lockutils [req-3888c7f3-8e2d-4b97-b156-977277aa198e req-4db8c629-241d-4d7f-96d8-7614d86623b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.278 2 DEBUG oslo_concurrency.lockutils [req-3888c7f3-8e2d-4b97-b156-977277aa198e req-4db8c629-241d-4d7f-96d8-7614d86623b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.278 2 DEBUG oslo_concurrency.lockutils [req-3888c7f3-8e2d-4b97-b156-977277aa198e req-4db8c629-241d-4d7f-96d8-7614d86623b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.279 2 DEBUG nova.compute.manager [req-3888c7f3-8e2d-4b97-b156-977277aa198e req-4db8c629-241d-4d7f-96d8-7614d86623b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] No waiting events found dispatching network-vif-unplugged-1be9a48c-b01f-4d44-b98d-a0fc8e159efd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.279 2 WARNING nova.compute.manager [req-3888c7f3-8e2d-4b97-b156-977277aa198e req-4db8c629-241d-4d7f-96d8-7614d86623b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Received unexpected event network-vif-unplugged-1be9a48c-b01f-4d44-b98d-a0fc8e159efd for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.279 2 DEBUG nova.compute.manager [req-3888c7f3-8e2d-4b97-b156-977277aa198e req-4db8c629-241d-4d7f-96d8-7614d86623b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Received event network-vif-plugged-1be9a48c-b01f-4d44-b98d-a0fc8e159efd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.280 2 DEBUG oslo_concurrency.lockutils [req-3888c7f3-8e2d-4b97-b156-977277aa198e req-4db8c629-241d-4d7f-96d8-7614d86623b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.280 2 DEBUG oslo_concurrency.lockutils [req-3888c7f3-8e2d-4b97-b156-977277aa198e req-4db8c629-241d-4d7f-96d8-7614d86623b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.280 2 DEBUG oslo_concurrency.lockutils [req-3888c7f3-8e2d-4b97-b156-977277aa198e req-4db8c629-241d-4d7f-96d8-7614d86623b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.281 2 DEBUG nova.compute.manager [req-3888c7f3-8e2d-4b97-b156-977277aa198e req-4db8c629-241d-4d7f-96d8-7614d86623b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] No waiting events found dispatching network-vif-plugged-1be9a48c-b01f-4d44-b98d-a0fc8e159efd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.281 2 WARNING nova.compute.manager [req-3888c7f3-8e2d-4b97-b156-977277aa198e req-4db8c629-241d-4d7f-96d8-7614d86623b1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Received unexpected event network-vif-plugged-1be9a48c-b01f-4d44-b98d-a0fc8e159efd for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.284 2 DEBUG nova.compute.manager [req-20010ff8-0c3f-4d23-a9f9-65690c7bad0d req-3127236a-a34c-4092-8731-2e687a929824 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Received event network-vif-deleted-1be9a48c-b01f-4d44-b98d-a0fc8e159efd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:59:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4289389056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.697 2 DEBUG oslo_concurrency.processutils [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.703 2 DEBUG nova.compute.provider_tree [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.718 2 DEBUG nova.scheduler.client.report [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.746 2 DEBUG oslo_concurrency.lockutils [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.780 2 INFO nova.scheduler.client.report [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Deleted allocations for instance af08b6ce-23c3-4ccf-846b-c8c68da5e4e9#033[00m
Oct  2 08:59:54 np0005465987 nova_compute[230713]: 2025-10-02 12:59:54.896 2 DEBUG oslo_concurrency.lockutils [None req-ffa4f933-306b-46a7-8dea-563b0ba570ec 729669e8835d4d95b5aae25c140e1f06 55e97477dd13448d9dafe69ec8614ea6 - - default default] Lock "af08b6ce-23c3-4ccf-846b-c8c68da5e4e9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:59:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3464753884' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:59:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:59:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3464753884' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:59:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:55.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 08:59:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:55.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 08:59:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e397 e397: 3 total, 3 up, 3 in
Oct  2 08:59:57 np0005465987 nova_compute[230713]: 2025-10-02 12:59:57.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:57.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:57 np0005465987 nova_compute[230713]: 2025-10-02 12:59:57.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:57 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:59:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e398 e398: 3 total, 3 up, 3 in
Oct  2 08:59:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:57.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:58 np0005465987 nova_compute[230713]: 2025-10-02 12:59:58.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:59 np0005465987 nova_compute[230713]: 2025-10-02 12:59:59.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:59.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 08:59:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:59.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:00 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 09:00:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:01.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:01 np0005465987 podman[304940]: 2025-10-02 13:00:01.84368097 +0000 UTC m=+0.059844003 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:00:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:01.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:02 np0005465987 nova_compute[230713]: 2025-10-02 13:00:02.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:02 np0005465987 nova_compute[230713]: 2025-10-02 13:00:02.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:03.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:03.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 e399: 3 total, 3 up, 3 in
Oct  2 09:00:04 np0005465987 podman[304960]: 2025-10-02 13:00:04.835331647 +0000 UTC m=+0.052756559 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:00:04 np0005465987 podman[304961]: 2025-10-02 13:00:04.84054295 +0000 UTC m=+0.056010268 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:00:04 np0005465987 podman[304959]: 2025-10-02 13:00:04.856564149 +0000 UTC m=+0.077032864 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct  2 09:00:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:05.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:05.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:07 np0005465987 nova_compute[230713]: 2025-10-02 13:00:07.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:07.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:07 np0005465987 nova_compute[230713]: 2025-10-02 13:00:07.440 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409992.4387934, af08b6ce-23c3-4ccf-846b-c8c68da5e4e9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:07 np0005465987 nova_compute[230713]: 2025-10-02 13:00:07.440 2 INFO nova.compute.manager [-] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:00:07 np0005465987 nova_compute[230713]: 2025-10-02 13:00:07.458 2 DEBUG nova.compute.manager [None req-dd904d09-33ed-486d-a7f7-912b80b4bd0b - - - - - -] [instance: af08b6ce-23c3-4ccf-846b-c8c68da5e4e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:07 np0005465987 nova_compute[230713]: 2025-10-02 13:00:07.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:07.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:09.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:09.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:11.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:11.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:12 np0005465987 nova_compute[230713]: 2025-10-02 13:00:12.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:12 np0005465987 nova_compute[230713]: 2025-10-02 13:00:12.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:13.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:13.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:15.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:15.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:17 np0005465987 nova_compute[230713]: 2025-10-02 13:00:17.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:17.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:17 np0005465987 nova_compute[230713]: 2025-10-02 13:00:17.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:00:17.753 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:00:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:00:17.754 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:00:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:00:17.754 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:17 np0005465987 nova_compute[230713]: 2025-10-02 13:00:17.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:00:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:17.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:00:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:00:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:19.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:00:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:20.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:21.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:22.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:22 np0005465987 nova_compute[230713]: 2025-10-02 13:00:22.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:22 np0005465987 nova_compute[230713]: 2025-10-02 13:00:22.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:23.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:24.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:25.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:25 np0005465987 nova_compute[230713]: 2025-10-02 13:00:25.987 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:26.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:27 np0005465987 nova_compute[230713]: 2025-10-02 13:00:27.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:27.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:27 np0005465987 nova_compute[230713]: 2025-10-02 13:00:27.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:27 np0005465987 nova_compute[230713]: 2025-10-02 13:00:27.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:27 np0005465987 nova_compute[230713]: 2025-10-02 13:00:27.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:27 np0005465987 nova_compute[230713]: 2025-10-02 13:00:27.964 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:00:27 np0005465987 nova_compute[230713]: 2025-10-02 13:00:27.964 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:00:27 np0005465987 nova_compute[230713]: 2025-10-02 13:00:27.976 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:00:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:00:27.988 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:00:27.989 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:00:27.989 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:28.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:29.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:00:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:30.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:00:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:30 np0005465987 nova_compute[230713]: 2025-10-02 13:00:30.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:30 np0005465987 nova_compute[230713]: 2025-10-02 13:00:30.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:31.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:00:31 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1643893512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:00:31 np0005465987 nova_compute[230713]: 2025-10-02 13:00:31.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:31 np0005465987 nova_compute[230713]: 2025-10-02 13:00:31.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:31 np0005465987 nova_compute[230713]: 2025-10-02 13:00:31.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:31 np0005465987 nova_compute[230713]: 2025-10-02 13:00:31.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:31 np0005465987 nova_compute[230713]: 2025-10-02 13:00:31.995 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:00:31 np0005465987 nova_compute[230713]: 2025-10-02 13:00:31.996 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:32.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:00:32 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/77007013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.448 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:32 np0005465987 podman[305050]: 2025-10-02 13:00:32.550790559 +0000 UTC m=+0.060643904 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.619 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.620 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4344MB free_disk=20.921810150146484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.620 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.620 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.695 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.696 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.715 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.738 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.738 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.750 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.776 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:00:32 np0005465987 nova_compute[230713]: 2025-10-02 13:00:32.789 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:00:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1130639323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:00:33 np0005465987 nova_compute[230713]: 2025-10-02 13:00:33.310 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:33 np0005465987 nova_compute[230713]: 2025-10-02 13:00:33.315 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:00:33 np0005465987 nova_compute[230713]: 2025-10-02 13:00:33.337 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:00:33 np0005465987 nova_compute[230713]: 2025-10-02 13:00:33.361 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:00:33 np0005465987 nova_compute[230713]: 2025-10-02 13:00:33.362 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:33.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:34.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:34 np0005465987 nova_compute[230713]: 2025-10-02 13:00:34.363 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:35.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:35 np0005465987 podman[305093]: 2025-10-02 13:00:35.863731531 +0000 UTC m=+0.081896318 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 09:00:35 np0005465987 podman[305092]: 2025-10-02 13:00:35.888570613 +0000 UTC m=+0.076713466 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:00:35 np0005465987 podman[305094]: 2025-10-02 13:00:35.888684066 +0000 UTC m=+0.103314916 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:00:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:36.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:00:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/998983341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:00:37 np0005465987 nova_compute[230713]: 2025-10-02 13:00:37.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:37 np0005465987 nova_compute[230713]: 2025-10-02 13:00:37.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:37.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:37 np0005465987 nova_compute[230713]: 2025-10-02 13:00:37.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:38.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:00:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:00:38 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:00:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:39.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:40.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:41.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:41 np0005465987 nova_compute[230713]: 2025-10-02 13:00:41.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:41 np0005465987 nova_compute[230713]: 2025-10-02 13:00:41.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:00:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:42.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:42 np0005465987 nova_compute[230713]: 2025-10-02 13:00:42.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:42 np0005465987 nova_compute[230713]: 2025-10-02 13:00:42.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:43.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:44.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:00:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:00:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:46.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:47 np0005465987 nova_compute[230713]: 2025-10-02 13:00:47.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:47 np0005465987 nova_compute[230713]: 2025-10-02 13:00:47.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:47.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:48.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:49.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:00:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:50.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:00:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:51.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:00:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3702376452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:00:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:00:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3702376452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:00:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:52.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:52 np0005465987 nova_compute[230713]: 2025-10-02 13:00:52.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:52 np0005465987 nova_compute[230713]: 2025-10-02 13:00:52.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:52 np0005465987 nova_compute[230713]: 2025-10-02 13:00:52.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:53.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:54.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:00:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/609838238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:00:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:00:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/609838238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:00:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:55.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:00:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:56.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:57 np0005465987 nova_compute[230713]: 2025-10-02 13:00:57.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:57 np0005465987 nova_compute[230713]: 2025-10-02 13:00:57.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:57.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:58.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:00:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:00:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:59.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:00.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:01.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:02.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:02 np0005465987 nova_compute[230713]: 2025-10-02 13:01:02.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:02 np0005465987 nova_compute[230713]: 2025-10-02 13:01:02.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:02 np0005465987 podman[305345]: 2025-10-02 13:01:02.834172742 +0000 UTC m=+0.052865531 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:01:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:03.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:04.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:05.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:06.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:06 np0005465987 podman[305365]: 2025-10-02 13:01:06.853855395 +0000 UTC m=+0.058954508 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:01:06 np0005465987 podman[305366]: 2025-10-02 13:01:06.856519288 +0000 UTC m=+0.067925564 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 09:01:06 np0005465987 podman[305364]: 2025-10-02 13:01:06.861431853 +0000 UTC m=+0.082654198 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:01:07 np0005465987 nova_compute[230713]: 2025-10-02 13:01:07.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:07 np0005465987 nova_compute[230713]: 2025-10-02 13:01:07.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:07.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:08.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:09.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:10.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:11.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:12.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:12 np0005465987 nova_compute[230713]: 2025-10-02 13:01:12.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:12 np0005465987 nova_compute[230713]: 2025-10-02 13:01:12.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:13 np0005465987 nova_compute[230713]: 2025-10-02 13:01:13.249 2 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 0.85 sec#033[00m
Oct  2 09:01:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:13.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:14.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:15.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:16.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:17 np0005465987 nova_compute[230713]: 2025-10-02 13:01:17.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:17 np0005465987 nova_compute[230713]: 2025-10-02 13:01:17.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:17.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:18.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:01:19.213 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:01:19 np0005465987 nova_compute[230713]: 2025-10-02 13:01:19.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:01:19.214 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:01:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:19.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:20.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:21.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:22.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:22 np0005465987 nova_compute[230713]: 2025-10-02 13:01:22.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:22 np0005465987 nova_compute[230713]: 2025-10-02 13:01:22.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:23 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:01:23.216 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:23.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:24.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:25.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:25 np0005465987 nova_compute[230713]: 2025-10-02 13:01:25.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:26.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:27 np0005465987 nova_compute[230713]: 2025-10-02 13:01:27.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:27 np0005465987 nova_compute[230713]: 2025-10-02 13:01:27.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:27.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:27 np0005465987 nova_compute[230713]: 2025-10-02 13:01:27.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:01:27.989 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:01:27.989 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:01:27.989 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:01:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:28.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:01:28 np0005465987 nova_compute[230713]: 2025-10-02 13:01:28.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:28 np0005465987 nova_compute[230713]: 2025-10-02 13:01:28.964 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:01:28 np0005465987 nova_compute[230713]: 2025-10-02 13:01:28.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:01:29 np0005465987 nova_compute[230713]: 2025-10-02 13:01:29.065 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:01:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:29.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:30.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:30 np0005465987 nova_compute[230713]: 2025-10-02 13:01:30.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:31.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:32.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:32 np0005465987 nova_compute[230713]: 2025-10-02 13:01:32.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:32 np0005465987 nova_compute[230713]: 2025-10-02 13:01:32.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:32 np0005465987 nova_compute[230713]: 2025-10-02 13:01:32.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:32 np0005465987 nova_compute[230713]: 2025-10-02 13:01:32.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:33 np0005465987 nova_compute[230713]: 2025-10-02 13:01:33.132 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:33 np0005465987 nova_compute[230713]: 2025-10-02 13:01:33.132 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:33 np0005465987 nova_compute[230713]: 2025-10-02 13:01:33.133 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:33 np0005465987 nova_compute[230713]: 2025-10-02 13:01:33.133 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:01:33 np0005465987 nova_compute[230713]: 2025-10-02 13:01:33.133 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:33 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:01:33 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2957230612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:01:33 np0005465987 nova_compute[230713]: 2025-10-02 13:01:33.551 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:33.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:33 np0005465987 nova_compute[230713]: 2025-10-02 13:01:33.692 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:01:33 np0005465987 nova_compute[230713]: 2025-10-02 13:01:33.693 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4335MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:01:33 np0005465987 nova_compute[230713]: 2025-10-02 13:01:33.693 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:33 np0005465987 nova_compute[230713]: 2025-10-02 13:01:33.693 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:33 np0005465987 podman[305451]: 2025-10-02 13:01:33.844572761 +0000 UTC m=+0.059093473 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 09:01:34 np0005465987 nova_compute[230713]: 2025-10-02 13:01:34.110 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:01:34 np0005465987 nova_compute[230713]: 2025-10-02 13:01:34.110 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:01:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:34.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:34 np0005465987 nova_compute[230713]: 2025-10-02 13:01:34.197 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:34 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:01:34 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/61616564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:01:34 np0005465987 nova_compute[230713]: 2025-10-02 13:01:34.638 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:34 np0005465987 nova_compute[230713]: 2025-10-02 13:01:34.644 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:01:34 np0005465987 nova_compute[230713]: 2025-10-02 13:01:34.703 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:01:34 np0005465987 nova_compute[230713]: 2025-10-02 13:01:34.705 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:01:34 np0005465987 nova_compute[230713]: 2025-10-02 13:01:34.705 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.118806) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410095118879, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2438, "num_deletes": 254, "total_data_size": 5769262, "memory_usage": 5856576, "flush_reason": "Manual Compaction"}
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410095156703, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 3781841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71733, "largest_seqno": 74166, "table_properties": {"data_size": 3771917, "index_size": 6289, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20823, "raw_average_key_size": 20, "raw_value_size": 3751972, "raw_average_value_size": 3729, "num_data_blocks": 273, "num_entries": 1006, "num_filter_entries": 1006, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409886, "oldest_key_time": 1759409886, "file_creation_time": 1759410095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 37965 microseconds, and 8008 cpu microseconds.
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.156779) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 3781841 bytes OK
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.156799) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.166746) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.166772) EVENT_LOG_v1 {"time_micros": 1759410095166766, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.166792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 5758566, prev total WAL file size 5758566, number of live WAL files 2.
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.168288) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(3693KB)], [147(10011KB)]
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410095168323, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 14033492, "oldest_snapshot_seqno": -1}
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 9558 keys, 12069775 bytes, temperature: kUnknown
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410095253410, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 12069775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12008187, "index_size": 36579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23941, "raw_key_size": 251551, "raw_average_key_size": 26, "raw_value_size": 11840666, "raw_average_value_size": 1238, "num_data_blocks": 1393, "num_entries": 9558, "num_filter_entries": 9558, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759410095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.253641) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12069775 bytes
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.254925) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.8 rd, 141.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 10083, records dropped: 525 output_compression: NoCompression
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.254961) EVENT_LOG_v1 {"time_micros": 1759410095254949, "job": 94, "event": "compaction_finished", "compaction_time_micros": 85159, "compaction_time_cpu_micros": 25578, "output_level": 6, "num_output_files": 1, "total_output_size": 12069775, "num_input_records": 10083, "num_output_records": 9558, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410095255683, "job": 94, "event": "table_file_deletion", "file_number": 149}
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410095257848, "job": 94, "event": "table_file_deletion", "file_number": 147}
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.168185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.257959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.257965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.257967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.257968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:01:35.257970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:01:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:35.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:35 np0005465987 nova_compute[230713]: 2025-10-02 13:01:35.705 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:36.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:37 np0005465987 nova_compute[230713]: 2025-10-02 13:01:37.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:37 np0005465987 nova_compute[230713]: 2025-10-02 13:01:37.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:37.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:37 np0005465987 podman[305494]: 2025-10-02 13:01:37.846497699 +0000 UTC m=+0.056583125 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:01:37 np0005465987 podman[305493]: 2025-10-02 13:01:37.84656031 +0000 UTC m=+0.059633278 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 09:01:37 np0005465987 podman[305492]: 2025-10-02 13:01:37.861424468 +0000 UTC m=+0.078512195 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  2 09:01:37 np0005465987 nova_compute[230713]: 2025-10-02 13:01:37.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:38.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:39.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:40.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:41.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:42.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:42 np0005465987 nova_compute[230713]: 2025-10-02 13:01:42.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:42 np0005465987 nova_compute[230713]: 2025-10-02 13:01:42.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:42 np0005465987 nova_compute[230713]: 2025-10-02 13:01:42.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:42 np0005465987 nova_compute[230713]: 2025-10-02 13:01:42.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:01:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:43.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:44.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:45.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:45 np0005465987 podman[305727]: 2025-10-02 13:01:45.727616935 +0000 UTC m=+0.232537122 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 09:01:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:45 np0005465987 podman[305727]: 2025-10-02 13:01:45.829049828 +0000 UTC m=+0.333970005 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 09:01:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:46.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:46 np0005465987 ovn_controller[129172]: 2025-10-02T13:01:46Z|00774|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Oct  2 09:01:47 np0005465987 nova_compute[230713]: 2025-10-02 13:01:47.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:47 np0005465987 nova_compute[230713]: 2025-10-02 13:01:47.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:47.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:01:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:01:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:01:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:01:47 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:01:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:48.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:01:48 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:01:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:49.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:50.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:51.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:52.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:52 np0005465987 nova_compute[230713]: 2025-10-02 13:01:52.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:52 np0005465987 nova_compute[230713]: 2025-10-02 13:01:52.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:01:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:53.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:01:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:54.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:54 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:01:54 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:01:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:55.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:56.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:57 np0005465987 nova_compute[230713]: 2025-10-02 13:01:57.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:57 np0005465987 nova_compute[230713]: 2025-10-02 13:01:57.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:57.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:58.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:01:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:59.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:00.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:01.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:02:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:02.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:02:02 np0005465987 nova_compute[230713]: 2025-10-02 13:02:02.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:02 np0005465987 nova_compute[230713]: 2025-10-02 13:02:02.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:03.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:04.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:04 np0005465987 podman[306030]: 2025-10-02 13:02:04.844130363 +0000 UTC m=+0.062955509 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:02:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:05.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:02:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:06.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:02:07 np0005465987 nova_compute[230713]: 2025-10-02 13:02:07.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:07 np0005465987 nova_compute[230713]: 2025-10-02 13:02:07.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:07 np0005465987 nova_compute[230713]: 2025-10-02 13:02:07.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 09:02:07 np0005465987 nova_compute[230713]: 2025-10-02 13:02:07.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 09:02:07 np0005465987 nova_compute[230713]: 2025-10-02 13:02:07.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:07 np0005465987 nova_compute[230713]: 2025-10-02 13:02:07.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 09:02:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:02:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:07.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:02:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:02:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:08.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:02:08 np0005465987 podman[306051]: 2025-10-02 13:02:08.871935996 +0000 UTC m=+0.076192467 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:02:08 np0005465987 podman[306050]: 2025-10-02 13:02:08.87246167 +0000 UTC m=+0.079969389 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:02:08 np0005465987 podman[306049]: 2025-10-02 13:02:08.911425807 +0000 UTC m=+0.119909002 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:02:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:09.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:10.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:11.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:02:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:12.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:02:12 np0005465987 nova_compute[230713]: 2025-10-02 13:02:12.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:13.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:14.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:02:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:15.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:02:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:02:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:16.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:02:17 np0005465987 nova_compute[230713]: 2025-10-02 13:02:17.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:17 np0005465987 nova_compute[230713]: 2025-10-02 13:02:17.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:17 np0005465987 nova_compute[230713]: 2025-10-02 13:02:17.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 09:02:17 np0005465987 nova_compute[230713]: 2025-10-02 13:02:17.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 09:02:17 np0005465987 nova_compute[230713]: 2025-10-02 13:02:17.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 09:02:17 np0005465987 nova_compute[230713]: 2025-10-02 13:02:17.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:17.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:02:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:18.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:02:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:19.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:20.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:20.385 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:20 np0005465987 nova_compute[230713]: 2025-10-02 13:02:20.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:20 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:20.387 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:02:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:21.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:22.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:22 np0005465987 nova_compute[230713]: 2025-10-02 13:02:22.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:23.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:24.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:25.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:25 np0005465987 nova_compute[230713]: 2025-10-02 13:02:25.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:02:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:26.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:02:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:27.388 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:27 np0005465987 nova_compute[230713]: 2025-10-02 13:02:27.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:27.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:27 np0005465987 nova_compute[230713]: 2025-10-02 13:02:27.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:27.990 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:27.990 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:27.990 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:28.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:29.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:29 np0005465987 nova_compute[230713]: 2025-10-02 13:02:29.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:29 np0005465987 nova_compute[230713]: 2025-10-02 13:02:29.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:02:29 np0005465987 nova_compute[230713]: 2025-10-02 13:02:29.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:02:30 np0005465987 nova_compute[230713]: 2025-10-02 13:02:30.081 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:02:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:30.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:31 np0005465987 nova_compute[230713]: 2025-10-02 13:02:31.338 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:31 np0005465987 nova_compute[230713]: 2025-10-02 13:02:31.338 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:31 np0005465987 nova_compute[230713]: 2025-10-02 13:02:31.584 2 DEBUG nova.compute.manager [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:02:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:31.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:31 np0005465987 nova_compute[230713]: 2025-10-02 13:02:31.946 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:31 np0005465987 nova_compute[230713]: 2025-10-02 13:02:31.947 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:31 np0005465987 nova_compute[230713]: 2025-10-02 13:02:31.965 2 DEBUG nova.virt.hardware [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:02:31 np0005465987 nova_compute[230713]: 2025-10-02 13:02:31.965 2 INFO nova.compute.claims [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:02:31 np0005465987 nova_compute[230713]: 2025-10-02 13:02:31.968 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:32 np0005465987 nova_compute[230713]: 2025-10-02 13:02:32.195 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:32.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:02:32 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/28107586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:02:32 np0005465987 nova_compute[230713]: 2025-10-02 13:02:32.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:32 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2639032570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:32 np0005465987 nova_compute[230713]: 2025-10-02 13:02:32.633 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:32 np0005465987 nova_compute[230713]: 2025-10-02 13:02:32.639 2 DEBUG nova.compute.provider_tree [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:02:32 np0005465987 nova_compute[230713]: 2025-10-02 13:02:32.749 2 DEBUG nova.scheduler.client.report [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:02:32 np0005465987 nova_compute[230713]: 2025-10-02 13:02:32.782 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:32 np0005465987 nova_compute[230713]: 2025-10-02 13:02:32.783 2 DEBUG nova.compute.manager [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:02:32 np0005465987 nova_compute[230713]: 2025-10-02 13:02:32.886 2 DEBUG nova.compute.manager [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:02:32 np0005465987 nova_compute[230713]: 2025-10-02 13:02:32.887 2 DEBUG nova.network.neutron [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:02:32 np0005465987 nova_compute[230713]: 2025-10-02 13:02:32.968 2 INFO nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:02:33 np0005465987 nova_compute[230713]: 2025-10-02 13:02:33.016 2 DEBUG nova.compute.manager [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:02:33 np0005465987 nova_compute[230713]: 2025-10-02 13:02:33.547 2 DEBUG nova.policy [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb366465e6154871b8a53c9f500105ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:02:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:33.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:33 np0005465987 nova_compute[230713]: 2025-10-02 13:02:33.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:34.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:34 np0005465987 nova_compute[230713]: 2025-10-02 13:02:34.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:35.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.714 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.715 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.715 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.715 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.716 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.799 2 DEBUG nova.compute.manager [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.801 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.801 2 INFO nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Creating image(s)#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.833 2 DEBUG nova.storage.rbd_utils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:35 np0005465987 podman[306136]: 2025-10-02 13:02:35.839954266 +0000 UTC m=+0.054795466 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:02:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.875 2 DEBUG nova.storage.rbd_utils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.905 2 DEBUG nova.storage.rbd_utils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.909 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.975 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.976 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.977 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:35 np0005465987 nova_compute[230713]: 2025-10-02 13:02:35.977 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:36 np0005465987 nova_compute[230713]: 2025-10-02 13:02:36.002 2 DEBUG nova.storage.rbd_utils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:36 np0005465987 nova_compute[230713]: 2025-10-02 13:02:36.006 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2751757529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:36 np0005465987 nova_compute[230713]: 2025-10-02 13:02:36.158 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:02:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:36.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:02:36 np0005465987 nova_compute[230713]: 2025-10-02 13:02:36.287 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:02:36 np0005465987 nova_compute[230713]: 2025-10-02 13:02:36.288 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4344MB free_disk=20.897167205810547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:02:36 np0005465987 nova_compute[230713]: 2025-10-02 13:02:36.288 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:36 np0005465987 nova_compute[230713]: 2025-10-02 13:02:36.289 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:37 np0005465987 nova_compute[230713]: 2025-10-02 13:02:37.181 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:37 np0005465987 nova_compute[230713]: 2025-10-02 13:02:37.252 2 DEBUG nova.storage.rbd_utils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] resizing rbd image b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:02:37 np0005465987 nova_compute[230713]: 2025-10-02 13:02:37.344 2 DEBUG nova.objects.instance [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'migration_context' on Instance uuid b0c2e2d6-1438-4b68-8a71-55a4c0583098 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:37 np0005465987 nova_compute[230713]: 2025-10-02 13:02:37.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:37.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:37 np0005465987 nova_compute[230713]: 2025-10-02 13:02:37.948 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:02:37 np0005465987 nova_compute[230713]: 2025-10-02 13:02:37.949 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Ensure instance console log exists: /var/lib/nova/instances/b0c2e2d6-1438-4b68-8a71-55a4c0583098/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:02:37 np0005465987 nova_compute[230713]: 2025-10-02 13:02:37.949 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:37 np0005465987 nova_compute[230713]: 2025-10-02 13:02:37.949 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:37 np0005465987 nova_compute[230713]: 2025-10-02 13:02:37.950 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:38 np0005465987 nova_compute[230713]: 2025-10-02 13:02:38.024 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance b0c2e2d6-1438-4b68-8a71-55a4c0583098 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:02:38 np0005465987 nova_compute[230713]: 2025-10-02 13:02:38.025 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:02:38 np0005465987 nova_compute[230713]: 2025-10-02 13:02:38.025 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:02:38 np0005465987 nova_compute[230713]: 2025-10-02 13:02:38.094 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:38 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1299835460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:38 np0005465987 nova_compute[230713]: 2025-10-02 13:02:38.525 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:38 np0005465987 nova_compute[230713]: 2025-10-02 13:02:38.529 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:02:38 np0005465987 nova_compute[230713]: 2025-10-02 13:02:38.555 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:02:38 np0005465987 nova_compute[230713]: 2025-10-02 13:02:38.591 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:02:38 np0005465987 nova_compute[230713]: 2025-10-02 13:02:38.591 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:39 np0005465987 nova_compute[230713]: 2025-10-02 13:02:39.592 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:39 np0005465987 nova_compute[230713]: 2025-10-02 13:02:39.593 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:39.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:39 np0005465987 podman[306363]: 2025-10-02 13:02:39.835936766 +0000 UTC m=+0.054603642 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 09:02:39 np0005465987 podman[306364]: 2025-10-02 13:02:39.841492717 +0000 UTC m=+0.056457892 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:02:39 np0005465987 podman[306362]: 2025-10-02 13:02:39.860642906 +0000 UTC m=+0.082778656 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 09:02:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:40.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:40 np0005465987 nova_compute[230713]: 2025-10-02 13:02:40.298 2 DEBUG nova.network.neutron [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Successfully created port: 07172cab-ff7e-4e8b-a17d-d27013d63b4d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:02:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:02:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:41.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:02:41 np0005465987 nova_compute[230713]: 2025-10-02 13:02:41.980 2 DEBUG nova.network.neutron [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Successfully updated port: 07172cab-ff7e-4e8b-a17d-d27013d63b4d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:02:42 np0005465987 nova_compute[230713]: 2025-10-02 13:02:42.016 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:02:42 np0005465987 nova_compute[230713]: 2025-10-02 13:02:42.017 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquired lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:02:42 np0005465987 nova_compute[230713]: 2025-10-02 13:02:42.017 2 DEBUG nova.network.neutron [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:02:42 np0005465987 nova_compute[230713]: 2025-10-02 13:02:42.140 2 DEBUG nova.compute.manager [req-3b36ac89-270c-4604-b08e-f22934d4233b req-a4b71071-6413-4238-aa19-705ad91cb7c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Received event network-changed-07172cab-ff7e-4e8b-a17d-d27013d63b4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:42 np0005465987 nova_compute[230713]: 2025-10-02 13:02:42.141 2 DEBUG nova.compute.manager [req-3b36ac89-270c-4604-b08e-f22934d4233b req-a4b71071-6413-4238-aa19-705ad91cb7c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Refreshing instance network info cache due to event network-changed-07172cab-ff7e-4e8b-a17d-d27013d63b4d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:02:42 np0005465987 nova_compute[230713]: 2025-10-02 13:02:42.141 2 DEBUG oslo_concurrency.lockutils [req-3b36ac89-270c-4604-b08e-f22934d4233b req-a4b71071-6413-4238-aa19-705ad91cb7c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:02:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:42.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:42 np0005465987 nova_compute[230713]: 2025-10-02 13:02:42.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:42 np0005465987 nova_compute[230713]: 2025-10-02 13:02:42.571 2 DEBUG nova.network.neutron [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:02:42 np0005465987 nova_compute[230713]: 2025-10-02 13:02:42.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:42 np0005465987 nova_compute[230713]: 2025-10-02 13:02:42.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:02:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:02:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:43.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:02:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:44.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:45.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.038 2 DEBUG nova.network.neutron [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Updating instance_info_cache with network_info: [{"id": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "address": "fa:16:3e:e7:2e:8a", "network": {"id": "3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f", "bridge": "br-int", "label": "tempest-network-smoke--1973286350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07172cab-ff", "ovs_interfaceid": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.095 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Releasing lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.095 2 DEBUG nova.compute.manager [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Instance network_info: |[{"id": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "address": "fa:16:3e:e7:2e:8a", "network": {"id": "3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f", "bridge": "br-int", "label": "tempest-network-smoke--1973286350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07172cab-ff", "ovs_interfaceid": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.096 2 DEBUG oslo_concurrency.lockutils [req-3b36ac89-270c-4604-b08e-f22934d4233b req-a4b71071-6413-4238-aa19-705ad91cb7c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.096 2 DEBUG nova.network.neutron [req-3b36ac89-270c-4604-b08e-f22934d4233b req-a4b71071-6413-4238-aa19-705ad91cb7c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Refreshing network info cache for port 07172cab-ff7e-4e8b-a17d-d27013d63b4d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.100 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Start _get_guest_xml network_info=[{"id": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "address": "fa:16:3e:e7:2e:8a", "network": {"id": "3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f", "bridge": "br-int", "label": "tempest-network-smoke--1973286350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07172cab-ff", "ovs_interfaceid": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.103 2 WARNING nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.112 2 DEBUG nova.virt.libvirt.host [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.113 2 DEBUG nova.virt.libvirt.host [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.120 2 DEBUG nova.virt.libvirt.host [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.120 2 DEBUG nova.virt.libvirt.host [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.121 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.122 2 DEBUG nova.virt.hardware [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.122 2 DEBUG nova.virt.hardware [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.123 2 DEBUG nova.virt.hardware [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.123 2 DEBUG nova.virt.hardware [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.123 2 DEBUG nova.virt.hardware [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.123 2 DEBUG nova.virt.hardware [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.124 2 DEBUG nova.virt.hardware [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.124 2 DEBUG nova.virt.hardware [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.124 2 DEBUG nova.virt.hardware [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.125 2 DEBUG nova.virt.hardware [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.125 2 DEBUG nova.virt.hardware [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.128 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:46.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:02:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3463811713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.544 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.569 2 DEBUG nova.storage.rbd_utils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.572 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:02:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3106170106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.975 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.977 2 DEBUG nova.virt.libvirt.vif [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-14876516',display_name='tempest-TestNetworkBasicOps-server-14876516',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-14876516',id=200,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoZA8437yhqlUP8kndlQoNsOAXIeKjDjewP0BWVQaNVf8Q+upVxxbOqAkwFlrc6pcoXhW5vRVStjZp+bpXaMSPNBAqe4IJZ+1Xbr4LKDzmC6FdkunFLWFGfbEHp8lIQOQ==',key_name='tempest-TestNetworkBasicOps-1890293940',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-hf0ixp07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:33Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=b0c2e2d6-1438-4b68-8a71-55a4c0583098,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "address": "fa:16:3e:e7:2e:8a", "network": {"id": "3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f", "bridge": "br-int", "label": "tempest-network-smoke--1973286350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07172cab-ff", "ovs_interfaceid": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.977 2 DEBUG nova.network.os_vif_util [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "address": "fa:16:3e:e7:2e:8a", "network": {"id": "3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f", "bridge": "br-int", "label": "tempest-network-smoke--1973286350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07172cab-ff", "ovs_interfaceid": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.978 2 DEBUG nova.network.os_vif_util [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:2e:8a,bridge_name='br-int',has_traffic_filtering=True,id=07172cab-ff7e-4e8b-a17d-d27013d63b4d,network=Network(3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07172cab-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:46 np0005465987 nova_compute[230713]: 2025-10-02 13:02:46.979 2 DEBUG nova.objects.instance [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'pci_devices' on Instance uuid b0c2e2d6-1438-4b68-8a71-55a4c0583098 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.024 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  <uuid>b0c2e2d6-1438-4b68-8a71-55a4c0583098</uuid>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  <name>instance-000000c8</name>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestNetworkBasicOps-server-14876516</nova:name>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 13:02:46</nova:creationTime>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <nova:port uuid="07172cab-ff7e-4e8b-a17d-d27013d63b4d">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <system>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <entry name="serial">b0c2e2d6-1438-4b68-8a71-55a4c0583098</entry>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <entry name="uuid">b0c2e2d6-1438-4b68-8a71-55a4c0583098</entry>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    </system>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  <os>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  </os>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  <features>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  </features>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  </clock>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  <devices>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk.config">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:e7:2e:8a"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <target dev="tap07172cab-ff"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    </interface>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/b0c2e2d6-1438-4b68-8a71-55a4c0583098/console.log" append="off"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    </serial>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <video>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    </video>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    </rng>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 09:02:47 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 09:02:47 np0005465987 nova_compute[230713]:  </devices>
Oct  2 09:02:47 np0005465987 nova_compute[230713]: </domain>
Oct  2 09:02:47 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.025 2 DEBUG nova.compute.manager [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Preparing to wait for external event network-vif-plugged-07172cab-ff7e-4e8b-a17d-d27013d63b4d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.026 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.026 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.026 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.027 2 DEBUG nova.virt.libvirt.vif [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-14876516',display_name='tempest-TestNetworkBasicOps-server-14876516',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-14876516',id=200,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoZA8437yhqlUP8kndlQoNsOAXIeKjDjewP0BWVQaNVf8Q+upVxxbOqAkwFlrc6pcoXhW5vRVStjZp+bpXaMSPNBAqe4IJZ+1Xbr4LKDzmC6FdkunFLWFGfbEHp8lIQOQ==',key_name='tempest-TestNetworkBasicOps-1890293940',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-hf0ixp07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:33Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=b0c2e2d6-1438-4b68-8a71-55a4c0583098,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "address": "fa:16:3e:e7:2e:8a", "network": {"id": "3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f", "bridge": "br-int", "label": "tempest-network-smoke--1973286350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07172cab-ff", "ovs_interfaceid": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.027 2 DEBUG nova.network.os_vif_util [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "address": "fa:16:3e:e7:2e:8a", "network": {"id": "3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f", "bridge": "br-int", "label": "tempest-network-smoke--1973286350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07172cab-ff", "ovs_interfaceid": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.028 2 DEBUG nova.network.os_vif_util [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:2e:8a,bridge_name='br-int',has_traffic_filtering=True,id=07172cab-ff7e-4e8b-a17d-d27013d63b4d,network=Network(3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07172cab-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.028 2 DEBUG os_vif [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:2e:8a,bridge_name='br-int',has_traffic_filtering=True,id=07172cab-ff7e-4e8b-a17d-d27013d63b4d,network=Network(3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07172cab-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.029 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.030 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.034 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07172cab-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.034 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07172cab-ff, col_values=(('external_ids', {'iface-id': '07172cab-ff7e-4e8b-a17d-d27013d63b4d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:2e:8a', 'vm-uuid': 'b0c2e2d6-1438-4b68-8a71-55a4c0583098'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:47 np0005465987 NetworkManager[44910]: <info>  [1759410167.0847] manager: (tap07172cab-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.093 2 INFO os_vif [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:2e:8a,bridge_name='br-int',has_traffic_filtering=True,id=07172cab-ff7e-4e8b-a17d-d27013d63b4d,network=Network(3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07172cab-ff')#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.163 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.164 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.164 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No VIF found with MAC fa:16:3e:e7:2e:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.164 2 INFO nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Using config drive#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.186 2 DEBUG nova.storage.rbd_utils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:47.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.751 2 INFO nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Creating config drive at /var/lib/nova/instances/b0c2e2d6-1438-4b68-8a71-55a4c0583098/disk.config#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.762 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b0c2e2d6-1438-4b68-8a71-55a4c0583098/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps4nvf2w9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.909 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b0c2e2d6-1438-4b68-8a71-55a4c0583098/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps4nvf2w9" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.942 2 DEBUG nova.storage.rbd_utils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:47 np0005465987 nova_compute[230713]: 2025-10-02 13:02:47.946 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b0c2e2d6-1438-4b68-8a71-55a4c0583098/disk.config b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:48.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.349 2 DEBUG oslo_concurrency.processutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b0c2e2d6-1438-4b68-8a71-55a4c0583098/disk.config b0c2e2d6-1438-4b68-8a71-55a4c0583098_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.350 2 INFO nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Deleting local config drive /var/lib/nova/instances/b0c2e2d6-1438-4b68-8a71-55a4c0583098/disk.config because it was imported into RBD.#033[00m
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.385 2 DEBUG nova.network.neutron [req-3b36ac89-270c-4604-b08e-f22934d4233b req-a4b71071-6413-4238-aa19-705ad91cb7c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Updated VIF entry in instance network info cache for port 07172cab-ff7e-4e8b-a17d-d27013d63b4d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.386 2 DEBUG nova.network.neutron [req-3b36ac89-270c-4604-b08e-f22934d4233b req-a4b71071-6413-4238-aa19-705ad91cb7c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Updating instance_info_cache with network_info: [{"id": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "address": "fa:16:3e:e7:2e:8a", "network": {"id": "3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f", "bridge": "br-int", "label": "tempest-network-smoke--1973286350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07172cab-ff", "ovs_interfaceid": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:48 np0005465987 kernel: tap07172cab-ff: entered promiscuous mode
Oct  2 09:02:48 np0005465987 NetworkManager[44910]: <info>  [1759410168.4086] manager: (tap07172cab-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Oct  2 09:02:48 np0005465987 ovn_controller[129172]: 2025-10-02T13:02:48Z|00775|binding|INFO|Claiming lport 07172cab-ff7e-4e8b-a17d-d27013d63b4d for this chassis.
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:48 np0005465987 ovn_controller[129172]: 2025-10-02T13:02:48Z|00776|binding|INFO|07172cab-ff7e-4e8b-a17d-d27013d63b4d: Claiming fa:16:3e:e7:2e:8a 10.100.0.3
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:48 np0005465987 NetworkManager[44910]: <info>  [1759410168.4242] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Oct  2 09:02:48 np0005465987 NetworkManager[44910]: <info>  [1759410168.4265] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.431 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:2e:8a 10.100.0.3'], port_security=['fa:16:3e:e7:2e:8a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b0c2e2d6-1438-4b68-8a71-55a4c0583098', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49a3b79e-7ce8-4d17-b0c3-daaaf1a10050', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=010b8986-6ed3-4d08-a5a9-d68a3bf546a0, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=07172cab-ff7e-4e8b-a17d-d27013d63b4d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.434 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 07172cab-ff7e-4e8b-a17d-d27013d63b4d in datapath 3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f bound to our chassis#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.436 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f#033[00m
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.444 2 DEBUG oslo_concurrency.lockutils [req-3b36ac89-270c-4604-b08e-f22934d4233b req-a4b71071-6413-4238-aa19-705ad91cb7c2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:02:48 np0005465987 systemd-machined[188335]: New machine qemu-88-instance-000000c8.
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.449 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f913601f-4355-4741-b0f3-05f31176103f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.453 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3e2a6e0a-a1 in ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.458 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3e2a6e0a-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.458 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3a554ffb-4815-4af4-909a-c70c9d9dca8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.459 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[598487ec-a9dc-4240-8f1f-e3a1c82b6cce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 systemd[1]: Started Virtual Machine qemu-88-instance-000000c8.
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.475 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[80eea529-f428-4527-9535-673842769398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 systemd-udevd[306559]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:02:48 np0005465987 NetworkManager[44910]: <info>  [1759410168.5035] device (tap07172cab-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:02:48 np0005465987 NetworkManager[44910]: <info>  [1759410168.5046] device (tap07172cab-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.502 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[143da656-e952-4261-886b-87c525d5b186]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.536 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6659f7d5-c0c9-4c24-8dc0-f034a70c2b5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 systemd-udevd[306564]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.551 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3d2a7ca5-f672-48cc-a6ec-e81ef47deebb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 NetworkManager[44910]: <info>  [1759410168.5521] manager: (tap3e2a6e0a-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/357)
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.586 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ca94d067-4251-41e7-b366-abc8d5f5bf71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.593 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[af13bf0b-ba73-4a0c-90d8-a9fa3045cefe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 ovn_controller[129172]: 2025-10-02T13:02:48Z|00777|binding|INFO|Setting lport 07172cab-ff7e-4e8b-a17d-d27013d63b4d ovn-installed in OVS
Oct  2 09:02:48 np0005465987 ovn_controller[129172]: 2025-10-02T13:02:48Z|00778|binding|INFO|Setting lport 07172cab-ff7e-4e8b-a17d-d27013d63b4d up in Southbound
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:48 np0005465987 NetworkManager[44910]: <info>  [1759410168.6210] device (tap3e2a6e0a-a0): carrier: link connected
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.626 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[0caaf0c8-b450-4a6c-9fdc-2f9298ae98d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.641 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a871215d-db65-487d-9cb5-cdbf109f2b3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e2a6e0a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823719, 'reachable_time': 32133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306589, 'error': None, 'target': 'ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.654 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bbdae7ba-d162-4c92-83b3-f42ddfa18141]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feee:86fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 823719, 'tstamp': 823719}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306590, 'error': None, 'target': 'ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.668 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4d34d7ce-32f7-4e36-91bc-52bff1d6bc6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e2a6e0a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823719, 'reachable_time': 32133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306591, 'error': None, 'target': 'ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.691 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[56f70b4e-c100-4108-8080-779f25c47d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.748 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5de49c6d-cbb8-45b4-a027-3505f88e9f2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.749 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e2a6e0a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.750 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.750 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e2a6e0a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:48 np0005465987 NetworkManager[44910]: <info>  [1759410168.7531] manager: (tap3e2a6e0a-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Oct  2 09:02:48 np0005465987 kernel: tap3e2a6e0a-a0: entered promiscuous mode
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.757 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e2a6e0a-a0, col_values=(('external_ids', {'iface-id': '7036aed5-0937-4e25-a7aa-7fca92d826ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:48 np0005465987 ovn_controller[129172]: 2025-10-02T13:02:48Z|00779|binding|INFO|Releasing lport 7036aed5-0937-4e25-a7aa-7fca92d826ad from this chassis (sb_readonly=0)
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.762 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.763 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4086ce3d-a968-443e-87c0-66e92a422ef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.764 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f.pid.haproxy
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:02:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:02:48.766 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f', 'env', 'PROCESS_TAG=haproxy-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:02:48 np0005465987 nova_compute[230713]: 2025-10-02 13:02:48.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:49 np0005465987 podman[306623]: 2025-10-02 13:02:49.127302432 +0000 UTC m=+0.061234411 container create 20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 09:02:49 np0005465987 systemd[1]: Started libpod-conmon-20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70.scope.
Oct  2 09:02:49 np0005465987 podman[306623]: 2025-10-02 13:02:49.094950195 +0000 UTC m=+0.028882184 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:02:49 np0005465987 systemd[1]: Started libcrun container.
Oct  2 09:02:49 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9436e41331acd93f644d36fd7548b94110116d66bf975f9496c3022f74fe6a83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:02:49 np0005465987 podman[306623]: 2025-10-02 13:02:49.218513165 +0000 UTC m=+0.152445164 container init 20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:02:49 np0005465987 podman[306623]: 2025-10-02 13:02:49.225131894 +0000 UTC m=+0.159063863 container start 20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:02:49 np0005465987 neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f[306636]: [NOTICE]   (306640) : New worker (306642) forked
Oct  2 09:02:49 np0005465987 neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f[306636]: [NOTICE]   (306640) : Loading success.
Oct  2 09:02:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:49.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:49 np0005465987 nova_compute[230713]: 2025-10-02 13:02:49.888 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410169.8876407, b0c2e2d6-1438-4b68-8a71-55a4c0583098 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:49 np0005465987 nova_compute[230713]: 2025-10-02 13:02:49.888 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] VM Started (Lifecycle Event)#033[00m
Oct  2 09:02:49 np0005465987 nova_compute[230713]: 2025-10-02 13:02:49.932 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:49 np0005465987 nova_compute[230713]: 2025-10-02 13:02:49.935 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410169.8884683, b0c2e2d6-1438-4b68-8a71-55a4c0583098 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:49 np0005465987 nova_compute[230713]: 2025-10-02 13:02:49.935 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:02:49 np0005465987 nova_compute[230713]: 2025-10-02 13:02:49.980 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:49 np0005465987 nova_compute[230713]: 2025-10-02 13:02:49.983 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.015 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.225 2 DEBUG nova.compute.manager [req-20a30ac4-9c0a-48d9-abdc-df5fa0ad5559 req-f6cc598a-2949-4f4f-a1f8-c8b959da7447 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Received event network-vif-plugged-07172cab-ff7e-4e8b-a17d-d27013d63b4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.226 2 DEBUG oslo_concurrency.lockutils [req-20a30ac4-9c0a-48d9-abdc-df5fa0ad5559 req-f6cc598a-2949-4f4f-a1f8-c8b959da7447 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.226 2 DEBUG oslo_concurrency.lockutils [req-20a30ac4-9c0a-48d9-abdc-df5fa0ad5559 req-f6cc598a-2949-4f4f-a1f8-c8b959da7447 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.226 2 DEBUG oslo_concurrency.lockutils [req-20a30ac4-9c0a-48d9-abdc-df5fa0ad5559 req-f6cc598a-2949-4f4f-a1f8-c8b959da7447 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.226 2 DEBUG nova.compute.manager [req-20a30ac4-9c0a-48d9-abdc-df5fa0ad5559 req-f6cc598a-2949-4f4f-a1f8-c8b959da7447 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Processing event network-vif-plugged-07172cab-ff7e-4e8b-a17d-d27013d63b4d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.227 2 DEBUG nova.compute.manager [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.229 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410170.229423, b0c2e2d6-1438-4b68-8a71-55a4c0583098 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.230 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.232 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.234 2 INFO nova.virt.libvirt.driver [-] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Instance spawned successfully.#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.234 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:02:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:50.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.303 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.305 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.305 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.306 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.306 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.307 2 DEBUG nova.virt.libvirt.driver [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.311 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.313 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.422 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.721 2 INFO nova.compute.manager [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Took 14.92 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.721 2 DEBUG nova.compute.manager [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.801 2 INFO nova.compute.manager [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Took 18.99 seconds to build instance.#033[00m
Oct  2 09:02:50 np0005465987 nova_compute[230713]: 2025-10-02 13:02:50.847 2 DEBUG oslo_concurrency.lockutils [None req-7416dabc-c012-4c92-9f23-547daed47a66 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:51.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:52 np0005465987 nova_compute[230713]: 2025-10-02 13:02:52.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:02:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:52.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:02:52 np0005465987 nova_compute[230713]: 2025-10-02 13:02:52.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:52 np0005465987 nova_compute[230713]: 2025-10-02 13:02:52.960 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:53 np0005465987 nova_compute[230713]: 2025-10-02 13:02:53.380 2 DEBUG nova.compute.manager [req-4e1ffab8-6ee7-4082-a01c-d7b3a3667276 req-6d21686b-ae1c-4328-975d-7b1cda6ee076 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Received event network-vif-plugged-07172cab-ff7e-4e8b-a17d-d27013d63b4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:53 np0005465987 nova_compute[230713]: 2025-10-02 13:02:53.380 2 DEBUG oslo_concurrency.lockutils [req-4e1ffab8-6ee7-4082-a01c-d7b3a3667276 req-6d21686b-ae1c-4328-975d-7b1cda6ee076 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:53 np0005465987 nova_compute[230713]: 2025-10-02 13:02:53.381 2 DEBUG oslo_concurrency.lockutils [req-4e1ffab8-6ee7-4082-a01c-d7b3a3667276 req-6d21686b-ae1c-4328-975d-7b1cda6ee076 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:53 np0005465987 nova_compute[230713]: 2025-10-02 13:02:53.381 2 DEBUG oslo_concurrency.lockutils [req-4e1ffab8-6ee7-4082-a01c-d7b3a3667276 req-6d21686b-ae1c-4328-975d-7b1cda6ee076 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:53 np0005465987 nova_compute[230713]: 2025-10-02 13:02:53.381 2 DEBUG nova.compute.manager [req-4e1ffab8-6ee7-4082-a01c-d7b3a3667276 req-6d21686b-ae1c-4328-975d-7b1cda6ee076 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] No waiting events found dispatching network-vif-plugged-07172cab-ff7e-4e8b-a17d-d27013d63b4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:53 np0005465987 nova_compute[230713]: 2025-10-02 13:02:53.381 2 WARNING nova.compute.manager [req-4e1ffab8-6ee7-4082-a01c-d7b3a3667276 req-6d21686b-ae1c-4328-975d-7b1cda6ee076 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Received unexpected event network-vif-plugged-07172cab-ff7e-4e8b-a17d-d27013d63b4d for instance with vm_state active and task_state None.#033[00m
Oct  2 09:02:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:02:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:53.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:02:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:54.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:02:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 56K writes, 211K keys, 56K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.04 MB/s#012Cumulative WAL: 56K writes, 20K syncs, 2.71 writes per sync, written: 0.21 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4720 writes, 16K keys, 4720 commit groups, 1.0 writes per commit group, ingest: 17.63 MB, 0.03 MB/s#012Interval WAL: 4720 writes, 1941 syncs, 2.43 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:02:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:02:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:02:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:02:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:55.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:56.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:57 np0005465987 nova_compute[230713]: 2025-10-02 13:02:57.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:57 np0005465987 nova_compute[230713]: 2025-10-02 13:02:57.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:57.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:57 np0005465987 nova_compute[230713]: 2025-10-02 13:02:57.807 2 DEBUG nova.compute.manager [req-1ef4aef9-9fc9-42c7-8716-08835c2a54e3 req-741e563d-9301-4840-809d-e23c125d8e6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Received event network-changed-07172cab-ff7e-4e8b-a17d-d27013d63b4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:57 np0005465987 nova_compute[230713]: 2025-10-02 13:02:57.807 2 DEBUG nova.compute.manager [req-1ef4aef9-9fc9-42c7-8716-08835c2a54e3 req-741e563d-9301-4840-809d-e23c125d8e6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Refreshing instance network info cache due to event network-changed-07172cab-ff7e-4e8b-a17d-d27013d63b4d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:02:57 np0005465987 nova_compute[230713]: 2025-10-02 13:02:57.807 2 DEBUG oslo_concurrency.lockutils [req-1ef4aef9-9fc9-42c7-8716-08835c2a54e3 req-741e563d-9301-4840-809d-e23c125d8e6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:02:57 np0005465987 nova_compute[230713]: 2025-10-02 13:02:57.808 2 DEBUG oslo_concurrency.lockutils [req-1ef4aef9-9fc9-42c7-8716-08835c2a54e3 req-741e563d-9301-4840-809d-e23c125d8e6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:02:57 np0005465987 nova_compute[230713]: 2025-10-02 13:02:57.808 2 DEBUG nova.network.neutron [req-1ef4aef9-9fc9-42c7-8716-08835c2a54e3 req-741e563d-9301-4840-809d-e23c125d8e6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Refreshing network info cache for port 07172cab-ff7e-4e8b-a17d-d27013d63b4d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:02:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:58.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:02:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:59.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:00.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:01 np0005465987 nova_compute[230713]: 2025-10-02 13:03:01.030 2 DEBUG nova.network.neutron [req-1ef4aef9-9fc9-42c7-8716-08835c2a54e3 req-741e563d-9301-4840-809d-e23c125d8e6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Updated VIF entry in instance network info cache for port 07172cab-ff7e-4e8b-a17d-d27013d63b4d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:03:01 np0005465987 nova_compute[230713]: 2025-10-02 13:03:01.031 2 DEBUG nova.network.neutron [req-1ef4aef9-9fc9-42c7-8716-08835c2a54e3 req-741e563d-9301-4840-809d-e23c125d8e6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Updating instance_info_cache with network_info: [{"id": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "address": "fa:16:3e:e7:2e:8a", "network": {"id": "3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f", "bridge": "br-int", "label": "tempest-network-smoke--1973286350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07172cab-ff", "ovs_interfaceid": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:01 np0005465987 nova_compute[230713]: 2025-10-02 13:03:01.334 2 DEBUG oslo_concurrency.lockutils [req-1ef4aef9-9fc9-42c7-8716-08835c2a54e3 req-741e563d-9301-4840-809d-e23c125d8e6f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:03:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:03:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:03:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:01.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:03:02 np0005465987 nova_compute[230713]: 2025-10-02 13:03:02.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:02.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:02 np0005465987 nova_compute[230713]: 2025-10-02 13:03:02.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:03.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:04.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:04 np0005465987 ovn_controller[129172]: 2025-10-02T13:03:04Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:2e:8a 10.100.0.3
Oct  2 09:03:04 np0005465987 ovn_controller[129172]: 2025-10-02T13:03:04Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:2e:8a 10.100.0.3
Oct  2 09:03:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:05.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:06.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:06 np0005465987 podman[306873]: 2025-10-02 13:03:06.850529175 +0000 UTC m=+0.063487943 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct  2 09:03:07 np0005465987 nova_compute[230713]: 2025-10-02 13:03:07.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:07 np0005465987 nova_compute[230713]: 2025-10-02 13:03:07.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:07.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:08.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:08 np0005465987 nova_compute[230713]: 2025-10-02 13:03:08.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:08.721 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:03:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:08.723 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:03:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:09.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:10.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:10 np0005465987 podman[306894]: 2025-10-02 13:03:10.837642843 +0000 UTC m=+0.053244425 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct  2 09:03:10 np0005465987 podman[306896]: 2025-10-02 13:03:10.841514438 +0000 UTC m=+0.051926579 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 09:03:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:10 np0005465987 podman[306893]: 2025-10-02 13:03:10.869700572 +0000 UTC m=+0.091056860 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:03:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:11.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:11.725 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:12 np0005465987 nova_compute[230713]: 2025-10-02 13:03:12.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:12.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:12 np0005465987 nova_compute[230713]: 2025-10-02 13:03:12.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:13.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:03:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:14.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:03:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:15.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:16.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:17 np0005465987 nova_compute[230713]: 2025-10-02 13:03:17.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:17 np0005465987 nova_compute[230713]: 2025-10-02 13:03:17.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:03:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:17.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:03:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:18.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:03:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:19.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:03:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:20.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:21.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:22 np0005465987 nova_compute[230713]: 2025-10-02 13:03:22.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:03:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:22.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:03:22 np0005465987 nova_compute[230713]: 2025-10-02 13:03:22.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:23.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:03:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:24.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:03:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:03:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:25.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:03:25 np0005465987 nova_compute[230713]: 2025-10-02 13:03:25.846 2 DEBUG nova.compute.manager [req-ff99d953-bc70-47df-b6de-4864a31794d7 req-5d0dac4d-6044-423b-bd36-979d524ff1d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Received event network-changed-07172cab-ff7e-4e8b-a17d-d27013d63b4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:25 np0005465987 nova_compute[230713]: 2025-10-02 13:03:25.847 2 DEBUG nova.compute.manager [req-ff99d953-bc70-47df-b6de-4864a31794d7 req-5d0dac4d-6044-423b-bd36-979d524ff1d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Refreshing instance network info cache due to event network-changed-07172cab-ff7e-4e8b-a17d-d27013d63b4d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:03:25 np0005465987 nova_compute[230713]: 2025-10-02 13:03:25.847 2 DEBUG oslo_concurrency.lockutils [req-ff99d953-bc70-47df-b6de-4864a31794d7 req-5d0dac4d-6044-423b-bd36-979d524ff1d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:25 np0005465987 nova_compute[230713]: 2025-10-02 13:03:25.847 2 DEBUG oslo_concurrency.lockutils [req-ff99d953-bc70-47df-b6de-4864a31794d7 req-5d0dac4d-6044-423b-bd36-979d524ff1d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:25 np0005465987 nova_compute[230713]: 2025-10-02 13:03:25.847 2 DEBUG nova.network.neutron [req-ff99d953-bc70-47df-b6de-4864a31794d7 req-5d0dac4d-6044-423b-bd36-979d524ff1d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Refreshing network info cache for port 07172cab-ff7e-4e8b-a17d-d27013d63b4d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:03:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.024 2 DEBUG oslo_concurrency.lockutils [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.024 2 DEBUG oslo_concurrency.lockutils [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.025 2 DEBUG oslo_concurrency.lockutils [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.025 2 DEBUG oslo_concurrency.lockutils [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.025 2 DEBUG oslo_concurrency.lockutils [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.027 2 INFO nova.compute.manager [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Terminating instance#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.027 2 DEBUG nova.compute.manager [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:03:26 np0005465987 kernel: tap07172cab-ff (unregistering): left promiscuous mode
Oct  2 09:03:26 np0005465987 NetworkManager[44910]: <info>  [1759410206.2401] device (tap07172cab-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:26 np0005465987 ovn_controller[129172]: 2025-10-02T13:03:26Z|00780|binding|INFO|Releasing lport 07172cab-ff7e-4e8b-a17d-d27013d63b4d from this chassis (sb_readonly=0)
Oct  2 09:03:26 np0005465987 ovn_controller[129172]: 2025-10-02T13:03:26Z|00781|binding|INFO|Setting lport 07172cab-ff7e-4e8b-a17d-d27013d63b4d down in Southbound
Oct  2 09:03:26 np0005465987 ovn_controller[129172]: 2025-10-02T13:03:26Z|00782|binding|INFO|Removing iface tap07172cab-ff ovn-installed in OVS
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:26.261 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:2e:8a 10.100.0.3'], port_security=['fa:16:3e:e7:2e:8a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b0c2e2d6-1438-4b68-8a71-55a4c0583098', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '49a3b79e-7ce8-4d17-b0c3-daaaf1a10050', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=010b8986-6ed3-4d08-a5a9-d68a3bf546a0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=07172cab-ff7e-4e8b-a17d-d27013d63b4d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:03:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:26.263 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 07172cab-ff7e-4e8b-a17d-d27013d63b4d in datapath 3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f unbound from our chassis#033[00m
Oct  2 09:03:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:26.264 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:03:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:26.266 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b3bc9d99-7b22-42aa-9547-b1b392e99ce2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:26 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:26.267 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f namespace which is not needed anymore#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:03:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:26.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:03:26 np0005465987 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c8.scope: Deactivated successfully.
Oct  2 09:03:26 np0005465987 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c8.scope: Consumed 14.741s CPU time.
Oct  2 09:03:26 np0005465987 systemd-machined[188335]: Machine qemu-88-instance-000000c8 terminated.
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.467 2 INFO nova.virt.libvirt.driver [-] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Instance destroyed successfully.#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.468 2 DEBUG nova.objects.instance [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'resources' on Instance uuid b0c2e2d6-1438-4b68-8a71-55a4c0583098 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.499 2 DEBUG nova.virt.libvirt.vif [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:02:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-14876516',display_name='tempest-TestNetworkBasicOps-server-14876516',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-14876516',id=200,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoZA8437yhqlUP8kndlQoNsOAXIeKjDjewP0BWVQaNVf8Q+upVxxbOqAkwFlrc6pcoXhW5vRVStjZp+bpXaMSPNBAqe4IJZ+1Xbr4LKDzmC6FdkunFLWFGfbEHp8lIQOQ==',key_name='tempest-TestNetworkBasicOps-1890293940',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:02:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-hf0ixp07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:02:50Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=b0c2e2d6-1438-4b68-8a71-55a4c0583098,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "address": "fa:16:3e:e7:2e:8a", "network": {"id": "3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f", "bridge": "br-int", "label": "tempest-network-smoke--1973286350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07172cab-ff", "ovs_interfaceid": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.499 2 DEBUG nova.network.os_vif_util [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "address": "fa:16:3e:e7:2e:8a", "network": {"id": "3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f", "bridge": "br-int", "label": "tempest-network-smoke--1973286350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07172cab-ff", "ovs_interfaceid": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.500 2 DEBUG nova.network.os_vif_util [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e7:2e:8a,bridge_name='br-int',has_traffic_filtering=True,id=07172cab-ff7e-4e8b-a17d-d27013d63b4d,network=Network(3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07172cab-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.500 2 DEBUG os_vif [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:2e:8a,bridge_name='br-int',has_traffic_filtering=True,id=07172cab-ff7e-4e8b-a17d-d27013d63b4d,network=Network(3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07172cab-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07172cab-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.506 2 INFO os_vif [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:2e:8a,bridge_name='br-int',has_traffic_filtering=True,id=07172cab-ff7e-4e8b-a17d-d27013d63b4d,network=Network(3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07172cab-ff')#033[00m
Oct  2 09:03:26 np0005465987 neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f[306636]: [NOTICE]   (306640) : haproxy version is 2.8.14-c23fe91
Oct  2 09:03:26 np0005465987 neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f[306636]: [NOTICE]   (306640) : path to executable is /usr/sbin/haproxy
Oct  2 09:03:26 np0005465987 neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f[306636]: [WARNING]  (306640) : Exiting Master process...
Oct  2 09:03:26 np0005465987 neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f[306636]: [ALERT]    (306640) : Current worker (306642) exited with code 143 (Terminated)
Oct  2 09:03:26 np0005465987 neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f[306636]: [WARNING]  (306640) : All workers exited. Exiting... (0)
Oct  2 09:03:26 np0005465987 systemd[1]: libpod-20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70.scope: Deactivated successfully.
Oct  2 09:03:26 np0005465987 podman[306983]: 2025-10-02 13:03:26.541956764 +0000 UTC m=+0.180813433 container died 20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.621 2 DEBUG nova.compute.manager [req-e2bf1a93-e68a-483f-9001-2d00229d0147 req-67910b1c-dd07-4416-8c6d-404a577caae0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Received event network-vif-unplugged-07172cab-ff7e-4e8b-a17d-d27013d63b4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.622 2 DEBUG oslo_concurrency.lockutils [req-e2bf1a93-e68a-483f-9001-2d00229d0147 req-67910b1c-dd07-4416-8c6d-404a577caae0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.622 2 DEBUG oslo_concurrency.lockutils [req-e2bf1a93-e68a-483f-9001-2d00229d0147 req-67910b1c-dd07-4416-8c6d-404a577caae0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.622 2 DEBUG oslo_concurrency.lockutils [req-e2bf1a93-e68a-483f-9001-2d00229d0147 req-67910b1c-dd07-4416-8c6d-404a577caae0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.622 2 DEBUG nova.compute.manager [req-e2bf1a93-e68a-483f-9001-2d00229d0147 req-67910b1c-dd07-4416-8c6d-404a577caae0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] No waiting events found dispatching network-vif-unplugged-07172cab-ff7e-4e8b-a17d-d27013d63b4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:26 np0005465987 nova_compute[230713]: 2025-10-02 13:03:26.623 2 DEBUG nova.compute.manager [req-e2bf1a93-e68a-483f-9001-2d00229d0147 req-67910b1c-dd07-4416-8c6d-404a577caae0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Received event network-vif-unplugged-07172cab-ff7e-4e8b-a17d-d27013d63b4d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:03:26 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70-userdata-shm.mount: Deactivated successfully.
Oct  2 09:03:26 np0005465987 systemd[1]: var-lib-containers-storage-overlay-9436e41331acd93f644d36fd7548b94110116d66bf975f9496c3022f74fe6a83-merged.mount: Deactivated successfully.
Oct  2 09:03:26 np0005465987 podman[306983]: 2025-10-02 13:03:26.961592001 +0000 UTC m=+0.600448660 container cleanup 20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:03:26 np0005465987 systemd[1]: libpod-conmon-20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70.scope: Deactivated successfully.
Oct  2 09:03:27 np0005465987 podman[307042]: 2025-10-02 13:03:27.503533854 +0000 UTC m=+0.520431690 container remove 20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 09:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:27.509 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f8769741-613a-44c6-8f39-8677c2ef32e9]: (4, ('Thu Oct  2 01:03:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f (20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70)\n20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70\nThu Oct  2 01:03:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f (20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70)\n20c7133b1ae74d6e7d44116c3192615d43fdd10671822ba3934eda97e942fa70\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:27.511 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bc992537-835a-486c-a76f-860b1ea7973d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:27.512 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e2a6e0a-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:27 np0005465987 nova_compute[230713]: 2025-10-02 13:03:27.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:27 np0005465987 kernel: tap3e2a6e0a-a0: left promiscuous mode
Oct  2 09:03:27 np0005465987 nova_compute[230713]: 2025-10-02 13:03:27.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:27.531 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[87a46e3e-28e3-471f-8108-4210a4db9026]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:27.555 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4be147e9-88e9-41c6-abfb-ca377618fc9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:27.556 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[61f75595-58ab-4831-b036-3ebd2aa48679]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:27 np0005465987 nova_compute[230713]: 2025-10-02 13:03:27.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:27.573 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4785614f-3f95-467c-b03d-faceba5d3c3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823710, 'reachable_time': 37987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307058, 'error': None, 'target': 'ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:27.576 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:27.576 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[364f841c-8ca4-419e-bcc5-311b477553b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:27 np0005465987 systemd[1]: run-netns-ovnmeta\x2d3e2a6e0a\x2da0e0\x2d4ffa\x2da478\x2df7ace212a48f.mount: Deactivated successfully.
Oct  2 09:03:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:27.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:27 np0005465987 nova_compute[230713]: 2025-10-02 13:03:27.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:27.990 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:27.991 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:27.991 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:28.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:28 np0005465987 nova_compute[230713]: 2025-10-02 13:03:28.554 2 INFO nova.virt.libvirt.driver [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Deleting instance files /var/lib/nova/instances/b0c2e2d6-1438-4b68-8a71-55a4c0583098_del#033[00m
Oct  2 09:03:28 np0005465987 nova_compute[230713]: 2025-10-02 13:03:28.554 2 INFO nova.virt.libvirt.driver [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Deletion of /var/lib/nova/instances/b0c2e2d6-1438-4b68-8a71-55a4c0583098_del complete#033[00m
Oct  2 09:03:28 np0005465987 nova_compute[230713]: 2025-10-02 13:03:28.665 2 INFO nova.compute.manager [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Took 2.64 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:03:28 np0005465987 nova_compute[230713]: 2025-10-02 13:03:28.666 2 DEBUG oslo.service.loopingcall [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:03:28 np0005465987 nova_compute[230713]: 2025-10-02 13:03:28.666 2 DEBUG nova.compute.manager [-] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:03:28 np0005465987 nova_compute[230713]: 2025-10-02 13:03:28.666 2 DEBUG nova.network.neutron [-] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:03:28 np0005465987 nova_compute[230713]: 2025-10-02 13:03:28.808 2 DEBUG nova.compute.manager [req-49577deb-ae13-4b1e-b3db-1edc57fdbbf5 req-953b63df-7672-444c-b317-6534eabd757e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Received event network-vif-plugged-07172cab-ff7e-4e8b-a17d-d27013d63b4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:28 np0005465987 nova_compute[230713]: 2025-10-02 13:03:28.809 2 DEBUG oslo_concurrency.lockutils [req-49577deb-ae13-4b1e-b3db-1edc57fdbbf5 req-953b63df-7672-444c-b317-6534eabd757e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:28 np0005465987 nova_compute[230713]: 2025-10-02 13:03:28.809 2 DEBUG oslo_concurrency.lockutils [req-49577deb-ae13-4b1e-b3db-1edc57fdbbf5 req-953b63df-7672-444c-b317-6534eabd757e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:28 np0005465987 nova_compute[230713]: 2025-10-02 13:03:28.809 2 DEBUG oslo_concurrency.lockutils [req-49577deb-ae13-4b1e-b3db-1edc57fdbbf5 req-953b63df-7672-444c-b317-6534eabd757e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:28 np0005465987 nova_compute[230713]: 2025-10-02 13:03:28.810 2 DEBUG nova.compute.manager [req-49577deb-ae13-4b1e-b3db-1edc57fdbbf5 req-953b63df-7672-444c-b317-6534eabd757e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] No waiting events found dispatching network-vif-plugged-07172cab-ff7e-4e8b-a17d-d27013d63b4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:28 np0005465987 nova_compute[230713]: 2025-10-02 13:03:28.810 2 WARNING nova.compute.manager [req-49577deb-ae13-4b1e-b3db-1edc57fdbbf5 req-953b63df-7672-444c-b317-6534eabd757e d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Received unexpected event network-vif-plugged-07172cab-ff7e-4e8b-a17d-d27013d63b4d for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:03:28 np0005465987 nova_compute[230713]: 2025-10-02 13:03:28.957 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:29 np0005465987 nova_compute[230713]: 2025-10-02 13:03:29.650 2 DEBUG nova.network.neutron [req-ff99d953-bc70-47df-b6de-4864a31794d7 req-5d0dac4d-6044-423b-bd36-979d524ff1d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Updated VIF entry in instance network info cache for port 07172cab-ff7e-4e8b-a17d-d27013d63b4d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:03:29 np0005465987 nova_compute[230713]: 2025-10-02 13:03:29.650 2 DEBUG nova.network.neutron [req-ff99d953-bc70-47df-b6de-4864a31794d7 req-5d0dac4d-6044-423b-bd36-979d524ff1d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Updating instance_info_cache with network_info: [{"id": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "address": "fa:16:3e:e7:2e:8a", "network": {"id": "3e2a6e0a-a0e0-4ffa-a478-f7ace212a48f", "bridge": "br-int", "label": "tempest-network-smoke--1973286350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07172cab-ff", "ovs_interfaceid": "07172cab-ff7e-4e8b-a17d-d27013d63b4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.674400) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410209674474, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1404, "num_deletes": 256, "total_data_size": 3121154, "memory_usage": 3165448, "flush_reason": "Manual Compaction"}
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410209684586, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 2036658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74172, "largest_seqno": 75570, "table_properties": {"data_size": 2030704, "index_size": 3220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12767, "raw_average_key_size": 19, "raw_value_size": 2018706, "raw_average_value_size": 3115, "num_data_blocks": 142, "num_entries": 648, "num_filter_entries": 648, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410095, "oldest_key_time": 1759410095, "file_creation_time": 1759410209, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 10218 microseconds, and 4631 cpu microseconds.
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.684627) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 2036658 bytes OK
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.684644) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.685968) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.685979) EVENT_LOG_v1 {"time_micros": 1759410209685975, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.685994) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 3114610, prev total WAL file size 3114610, number of live WAL files 2.
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.686876) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373539' seq:72057594037927935, type:22 .. '6C6F676D0033303131' seq:0, type:0; will stop at (end)
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(1988KB)], [150(11MB)]
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410209686932, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 14106433, "oldest_snapshot_seqno": -1}
Oct  2 09:03:29 np0005465987 nova_compute[230713]: 2025-10-02 13:03:29.691 2 DEBUG oslo_concurrency.lockutils [req-ff99d953-bc70-47df-b6de-4864a31794d7 req-5d0dac4d-6044-423b-bd36-979d524ff1d4 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:29.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 9679 keys, 13960802 bytes, temperature: kUnknown
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410209770314, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 13960802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13896248, "index_size": 39225, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24261, "raw_key_size": 255031, "raw_average_key_size": 26, "raw_value_size": 13724553, "raw_average_value_size": 1417, "num_data_blocks": 1504, "num_entries": 9679, "num_filter_entries": 9679, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759410209, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.770562) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 13960802 bytes
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.771877) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.0 rd, 167.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.5 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(13.8) write-amplify(6.9) OK, records in: 10206, records dropped: 527 output_compression: NoCompression
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.771897) EVENT_LOG_v1 {"time_micros": 1759410209771888, "job": 96, "event": "compaction_finished", "compaction_time_micros": 83449, "compaction_time_cpu_micros": 30192, "output_level": 6, "num_output_files": 1, "total_output_size": 13960802, "num_input_records": 10206, "num_output_records": 9679, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410209772322, "job": 96, "event": "table_file_deletion", "file_number": 152}
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410209774450, "job": 96, "event": "table_file_deletion", "file_number": 150}
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.686736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.774539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.774546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.774547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.774549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:29 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:03:29.774550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:03:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:30.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:03:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:03:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 15K writes, 75K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1581 writes, 7733 keys, 1581 commit groups, 1.0 writes per commit group, ingest: 15.73 MB, 0.03 MB/s#012Interval WAL: 1581 writes, 1581 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     54.4      1.71              0.28        48    0.036       0      0       0.0       0.0#012  L6      1/0   13.31 MB   0.0      0.5     0.1      0.5       0.5      0.0       0.0   5.1    112.2     96.1      4.94              1.40        47    0.105    338K    25K       0.0       0.0#012 Sum      1/0   13.31 MB   0.0      0.5     0.1      0.5       0.6      0.1       0.0   6.1     83.3     85.4      6.66              1.68        95    0.070    338K    25K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.6     76.2     79.3      1.05              0.23        12    0.087     58K   3136       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.5       0.5      0.0       0.0   0.0    112.2     96.1      4.94              1.40        47    0.105    338K    25K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     54.5      1.71              0.28        47    0.036       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.091, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.55 GB write, 0.11 MB/s write, 0.54 GB read, 0.10 MB/s read, 6.7 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9644871f0#2 capacity: 304.00 MB usage: 59.96 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000401 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3436,57.47 MB,18.9058%) FilterBlock(95,943.05 KB,0.302942%) IndexBlock(95,1.56 MB,0.514086%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 09:03:31 np0005465987 nova_compute[230713]: 2025-10-02 13:03:31.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:31 np0005465987 nova_compute[230713]: 2025-10-02 13:03:31.657 2 DEBUG nova.network.neutron [-] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:31 np0005465987 nova_compute[230713]: 2025-10-02 13:03:31.677 2 INFO nova.compute.manager [-] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Took 3.01 seconds to deallocate network for instance.#033[00m
Oct  2 09:03:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:03:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:31.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:03:31 np0005465987 nova_compute[230713]: 2025-10-02 13:03:31.760 2 DEBUG oslo_concurrency.lockutils [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:31 np0005465987 nova_compute[230713]: 2025-10-02 13:03:31.760 2 DEBUG oslo_concurrency.lockutils [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:31 np0005465987 nova_compute[230713]: 2025-10-02 13:03:31.767 2 DEBUG nova.compute.manager [req-4901a2e1-a9aa-4567-bebc-f87ee3f7f283 req-d32e3cb8-6e32-4888-b7c3-91c242d59393 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Received event network-vif-deleted-07172cab-ff7e-4e8b-a17d-d27013d63b4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:31 np0005465987 nova_compute[230713]: 2025-10-02 13:03:31.815 2 DEBUG oslo_concurrency.processutils [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:31 np0005465987 nova_compute[230713]: 2025-10-02 13:03:31.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:31 np0005465987 nova_compute[230713]: 2025-10-02 13:03:31.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:03:31 np0005465987 nova_compute[230713]: 2025-10-02 13:03:31.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:03:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:03:32 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/612150870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:03:32 np0005465987 nova_compute[230713]: 2025-10-02 13:03:32.238 2 DEBUG oslo_concurrency.processutils [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:32 np0005465987 nova_compute[230713]: 2025-10-02 13:03:32.243 2 DEBUG nova.compute.provider_tree [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:03:32 np0005465987 nova_compute[230713]: 2025-10-02 13:03:32.267 2 DEBUG nova.scheduler.client.report [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:03:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:03:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:32.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:03:32 np0005465987 nova_compute[230713]: 2025-10-02 13:03:32.333 2 DEBUG oslo_concurrency.lockutils [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:32 np0005465987 nova_compute[230713]: 2025-10-02 13:03:32.386 2 INFO nova.scheduler.client.report [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Deleted allocations for instance b0c2e2d6-1438-4b68-8a71-55a4c0583098#033[00m
Oct  2 09:03:32 np0005465987 nova_compute[230713]: 2025-10-02 13:03:32.511 2 DEBUG oslo_concurrency.lockutils [None req-59ee4991-016b-4e63-8c87-8deab297a077 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "b0c2e2d6-1438-4b68-8a71-55a4c0583098" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:32 np0005465987 nova_compute[230713]: 2025-10-02 13:03:32.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:32 np0005465987 nova_compute[230713]: 2025-10-02 13:03:32.591 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:32 np0005465987 nova_compute[230713]: 2025-10-02 13:03:32.591 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:32 np0005465987 nova_compute[230713]: 2025-10-02 13:03:32.591 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:03:32 np0005465987 nova_compute[230713]: 2025-10-02 13:03:32.592 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b0c2e2d6-1438-4b68-8a71-55a4c0583098 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:32 np0005465987 nova_compute[230713]: 2025-10-02 13:03:32.629 2 DEBUG nova.compute.utils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010#033[00m
Oct  2 09:03:32 np0005465987 nova_compute[230713]: 2025-10-02 13:03:32.771 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:03:33 np0005465987 nova_compute[230713]: 2025-10-02 13:03:33.324 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:33 np0005465987 nova_compute[230713]: 2025-10-02 13:03:33.358 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-b0c2e2d6-1438-4b68-8a71-55a4c0583098" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:33 np0005465987 nova_compute[230713]: 2025-10-02 13:03:33.358 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:03:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:33.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:33 np0005465987 nova_compute[230713]: 2025-10-02 13:03:33.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:33 np0005465987 nova_compute[230713]: 2025-10-02 13:03:33.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:03:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:34.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:03:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:35.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:35 np0005465987 nova_compute[230713]: 2025-10-02 13:03:35.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.012 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.012 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.013 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.013 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.013 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:36.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:03:36 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3504288442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.475 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.639 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.640 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4327MB free_disk=20.851673126220703GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.640 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.641 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.771 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.771 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:03:36 np0005465987 nova_compute[230713]: 2025-10-02 13:03:36.788 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:03:37 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1883656776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:03:37 np0005465987 nova_compute[230713]: 2025-10-02 13:03:37.206 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:37 np0005465987 nova_compute[230713]: 2025-10-02 13:03:37.211 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:03:37 np0005465987 nova_compute[230713]: 2025-10-02 13:03:37.235 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:03:37 np0005465987 nova_compute[230713]: 2025-10-02 13:03:37.261 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:03:37 np0005465987 nova_compute[230713]: 2025-10-02 13:03:37.261 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:37 np0005465987 nova_compute[230713]: 2025-10-02 13:03:37.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:37.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:37 np0005465987 podman[307127]: 2025-10-02 13:03:37.856627708 +0000 UTC m=+0.074758957 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  2 09:03:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:38.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:39 np0005465987 nova_compute[230713]: 2025-10-02 13:03:39.264 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:39 np0005465987 nova_compute[230713]: 2025-10-02 13:03:39.264 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:39.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:40.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:41 np0005465987 nova_compute[230713]: 2025-10-02 13:03:41.465 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410206.4643853, b0c2e2d6-1438-4b68-8a71-55a4c0583098 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:03:41 np0005465987 nova_compute[230713]: 2025-10-02 13:03:41.466 2 INFO nova.compute.manager [-] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:03:41 np0005465987 nova_compute[230713]: 2025-10-02 13:03:41.492 2 DEBUG nova.compute.manager [None req-c549cddd-9330-40dd-8d7e-701910931adc - - - - - -] [instance: b0c2e2d6-1438-4b68-8a71-55a4c0583098] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:41 np0005465987 nova_compute[230713]: 2025-10-02 13:03:41.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:41.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:41 np0005465987 podman[307148]: 2025-10-02 13:03:41.845650289 +0000 UTC m=+0.058467216 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 09:03:41 np0005465987 podman[307146]: 2025-10-02 13:03:41.857392427 +0000 UTC m=+0.077415139 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller)
Oct  2 09:03:41 np0005465987 podman[307147]: 2025-10-02 13:03:41.861069637 +0000 UTC m=+0.067752128 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2)
Oct  2 09:03:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:03:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:42.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:03:42 np0005465987 nova_compute[230713]: 2025-10-02 13:03:42.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:43.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:43 np0005465987 nova_compute[230713]: 2025-10-02 13:03:43.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:43 np0005465987 nova_compute[230713]: 2025-10-02 13:03:43.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:03:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:44.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:45.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:46 np0005465987 nova_compute[230713]: 2025-10-02 13:03:46.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:46.270 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:03:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:46.272 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:03:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:46.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:46 np0005465987 nova_compute[230713]: 2025-10-02 13:03:46.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:47 np0005465987 nova_compute[230713]: 2025-10-02 13:03:47.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:47.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:48.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:49.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:50 np0005465987 nova_compute[230713]: 2025-10-02 13:03:50.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:50 np0005465987 nova_compute[230713]: 2025-10-02 13:03:50.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:50.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:51 np0005465987 nova_compute[230713]: 2025-10-02 13:03:51.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:51.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:52.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:52 np0005465987 nova_compute[230713]: 2025-10-02 13:03:52.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:53.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:03:54.274 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:03:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:54.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:03:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:03:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:55.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:03:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:03:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:56.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:03:56 np0005465987 nova_compute[230713]: 2025-10-02 13:03:56.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:57 np0005465987 nova_compute[230713]: 2025-10-02 13:03:57.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:03:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:57.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:03:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:03:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:58.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:03:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:03:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:59.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:00.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:01 np0005465987 nova_compute[230713]: 2025-10-02 13:04:01.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:01.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:04:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:02.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:04:02 np0005465987 nova_compute[230713]: 2025-10-02 13:04:02.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:04:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:04:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:04:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:04:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:04:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:04:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:04:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:03.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:04.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:04 np0005465987 ceph-mgr[81180]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3158772141
Oct  2 09:04:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:05.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:06.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:06 np0005465987 nova_compute[230713]: 2025-10-02 13:04:06.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:07 np0005465987 nova_compute[230713]: 2025-10-02 13:04:07.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:04:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:07.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:04:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:08.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:08 np0005465987 podman[307485]: 2025-10-02 13:04:08.782631251 +0000 UTC m=+0.058822896 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 09:04:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:04:09 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:04:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:04:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:09.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:04:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:10.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:11 np0005465987 nova_compute[230713]: 2025-10-02 13:04:11.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:11.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:12.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:12 np0005465987 nova_compute[230713]: 2025-10-02 13:04:12.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:12 np0005465987 podman[307531]: 2025-10-02 13:04:12.843507019 +0000 UTC m=+0.055003772 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 09:04:12 np0005465987 podman[307532]: 2025-10-02 13:04:12.849824921 +0000 UTC m=+0.056754340 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 09:04:12 np0005465987 podman[307530]: 2025-10-02 13:04:12.865868215 +0000 UTC m=+0.078547940 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Oct  2 09:04:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:13.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:14.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:15.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:16.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:16 np0005465987 nova_compute[230713]: 2025-10-02 13:04:16.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:17 np0005465987 nova_compute[230713]: 2025-10-02 13:04:17.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:17.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:18.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:19 np0005465987 nova_compute[230713]: 2025-10-02 13:04:19.213 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "80f30891-f4d0-47d1-900e-23551cdab3aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:19 np0005465987 nova_compute[230713]: 2025-10-02 13:04:19.213 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:19 np0005465987 nova_compute[230713]: 2025-10-02 13:04:19.267 2 DEBUG nova.compute.manager [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:04:19 np0005465987 nova_compute[230713]: 2025-10-02 13:04:19.395 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:19 np0005465987 nova_compute[230713]: 2025-10-02 13:04:19.395 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:19 np0005465987 nova_compute[230713]: 2025-10-02 13:04:19.403 2 DEBUG nova.virt.hardware [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:04:19 np0005465987 nova_compute[230713]: 2025-10-02 13:04:19.404 2 INFO nova.compute.claims [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:04:19 np0005465987 nova_compute[230713]: 2025-10-02 13:04:19.667 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:19.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:20 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/597105384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.085 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.091 2 DEBUG nova.compute.provider_tree [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.130 2 DEBUG nova.scheduler.client.report [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.227 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.228 2 DEBUG nova.compute.manager [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.316 2 DEBUG nova.compute.manager [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.317 2 DEBUG nova.network.neutron [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:04:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:20.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.395 2 INFO nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.470 2 DEBUG nova.compute.manager [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.739 2 DEBUG nova.policy [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb366465e6154871b8a53c9f500105ce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.774 2 DEBUG nova.compute.manager [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.775 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.775 2 INFO nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Creating image(s)#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.798 2 DEBUG nova.storage.rbd_utils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80f30891-f4d0-47d1-900e-23551cdab3aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.823 2 DEBUG nova.storage.rbd_utils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80f30891-f4d0-47d1-900e-23551cdab3aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.852 2 DEBUG nova.storage.rbd_utils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80f30891-f4d0-47d1-900e-23551cdab3aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.856 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.923 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.924 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.925 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.925 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.950 2 DEBUG nova.storage.rbd_utils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80f30891-f4d0-47d1-900e-23551cdab3aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:20 np0005465987 nova_compute[230713]: 2025-10-02 13:04:20.953 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 80f30891-f4d0-47d1-900e-23551cdab3aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:21 np0005465987 nova_compute[230713]: 2025-10-02 13:04:21.208 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 80f30891-f4d0-47d1-900e-23551cdab3aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:21 np0005465987 nova_compute[230713]: 2025-10-02 13:04:21.280 2 DEBUG nova.storage.rbd_utils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] resizing rbd image 80f30891-f4d0-47d1-900e-23551cdab3aa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:04:21 np0005465987 nova_compute[230713]: 2025-10-02 13:04:21.393 2 DEBUG nova.objects.instance [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'migration_context' on Instance uuid 80f30891-f4d0-47d1-900e-23551cdab3aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:21 np0005465987 nova_compute[230713]: 2025-10-02 13:04:21.475 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:04:21 np0005465987 nova_compute[230713]: 2025-10-02 13:04:21.476 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Ensure instance console log exists: /var/lib/nova/instances/80f30891-f4d0-47d1-900e-23551cdab3aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:04:21 np0005465987 nova_compute[230713]: 2025-10-02 13:04:21.476 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:21 np0005465987 nova_compute[230713]: 2025-10-02 13:04:21.476 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:21 np0005465987 nova_compute[230713]: 2025-10-02 13:04:21.477 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:21 np0005465987 nova_compute[230713]: 2025-10-02 13:04:21.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:21.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:22.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:22 np0005465987 nova_compute[230713]: 2025-10-02 13:04:22.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:23 np0005465987 nova_compute[230713]: 2025-10-02 13:04:23.687 2 DEBUG nova.network.neutron [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Successfully created port: b1ddb776-e25b-4b8b-902f-db25970d2db7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:04:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:23.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:24.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:25 np0005465987 nova_compute[230713]: 2025-10-02 13:04:25.028 2 DEBUG nova.network.neutron [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Successfully updated port: b1ddb776-e25b-4b8b-902f-db25970d2db7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:04:25 np0005465987 nova_compute[230713]: 2025-10-02 13:04:25.049 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:04:25 np0005465987 nova_compute[230713]: 2025-10-02 13:04:25.049 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquired lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:04:25 np0005465987 nova_compute[230713]: 2025-10-02 13:04:25.049 2 DEBUG nova.network.neutron [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:04:25 np0005465987 nova_compute[230713]: 2025-10-02 13:04:25.203 2 DEBUG nova.compute.manager [req-e0b99d9c-8042-4ed5-8f4e-4e722d36e97f req-86a0cab8-9332-4d03-b776-a141b40dbee0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Received event network-changed-b1ddb776-e25b-4b8b-902f-db25970d2db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:25 np0005465987 nova_compute[230713]: 2025-10-02 13:04:25.203 2 DEBUG nova.compute.manager [req-e0b99d9c-8042-4ed5-8f4e-4e722d36e97f req-86a0cab8-9332-4d03-b776-a141b40dbee0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Refreshing instance network info cache due to event network-changed-b1ddb776-e25b-4b8b-902f-db25970d2db7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:04:25 np0005465987 nova_compute[230713]: 2025-10-02 13:04:25.204 2 DEBUG oslo_concurrency.lockutils [req-e0b99d9c-8042-4ed5-8f4e-4e722d36e97f req-86a0cab8-9332-4d03-b776-a141b40dbee0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:04:25 np0005465987 nova_compute[230713]: 2025-10-02 13:04:25.256 2 DEBUG nova.network.neutron [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:04:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:25.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:26.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.675 2 DEBUG nova.network.neutron [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Updating instance_info_cache with network_info: [{"id": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "address": "fa:16:3e:e6:55:d4", "network": {"id": "5381c09c-d446-4df3-9be4-8be3f242a8f7", "bridge": "br-int", "label": "tempest-network-smoke--1440016652", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1ddb776-e2", "ovs_interfaceid": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.707 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Releasing lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.708 2 DEBUG nova.compute.manager [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Instance network_info: |[{"id": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "address": "fa:16:3e:e6:55:d4", "network": {"id": "5381c09c-d446-4df3-9be4-8be3f242a8f7", "bridge": "br-int", "label": "tempest-network-smoke--1440016652", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1ddb776-e2", "ovs_interfaceid": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.708 2 DEBUG oslo_concurrency.lockutils [req-e0b99d9c-8042-4ed5-8f4e-4e722d36e97f req-86a0cab8-9332-4d03-b776-a141b40dbee0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.709 2 DEBUG nova.network.neutron [req-e0b99d9c-8042-4ed5-8f4e-4e722d36e97f req-86a0cab8-9332-4d03-b776-a141b40dbee0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Refreshing network info cache for port b1ddb776-e25b-4b8b-902f-db25970d2db7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.713 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Start _get_guest_xml network_info=[{"id": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "address": "fa:16:3e:e6:55:d4", "network": {"id": "5381c09c-d446-4df3-9be4-8be3f242a8f7", "bridge": "br-int", "label": "tempest-network-smoke--1440016652", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1ddb776-e2", "ovs_interfaceid": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.718 2 WARNING nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.723 2 DEBUG nova.virt.libvirt.host [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.724 2 DEBUG nova.virt.libvirt.host [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.729 2 DEBUG nova.virt.libvirt.host [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.730 2 DEBUG nova.virt.libvirt.host [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.732 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.733 2 DEBUG nova.virt.hardware [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.734 2 DEBUG nova.virt.hardware [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.734 2 DEBUG nova.virt.hardware [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.735 2 DEBUG nova.virt.hardware [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.735 2 DEBUG nova.virt.hardware [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.736 2 DEBUG nova.virt.hardware [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.736 2 DEBUG nova.virt.hardware [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.737 2 DEBUG nova.virt.hardware [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.737 2 DEBUG nova.virt.hardware [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.738 2 DEBUG nova.virt.hardware [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.738 2 DEBUG nova.virt.hardware [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:04:26 np0005465987 nova_compute[230713]: 2025-10-02 13:04:26.743 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:04:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1867330307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.179 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.203 2 DEBUG nova.storage.rbd_utils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80f30891-f4d0-47d1-900e-23551cdab3aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.207 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:04:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2193588837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.629 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.630 2 DEBUG nova.virt.libvirt.vif [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1372956208',display_name='tempest-TestNetworkBasicOps-server-1372956208',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1372956208',id=203,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGntx2wpYl+OT989Wxe0aasfIJ1pG5E2ZX4O0ejDmg1Cma3C6o13U/JATqNvaHrQNlbP1gshYOATN0s8uJj3Y+TxRGhRYBZ8rQ0XHxxjXsz/SJtDKZ+fIFr7lWoGJrYLVg==',key_name='tempest-TestNetworkBasicOps-2090096111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-i4uy2fsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:20Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=80f30891-f4d0-47d1-900e-23551cdab3aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "address": "fa:16:3e:e6:55:d4", "network": {"id": "5381c09c-d446-4df3-9be4-8be3f242a8f7", "bridge": "br-int", "label": "tempest-network-smoke--1440016652", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1ddb776-e2", "ovs_interfaceid": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.631 2 DEBUG nova.network.os_vif_util [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "address": "fa:16:3e:e6:55:d4", "network": {"id": "5381c09c-d446-4df3-9be4-8be3f242a8f7", "bridge": "br-int", "label": "tempest-network-smoke--1440016652", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1ddb776-e2", "ovs_interfaceid": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.632 2 DEBUG nova.network.os_vif_util [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:55:d4,bridge_name='br-int',has_traffic_filtering=True,id=b1ddb776-e25b-4b8b-902f-db25970d2db7,network=Network(5381c09c-d446-4df3-9be4-8be3f242a8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1ddb776-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.633 2 DEBUG nova.objects.instance [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'pci_devices' on Instance uuid 80f30891-f4d0-47d1-900e-23551cdab3aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.664 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  <uuid>80f30891-f4d0-47d1-900e-23551cdab3aa</uuid>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  <name>instance-000000cb</name>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestNetworkBasicOps-server-1372956208</nova:name>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 13:04:26</nova:creationTime>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <nova:user uuid="fb366465e6154871b8a53c9f500105ce">tempest-TestNetworkBasicOps-1692262680-project-member</nova:user>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <nova:project uuid="ce2ca82c03554560b55ed747ae63f1fb">tempest-TestNetworkBasicOps-1692262680</nova:project>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <nova:port uuid="b1ddb776-e25b-4b8b-902f-db25970d2db7">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <system>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <entry name="serial">80f30891-f4d0-47d1-900e-23551cdab3aa</entry>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <entry name="uuid">80f30891-f4d0-47d1-900e-23551cdab3aa</entry>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    </system>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  <os>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  </os>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  <features>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  </features>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  </clock>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  <devices>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/80f30891-f4d0-47d1-900e-23551cdab3aa_disk">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/80f30891-f4d0-47d1-900e-23551cdab3aa_disk.config">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:e6:55:d4"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <target dev="tapb1ddb776-e2"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    </interface>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/80f30891-f4d0-47d1-900e-23551cdab3aa/console.log" append="off"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    </serial>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <video>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    </video>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    </rng>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 09:04:27 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 09:04:27 np0005465987 nova_compute[230713]:  </devices>
Oct  2 09:04:27 np0005465987 nova_compute[230713]: </domain>
Oct  2 09:04:27 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.665 2 DEBUG nova.compute.manager [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Preparing to wait for external event network-vif-plugged-b1ddb776-e25b-4b8b-902f-db25970d2db7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.665 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.666 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.666 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.666 2 DEBUG nova.virt.libvirt.vif [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1372956208',display_name='tempest-TestNetworkBasicOps-server-1372956208',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1372956208',id=203,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGntx2wpYl+OT989Wxe0aasfIJ1pG5E2ZX4O0ejDmg1Cma3C6o13U/JATqNvaHrQNlbP1gshYOATN0s8uJj3Y+TxRGhRYBZ8rQ0XHxxjXsz/SJtDKZ+fIFr7lWoGJrYLVg==',key_name='tempest-TestNetworkBasicOps-2090096111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-i4uy2fsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:20Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=80f30891-f4d0-47d1-900e-23551cdab3aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "address": "fa:16:3e:e6:55:d4", "network": {"id": "5381c09c-d446-4df3-9be4-8be3f242a8f7", "bridge": "br-int", "label": "tempest-network-smoke--1440016652", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1ddb776-e2", "ovs_interfaceid": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.667 2 DEBUG nova.network.os_vif_util [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "address": "fa:16:3e:e6:55:d4", "network": {"id": "5381c09c-d446-4df3-9be4-8be3f242a8f7", "bridge": "br-int", "label": "tempest-network-smoke--1440016652", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1ddb776-e2", "ovs_interfaceid": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.667 2 DEBUG nova.network.os_vif_util [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:55:d4,bridge_name='br-int',has_traffic_filtering=True,id=b1ddb776-e25b-4b8b-902f-db25970d2db7,network=Network(5381c09c-d446-4df3-9be4-8be3f242a8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1ddb776-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.668 2 DEBUG os_vif [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:55:d4,bridge_name='br-int',has_traffic_filtering=True,id=b1ddb776-e25b-4b8b-902f-db25970d2db7,network=Network(5381c09c-d446-4df3-9be4-8be3f242a8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1ddb776-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.669 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.669 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.672 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1ddb776-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.672 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1ddb776-e2, col_values=(('external_ids', {'iface-id': 'b1ddb776-e25b-4b8b-902f-db25970d2db7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:55:d4', 'vm-uuid': '80f30891-f4d0-47d1-900e-23551cdab3aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:27 np0005465987 NetworkManager[44910]: <info>  [1759410267.6746] manager: (tapb1ddb776-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/359)
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.682 2 INFO os_vif [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:55:d4,bridge_name='br-int',has_traffic_filtering=True,id=b1ddb776-e25b-4b8b-902f-db25970d2db7,network=Network(5381c09c-d446-4df3-9be4-8be3f242a8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1ddb776-e2')#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.726 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.727 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.727 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] No VIF found with MAC fa:16:3e:e6:55:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.727 2 INFO nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Using config drive#033[00m
Oct  2 09:04:27 np0005465987 nova_compute[230713]: 2025-10-02 13:04:27.750 2 DEBUG nova.storage.rbd_utils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80f30891-f4d0-47d1-900e-23551cdab3aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:27.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:27.991 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:27.992 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:27.993 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:28 np0005465987 nova_compute[230713]: 2025-10-02 13:04:28.296 2 INFO nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Creating config drive at /var/lib/nova/instances/80f30891-f4d0-47d1-900e-23551cdab3aa/disk.config#033[00m
Oct  2 09:04:28 np0005465987 nova_compute[230713]: 2025-10-02 13:04:28.303 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80f30891-f4d0-47d1-900e-23551cdab3aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi96lmjhw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:28.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:28 np0005465987 nova_compute[230713]: 2025-10-02 13:04:28.432 2 DEBUG nova.network.neutron [req-e0b99d9c-8042-4ed5-8f4e-4e722d36e97f req-86a0cab8-9332-4d03-b776-a141b40dbee0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Updated VIF entry in instance network info cache for port b1ddb776-e25b-4b8b-902f-db25970d2db7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:04:28 np0005465987 nova_compute[230713]: 2025-10-02 13:04:28.433 2 DEBUG nova.network.neutron [req-e0b99d9c-8042-4ed5-8f4e-4e722d36e97f req-86a0cab8-9332-4d03-b776-a141b40dbee0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Updating instance_info_cache with network_info: [{"id": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "address": "fa:16:3e:e6:55:d4", "network": {"id": "5381c09c-d446-4df3-9be4-8be3f242a8f7", "bridge": "br-int", "label": "tempest-network-smoke--1440016652", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1ddb776-e2", "ovs_interfaceid": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:28 np0005465987 nova_compute[230713]: 2025-10-02 13:04:28.444 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80f30891-f4d0-47d1-900e-23551cdab3aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi96lmjhw" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:28 np0005465987 nova_compute[230713]: 2025-10-02 13:04:28.471 2 DEBUG nova.storage.rbd_utils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] rbd image 80f30891-f4d0-47d1-900e-23551cdab3aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:28 np0005465987 nova_compute[230713]: 2025-10-02 13:04:28.475 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80f30891-f4d0-47d1-900e-23551cdab3aa/disk.config 80f30891-f4d0-47d1-900e-23551cdab3aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:28 np0005465987 nova_compute[230713]: 2025-10-02 13:04:28.534 2 DEBUG oslo_concurrency.lockutils [req-e0b99d9c-8042-4ed5-8f4e-4e722d36e97f req-86a0cab8-9332-4d03-b776-a141b40dbee0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:04:28 np0005465987 nova_compute[230713]: 2025-10-02 13:04:28.641 2 DEBUG oslo_concurrency.processutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80f30891-f4d0-47d1-900e-23551cdab3aa/disk.config 80f30891-f4d0-47d1-900e-23551cdab3aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:28 np0005465987 nova_compute[230713]: 2025-10-02 13:04:28.644 2 INFO nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Deleting local config drive /var/lib/nova/instances/80f30891-f4d0-47d1-900e-23551cdab3aa/disk.config because it was imported into RBD.#033[00m
Oct  2 09:04:28 np0005465987 kernel: tapb1ddb776-e2: entered promiscuous mode
Oct  2 09:04:28 np0005465987 NetworkManager[44910]: <info>  [1759410268.7157] manager: (tapb1ddb776-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/360)
Oct  2 09:04:28 np0005465987 systemd-udevd[307916]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:04:28 np0005465987 nova_compute[230713]: 2025-10-02 13:04:28.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:28 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:28Z|00783|binding|INFO|Claiming lport b1ddb776-e25b-4b8b-902f-db25970d2db7 for this chassis.
Oct  2 09:04:28 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:28Z|00784|binding|INFO|b1ddb776-e25b-4b8b-902f-db25970d2db7: Claiming fa:16:3e:e6:55:d4 10.100.0.9
Oct  2 09:04:28 np0005465987 NetworkManager[44910]: <info>  [1759410268.7722] device (tapb1ddb776-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:04:28 np0005465987 NetworkManager[44910]: <info>  [1759410268.7731] device (tapb1ddb776-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.774 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:55:d4 10.100.0.9'], port_security=['fa:16:3e:e6:55:d4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '80f30891-f4d0-47d1-900e-23551cdab3aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5381c09c-d446-4df3-9be4-8be3f242a8f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd3e1f742-f4a1-4be3-ad95-c673c5f8d534', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb4bf8e9-e02f-4a11-8d58-0035adb0e6ec, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=b1ddb776-e25b-4b8b-902f-db25970d2db7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.776 138325 INFO neutron.agent.ovn.metadata.agent [-] Port b1ddb776-e25b-4b8b-902f-db25970d2db7 in datapath 5381c09c-d446-4df3-9be4-8be3f242a8f7 bound to our chassis#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.777 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5381c09c-d446-4df3-9be4-8be3f242a8f7#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.791 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[205e617b-1943-46cc-a130-6a294c077c50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.792 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5381c09c-d1 in ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.794 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5381c09c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.794 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[568e4128-4fde-4338-a858-a7f70a0040f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.795 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e186e505-6b8e-4c44-89a8-e62df89d5c26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:28 np0005465987 systemd-machined[188335]: New machine qemu-89-instance-000000cb.
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.809 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[b69e5d3c-f80b-42e1-bc73-8bfedc209eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:28 np0005465987 systemd[1]: Started Virtual Machine qemu-89-instance-000000cb.
Oct  2 09:04:28 np0005465987 nova_compute[230713]: 2025-10-02 13:04:28.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.837 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d3170a47-1424-4e9a-8e17-9bd036313916]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:28 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:28Z|00785|binding|INFO|Setting lport b1ddb776-e25b-4b8b-902f-db25970d2db7 ovn-installed in OVS
Oct  2 09:04:28 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:28Z|00786|binding|INFO|Setting lport b1ddb776-e25b-4b8b-902f-db25970d2db7 up in Southbound
Oct  2 09:04:28 np0005465987 nova_compute[230713]: 2025-10-02 13:04:28.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.878 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3d76bc-6724-4599-a32c-cd37a1a3fcba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:28 np0005465987 NetworkManager[44910]: <info>  [1759410268.8858] manager: (tap5381c09c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/361)
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.884 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[322c1bb8-8849-4f46-a626-aa43dc8d96d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.917 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a9b525-cec9-483d-8485-a5bb8a14b075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.919 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[3d20948c-dd44-497f-8e03-deaa33df23c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:28 np0005465987 NetworkManager[44910]: <info>  [1759410268.9419] device (tap5381c09c-d0): carrier: link connected
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.947 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6d45799d-d6f7-4de6-98fa-10b6038f63bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.963 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[893fb132-a605-410b-952a-0b3a83bcfb23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5381c09c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:17:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833751, 'reachable_time': 18035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307952, 'error': None, 'target': 'ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.977 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e79355c8-6564-4f79-88e5-59303e4e9d06]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:177e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 833751, 'tstamp': 833751}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307953, 'error': None, 'target': 'ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:28.993 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[954f6a33-647e-4d44-9e22-a502a5b8032f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5381c09c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:17:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833751, 'reachable_time': 18035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307954, 'error': None, 'target': 'ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:29.023 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b930f4-9534-4fb8-a0e7-67244d2da45d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:29.098 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0b320586-bded-42bc-8abc-b6c6598ace70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:29.099 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5381c09c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:29.100 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:29.100 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5381c09c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:29 np0005465987 kernel: tap5381c09c-d0: entered promiscuous mode
Oct  2 09:04:29 np0005465987 NetworkManager[44910]: <info>  [1759410269.1033] manager: (tap5381c09c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:29.109 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5381c09c-d0, col_values=(('external_ids', {'iface-id': '76358f8c-5928-4806-8bf7-b75615b3283a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:29 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:29Z|00787|binding|INFO|Releasing lport 76358f8c-5928-4806-8bf7-b75615b3283a from this chassis (sb_readonly=0)
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:29.115 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5381c09c-d446-4df3-9be4-8be3f242a8f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5381c09c-d446-4df3-9be4-8be3f242a8f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:29.116 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3433a8c7-3b23-45bf-b9bf-146dfdb02667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:29.117 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-5381c09c-d446-4df3-9be4-8be3f242a8f7
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/5381c09c-d446-4df3-9be4-8be3f242a8f7.pid.haproxy
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 5381c09c-d446-4df3-9be4-8be3f242a8f7
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:29.118 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7', 'env', 'PROCESS_TAG=haproxy-5381c09c-d446-4df3-9be4-8be3f242a8f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5381c09c-d446-4df3-9be4-8be3f242a8f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.253 2 DEBUG nova.compute.manager [req-192b653b-66f9-44c5-8eba-c5c74f958606 req-6d56eb78-a5a1-4ded-a70a-443d962c5a9b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Received event network-vif-plugged-b1ddb776-e25b-4b8b-902f-db25970d2db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.253 2 DEBUG oslo_concurrency.lockutils [req-192b653b-66f9-44c5-8eba-c5c74f958606 req-6d56eb78-a5a1-4ded-a70a-443d962c5a9b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.253 2 DEBUG oslo_concurrency.lockutils [req-192b653b-66f9-44c5-8eba-c5c74f958606 req-6d56eb78-a5a1-4ded-a70a-443d962c5a9b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.253 2 DEBUG oslo_concurrency.lockutils [req-192b653b-66f9-44c5-8eba-c5c74f958606 req-6d56eb78-a5a1-4ded-a70a-443d962c5a9b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.253 2 DEBUG nova.compute.manager [req-192b653b-66f9-44c5-8eba-c5c74f958606 req-6d56eb78-a5a1-4ded-a70a-443d962c5a9b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Processing event network-vif-plugged-b1ddb776-e25b-4b8b-902f-db25970d2db7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:29.435 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 09:04:29 np0005465987 podman[308028]: 2025-10-02 13:04:29.48548639 +0000 UTC m=+0.048182447 container create 7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 09:04:29 np0005465987 systemd[1]: Started libpod-conmon-7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e.scope.
Oct  2 09:04:29 np0005465987 podman[308028]: 2025-10-02 13:04:29.459348312 +0000 UTC m=+0.022044399 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:04:29 np0005465987 systemd[1]: Started libcrun container.
Oct  2 09:04:29 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cce0d2fc1b912464d5b5cf73d6f3d304917597a150ef2183697d75ab1bca6fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:04:29 np0005465987 podman[308028]: 2025-10-02 13:04:29.574108303 +0000 UTC m=+0.136804370 container init 7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:04:29 np0005465987 podman[308028]: 2025-10-02 13:04:29.58139053 +0000 UTC m=+0.144086597 container start 7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 09:04:29 np0005465987 neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7[308043]: [NOTICE]   (308047) : New worker (308049) forked
Oct  2 09:04:29 np0005465987 neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7[308043]: [NOTICE]   (308047) : Loading success.
Oct  2 09:04:29 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:29.627 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.641 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410269.6411335, 80f30891-f4d0-47d1-900e-23551cdab3aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.641 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] VM Started (Lifecycle Event)
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.643 2 DEBUG nova.compute.manager [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.646 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.649 2 INFO nova.virt.libvirt.driver [-] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Instance spawned successfully.
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.649 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.664 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.667 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.687 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.688 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.688 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.688 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.689 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.689 2 DEBUG nova.virt.libvirt.driver [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.695 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.695 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410269.641329, 80f30891-f4d0-47d1-900e-23551cdab3aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.695 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] VM Paused (Lifecycle Event)
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.732 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.734 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410269.6462672, 80f30891-f4d0-47d1-900e-23551cdab3aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.734 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] VM Resumed (Lifecycle Event)
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.760 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.762 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.785 2 INFO nova.compute.manager [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Took 9.01 seconds to spawn the instance on the hypervisor.
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.785 2 DEBUG nova.compute.manager [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:04:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:29.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.838 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.923 2 INFO nova.compute.manager [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Took 10.58 seconds to build instance.
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.957 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.959 2 DEBUG oslo_concurrency.lockutils [None req-3cd6e9b8-1742-4f44-93c9-a77a94d2292f fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:29 np0005465987 nova_compute[230713]: 2025-10-02 13:04:29.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:04:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:30.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:31 np0005465987 nova_compute[230713]: 2025-10-02 13:04:31.350 2 DEBUG nova.compute.manager [req-870ccfd9-f15f-4df2-a27b-403534c2f8e1 req-93a04e6c-e5d2-4fd1-8fd8-029c9a2cf7c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Received event network-vif-plugged-b1ddb776-e25b-4b8b-902f-db25970d2db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:04:31 np0005465987 nova_compute[230713]: 2025-10-02 13:04:31.351 2 DEBUG oslo_concurrency.lockutils [req-870ccfd9-f15f-4df2-a27b-403534c2f8e1 req-93a04e6c-e5d2-4fd1-8fd8-029c9a2cf7c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:31 np0005465987 nova_compute[230713]: 2025-10-02 13:04:31.351 2 DEBUG oslo_concurrency.lockutils [req-870ccfd9-f15f-4df2-a27b-403534c2f8e1 req-93a04e6c-e5d2-4fd1-8fd8-029c9a2cf7c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:31 np0005465987 nova_compute[230713]: 2025-10-02 13:04:31.351 2 DEBUG oslo_concurrency.lockutils [req-870ccfd9-f15f-4df2-a27b-403534c2f8e1 req-93a04e6c-e5d2-4fd1-8fd8-029c9a2cf7c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:31 np0005465987 nova_compute[230713]: 2025-10-02 13:04:31.351 2 DEBUG nova.compute.manager [req-870ccfd9-f15f-4df2-a27b-403534c2f8e1 req-93a04e6c-e5d2-4fd1-8fd8-029c9a2cf7c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] No waiting events found dispatching network-vif-plugged-b1ddb776-e25b-4b8b-902f-db25970d2db7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:04:31 np0005465987 nova_compute[230713]: 2025-10-02 13:04:31.351 2 WARNING nova.compute.manager [req-870ccfd9-f15f-4df2-a27b-403534c2f8e1 req-93a04e6c-e5d2-4fd1-8fd8-029c9a2cf7c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Received unexpected event network-vif-plugged-b1ddb776-e25b-4b8b-902f-db25970d2db7 for instance with vm_state active and task_state None.
Oct  2 09:04:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:04:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:31.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:04:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:32.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:32 np0005465987 nova_compute[230713]: 2025-10-02 13:04:32.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:32 np0005465987 nova_compute[230713]: 2025-10-02 13:04:32.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:32 np0005465987 nova_compute[230713]: 2025-10-02 13:04:32.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:04:32 np0005465987 nova_compute[230713]: 2025-10-02 13:04:32.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 09:04:32 np0005465987 nova_compute[230713]: 2025-10-02 13:04:32.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 09:04:33 np0005465987 nova_compute[230713]: 2025-10-02 13:04:33.230 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:04:33 np0005465987 nova_compute[230713]: 2025-10-02 13:04:33.231 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:04:33 np0005465987 nova_compute[230713]: 2025-10-02 13:04:33.231 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 09:04:33 np0005465987 nova_compute[230713]: 2025-10-02 13:04:33.232 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 80f30891-f4d0-47d1-900e-23551cdab3aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:04:33 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:33.629 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:04:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:33.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:34.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:34 np0005465987 NetworkManager[44910]: <info>  [1759410274.4363] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Oct  2 09:04:34 np0005465987 NetworkManager[44910]: <info>  [1759410274.4372] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Oct  2 09:04:34 np0005465987 nova_compute[230713]: 2025-10-02 13:04:34.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:34 np0005465987 nova_compute[230713]: 2025-10-02 13:04:34.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:34 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:34Z|00788|binding|INFO|Releasing lport 76358f8c-5928-4806-8bf7-b75615b3283a from this chassis (sb_readonly=0)
Oct  2 09:04:34 np0005465987 nova_compute[230713]: 2025-10-02 13:04:34.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:34 np0005465987 nova_compute[230713]: 2025-10-02 13:04:34.973 2 DEBUG nova.compute.manager [req-72630b67-2fb9-44c7-8c57-9497bfbd7f00 req-0c34b19c-def5-45c6-8815-15b21f44458f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Received event network-changed-b1ddb776-e25b-4b8b-902f-db25970d2db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:04:34 np0005465987 nova_compute[230713]: 2025-10-02 13:04:34.973 2 DEBUG nova.compute.manager [req-72630b67-2fb9-44c7-8c57-9497bfbd7f00 req-0c34b19c-def5-45c6-8815-15b21f44458f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Refreshing instance network info cache due to event network-changed-b1ddb776-e25b-4b8b-902f-db25970d2db7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:04:34 np0005465987 nova_compute[230713]: 2025-10-02 13:04:34.973 2 DEBUG oslo_concurrency.lockutils [req-72630b67-2fb9-44c7-8c57-9497bfbd7f00 req-0c34b19c-def5-45c6-8815-15b21f44458f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:04:35 np0005465987 nova_compute[230713]: 2025-10-02 13:04:35.421 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Updating instance_info_cache with network_info: [{"id": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "address": "fa:16:3e:e6:55:d4", "network": {"id": "5381c09c-d446-4df3-9be4-8be3f242a8f7", "bridge": "br-int", "label": "tempest-network-smoke--1440016652", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1ddb776-e2", "ovs_interfaceid": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:04:35 np0005465987 nova_compute[230713]: 2025-10-02 13:04:35.452 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:04:35 np0005465987 nova_compute[230713]: 2025-10-02 13:04:35.453 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 09:04:35 np0005465987 nova_compute[230713]: 2025-10-02 13:04:35.453 2 DEBUG oslo_concurrency.lockutils [req-72630b67-2fb9-44c7-8c57-9497bfbd7f00 req-0c34b19c-def5-45c6-8815-15b21f44458f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:04:35 np0005465987 nova_compute[230713]: 2025-10-02 13:04:35.453 2 DEBUG nova.network.neutron [req-72630b67-2fb9-44c7-8c57-9497bfbd7f00 req-0c34b19c-def5-45c6-8815-15b21f44458f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Refreshing network info cache for port b1ddb776-e25b-4b8b-902f-db25970d2db7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 09:04:35 np0005465987 nova_compute[230713]: 2025-10-02 13:04:35.455 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:04:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:35.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:35 np0005465987 nova_compute[230713]: 2025-10-02 13:04:35.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:04:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:04:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:36.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:04:36 np0005465987 nova_compute[230713]: 2025-10-02 13:04:36.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.008 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.009 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.009 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.009 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.010 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.351 2 DEBUG nova.network.neutron [req-72630b67-2fb9-44c7-8c57-9497bfbd7f00 req-0c34b19c-def5-45c6-8815-15b21f44458f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Updated VIF entry in instance network info cache for port b1ddb776-e25b-4b8b-902f-db25970d2db7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.352 2 DEBUG nova.network.neutron [req-72630b67-2fb9-44c7-8c57-9497bfbd7f00 req-0c34b19c-def5-45c6-8815-15b21f44458f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Updating instance_info_cache with network_info: [{"id": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "address": "fa:16:3e:e6:55:d4", "network": {"id": "5381c09c-d446-4df3-9be4-8be3f242a8f7", "bridge": "br-int", "label": "tempest-network-smoke--1440016652", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1ddb776-e2", "ovs_interfaceid": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.372 2 DEBUG oslo_concurrency.lockutils [req-72630b67-2fb9-44c7-8c57-9497bfbd7f00 req-0c34b19c-def5-45c6-8815-15b21f44458f d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:04:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:37 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3449800090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.440 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.518 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.519 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.688 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.689 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4198MB free_disk=20.946460723876953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.689 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.689 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.782 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 80f30891-f4d0-47d1-900e-23551cdab3aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.782 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.782 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:04:37 np0005465987 nova_compute[230713]: 2025-10-02 13:04:37.833 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:04:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:37.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:04:38 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:38 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/44426836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:38 np0005465987 nova_compute[230713]: 2025-10-02 13:04:38.274 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:38 np0005465987 nova_compute[230713]: 2025-10-02 13:04:38.281 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:04:38 np0005465987 nova_compute[230713]: 2025-10-02 13:04:38.314 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:04:38 np0005465987 nova_compute[230713]: 2025-10-02 13:04:38.364 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:04:38 np0005465987 nova_compute[230713]: 2025-10-02 13:04:38.364 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:38.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:39.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:39 np0005465987 podman[308105]: 2025-10-02 13:04:39.854544656 +0000 UTC m=+0.066970686 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:04:39 np0005465987 nova_compute[230713]: 2025-10-02 13:04:39.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:39 np0005465987 nova_compute[230713]: 2025-10-02 13:04:39.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:39 np0005465987 nova_compute[230713]: 2025-10-02 13:04:39.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:39 np0005465987 nova_compute[230713]: 2025-10-02 13:04:39.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:04:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:40.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:40 np0005465987 nova_compute[230713]: 2025-10-02 13:04:40.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000053s ======
Oct  2 09:04:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:41.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Oct  2 09:04:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:04:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:42.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:04:42 np0005465987 nova_compute[230713]: 2025-10-02 13:04:42.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:42 np0005465987 nova_compute[230713]: 2025-10-02 13:04:42.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:43 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:43Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:55:d4 10.100.0.9
Oct  2 09:04:43 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:43Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:55:d4 10.100.0.9
Oct  2 09:04:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:43.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:43 np0005465987 podman[308126]: 2025-10-02 13:04:43.857376201 +0000 UTC m=+0.061335624 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:04:43 np0005465987 podman[308124]: 2025-10-02 13:04:43.886864311 +0000 UTC m=+0.097294219 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:04:43 np0005465987 podman[308125]: 2025-10-02 13:04:43.889246766 +0000 UTC m=+0.097661619 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  2 09:04:43 np0005465987 nova_compute[230713]: 2025-10-02 13:04:43.979 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:43 np0005465987 nova_compute[230713]: 2025-10-02 13:04:43.980 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:04:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:04:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:44.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:04:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:04:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:45.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:04:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:46.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:47 np0005465987 nova_compute[230713]: 2025-10-02 13:04:47.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:47 np0005465987 nova_compute[230713]: 2025-10-02 13:04:47.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:47.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:04:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:48.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:04:49 np0005465987 nova_compute[230713]: 2025-10-02 13:04:49.679 2 INFO nova.compute.manager [None req-46a3246a-7aa8-48ed-85d6-69d427200cba fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Get console output#033[00m
Oct  2 09:04:49 np0005465987 nova_compute[230713]: 2025-10-02 13:04:49.684 14392 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:04:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:49.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:50.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:50 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:50Z|00789|binding|INFO|Releasing lport 76358f8c-5928-4806-8bf7-b75615b3283a from this chassis (sb_readonly=0)
Oct  2 09:04:50 np0005465987 nova_compute[230713]: 2025-10-02 13:04:50.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:50 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:50Z|00790|binding|INFO|Releasing lport 76358f8c-5928-4806-8bf7-b75615b3283a from this chassis (sb_readonly=0)
Oct  2 09:04:50 np0005465987 nova_compute[230713]: 2025-10-02 13:04:50.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:04:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:51.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:04:52 np0005465987 nova_compute[230713]: 2025-10-02 13:04:52.007 2 INFO nova.compute.manager [None req-1cc426b2-4938-484e-bde9-f471c4edd84c fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Get console output#033[00m
Oct  2 09:04:52 np0005465987 nova_compute[230713]: 2025-10-02 13:04:52.011 14392 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:04:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:52.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:52 np0005465987 nova_compute[230713]: 2025-10-02 13:04:52.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:53 np0005465987 NetworkManager[44910]: <info>  [1759410293.1017] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Oct  2 09:04:53 np0005465987 nova_compute[230713]: 2025-10-02 13:04:53.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:53 np0005465987 NetworkManager[44910]: <info>  [1759410293.1026] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Oct  2 09:04:53 np0005465987 nova_compute[230713]: 2025-10-02 13:04:53.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:53 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:53Z|00791|binding|INFO|Releasing lport 76358f8c-5928-4806-8bf7-b75615b3283a from this chassis (sb_readonly=0)
Oct  2 09:04:53 np0005465987 nova_compute[230713]: 2025-10-02 13:04:53.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:53 np0005465987 nova_compute[230713]: 2025-10-02 13:04:53.750 2 INFO nova.compute.manager [None req-ee1c8830-77d5-47ef-8709-f0e9bfe60713 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Get console output#033[00m
Oct  2 09:04:53 np0005465987 nova_compute[230713]: 2025-10-02 13:04:53.754 14392 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:04:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:53.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:54.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:54 np0005465987 nova_compute[230713]: 2025-10-02 13:04:54.951 2 DEBUG nova.compute.manager [req-507ac4eb-cc66-4043-a4a8-3a44080ec714 req-c615201a-d21e-472e-abad-8e474e5f0368 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Received event network-changed-b1ddb776-e25b-4b8b-902f-db25970d2db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:54 np0005465987 nova_compute[230713]: 2025-10-02 13:04:54.951 2 DEBUG nova.compute.manager [req-507ac4eb-cc66-4043-a4a8-3a44080ec714 req-c615201a-d21e-472e-abad-8e474e5f0368 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Refreshing instance network info cache due to event network-changed-b1ddb776-e25b-4b8b-902f-db25970d2db7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:04:54 np0005465987 nova_compute[230713]: 2025-10-02 13:04:54.952 2 DEBUG oslo_concurrency.lockutils [req-507ac4eb-cc66-4043-a4a8-3a44080ec714 req-c615201a-d21e-472e-abad-8e474e5f0368 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:04:54 np0005465987 nova_compute[230713]: 2025-10-02 13:04:54.952 2 DEBUG oslo_concurrency.lockutils [req-507ac4eb-cc66-4043-a4a8-3a44080ec714 req-c615201a-d21e-472e-abad-8e474e5f0368 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:04:54 np0005465987 nova_compute[230713]: 2025-10-02 13:04:54.952 2 DEBUG nova.network.neutron [req-507ac4eb-cc66-4043-a4a8-3a44080ec714 req-c615201a-d21e-472e-abad-8e474e5f0368 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Refreshing network info cache for port b1ddb776-e25b-4b8b-902f-db25970d2db7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:04:54 np0005465987 nova_compute[230713]: 2025-10-02 13:04:54.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.029 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.054 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Triggering sync for uuid 80f30891-f4d0-47d1-900e-23551cdab3aa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.055 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "80f30891-f4d0-47d1-900e-23551cdab3aa" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.055 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.089 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.154 2 DEBUG oslo_concurrency.lockutils [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "80f30891-f4d0-47d1-900e-23551cdab3aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.154 2 DEBUG oslo_concurrency.lockutils [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.155 2 DEBUG oslo_concurrency.lockutils [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.155 2 DEBUG oslo_concurrency.lockutils [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.156 2 DEBUG oslo_concurrency.lockutils [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.157 2 INFO nova.compute.manager [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Terminating instance#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.158 2 DEBUG nova.compute.manager [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:04:55 np0005465987 kernel: tapb1ddb776-e2 (unregistering): left promiscuous mode
Oct  2 09:04:55 np0005465987 NetworkManager[44910]: <info>  [1759410295.2188] device (tapb1ddb776-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:55Z|00792|binding|INFO|Releasing lport b1ddb776-e25b-4b8b-902f-db25970d2db7 from this chassis (sb_readonly=0)
Oct  2 09:04:55 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:55Z|00793|binding|INFO|Setting lport b1ddb776-e25b-4b8b-902f-db25970d2db7 down in Southbound
Oct  2 09:04:55 np0005465987 ovn_controller[129172]: 2025-10-02T13:04:55Z|00794|binding|INFO|Removing iface tapb1ddb776-e2 ovn-installed in OVS
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.241 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:55:d4 10.100.0.9'], port_security=['fa:16:3e:e6:55:d4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '80f30891-f4d0-47d1-900e-23551cdab3aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5381c09c-d446-4df3-9be4-8be3f242a8f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ca82c03554560b55ed747ae63f1fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd3e1f742-f4a1-4be3-ad95-c673c5f8d534', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb4bf8e9-e02f-4a11-8d58-0035adb0e6ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=b1ddb776-e25b-4b8b-902f-db25970d2db7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.243 138325 INFO neutron.agent.ovn.metadata.agent [-] Port b1ddb776-e25b-4b8b-902f-db25970d2db7 in datapath 5381c09c-d446-4df3-9be4-8be3f242a8f7 unbound from our chassis#033[00m
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.245 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5381c09c-d446-4df3-9be4-8be3f242a8f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.246 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[31cf609c-0eb9-446c-a82f-b35a76cba2a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.248 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7 namespace which is not needed anymore#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005465987 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000cb.scope: Deactivated successfully.
Oct  2 09:04:55 np0005465987 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000cb.scope: Consumed 13.946s CPU time.
Oct  2 09:04:55 np0005465987 systemd-machined[188335]: Machine qemu-89-instance-000000cb terminated.
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.396 2 INFO nova.virt.libvirt.driver [-] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Instance destroyed successfully.#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.397 2 DEBUG nova.objects.instance [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lazy-loading 'resources' on Instance uuid 80f30891-f4d0-47d1-900e-23551cdab3aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:55 np0005465987 neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7[308043]: [NOTICE]   (308047) : haproxy version is 2.8.14-c23fe91
Oct  2 09:04:55 np0005465987 neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7[308043]: [NOTICE]   (308047) : path to executable is /usr/sbin/haproxy
Oct  2 09:04:55 np0005465987 neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7[308043]: [WARNING]  (308047) : Exiting Master process...
Oct  2 09:04:55 np0005465987 neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7[308043]: [ALERT]    (308047) : Current worker (308049) exited with code 143 (Terminated)
Oct  2 09:04:55 np0005465987 neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7[308043]: [WARNING]  (308047) : All workers exited. Exiting... (0)
Oct  2 09:04:55 np0005465987 systemd[1]: libpod-7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e.scope: Deactivated successfully.
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.410 2 DEBUG nova.virt.libvirt.vif [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:04:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1372956208',display_name='tempest-TestNetworkBasicOps-server-1372956208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1372956208',id=203,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGntx2wpYl+OT989Wxe0aasfIJ1pG5E2ZX4O0ejDmg1Cma3C6o13U/JATqNvaHrQNlbP1gshYOATN0s8uJj3Y+TxRGhRYBZ8rQ0XHxxjXsz/SJtDKZ+fIFr7lWoGJrYLVg==',key_name='tempest-TestNetworkBasicOps-2090096111',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:04:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ca82c03554560b55ed747ae63f1fb',ramdisk_id='',reservation_id='r-i4uy2fsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1692262680',owner_user_name='tempest-TestNetworkBasicOps-1692262680-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:04:29Z,user_data=None,user_id='fb366465e6154871b8a53c9f500105ce',uuid=80f30891-f4d0-47d1-900e-23551cdab3aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "address": "fa:16:3e:e6:55:d4", "network": {"id": "5381c09c-d446-4df3-9be4-8be3f242a8f7", "bridge": "br-int", "label": "tempest-network-smoke--1440016652", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1ddb776-e2", "ovs_interfaceid": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.411 2 DEBUG nova.network.os_vif_util [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converting VIF {"id": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "address": "fa:16:3e:e6:55:d4", "network": {"id": "5381c09c-d446-4df3-9be4-8be3f242a8f7", "bridge": "br-int", "label": "tempest-network-smoke--1440016652", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1ddb776-e2", "ovs_interfaceid": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.412 2 DEBUG nova.network.os_vif_util [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:55:d4,bridge_name='br-int',has_traffic_filtering=True,id=b1ddb776-e25b-4b8b-902f-db25970d2db7,network=Network(5381c09c-d446-4df3-9be4-8be3f242a8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1ddb776-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.412 2 DEBUG os_vif [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:55:d4,bridge_name='br-int',has_traffic_filtering=True,id=b1ddb776-e25b-4b8b-902f-db25970d2db7,network=Network(5381c09c-d446-4df3-9be4-8be3f242a8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1ddb776-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1ddb776-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005465987 podman[308213]: 2025-10-02 13:04:55.415786986 +0000 UTC m=+0.049152723 container died 7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.419 2 INFO os_vif [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:55:d4,bridge_name='br-int',has_traffic_filtering=True,id=b1ddb776-e25b-4b8b-902f-db25970d2db7,network=Network(5381c09c-d446-4df3-9be4-8be3f242a8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1ddb776-e2')#033[00m
Oct  2 09:04:55 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e-userdata-shm.mount: Deactivated successfully.
Oct  2 09:04:55 np0005465987 systemd[1]: var-lib-containers-storage-overlay-3cce0d2fc1b912464d5b5cf73d6f3d304917597a150ef2183697d75ab1bca6fd-merged.mount: Deactivated successfully.
Oct  2 09:04:55 np0005465987 podman[308213]: 2025-10-02 13:04:55.462323428 +0000 UTC m=+0.095689165 container cleanup 7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:04:55 np0005465987 systemd[1]: libpod-conmon-7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e.scope: Deactivated successfully.
Oct  2 09:04:55 np0005465987 podman[308271]: 2025-10-02 13:04:55.541384942 +0000 UTC m=+0.050695456 container remove 7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.547 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fa032d-e3ab-465a-b4e1-c5f9c38a7e75]: (4, ('Thu Oct  2 01:04:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7 (7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e)\n7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e\nThu Oct  2 01:04:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7 (7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e)\n7a3594d7f625b22147e3f5108c7f35603d919a06301d45bc30a647d643ab1c1e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.548 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a9dd2bf1-8d2a-42e9-adc0-057cd1c154de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.549 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5381c09c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005465987 kernel: tap5381c09c-d0: left promiscuous mode
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.565 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b674e76c-df00-4cbf-b3cd-e06410241359]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.611 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c36de8-0e40-4866-80d3-f936d76a963b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.612 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[da727050-3fa6-4163-b10e-d23e9beb17e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.628 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a1088e2a-5844-4724-bd45-9654ff150267]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833744, 'reachable_time': 42502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308286, 'error': None, 'target': 'ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.634 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5381c09c-d446-4df3-9be4-8be3f242a8f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:04:55 np0005465987 systemd[1]: run-netns-ovnmeta\x2d5381c09c\x2dd446\x2d4df3\x2d9be4\x2d8be3f242a8f7.mount: Deactivated successfully.
Oct  2 09:04:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:04:55.634 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[5b91b0ed-a65a-4965-8d86-77ffe6340fb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.772 2 DEBUG nova.compute.manager [req-2b0e5c54-88be-4385-abff-fe2b945d0a62 req-1f9e94cc-b2e5-4621-bd66-45a578f2d2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Received event network-vif-unplugged-b1ddb776-e25b-4b8b-902f-db25970d2db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.772 2 DEBUG oslo_concurrency.lockutils [req-2b0e5c54-88be-4385-abff-fe2b945d0a62 req-1f9e94cc-b2e5-4621-bd66-45a578f2d2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.773 2 DEBUG oslo_concurrency.lockutils [req-2b0e5c54-88be-4385-abff-fe2b945d0a62 req-1f9e94cc-b2e5-4621-bd66-45a578f2d2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.773 2 DEBUG oslo_concurrency.lockutils [req-2b0e5c54-88be-4385-abff-fe2b945d0a62 req-1f9e94cc-b2e5-4621-bd66-45a578f2d2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.773 2 DEBUG nova.compute.manager [req-2b0e5c54-88be-4385-abff-fe2b945d0a62 req-1f9e94cc-b2e5-4621-bd66-45a578f2d2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] No waiting events found dispatching network-vif-unplugged-b1ddb776-e25b-4b8b-902f-db25970d2db7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.773 2 DEBUG nova.compute.manager [req-2b0e5c54-88be-4385-abff-fe2b945d0a62 req-1f9e94cc-b2e5-4621-bd66-45a578f2d2f9 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Received event network-vif-unplugged-b1ddb776-e25b-4b8b-902f-db25970d2db7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:04:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:55.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.910 2 INFO nova.virt.libvirt.driver [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Deleting instance files /var/lib/nova/instances/80f30891-f4d0-47d1-900e-23551cdab3aa_del#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.912 2 INFO nova.virt.libvirt.driver [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Deletion of /var/lib/nova/instances/80f30891-f4d0-47d1-900e-23551cdab3aa_del complete#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.991 2 INFO nova.compute.manager [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.992 2 DEBUG oslo.service.loopingcall [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.992 2 DEBUG nova.compute.manager [-] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:04:55 np0005465987 nova_compute[230713]: 2025-10-02 13:04:55.992 2 DEBUG nova.network.neutron [-] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:04:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:56.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:56 np0005465987 nova_compute[230713]: 2025-10-02 13:04:56.732 2 DEBUG nova.network.neutron [req-507ac4eb-cc66-4043-a4a8-3a44080ec714 req-c615201a-d21e-472e-abad-8e474e5f0368 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Updated VIF entry in instance network info cache for port b1ddb776-e25b-4b8b-902f-db25970d2db7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:04:56 np0005465987 nova_compute[230713]: 2025-10-02 13:04:56.733 2 DEBUG nova.network.neutron [req-507ac4eb-cc66-4043-a4a8-3a44080ec714 req-c615201a-d21e-472e-abad-8e474e5f0368 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Updating instance_info_cache with network_info: [{"id": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "address": "fa:16:3e:e6:55:d4", "network": {"id": "5381c09c-d446-4df3-9be4-8be3f242a8f7", "bridge": "br-int", "label": "tempest-network-smoke--1440016652", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ca82c03554560b55ed747ae63f1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1ddb776-e2", "ovs_interfaceid": "b1ddb776-e25b-4b8b-902f-db25970d2db7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:56 np0005465987 nova_compute[230713]: 2025-10-02 13:04:56.786 2 DEBUG oslo_concurrency.lockutils [req-507ac4eb-cc66-4043-a4a8-3a44080ec714 req-c615201a-d21e-472e-abad-8e474e5f0368 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-80f30891-f4d0-47d1-900e-23551cdab3aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.093 2 DEBUG nova.network.neutron [-] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.117 2 INFO nova.compute.manager [-] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Took 1.12 seconds to deallocate network for instance.#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.159 2 DEBUG oslo_concurrency.lockutils [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.160 2 DEBUG oslo_concurrency.lockutils [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.193 2 DEBUG nova.compute.manager [req-3cc8345f-f2a4-4cdc-9e04-134ecec3ce20 req-33f98547-a2f7-4a15-bfcd-ecfbcacefdfa d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Received event network-vif-deleted-b1ddb776-e25b-4b8b-902f-db25970d2db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.224 2 DEBUG oslo_concurrency.processutils [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2836223493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.697 2 DEBUG oslo_concurrency.processutils [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.702 2 DEBUG nova.compute.provider_tree [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.773 2 DEBUG nova.scheduler.client.report [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.836 2 DEBUG oslo_concurrency.lockutils [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:57.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.942 2 INFO nova.scheduler.client.report [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Deleted allocations for instance 80f30891-f4d0-47d1-900e-23551cdab3aa#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.963 2 DEBUG nova.compute.manager [req-3acc1489-1a05-43c3-89ca-b76bab9a1676 req-24f530db-1057-4c10-b6b8-a701415d72e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Received event network-vif-plugged-b1ddb776-e25b-4b8b-902f-db25970d2db7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.963 2 DEBUG oslo_concurrency.lockutils [req-3acc1489-1a05-43c3-89ca-b76bab9a1676 req-24f530db-1057-4c10-b6b8-a701415d72e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.964 2 DEBUG oslo_concurrency.lockutils [req-3acc1489-1a05-43c3-89ca-b76bab9a1676 req-24f530db-1057-4c10-b6b8-a701415d72e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.964 2 DEBUG oslo_concurrency.lockutils [req-3acc1489-1a05-43c3-89ca-b76bab9a1676 req-24f530db-1057-4c10-b6b8-a701415d72e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.964 2 DEBUG nova.compute.manager [req-3acc1489-1a05-43c3-89ca-b76bab9a1676 req-24f530db-1057-4c10-b6b8-a701415d72e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] No waiting events found dispatching network-vif-plugged-b1ddb776-e25b-4b8b-902f-db25970d2db7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:04:57 np0005465987 nova_compute[230713]: 2025-10-02 13:04:57.964 2 WARNING nova.compute.manager [req-3acc1489-1a05-43c3-89ca-b76bab9a1676 req-24f530db-1057-4c10-b6b8-a701415d72e6 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Received unexpected event network-vif-plugged-b1ddb776-e25b-4b8b-902f-db25970d2db7 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:04:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:58.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:58 np0005465987 nova_compute[230713]: 2025-10-02 13:04:58.721 2 DEBUG oslo_concurrency.lockutils [None req-75f1b47a-b5dc-49dd-bd01-0f4f57ed1797 fb366465e6154871b8a53c9f500105ce ce2ca82c03554560b55ed747ae63f1fb - - default default] Lock "80f30891-f4d0-47d1-900e-23551cdab3aa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:04:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:59.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:00 np0005465987 nova_compute[230713]: 2025-10-02 13:05:00.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:05:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:00.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:05:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:01.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:02 np0005465987 nova_compute[230713]: 2025-10-02 13:05:02.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:05:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:03.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:05:03 np0005465987 nova_compute[230713]: 2025-10-02 13:05:03.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:03 np0005465987 nova_compute[230713]: 2025-10-02 13:05:03.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:05:03 np0005465987 nova_compute[230713]: 2025-10-02 13:05:03.994 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:05:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:04.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:05 np0005465987 nova_compute[230713]: 2025-10-02 13:05:05.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:05 np0005465987 nova_compute[230713]: 2025-10-02 13:05:05.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:05 np0005465987 nova_compute[230713]: 2025-10-02 13:05:05.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:05.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:06.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.704159) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410306704241, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1220, "num_deletes": 251, "total_data_size": 2685419, "memory_usage": 2729696, "flush_reason": "Manual Compaction"}
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410306715022, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 1760507, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75576, "largest_seqno": 76790, "table_properties": {"data_size": 1755222, "index_size": 2744, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11653, "raw_average_key_size": 19, "raw_value_size": 1744585, "raw_average_value_size": 2992, "num_data_blocks": 121, "num_entries": 583, "num_filter_entries": 583, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410210, "oldest_key_time": 1759410210, "file_creation_time": 1759410306, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 10901 microseconds, and 4859 cpu microseconds.
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.715070) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 1760507 bytes OK
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.715090) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.716135) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.716148) EVENT_LOG_v1 {"time_micros": 1759410306716144, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.716165) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2679628, prev total WAL file size 2679628, number of live WAL files 2.
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.716947) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(1719KB)], [153(13MB)]
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410306717029, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 15721309, "oldest_snapshot_seqno": -1}
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 9745 keys, 13790016 bytes, temperature: kUnknown
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410306784633, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 13790016, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13725302, "index_size": 39257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24389, "raw_key_size": 257114, "raw_average_key_size": 26, "raw_value_size": 13552625, "raw_average_value_size": 1390, "num_data_blocks": 1500, "num_entries": 9745, "num_filter_entries": 9745, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759410306, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.784974) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 13790016 bytes
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.786220) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 232.2 rd, 203.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 13.3 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(16.8) write-amplify(7.8) OK, records in: 10262, records dropped: 517 output_compression: NoCompression
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.786247) EVENT_LOG_v1 {"time_micros": 1759410306786235, "job": 98, "event": "compaction_finished", "compaction_time_micros": 67702, "compaction_time_cpu_micros": 32825, "output_level": 6, "num_output_files": 1, "total_output_size": 13790016, "num_input_records": 10262, "num_output_records": 9745, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410306786759, "job": 98, "event": "table_file_deletion", "file_number": 155}
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410306790175, "job": 98, "event": "table_file_deletion", "file_number": 153}
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.716779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.790232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.790238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.790239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.790241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:05:06 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:06.790242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:05:07 np0005465987 nova_compute[230713]: 2025-10-02 13:05:07.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:07.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:08.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:09.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:09 np0005465987 podman[308517]: 2025-10-02 13:05:09.967621676 +0000 UTC m=+0.051400435 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 09:05:10 np0005465987 podman[308602]: 2025-10-02 13:05:10.307678555 +0000 UTC m=+0.042916524 container create 7ce91c5aa6c79f4ea03a26bd98851588c3ce8724e22b698c3587d96b11bab2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bartik, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 09:05:10 np0005465987 systemd[1]: Started libpod-conmon-7ce91c5aa6c79f4ea03a26bd98851588c3ce8724e22b698c3587d96b11bab2df.scope.
Oct  2 09:05:10 np0005465987 systemd[1]: Started libcrun container.
Oct  2 09:05:10 np0005465987 podman[308602]: 2025-10-02 13:05:10.284998611 +0000 UTC m=+0.020236610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 09:05:10 np0005465987 podman[308602]: 2025-10-02 13:05:10.38935117 +0000 UTC m=+0.124589239 container init 7ce91c5aa6c79f4ea03a26bd98851588c3ce8724e22b698c3587d96b11bab2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:05:10 np0005465987 podman[308602]: 2025-10-02 13:05:10.398533079 +0000 UTC m=+0.133771048 container start 7ce91c5aa6c79f4ea03a26bd98851588c3ce8724e22b698c3587d96b11bab2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bartik, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 09:05:10 np0005465987 nova_compute[230713]: 2025-10-02 13:05:10.395 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410295.3942957, 80f30891-f4d0-47d1-900e-23551cdab3aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:10 np0005465987 nova_compute[230713]: 2025-10-02 13:05:10.397 2 INFO nova.compute.manager [-] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:05:10 np0005465987 podman[308602]: 2025-10-02 13:05:10.402308881 +0000 UTC m=+0.137546900 container attach 7ce91c5aa6c79f4ea03a26bd98851588c3ce8724e22b698c3587d96b11bab2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bartik, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 09:05:10 np0005465987 systemd[1]: libpod-7ce91c5aa6c79f4ea03a26bd98851588c3ce8724e22b698c3587d96b11bab2df.scope: Deactivated successfully.
Oct  2 09:05:10 np0005465987 sharp_bartik[308618]: 167 167
Oct  2 09:05:10 np0005465987 conmon[308618]: conmon 7ce91c5aa6c79f4ea03a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ce91c5aa6c79f4ea03a26bd98851588c3ce8724e22b698c3587d96b11bab2df.scope/container/memory.events
Oct  2 09:05:10 np0005465987 podman[308602]: 2025-10-02 13:05:10.406861145 +0000 UTC m=+0.142099114 container died 7ce91c5aa6c79f4ea03a26bd98851588c3ce8724e22b698c3587d96b11bab2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 09:05:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:10 np0005465987 nova_compute[230713]: 2025-10-02 13:05:10.459 2 DEBUG nova.compute.manager [None req-6c41bc17-9aac-45e9-888d-a287a2c68895 - - - - - -] [instance: 80f30891-f4d0-47d1-900e-23551cdab3aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:10 np0005465987 nova_compute[230713]: 2025-10-02 13:05:10.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:10.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:10 np0005465987 systemd[1]: var-lib-containers-storage-overlay-f2ac3602b00b097583f6666cb663c67f2e54cc35a1a33111a7335200520c9fe4-merged.mount: Deactivated successfully.
Oct  2 09:05:10 np0005465987 podman[308602]: 2025-10-02 13:05:10.475887606 +0000 UTC m=+0.211125565 container remove 7ce91c5aa6c79f4ea03a26bd98851588c3ce8724e22b698c3587d96b11bab2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bartik, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 09:05:10 np0005465987 systemd[1]: libpod-conmon-7ce91c5aa6c79f4ea03a26bd98851588c3ce8724e22b698c3587d96b11bab2df.scope: Deactivated successfully.
Oct  2 09:05:10 np0005465987 podman[308644]: 2025-10-02 13:05:10.639622116 +0000 UTC m=+0.045372212 container create dbd4f8144ab3d18a36005703ee73d10fa5a702f517da3a4b9478ae9961962877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 09:05:10 np0005465987 systemd[1]: Started libpod-conmon-dbd4f8144ab3d18a36005703ee73d10fa5a702f517da3a4b9478ae9961962877.scope.
Oct  2 09:05:10 np0005465987 systemd[1]: Started libcrun container.
Oct  2 09:05:10 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdb18aaf76d31c1657d09a69c38b27ee2b6eaebd29f57535f5dbed785d3879a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 09:05:10 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdb18aaf76d31c1657d09a69c38b27ee2b6eaebd29f57535f5dbed785d3879a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 09:05:10 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdb18aaf76d31c1657d09a69c38b27ee2b6eaebd29f57535f5dbed785d3879a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 09:05:10 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdb18aaf76d31c1657d09a69c38b27ee2b6eaebd29f57535f5dbed785d3879a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 09:05:10 np0005465987 podman[308644]: 2025-10-02 13:05:10.708305077 +0000 UTC m=+0.114055203 container init dbd4f8144ab3d18a36005703ee73d10fa5a702f517da3a4b9478ae9961962877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 09:05:10 np0005465987 podman[308644]: 2025-10-02 13:05:10.716097439 +0000 UTC m=+0.121847525 container start dbd4f8144ab3d18a36005703ee73d10fa5a702f517da3a4b9478ae9961962877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 09:05:10 np0005465987 podman[308644]: 2025-10-02 13:05:10.623591861 +0000 UTC m=+0.029341967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 09:05:10 np0005465987 podman[308644]: 2025-10-02 13:05:10.719493051 +0000 UTC m=+0.125243157 container attach dbd4f8144ab3d18a36005703ee73d10fa5a702f517da3a4b9478ae9961962877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 09:05:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:05:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]: [
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:    {
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:        "available": false,
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:        "ceph_device": false,
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:        "lsm_data": {},
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:        "lvs": [],
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:        "path": "/dev/sr0",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:        "rejected_reasons": [
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "Insufficient space (<5GB)",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "Has a FileSystem"
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:        ],
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:        "sys_api": {
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "actuators": null,
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "device_nodes": "sr0",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "devname": "sr0",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "human_readable_size": "482.00 KB",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "id_bus": "ata",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "model": "QEMU DVD-ROM",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "nr_requests": "2",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "parent": "/dev/sr0",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "partitions": {},
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "path": "/dev/sr0",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "removable": "1",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "rev": "2.5+",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "ro": "0",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "rotational": "0",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "sas_address": "",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "sas_device_handle": "",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "scheduler_mode": "mq-deadline",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "sectors": 0,
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "sectorsize": "2048",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "size": 493568.0,
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "support_discard": "2048",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "type": "disk",
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:            "vendor": "QEMU"
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:        }
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]:    }
Oct  2 09:05:11 np0005465987 adoring_rhodes[308661]: ]
Oct  2 09:05:11 np0005465987 systemd[1]: libpod-dbd4f8144ab3d18a36005703ee73d10fa5a702f517da3a4b9478ae9961962877.scope: Deactivated successfully.
Oct  2 09:05:11 np0005465987 podman[308644]: 2025-10-02 13:05:11.842392445 +0000 UTC m=+1.248142551 container died dbd4f8144ab3d18a36005703ee73d10fa5a702f517da3a4b9478ae9961962877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 09:05:11 np0005465987 systemd[1]: libpod-dbd4f8144ab3d18a36005703ee73d10fa5a702f517da3a4b9478ae9961962877.scope: Consumed 1.124s CPU time.
Oct  2 09:05:11 np0005465987 systemd[1]: var-lib-containers-storage-overlay-dbdb18aaf76d31c1657d09a69c38b27ee2b6eaebd29f57535f5dbed785d3879a-merged.mount: Deactivated successfully.
Oct  2 09:05:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:11.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:11 np0005465987 podman[308644]: 2025-10-02 13:05:11.895073493 +0000 UTC m=+1.300823579 container remove dbd4f8144ab3d18a36005703ee73d10fa5a702f517da3a4b9478ae9961962877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 09:05:11 np0005465987 systemd[1]: libpod-conmon-dbd4f8144ab3d18a36005703ee73d10fa5a702f517da3a4b9478ae9961962877.scope: Deactivated successfully.
Oct  2 09:05:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:12.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:12 np0005465987 nova_compute[230713]: 2025-10-02 13:05:12.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:12 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:05:12 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:05:12 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:05:12 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:05:12 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:05:12 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:05:12 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:05:12 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:05:12 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.161 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.161 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.186 2 DEBUG nova.compute.manager [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.263 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.263 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.271 2 DEBUG nova.virt.hardware [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.272 2 INFO nova.compute.claims [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.392 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2766716528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.806 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.812 2 DEBUG nova.compute.provider_tree [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.835 2 DEBUG nova.scheduler.client.report [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.883 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.884 2 DEBUG nova.compute.manager [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:05:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:05:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:13.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.970 2 DEBUG nova.compute.manager [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:05:13 np0005465987 nova_compute[230713]: 2025-10-02 13:05:13.971 2 DEBUG nova.network.neutron [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.008 2 INFO nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.025 2 DEBUG nova.compute.manager [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.182 2 DEBUG nova.compute.manager [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.183 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.183 2 INFO nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Creating image(s)#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.209 2 DEBUG nova.storage.rbd_utils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 039a9f5b-fc39-40c8-8416-4f18a87f9652_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.238 2 DEBUG nova.storage.rbd_utils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 039a9f5b-fc39-40c8-8416-4f18a87f9652_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.266 2 DEBUG nova.storage.rbd_utils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 039a9f5b-fc39-40c8-8416-4f18a87f9652_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.270 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.333 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.334 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.335 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.335 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.361 2 DEBUG nova.storage.rbd_utils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 039a9f5b-fc39-40c8-8416-4f18a87f9652_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.364 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 039a9f5b-fc39-40c8-8416-4f18a87f9652_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:05:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:14.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:05:14 np0005465987 nova_compute[230713]: 2025-10-02 13:05:14.644 2 DEBUG nova.policy [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93facc00c95f4cbfa6cecaf3641182bc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5eceae619a6f4fdeaa8ba6fafda4912a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:05:14 np0005465987 podman[309891]: 2025-10-02 13:05:14.847148241 +0000 UTC m=+0.056473553 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct  2 09:05:14 np0005465987 podman[309892]: 2025-10-02 13:05:14.8541534 +0000 UTC m=+0.060487481 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:05:14 np0005465987 podman[309890]: 2025-10-02 13:05:14.878504751 +0000 UTC m=+0.088653895 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:05:15 np0005465987 nova_compute[230713]: 2025-10-02 13:05:15.045 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 039a9f5b-fc39-40c8-8416-4f18a87f9652_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:15 np0005465987 nova_compute[230713]: 2025-10-02 13:05:15.124 2 DEBUG nova.storage.rbd_utils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] resizing rbd image 039a9f5b-fc39-40c8-8416-4f18a87f9652_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:05:15 np0005465987 nova_compute[230713]: 2025-10-02 13:05:15.407 2 DEBUG nova.objects.instance [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'migration_context' on Instance uuid 039a9f5b-fc39-40c8-8416-4f18a87f9652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:15 np0005465987 nova_compute[230713]: 2025-10-02 13:05:15.429 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:05:15 np0005465987 nova_compute[230713]: 2025-10-02 13:05:15.429 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Ensure instance console log exists: /var/lib/nova/instances/039a9f5b-fc39-40c8-8416-4f18a87f9652/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:05:15 np0005465987 nova_compute[230713]: 2025-10-02 13:05:15.429 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:15 np0005465987 nova_compute[230713]: 2025-10-02 13:05:15.430 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:15 np0005465987 nova_compute[230713]: 2025-10-02 13:05:15.430 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:15 np0005465987 nova_compute[230713]: 2025-10-02 13:05:15.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:05:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:15.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:05:16 np0005465987 nova_compute[230713]: 2025-10-02 13:05:16.154 2 DEBUG nova.network.neutron [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Successfully created port: ec129a3e-d0ce-4751-87ee-8f0da0e1b406 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:05:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:16.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:17 np0005465987 nova_compute[230713]: 2025-10-02 13:05:17.677 2 DEBUG nova.network.neutron [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Successfully updated port: ec129a3e-d0ce-4751-87ee-8f0da0e1b406 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:05:17 np0005465987 nova_compute[230713]: 2025-10-02 13:05:17.701 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "refresh_cache-039a9f5b-fc39-40c8-8416-4f18a87f9652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:17 np0005465987 nova_compute[230713]: 2025-10-02 13:05:17.701 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquired lock "refresh_cache-039a9f5b-fc39-40c8-8416-4f18a87f9652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:17 np0005465987 nova_compute[230713]: 2025-10-02 13:05:17.701 2 DEBUG nova.network.neutron [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:05:17 np0005465987 nova_compute[230713]: 2025-10-02 13:05:17.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:17 np0005465987 nova_compute[230713]: 2025-10-02 13:05:17.845 2 DEBUG nova.compute.manager [req-e0d8472a-3307-43d2-941a-6a8c24aba9b6 req-144b934d-bbb1-4117-ba45-d91c43f8cc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Received event network-changed-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:17 np0005465987 nova_compute[230713]: 2025-10-02 13:05:17.846 2 DEBUG nova.compute.manager [req-e0d8472a-3307-43d2-941a-6a8c24aba9b6 req-144b934d-bbb1-4117-ba45-d91c43f8cc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Refreshing instance network info cache due to event network-changed-ec129a3e-d0ce-4751-87ee-8f0da0e1b406. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:05:17 np0005465987 nova_compute[230713]: 2025-10-02 13:05:17.846 2 DEBUG oslo_concurrency.lockutils [req-e0d8472a-3307-43d2-941a-6a8c24aba9b6 req-144b934d-bbb1-4117-ba45-d91c43f8cc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-039a9f5b-fc39-40c8-8416-4f18a87f9652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:17.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:18 np0005465987 nova_compute[230713]: 2025-10-02 13:05:18.008 2 DEBUG nova.network.neutron [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:05:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.716 2 DEBUG nova.network.neutron [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Updating instance_info_cache with network_info: [{"id": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "address": "fa:16:3e:d6:13:da", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec129a3e-d0", "ovs_interfaceid": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.757 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Releasing lock "refresh_cache-039a9f5b-fc39-40c8-8416-4f18a87f9652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.757 2 DEBUG nova.compute.manager [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Instance network_info: |[{"id": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "address": "fa:16:3e:d6:13:da", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec129a3e-d0", "ovs_interfaceid": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.758 2 DEBUG oslo_concurrency.lockutils [req-e0d8472a-3307-43d2-941a-6a8c24aba9b6 req-144b934d-bbb1-4117-ba45-d91c43f8cc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-039a9f5b-fc39-40c8-8416-4f18a87f9652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.758 2 DEBUG nova.network.neutron [req-e0d8472a-3307-43d2-941a-6a8c24aba9b6 req-144b934d-bbb1-4117-ba45-d91c43f8cc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Refreshing network info cache for port ec129a3e-d0ce-4751-87ee-8f0da0e1b406 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.760 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Start _get_guest_xml network_info=[{"id": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "address": "fa:16:3e:d6:13:da", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec129a3e-d0", "ovs_interfaceid": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.764 2 WARNING nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.768 2 DEBUG nova.virt.libvirt.host [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.769 2 DEBUG nova.virt.libvirt.host [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.773 2 DEBUG nova.virt.libvirt.host [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.774 2 DEBUG nova.virt.libvirt.host [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.775 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.776 2 DEBUG nova.virt.hardware [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.776 2 DEBUG nova.virt.hardware [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.776 2 DEBUG nova.virt.hardware [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.776 2 DEBUG nova.virt.hardware [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.777 2 DEBUG nova.virt.hardware [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.777 2 DEBUG nova.virt.hardware [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.777 2 DEBUG nova.virt.hardware [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.777 2 DEBUG nova.virt.hardware [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.777 2 DEBUG nova.virt.hardware [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.778 2 DEBUG nova.virt.hardware [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.778 2 DEBUG nova.virt.hardware [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:05:19 np0005465987 nova_compute[230713]: 2025-10-02 13:05:19.781 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:05:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:19.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:05:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:05:20 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3621640472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:05:20 np0005465987 nova_compute[230713]: 2025-10-02 13:05:20.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:20.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:21 np0005465987 nova_compute[230713]: 2025-10-02 13:05:21.404 2 DEBUG nova.network.neutron [req-e0d8472a-3307-43d2-941a-6a8c24aba9b6 req-144b934d-bbb1-4117-ba45-d91c43f8cc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Updated VIF entry in instance network info cache for port ec129a3e-d0ce-4751-87ee-8f0da0e1b406. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:05:21 np0005465987 nova_compute[230713]: 2025-10-02 13:05:21.405 2 DEBUG nova.network.neutron [req-e0d8472a-3307-43d2-941a-6a8c24aba9b6 req-144b934d-bbb1-4117-ba45-d91c43f8cc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Updating instance_info_cache with network_info: [{"id": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "address": "fa:16:3e:d6:13:da", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec129a3e-d0", "ovs_interfaceid": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:21 np0005465987 nova_compute[230713]: 2025-10-02 13:05:21.425 2 DEBUG oslo_concurrency.lockutils [req-e0d8472a-3307-43d2-941a-6a8c24aba9b6 req-144b934d-bbb1-4117-ba45-d91c43f8cc24 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-039a9f5b-fc39-40c8-8416-4f18a87f9652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:05:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:21.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:05:22 np0005465987 ceph-mds[84089]: mds.beacon.cephfs.compute-1.zfhmgy missed beacon ack from the monitors
Oct  2 09:05:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:22.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:22 np0005465987 nova_compute[230713]: 2025-10-02 13:05:22.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:23.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:24.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:25 np0005465987 nova_compute[230713]: 2025-10-02 13:05:25.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:25.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:25 np0005465987 nova_compute[230713]: 2025-10-02 13:05:25.916 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:25 np0005465987 nova_compute[230713]: 2025-10-02 13:05:25.943 2 DEBUG nova.storage.rbd_utils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 039a9f5b-fc39-40c8-8416-4f18a87f9652_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:25 np0005465987 nova_compute[230713]: 2025-10-02 13:05:25.947 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:05:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/797407119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.388 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.390 2 DEBUG nova.virt.libvirt.vif [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:05:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-272912382',display_name='tempest-AttachVolumeNegativeTest-server-272912382',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-272912382',id=204,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgTpcrCUBY2SDr+Chf1BO24xhU1c/jnJSuk6+6PV5ouAnxeaRpyZoTWoifjDabiTMv4bN1Ie6gbxVDcqVj234EFAe1RrRxbjwG1QWSihiSJrYTTHMKPLdi79ua64EEvsQ==',key_name='tempest-keypair-448042237',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5eceae619a6f4fdeaa8ba6fafda4912a',ramdisk_id='',reservation_id='r-afz1b4n4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1407980822',owner_user_name='tempest-AttachVolumeNegativeTest-1407980822-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:05:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93facc00c95f4cbfa6cecaf3641182bc',uuid=039a9f5b-fc39-40c8-8416-4f18a87f9652,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "address": "fa:16:3e:d6:13:da", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec129a3e-d0", "ovs_interfaceid": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.391 2 DEBUG nova.network.os_vif_util [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converting VIF {"id": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "address": "fa:16:3e:d6:13:da", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec129a3e-d0", "ovs_interfaceid": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.391 2 DEBUG nova.network.os_vif_util [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:13:da,bridge_name='br-int',has_traffic_filtering=True,id=ec129a3e-d0ce-4751-87ee-8f0da0e1b406,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec129a3e-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.393 2 DEBUG nova.objects.instance [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'pci_devices' on Instance uuid 039a9f5b-fc39-40c8-8416-4f18a87f9652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.421 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  <uuid>039a9f5b-fc39-40c8-8416-4f18a87f9652</uuid>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  <name>instance-000000cc</name>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <nova:name>tempest-AttachVolumeNegativeTest-server-272912382</nova:name>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 13:05:19</nova:creationTime>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <nova:user uuid="93facc00c95f4cbfa6cecaf3641182bc">tempest-AttachVolumeNegativeTest-1407980822-project-member</nova:user>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <nova:project uuid="5eceae619a6f4fdeaa8ba6fafda4912a">tempest-AttachVolumeNegativeTest-1407980822</nova:project>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <nova:port uuid="ec129a3e-d0ce-4751-87ee-8f0da0e1b406">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <system>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <entry name="serial">039a9f5b-fc39-40c8-8416-4f18a87f9652</entry>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <entry name="uuid">039a9f5b-fc39-40c8-8416-4f18a87f9652</entry>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    </system>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  <os>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  </os>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  <features>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  </features>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  </clock>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  <devices>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/039a9f5b-fc39-40c8-8416-4f18a87f9652_disk">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/039a9f5b-fc39-40c8-8416-4f18a87f9652_disk.config">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:d6:13:da"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <target dev="tapec129a3e-d0"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    </interface>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/039a9f5b-fc39-40c8-8416-4f18a87f9652/console.log" append="off"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    </serial>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <video>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    </video>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    </rng>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 09:05:26 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 09:05:26 np0005465987 nova_compute[230713]:  </devices>
Oct  2 09:05:26 np0005465987 nova_compute[230713]: </domain>
Oct  2 09:05:26 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.423 2 DEBUG nova.compute.manager [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Preparing to wait for external event network-vif-plugged-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.423 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.424 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.424 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.424 2 DEBUG nova.virt.libvirt.vif [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:05:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-272912382',display_name='tempest-AttachVolumeNegativeTest-server-272912382',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-272912382',id=204,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgTpcrCUBY2SDr+Chf1BO24xhU1c/jnJSuk6+6PV5ouAnxeaRpyZoTWoifjDabiTMv4bN1Ie6gbxVDcqVj234EFAe1RrRxbjwG1QWSihiSJrYTTHMKPLdi79ua64EEvsQ==',key_name='tempest-keypair-448042237',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5eceae619a6f4fdeaa8ba6fafda4912a',ramdisk_id='',reservation_id='r-afz1b4n4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1407980822',owner_user_name='tempest-AttachVolumeNegativeTest-1407980822-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:05:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93facc00c95f4cbfa6cecaf3641182bc',uuid=039a9f5b-fc39-40c8-8416-4f18a87f9652,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "address": "fa:16:3e:d6:13:da", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec129a3e-d0", "ovs_interfaceid": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.425 2 DEBUG nova.network.os_vif_util [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converting VIF {"id": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "address": "fa:16:3e:d6:13:da", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec129a3e-d0", "ovs_interfaceid": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.425 2 DEBUG nova.network.os_vif_util [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:13:da,bridge_name='br-int',has_traffic_filtering=True,id=ec129a3e-d0ce-4751-87ee-8f0da0e1b406,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec129a3e-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.426 2 DEBUG os_vif [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:13:da,bridge_name='br-int',has_traffic_filtering=True,id=ec129a3e-d0ce-4751-87ee-8f0da0e1b406,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec129a3e-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.427 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.427 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.430 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec129a3e-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.431 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec129a3e-d0, col_values=(('external_ids', {'iface-id': 'ec129a3e-d0ce-4751-87ee-8f0da0e1b406', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:13:da', 'vm-uuid': '039a9f5b-fc39-40c8-8416-4f18a87f9652'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:26 np0005465987 NetworkManager[44910]: <info>  [1759410326.4338] manager: (tapec129a3e-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.440 2 INFO os_vif [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:13:da,bridge_name='br-int',has_traffic_filtering=True,id=ec129a3e-d0ce-4751-87ee-8f0da0e1b406,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec129a3e-d0')#033[00m
Oct  2 09:05:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:26.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.551 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.552 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.552 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No VIF found with MAC fa:16:3e:d6:13:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.552 2 INFO nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Using config drive#033[00m
Oct  2 09:05:26 np0005465987 nova_compute[230713]: 2025-10-02 13:05:26.577 2 DEBUG nova.storage.rbd_utils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 039a9f5b-fc39-40c8-8416-4f18a87f9652_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.022 2 INFO nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Creating config drive at /var/lib/nova/instances/039a9f5b-fc39-40c8-8416-4f18a87f9652/disk.config#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.027 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/039a9f5b-fc39-40c8-8416-4f18a87f9652/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8cbqrj71 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.163 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/039a9f5b-fc39-40c8-8416-4f18a87f9652/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8cbqrj71" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.191 2 DEBUG nova.storage.rbd_utils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] rbd image 039a9f5b-fc39-40c8-8416-4f18a87f9652_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.195 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/039a9f5b-fc39-40c8-8416-4f18a87f9652/disk.config 039a9f5b-fc39-40c8-8416-4f18a87f9652_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.376 2 DEBUG oslo_concurrency.processutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/039a9f5b-fc39-40c8-8416-4f18a87f9652/disk.config 039a9f5b-fc39-40c8-8416-4f18a87f9652_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.377 2 INFO nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Deleting local config drive /var/lib/nova/instances/039a9f5b-fc39-40c8-8416-4f18a87f9652/disk.config because it was imported into RBD.#033[00m
Oct  2 09:05:27 np0005465987 kernel: tapec129a3e-d0: entered promiscuous mode
Oct  2 09:05:27 np0005465987 NetworkManager[44910]: <info>  [1759410327.4291] manager: (tapec129a3e-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/368)
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005465987 ovn_controller[129172]: 2025-10-02T13:05:27Z|00795|binding|INFO|Claiming lport ec129a3e-d0ce-4751-87ee-8f0da0e1b406 for this chassis.
Oct  2 09:05:27 np0005465987 ovn_controller[129172]: 2025-10-02T13:05:27Z|00796|binding|INFO|ec129a3e-d0ce-4751-87ee-8f0da0e1b406: Claiming fa:16:3e:d6:13:da 10.100.0.12
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.442 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:13:da 10.100.0.12'], port_security=['fa:16:3e:d6:13:da 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '039a9f5b-fc39-40c8-8416-4f18a87f9652', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5eceae619a6f4fdeaa8ba6fafda4912a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90aaf7fb-5feb-4bb7-a538-2aecdcb0deab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa923984-fb22-4ee5-9bd7-5034c98e7f0a, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ec129a3e-d0ce-4751-87ee-8f0da0e1b406) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.444 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ec129a3e-d0ce-4751-87ee-8f0da0e1b406 in datapath 2471b6f7-ee51-4239-8b52-7016ab4d9fd1 bound to our chassis#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.445 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2471b6f7-ee51-4239-8b52-7016ab4d9fd1#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.456 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b1615aa7-3d46-42d4-aed3-494e9b1a3d7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.457 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2471b6f7-e1 in ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.459 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2471b6f7-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.460 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[37df63ae-9e4f-4fc0-8efa-f4e2fa814fef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.460 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a3cd25-ac70-44e8-a617-48ee8af54514]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 systemd-udevd[310165]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.476 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[cf234661-8510-4cc5-b4cb-d1a53d8ecd92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 systemd-machined[188335]: New machine qemu-90-instance-000000cc.
Oct  2 09:05:27 np0005465987 NetworkManager[44910]: <info>  [1759410327.4950] device (tapec129a3e-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:05:27 np0005465987 NetworkManager[44910]: <info>  [1759410327.4959] device (tapec129a3e-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.503 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[68094cbf-2278-4ea3-b111-058ebbde18c3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 systemd[1]: Started Virtual Machine qemu-90-instance-000000cc.
Oct  2 09:05:27 np0005465987 ovn_controller[129172]: 2025-10-02T13:05:27Z|00797|binding|INFO|Setting lport ec129a3e-d0ce-4751-87ee-8f0da0e1b406 ovn-installed in OVS
Oct  2 09:05:27 np0005465987 ovn_controller[129172]: 2025-10-02T13:05:27Z|00798|binding|INFO|Setting lport ec129a3e-d0ce-4751-87ee-8f0da0e1b406 up in Southbound
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.536 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ab6e8b-6cfc-42a5-ac8a-8f2b37e11442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 systemd-udevd[310169]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.543 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f815ac04-fe2d-4aff-b5f9-679a2fcb2737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 NetworkManager[44910]: <info>  [1759410327.5447] manager: (tap2471b6f7-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/369)
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.574 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c92c76-e330-4c29-ad5c-960bb96c4744]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.577 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[d8532aa9-d343-406d-b5af-2ef3ed9503ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 NetworkManager[44910]: <info>  [1759410327.5986] device (tap2471b6f7-e0): carrier: link connected
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.604 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf209f4-baac-48b0-8f5a-f39be3d12304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.623 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebc2ea0-7e38-4449-933b-35e34c2d9543]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2471b6f7-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:da:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 839617, 'reachable_time': 40141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310197, 'error': None, 'target': 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.636 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[058a7444-a449-4f7d-8394-d2743f0dc196]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:da65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 839617, 'tstamp': 839617}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310198, 'error': None, 'target': 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.652 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bbf2fb5d-720f-4499-b7e8-6be41c895039]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2471b6f7-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:da:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 839617, 'reachable_time': 40141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310199, 'error': None, 'target': 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.682 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e35e837f-96aa-4ea3-ac40-042cfaeb553f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.745 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f2d98b-567d-417d-8b5e-4d167832b27b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.747 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2471b6f7-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.748 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.748 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2471b6f7-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:27 np0005465987 NetworkManager[44910]: <info>  [1759410327.7508] manager: (tap2471b6f7-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Oct  2 09:05:27 np0005465987 kernel: tap2471b6f7-e0: entered promiscuous mode
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.753 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2471b6f7-e0, col_values=(('external_ids', {'iface-id': 'c5388d11-12a4-491d-825a-d4dc574d0a0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005465987 ovn_controller[129172]: 2025-10-02T13:05:27Z|00799|binding|INFO|Releasing lport c5388d11-12a4-491d-825a-d4dc574d0a0e from this chassis (sb_readonly=0)
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.772 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2471b6f7-ee51-4239-8b52-7016ab4d9fd1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2471b6f7-ee51-4239-8b52-7016ab4d9fd1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.773 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[88701070-af3a-496c-bb48-68ae42d4ae05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.774 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-2471b6f7-ee51-4239-8b52-7016ab4d9fd1
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/2471b6f7-ee51-4239-8b52-7016ab4d9fd1.pid.haproxy
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 2471b6f7-ee51-4239-8b52-7016ab4d9fd1
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.776 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'env', 'PROCESS_TAG=haproxy-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2471b6f7-ee51-4239-8b52-7016ab4d9fd1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.894 2 DEBUG nova.compute.manager [req-296e63de-ac69-40d7-916f-ca51d51aa6de req-fc988f77-6847-4702-8db6-e4eda6de5871 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Received event network-vif-plugged-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.900 2 DEBUG oslo_concurrency.lockutils [req-296e63de-ac69-40d7-916f-ca51d51aa6de req-fc988f77-6847-4702-8db6-e4eda6de5871 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.900 2 DEBUG oslo_concurrency.lockutils [req-296e63de-ac69-40d7-916f-ca51d51aa6de req-fc988f77-6847-4702-8db6-e4eda6de5871 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.900 2 DEBUG oslo_concurrency.lockutils [req-296e63de-ac69-40d7-916f-ca51d51aa6de req-fc988f77-6847-4702-8db6-e4eda6de5871 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:27 np0005465987 nova_compute[230713]: 2025-10-02 13:05:27.900 2 DEBUG nova.compute.manager [req-296e63de-ac69-40d7-916f-ca51d51aa6de req-fc988f77-6847-4702-8db6-e4eda6de5871 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Processing event network-vif-plugged-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:05:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:27.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.992 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.993 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:27.993 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:28 np0005465987 podman[310274]: 2025-10-02 13:05:28.110516637 +0000 UTC m=+0.022405259 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:05:28 np0005465987 podman[310274]: 2025-10-02 13:05:28.261283354 +0000 UTC m=+0.173171926 container create bca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 09:05:28 np0005465987 systemd[1]: Started libpod-conmon-bca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c.scope.
Oct  2 09:05:28 np0005465987 systemd[1]: Started libcrun container.
Oct  2 09:05:28 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2bfdbe3294bf8c3b121ca5612ef122987da656d4c9f87fddadad5f2d20b489/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:05:28 np0005465987 podman[310274]: 2025-10-02 13:05:28.43150636 +0000 UTC m=+0.343394932 container init bca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:05:28 np0005465987 podman[310274]: 2025-10-02 13:05:28.438795587 +0000 UTC m=+0.350684159 container start bca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 09:05:28 np0005465987 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[310289]: [NOTICE]   (310293) : New worker (310295) forked
Oct  2 09:05:28 np0005465987 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[310289]: [NOTICE]   (310293) : Loading success.
Oct  2 09:05:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:05:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:28.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.521 2 DEBUG nova.compute.manager [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.522 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410328.52133, 039a9f5b-fc39-40c8-8416-4f18a87f9652 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.523 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] VM Started (Lifecycle Event)#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.526 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.528 2 INFO nova.virt.libvirt.driver [-] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Instance spawned successfully.#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.529 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.570 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.574 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.577 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.577 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.578 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.578 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.579 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.579 2 DEBUG nova.virt.libvirt.driver [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.609 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.609 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410328.5223958, 039a9f5b-fc39-40c8-8416-4f18a87f9652 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.610 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.639 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.642 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410328.525166, 039a9f5b-fc39-40c8-8416-4f18a87f9652 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.642 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.667 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.671 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.683 2 INFO nova.compute.manager [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Took 14.50 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.684 2 DEBUG nova.compute.manager [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.697 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.757 2 INFO nova.compute.manager [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Took 15.52 seconds to build instance.#033[00m
Oct  2 09:05:28 np0005465987 nova_compute[230713]: 2025-10-02 13:05:28.780 2 DEBUG oslo_concurrency.lockutils [None req-8e5cba96-94e7-4697-8a19-0d14618ad5eb 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:29.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:29 np0005465987 nova_compute[230713]: 2025-10-02 13:05:29.988 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:30 np0005465987 nova_compute[230713]: 2025-10-02 13:05:30.026 2 DEBUG nova.compute.manager [req-7e4d6f67-34be-470d-aecd-232f7ba21c3c req-5f31444e-a626-496d-8d0c-b0cde1426c85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Received event network-vif-plugged-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:30 np0005465987 nova_compute[230713]: 2025-10-02 13:05:30.026 2 DEBUG oslo_concurrency.lockutils [req-7e4d6f67-34be-470d-aecd-232f7ba21c3c req-5f31444e-a626-496d-8d0c-b0cde1426c85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:30 np0005465987 nova_compute[230713]: 2025-10-02 13:05:30.026 2 DEBUG oslo_concurrency.lockutils [req-7e4d6f67-34be-470d-aecd-232f7ba21c3c req-5f31444e-a626-496d-8d0c-b0cde1426c85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:30 np0005465987 nova_compute[230713]: 2025-10-02 13:05:30.027 2 DEBUG oslo_concurrency.lockutils [req-7e4d6f67-34be-470d-aecd-232f7ba21c3c req-5f31444e-a626-496d-8d0c-b0cde1426c85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:30 np0005465987 nova_compute[230713]: 2025-10-02 13:05:30.027 2 DEBUG nova.compute.manager [req-7e4d6f67-34be-470d-aecd-232f7ba21c3c req-5f31444e-a626-496d-8d0c-b0cde1426c85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] No waiting events found dispatching network-vif-plugged-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:30 np0005465987 nova_compute[230713]: 2025-10-02 13:05:30.027 2 WARNING nova.compute.manager [req-7e4d6f67-34be-470d-aecd-232f7ba21c3c req-5f31444e-a626-496d-8d0c-b0cde1426c85 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Received unexpected event network-vif-plugged-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:05:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:05:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:05:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:05:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:30.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:05:30 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:30 np0005465987 nova_compute[230713]: 2025-10-02 13:05:30.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:31 np0005465987 nova_compute[230713]: 2025-10-02 13:05:31.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:31.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:32.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:32 np0005465987 nova_compute[230713]: 2025-10-02 13:05:32.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:33 np0005465987 ovn_controller[129172]: 2025-10-02T13:05:33Z|00800|binding|INFO|Releasing lport c5388d11-12a4-491d-825a-d4dc574d0a0e from this chassis (sb_readonly=0)
Oct  2 09:05:33 np0005465987 NetworkManager[44910]: <info>  [1759410333.3435] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Oct  2 09:05:33 np0005465987 nova_compute[230713]: 2025-10-02 13:05:33.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:33 np0005465987 NetworkManager[44910]: <info>  [1759410333.3450] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Oct  2 09:05:33 np0005465987 ovn_controller[129172]: 2025-10-02T13:05:33Z|00801|binding|INFO|Releasing lport c5388d11-12a4-491d-825a-d4dc574d0a0e from this chassis (sb_readonly=0)
Oct  2 09:05:33 np0005465987 nova_compute[230713]: 2025-10-02 13:05:33.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:33 np0005465987 nova_compute[230713]: 2025-10-02 13:05:33.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:33 np0005465987 nova_compute[230713]: 2025-10-02 13:05:33.758 2 DEBUG nova.compute.manager [req-50e917be-acef-422c-a7bb-b4f8b60041ac req-e425bc2a-b635-4865-a04d-322f9829ba71 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Received event network-changed-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:33 np0005465987 nova_compute[230713]: 2025-10-02 13:05:33.759 2 DEBUG nova.compute.manager [req-50e917be-acef-422c-a7bb-b4f8b60041ac req-e425bc2a-b635-4865-a04d-322f9829ba71 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Refreshing instance network info cache due to event network-changed-ec129a3e-d0ce-4751-87ee-8f0da0e1b406. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:05:33 np0005465987 nova_compute[230713]: 2025-10-02 13:05:33.759 2 DEBUG oslo_concurrency.lockutils [req-50e917be-acef-422c-a7bb-b4f8b60041ac req-e425bc2a-b635-4865-a04d-322f9829ba71 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-039a9f5b-fc39-40c8-8416-4f18a87f9652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:33 np0005465987 nova_compute[230713]: 2025-10-02 13:05:33.759 2 DEBUG oslo_concurrency.lockutils [req-50e917be-acef-422c-a7bb-b4f8b60041ac req-e425bc2a-b635-4865-a04d-322f9829ba71 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-039a9f5b-fc39-40c8-8416-4f18a87f9652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:33 np0005465987 nova_compute[230713]: 2025-10-02 13:05:33.759 2 DEBUG nova.network.neutron [req-50e917be-acef-422c-a7bb-b4f8b60041ac req-e425bc2a-b635-4865-a04d-322f9829ba71 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Refreshing network info cache for port ec129a3e-d0ce-4751-87ee-8f0da0e1b406 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:05:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:05:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:33.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:05:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:34.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:34 np0005465987 nova_compute[230713]: 2025-10-02 13:05:34.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:34 np0005465987 nova_compute[230713]: 2025-10-02 13:05:34.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:05:34 np0005465987 nova_compute[230713]: 2025-10-02 13:05:34.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:05:35 np0005465987 nova_compute[230713]: 2025-10-02 13:05:35.442 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-039a9f5b-fc39-40c8-8416-4f18a87f9652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.787878) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410335787938, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 570, "num_deletes": 252, "total_data_size": 890442, "memory_usage": 900976, "flush_reason": "Manual Compaction"}
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410335792633, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 503389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76795, "largest_seqno": 77360, "table_properties": {"data_size": 500449, "index_size": 911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7949, "raw_average_key_size": 21, "raw_value_size": 494372, "raw_average_value_size": 1314, "num_data_blocks": 38, "num_entries": 376, "num_filter_entries": 376, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410307, "oldest_key_time": 1759410307, "file_creation_time": 1759410335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 4782 microseconds, and 1973 cpu microseconds.
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.792670) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 503389 bytes OK
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.792684) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.794945) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.794957) EVENT_LOG_v1 {"time_micros": 1759410335794953, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.794973) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 887124, prev total WAL file size 887124, number of live WAL files 2.
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.795559) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353130' seq:72057594037927935, type:22 .. '6D6772737461740032373633' seq:0, type:0; will stop at (end)
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(491KB)], [156(13MB)]
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410335795591, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 14293405, "oldest_snapshot_seqno": -1}
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 9604 keys, 10455675 bytes, temperature: kUnknown
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410335843058, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 10455675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10396416, "index_size": 34122, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 254419, "raw_average_key_size": 26, "raw_value_size": 10230825, "raw_average_value_size": 1065, "num_data_blocks": 1286, "num_entries": 9604, "num_filter_entries": 9604, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759410335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.843250) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10455675 bytes
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.844749) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 300.7 rd, 220.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 13.2 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(49.2) write-amplify(20.8) OK, records in: 10121, records dropped: 517 output_compression: NoCompression
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.844765) EVENT_LOG_v1 {"time_micros": 1759410335844758, "job": 100, "event": "compaction_finished", "compaction_time_micros": 47527, "compaction_time_cpu_micros": 23935, "output_level": 6, "num_output_files": 1, "total_output_size": 10455675, "num_input_records": 10121, "num_output_records": 9604, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410335844912, "job": 100, "event": "table_file_deletion", "file_number": 158}
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410335846969, "job": 100, "event": "table_file_deletion", "file_number": 156}
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.795447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.847057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.847063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.847066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.847068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:05:35.847070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:05:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:35.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:36 np0005465987 nova_compute[230713]: 2025-10-02 13:05:36.041 2 DEBUG nova.network.neutron [req-50e917be-acef-422c-a7bb-b4f8b60041ac req-e425bc2a-b635-4865-a04d-322f9829ba71 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Updated VIF entry in instance network info cache for port ec129a3e-d0ce-4751-87ee-8f0da0e1b406. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:05:36 np0005465987 nova_compute[230713]: 2025-10-02 13:05:36.041 2 DEBUG nova.network.neutron [req-50e917be-acef-422c-a7bb-b4f8b60041ac req-e425bc2a-b635-4865-a04d-322f9829ba71 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Updating instance_info_cache with network_info: [{"id": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "address": "fa:16:3e:d6:13:da", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec129a3e-d0", "ovs_interfaceid": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:36 np0005465987 nova_compute[230713]: 2025-10-02 13:05:36.073 2 DEBUG oslo_concurrency.lockutils [req-50e917be-acef-422c-a7bb-b4f8b60041ac req-e425bc2a-b635-4865-a04d-322f9829ba71 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-039a9f5b-fc39-40c8-8416-4f18a87f9652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:36 np0005465987 nova_compute[230713]: 2025-10-02 13:05:36.073 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-039a9f5b-fc39-40c8-8416-4f18a87f9652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:36 np0005465987 nova_compute[230713]: 2025-10-02 13:05:36.074 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:05:36 np0005465987 nova_compute[230713]: 2025-10-02 13:05:36.074 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 039a9f5b-fc39-40c8-8416-4f18a87f9652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:36 np0005465987 nova_compute[230713]: 2025-10-02 13:05:36.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:36.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:37 np0005465987 nova_compute[230713]: 2025-10-02 13:05:37.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:37.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:38 np0005465987 nova_compute[230713]: 2025-10-02 13:05:38.256 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Updating instance_info_cache with network_info: [{"id": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "address": "fa:16:3e:d6:13:da", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec129a3e-d0", "ovs_interfaceid": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:38 np0005465987 nova_compute[230713]: 2025-10-02 13:05:38.290 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-039a9f5b-fc39-40c8-8416-4f18a87f9652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:38 np0005465987 nova_compute[230713]: 2025-10-02 13:05:38.290 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:05:38 np0005465987 nova_compute[230713]: 2025-10-02 13:05:38.290 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:38 np0005465987 nova_compute[230713]: 2025-10-02 13:05:38.291 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:38.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:38 np0005465987 nova_compute[230713]: 2025-10-02 13:05:38.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:38 np0005465987 nova_compute[230713]: 2025-10-02 13:05:38.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:38 np0005465987 nova_compute[230713]: 2025-10-02 13:05:38.989 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:38 np0005465987 nova_compute[230713]: 2025-10-02 13:05:38.989 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:38 np0005465987 nova_compute[230713]: 2025-10-02 13:05:38.990 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:38 np0005465987 nova_compute[230713]: 2025-10-02 13:05:38.990 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:05:38 np0005465987 nova_compute[230713]: 2025-10-02 13:05:38.990 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:39 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2523197540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.543 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.625 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000cc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.625 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000cc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.783 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.784 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4134MB free_disk=20.946460723876953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.784 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.785 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.874 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 039a9f5b-fc39-40c8-8416-4f18a87f9652 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.875 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.875 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.892 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.922 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.923 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:05:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:39.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.937 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:05:39 np0005465987 nova_compute[230713]: 2025-10-02 13:05:39.962 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:05:40 np0005465987 nova_compute[230713]: 2025-10-02 13:05:40.015 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:40 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/277765840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:40 np0005465987 nova_compute[230713]: 2025-10-02 13:05:40.444 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:40 np0005465987 nova_compute[230713]: 2025-10-02 13:05:40.451 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:05:40 np0005465987 nova_compute[230713]: 2025-10-02 13:05:40.467 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:05:40 np0005465987 nova_compute[230713]: 2025-10-02 13:05:40.489 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:05:40 np0005465987 nova_compute[230713]: 2025-10-02 13:05:40.490 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:40.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:40 np0005465987 podman[310400]: 2025-10-02 13:05:40.841694216 +0000 UTC m=+0.057735507 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  2 09:05:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:41 np0005465987 nova_compute[230713]: 2025-10-02 13:05:41.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:41 np0005465987 nova_compute[230713]: 2025-10-02 13:05:41.491 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:41.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:42 np0005465987 ovn_controller[129172]: 2025-10-02T13:05:42Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:13:da 10.100.0.12
Oct  2 09:05:42 np0005465987 ovn_controller[129172]: 2025-10-02T13:05:42Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:13:da 10.100.0.12
Oct  2 09:05:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:42.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:42 np0005465987 nova_compute[230713]: 2025-10-02 13:05:42.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:05:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:43.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:05:43 np0005465987 nova_compute[230713]: 2025-10-02 13:05:43.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:43 np0005465987 nova_compute[230713]: 2025-10-02 13:05:43.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:05:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:44.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:45 np0005465987 podman[310420]: 2025-10-02 13:05:45.834941636 +0000 UTC m=+0.054934229 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:05:45 np0005465987 podman[310421]: 2025-10-02 13:05:45.838475002 +0000 UTC m=+0.054864438 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:05:45 np0005465987 podman[310419]: 2025-10-02 13:05:45.881316114 +0000 UTC m=+0.102060118 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct  2 09:05:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:45.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:46 np0005465987 nova_compute[230713]: 2025-10-02 13:05:46.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:46.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:47 np0005465987 nova_compute[230713]: 2025-10-02 13:05:47.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:47.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:48.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:49.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:05:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:50.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:05:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:50 np0005465987 nova_compute[230713]: 2025-10-02 13:05:50.953 2 DEBUG oslo_concurrency.lockutils [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:50 np0005465987 nova_compute[230713]: 2025-10-02 13:05:50.953 2 DEBUG oslo_concurrency.lockutils [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:50 np0005465987 nova_compute[230713]: 2025-10-02 13:05:50.984 2 DEBUG nova.objects.instance [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'flavor' on Instance uuid 039a9f5b-fc39-40c8-8416-4f18a87f9652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.032 2 DEBUG oslo_concurrency.lockutils [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:51.044 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:51 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:51.044 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.256 2 DEBUG oslo_concurrency.lockutils [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.256 2 DEBUG oslo_concurrency.lockutils [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.257 2 INFO nova.compute.manager [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Attaching volume 4ca63cf3-978e-4b72-859d-58e38ddef02a to /dev/vdb#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.410 2 DEBUG os_brick.utils [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.411 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.424 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.424 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a999a4-05ed-4844-858f-d9592e322001]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.425 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.434 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.434 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[86f372a3-6dfd-4898-8a93-7792d2772e8a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.436 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.447 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.447 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[18994894-fa49-478a-b849-f07c3fb8ae11]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.448 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[aab78b2f-4557-46f5-9b2b-56b66c3844c2]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.449 2 DEBUG oslo_concurrency.processutils [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.486 2 DEBUG oslo_concurrency.processutils [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.488 2 DEBUG os_brick.initiator.connectors.lightos [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.488 2 DEBUG os_brick.initiator.connectors.lightos [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.488 2 DEBUG os_brick.initiator.connectors.lightos [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.489 2 DEBUG os_brick.utils [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:05:51 np0005465987 nova_compute[230713]: 2025-10-02 13:05:51.489 2 DEBUG nova.virt.block_device [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Updating existing volume attachment record: 40524711-8ec3-4ab4-a2eb-b60ee2925eaf _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:05:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:05:52 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1422738161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:05:52 np0005465987 nova_compute[230713]: 2025-10-02 13:05:52.302 2 DEBUG nova.objects.instance [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'flavor' on Instance uuid 039a9f5b-fc39-40c8-8416-4f18a87f9652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:52 np0005465987 nova_compute[230713]: 2025-10-02 13:05:52.333 2 DEBUG nova.virt.libvirt.driver [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Attempting to attach volume 4ca63cf3-978e-4b72-859d-58e38ddef02a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 09:05:52 np0005465987 nova_compute[230713]: 2025-10-02 13:05:52.336 2 DEBUG nova.virt.libvirt.guest [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 09:05:52 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:05:52 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-4ca63cf3-978e-4b72-859d-58e38ddef02a">
Oct  2 09:05:52 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:05:52 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:05:52 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:05:52 np0005465987 nova_compute[230713]:  </source>
Oct  2 09:05:52 np0005465987 nova_compute[230713]:  <auth username="openstack">
Oct  2 09:05:52 np0005465987 nova_compute[230713]:    <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:05:52 np0005465987 nova_compute[230713]:  </auth>
Oct  2 09:05:52 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:05:52 np0005465987 nova_compute[230713]:  <serial>4ca63cf3-978e-4b72-859d-58e38ddef02a</serial>
Oct  2 09:05:52 np0005465987 nova_compute[230713]: </disk>
Oct  2 09:05:52 np0005465987 nova_compute[230713]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 09:05:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:52.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:52 np0005465987 nova_compute[230713]: 2025-10-02 13:05:52.530 2 DEBUG nova.virt.libvirt.driver [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:05:52 np0005465987 nova_compute[230713]: 2025-10-02 13:05:52.531 2 DEBUG nova.virt.libvirt.driver [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:05:52 np0005465987 nova_compute[230713]: 2025-10-02 13:05:52.531 2 DEBUG nova.virt.libvirt.driver [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:05:52 np0005465987 nova_compute[230713]: 2025-10-02 13:05:52.531 2 DEBUG nova.virt.libvirt.driver [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] No VIF found with MAC fa:16:3e:d6:13:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:05:52 np0005465987 nova_compute[230713]: 2025-10-02 13:05:52.790 2 DEBUG oslo_concurrency.lockutils [None req-fe78fb1b-155a-4a5f-b413-8c17f5de5e0a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:52 np0005465987 nova_compute[230713]: 2025-10-02 13:05:52.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:05:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:53.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:05:54 np0005465987 nova_compute[230713]: 2025-10-02 13:05:54.251 2 DEBUG oslo_concurrency.lockutils [None req-2eed968f-2337-48be-b39b-192d8eb6881a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:54 np0005465987 nova_compute[230713]: 2025-10-02 13:05:54.252 2 DEBUG oslo_concurrency.lockutils [None req-2eed968f-2337-48be-b39b-192d8eb6881a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:54 np0005465987 nova_compute[230713]: 2025-10-02 13:05:54.265 2 INFO nova.compute.manager [None req-2eed968f-2337-48be-b39b-192d8eb6881a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Detaching volume 4ca63cf3-978e-4b72-859d-58e38ddef02a#033[00m
Oct  2 09:05:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:05:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:54.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:05:54 np0005465987 nova_compute[230713]: 2025-10-02 13:05:54.659 2 INFO nova.virt.block_device [None req-2eed968f-2337-48be-b39b-192d8eb6881a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Attempting to driver detach volume 4ca63cf3-978e-4b72-859d-58e38ddef02a from mountpoint /dev/vdb#033[00m
Oct  2 09:05:54 np0005465987 nova_compute[230713]: 2025-10-02 13:05:54.669 2 DEBUG nova.virt.libvirt.driver [None req-2eed968f-2337-48be-b39b-192d8eb6881a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Attempting to detach device vdb from instance 039a9f5b-fc39-40c8-8416-4f18a87f9652 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 09:05:54 np0005465987 nova_compute[230713]: 2025-10-02 13:05:54.669 2 DEBUG nova.virt.libvirt.guest [None req-2eed968f-2337-48be-b39b-192d8eb6881a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:05:54 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-4ca63cf3-978e-4b72-859d-58e38ddef02a">
Oct  2 09:05:54 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:  </source>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:  <serial>4ca63cf3-978e-4b72-859d-58e38ddef02a</serial>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:05:54 np0005465987 nova_compute[230713]: </disk>
Oct  2 09:05:54 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:05:54 np0005465987 nova_compute[230713]: 2025-10-02 13:05:54.985 2 INFO nova.virt.libvirt.driver [None req-2eed968f-2337-48be-b39b-192d8eb6881a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Successfully detached device vdb from instance 039a9f5b-fc39-40c8-8416-4f18a87f9652 from the persistent domain config.#033[00m
Oct  2 09:05:54 np0005465987 nova_compute[230713]: 2025-10-02 13:05:54.986 2 DEBUG nova.virt.libvirt.driver [None req-2eed968f-2337-48be-b39b-192d8eb6881a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 039a9f5b-fc39-40c8-8416-4f18a87f9652 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 09:05:54 np0005465987 nova_compute[230713]: 2025-10-02 13:05:54.986 2 DEBUG nova.virt.libvirt.guest [None req-2eed968f-2337-48be-b39b-192d8eb6881a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:05:54 np0005465987 nova_compute[230713]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:  <source protocol="rbd" name="volumes/volume-4ca63cf3-978e-4b72-859d-58e38ddef02a">
Oct  2 09:05:54 np0005465987 nova_compute[230713]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:  </source>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:  <serial>4ca63cf3-978e-4b72-859d-58e38ddef02a</serial>
Oct  2 09:05:54 np0005465987 nova_compute[230713]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:05:54 np0005465987 nova_compute[230713]: </disk>
Oct  2 09:05:54 np0005465987 nova_compute[230713]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.092 2 DEBUG nova.virt.libvirt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Received event <DeviceRemovedEvent: 1759410355.0922601, 039a9f5b-fc39-40c8-8416-4f18a87f9652 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.094 2 DEBUG nova.virt.libvirt.driver [None req-2eed968f-2337-48be-b39b-192d8eb6881a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 039a9f5b-fc39-40c8-8416-4f18a87f9652 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.096 2 INFO nova.virt.libvirt.driver [None req-2eed968f-2337-48be-b39b-192d8eb6881a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Successfully detached device vdb from instance 039a9f5b-fc39-40c8-8416-4f18a87f9652 from the live domain config.#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.444 2 DEBUG nova.objects.instance [None req-2eed968f-2337-48be-b39b-192d8eb6881a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'flavor' on Instance uuid 039a9f5b-fc39-40c8-8416-4f18a87f9652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.488 2 DEBUG oslo_concurrency.lockutils [None req-2eed968f-2337-48be-b39b-192d8eb6881a 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.738 2 DEBUG oslo_concurrency.lockutils [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.739 2 DEBUG oslo_concurrency.lockutils [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.739 2 DEBUG oslo_concurrency.lockutils [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.739 2 DEBUG oslo_concurrency.lockutils [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.740 2 DEBUG oslo_concurrency.lockutils [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.741 2 INFO nova.compute.manager [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Terminating instance#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.742 2 DEBUG nova.compute.manager [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:05:55 np0005465987 kernel: tapec129a3e-d0 (unregistering): left promiscuous mode
Oct  2 09:05:55 np0005465987 NetworkManager[44910]: <info>  [1759410355.8075] device (tapec129a3e-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:05:55 np0005465987 ovn_controller[129172]: 2025-10-02T13:05:55Z|00802|binding|INFO|Releasing lport ec129a3e-d0ce-4751-87ee-8f0da0e1b406 from this chassis (sb_readonly=0)
Oct  2 09:05:55 np0005465987 ovn_controller[129172]: 2025-10-02T13:05:55Z|00803|binding|INFO|Setting lport ec129a3e-d0ce-4751-87ee-8f0da0e1b406 down in Southbound
Oct  2 09:05:55 np0005465987 ovn_controller[129172]: 2025-10-02T13:05:55Z|00804|binding|INFO|Removing iface tapec129a3e-d0 ovn-installed in OVS
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:55 np0005465987 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000cc.scope: Deactivated successfully.
Oct  2 09:05:55 np0005465987 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000cc.scope: Consumed 14.318s CPU time.
Oct  2 09:05:55 np0005465987 systemd-machined[188335]: Machine qemu-90-instance-000000cc terminated.
Oct  2 09:05:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:55.934 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:13:da 10.100.0.12'], port_security=['fa:16:3e:d6:13:da 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '039a9f5b-fc39-40c8-8416-4f18a87f9652', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5eceae619a6f4fdeaa8ba6fafda4912a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90aaf7fb-5feb-4bb7-a538-2aecdcb0deab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa923984-fb22-4ee5-9bd7-5034c98e7f0a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ec129a3e-d0ce-4751-87ee-8f0da0e1b406) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:55.936 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ec129a3e-d0ce-4751-87ee-8f0da0e1b406 in datapath 2471b6f7-ee51-4239-8b52-7016ab4d9fd1 unbound from our chassis#033[00m
Oct  2 09:05:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:55.937 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2471b6f7-ee51-4239-8b52-7016ab4d9fd1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:05:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:55.938 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d6818562-20b3-4353-8d21-795138f01ca8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:55.939 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 namespace which is not needed anymore#033[00m
Oct  2 09:05:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:05:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:55.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.979 2 INFO nova.virt.libvirt.driver [-] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Instance destroyed successfully.#033[00m
Oct  2 09:05:55 np0005465987 nova_compute[230713]: 2025-10-02 13:05:55.980 2 DEBUG nova.objects.instance [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lazy-loading 'resources' on Instance uuid 039a9f5b-fc39-40c8-8416-4f18a87f9652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.035 2 DEBUG nova.virt.libvirt.vif [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:05:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-272912382',display_name='tempest-AttachVolumeNegativeTest-server-272912382',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-272912382',id=204,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgTpcrCUBY2SDr+Chf1BO24xhU1c/jnJSuk6+6PV5ouAnxeaRpyZoTWoifjDabiTMv4bN1Ie6gbxVDcqVj234EFAe1RrRxbjwG1QWSihiSJrYTTHMKPLdi79ua64EEvsQ==',key_name='tempest-keypair-448042237',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:05:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5eceae619a6f4fdeaa8ba6fafda4912a',ramdisk_id='',reservation_id='r-afz1b4n4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1407980822',owner_user_name='tempest-AttachVolumeNegativeTest-1407980822-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:05:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93facc00c95f4cbfa6cecaf3641182bc',uuid=039a9f5b-fc39-40c8-8416-4f18a87f9652,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "address": "fa:16:3e:d6:13:da", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec129a3e-d0", "ovs_interfaceid": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.036 2 DEBUG nova.network.os_vif_util [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converting VIF {"id": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "address": "fa:16:3e:d6:13:da", "network": {"id": "2471b6f7-ee51-4239-8b52-7016ab4d9fd1", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1867797555-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5eceae619a6f4fdeaa8ba6fafda4912a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec129a3e-d0", "ovs_interfaceid": "ec129a3e-d0ce-4751-87ee-8f0da0e1b406", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.037 2 DEBUG nova.network.os_vif_util [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:13:da,bridge_name='br-int',has_traffic_filtering=True,id=ec129a3e-d0ce-4751-87ee-8f0da0e1b406,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec129a3e-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.037 2 DEBUG os_vif [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:13:da,bridge_name='br-int',has_traffic_filtering=True,id=ec129a3e-d0ce-4751-87ee-8f0da0e1b406,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec129a3e-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.039 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec129a3e-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.045 2 INFO os_vif [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:13:da,bridge_name='br-int',has_traffic_filtering=True,id=ec129a3e-d0ce-4751-87ee-8f0da0e1b406,network=Network(2471b6f7-ee51-4239-8b52-7016ab4d9fd1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec129a3e-d0')#033[00m
Oct  2 09:05:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:56.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:56 np0005465987 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[310289]: [NOTICE]   (310293) : haproxy version is 2.8.14-c23fe91
Oct  2 09:05:56 np0005465987 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[310289]: [NOTICE]   (310293) : path to executable is /usr/sbin/haproxy
Oct  2 09:05:56 np0005465987 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[310289]: [WARNING]  (310293) : Exiting Master process...
Oct  2 09:05:56 np0005465987 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[310289]: [WARNING]  (310293) : Exiting Master process...
Oct  2 09:05:56 np0005465987 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[310289]: [ALERT]    (310293) : Current worker (310295) exited with code 143 (Terminated)
Oct  2 09:05:56 np0005465987 neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1[310289]: [WARNING]  (310293) : All workers exited. Exiting... (0)
Oct  2 09:05:56 np0005465987 systemd[1]: libpod-bca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c.scope: Deactivated successfully.
Oct  2 09:05:56 np0005465987 podman[310546]: 2025-10-02 13:05:56.600206055 +0000 UTC m=+0.572000680 container died bca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.640 2 DEBUG nova.compute.manager [req-d71c7d36-0088-4707-8814-e8dc4a2b4c9e req-0f6a913e-0a9f-4c0d-9dbf-2d022dac2e15 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Received event network-vif-unplugged-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.640 2 DEBUG oslo_concurrency.lockutils [req-d71c7d36-0088-4707-8814-e8dc4a2b4c9e req-0f6a913e-0a9f-4c0d-9dbf-2d022dac2e15 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.641 2 DEBUG oslo_concurrency.lockutils [req-d71c7d36-0088-4707-8814-e8dc4a2b4c9e req-0f6a913e-0a9f-4c0d-9dbf-2d022dac2e15 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.641 2 DEBUG oslo_concurrency.lockutils [req-d71c7d36-0088-4707-8814-e8dc4a2b4c9e req-0f6a913e-0a9f-4c0d-9dbf-2d022dac2e15 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.641 2 DEBUG nova.compute.manager [req-d71c7d36-0088-4707-8814-e8dc4a2b4c9e req-0f6a913e-0a9f-4c0d-9dbf-2d022dac2e15 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] No waiting events found dispatching network-vif-unplugged-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:56 np0005465987 nova_compute[230713]: 2025-10-02 13:05:56.641 2 DEBUG nova.compute.manager [req-d71c7d36-0088-4707-8814-e8dc4a2b4c9e req-0f6a913e-0a9f-4c0d-9dbf-2d022dac2e15 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Received event network-vif-unplugged-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:05:56 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c-userdata-shm.mount: Deactivated successfully.
Oct  2 09:05:56 np0005465987 systemd[1]: var-lib-containers-storage-overlay-8c2bfdbe3294bf8c3b121ca5612ef122987da656d4c9f87fddadad5f2d20b489-merged.mount: Deactivated successfully.
Oct  2 09:05:57 np0005465987 podman[310546]: 2025-10-02 13:05:57.437628059 +0000 UTC m=+1.409422684 container cleanup bca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 09:05:57 np0005465987 systemd[1]: libpod-conmon-bca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c.scope: Deactivated successfully.
Oct  2 09:05:57 np0005465987 podman[310593]: 2025-10-02 13:05:57.863342611 +0000 UTC m=+0.406535993 container remove bca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:05:57 np0005465987 nova_compute[230713]: 2025-10-02 13:05:57.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:57.871 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1f55986d-fba0-45ba-9776-dc7a7f534bf7]: (4, ('Thu Oct  2 01:05:56 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 (bca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c)\nbca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c\nThu Oct  2 01:05:57 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 (bca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c)\nbca25f8b2b496198e9437c3ea32834d8be33d16fe615b82e78476d28a85f486c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:57.874 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b4204138-a5ea-43af-8468-ba3b5b3e7539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:57.875 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2471b6f7-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:57 np0005465987 nova_compute[230713]: 2025-10-02 13:05:57.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:57 np0005465987 kernel: tap2471b6f7-e0: left promiscuous mode
Oct  2 09:05:57 np0005465987 nova_compute[230713]: 2025-10-02 13:05:57.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:57.894 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb75a68-4db1-4e2b-9492-68c1d1c2988a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:57.931 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3d019e94-7747-4487-8ac8-f0481d55b5f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:57.932 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[be21db7d-1cba-4430-b5f8-04dbd3034796]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:57.954 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0c6a82-f8ac-4bac-841d-16742feb416f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 839610, 'reachable_time': 41686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310611, 'error': None, 'target': 'ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:57 np0005465987 systemd[1]: run-netns-ovnmeta\x2d2471b6f7\x2dee51\x2d4239\x2d8b52\x2d7016ab4d9fd1.mount: Deactivated successfully.
Oct  2 09:05:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:57.958 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2471b6f7-ee51-4239-8b52-7016ab4d9fd1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:05:57 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:05:57.958 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[d87b2da9-f563-46d3-9e0a-de07a9856b89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:57.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:58 np0005465987 nova_compute[230713]: 2025-10-02 13:05:58.297 2 DEBUG nova.compute.manager [req-7cb53aab-288b-48ac-97c1-6b76640ac38d req-03a2480b-3ae6-473d-9296-fc10aad1c19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Received event network-vif-plugged-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:58 np0005465987 nova_compute[230713]: 2025-10-02 13:05:58.298 2 DEBUG oslo_concurrency.lockutils [req-7cb53aab-288b-48ac-97c1-6b76640ac38d req-03a2480b-3ae6-473d-9296-fc10aad1c19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:58 np0005465987 nova_compute[230713]: 2025-10-02 13:05:58.298 2 DEBUG oslo_concurrency.lockutils [req-7cb53aab-288b-48ac-97c1-6b76640ac38d req-03a2480b-3ae6-473d-9296-fc10aad1c19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:58 np0005465987 nova_compute[230713]: 2025-10-02 13:05:58.298 2 DEBUG oslo_concurrency.lockutils [req-7cb53aab-288b-48ac-97c1-6b76640ac38d req-03a2480b-3ae6-473d-9296-fc10aad1c19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:58 np0005465987 nova_compute[230713]: 2025-10-02 13:05:58.298 2 DEBUG nova.compute.manager [req-7cb53aab-288b-48ac-97c1-6b76640ac38d req-03a2480b-3ae6-473d-9296-fc10aad1c19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] No waiting events found dispatching network-vif-plugged-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:58 np0005465987 nova_compute[230713]: 2025-10-02 13:05:58.299 2 WARNING nova.compute.manager [req-7cb53aab-288b-48ac-97c1-6b76640ac38d req-03a2480b-3ae6-473d-9296-fc10aad1c19a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Received unexpected event network-vif-plugged-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:05:58 np0005465987 nova_compute[230713]: 2025-10-02 13:05:58.387 2 INFO nova.virt.libvirt.driver [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Deleting instance files /var/lib/nova/instances/039a9f5b-fc39-40c8-8416-4f18a87f9652_del#033[00m
Oct  2 09:05:58 np0005465987 nova_compute[230713]: 2025-10-02 13:05:58.387 2 INFO nova.virt.libvirt.driver [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Deletion of /var/lib/nova/instances/039a9f5b-fc39-40c8-8416-4f18a87f9652_del complete#033[00m
Oct  2 09:05:58 np0005465987 nova_compute[230713]: 2025-10-02 13:05:58.454 2 INFO nova.compute.manager [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Took 2.71 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:05:58 np0005465987 nova_compute[230713]: 2025-10-02 13:05:58.455 2 DEBUG oslo.service.loopingcall [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:05:58 np0005465987 nova_compute[230713]: 2025-10-02 13:05:58.455 2 DEBUG nova.compute.manager [-] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:05:58 np0005465987 nova_compute[230713]: 2025-10-02 13:05:58.455 2 DEBUG nova.network.neutron [-] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:05:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:58.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:59 np0005465987 nova_compute[230713]: 2025-10-02 13:05:59.463 2 DEBUG nova.network.neutron [-] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:05:59 np0005465987 nova_compute[230713]: 2025-10-02 13:05:59.491 2 INFO nova.compute.manager [-] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Took 1.04 seconds to deallocate network for instance.
Oct  2 09:05:59 np0005465987 nova_compute[230713]: 2025-10-02 13:05:59.545 2 DEBUG oslo_concurrency.lockutils [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:05:59 np0005465987 nova_compute[230713]: 2025-10-02 13:05:59.545 2 DEBUG oslo_concurrency.lockutils [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:05:59 np0005465987 nova_compute[230713]: 2025-10-02 13:05:59.607 2 DEBUG oslo_concurrency.processutils [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:05:59 np0005465987 nova_compute[230713]: 2025-10-02 13:05:59.637 2 DEBUG nova.compute.manager [req-04e9aaff-5c16-4af4-95d7-d40bc53985d2 req-724c02ef-a33f-42a9-9503-f3db004c86df d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Received event network-vif-deleted-ec129a3e-d0ce-4751-87ee-8f0da0e1b406 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:05:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:05:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:59.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:06:00.046 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:06:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:06:00 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4114918587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:06:00 np0005465987 nova_compute[230713]: 2025-10-02 13:06:00.090 2 DEBUG oslo_concurrency.processutils [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:06:00 np0005465987 nova_compute[230713]: 2025-10-02 13:06:00.096 2 DEBUG nova.compute.provider_tree [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:06:00 np0005465987 nova_compute[230713]: 2025-10-02 13:06:00.116 2 DEBUG nova.scheduler.client.report [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:06:00 np0005465987 nova_compute[230713]: 2025-10-02 13:06:00.138 2 DEBUG oslo_concurrency.lockutils [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:06:00 np0005465987 nova_compute[230713]: 2025-10-02 13:06:00.165 2 INFO nova.scheduler.client.report [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Deleted allocations for instance 039a9f5b-fc39-40c8-8416-4f18a87f9652
Oct  2 09:06:00 np0005465987 nova_compute[230713]: 2025-10-02 13:06:00.251 2 DEBUG oslo_concurrency.lockutils [None req-093c0b87-6bd2-4832-8ce5-8b6ba9cb61da 93facc00c95f4cbfa6cecaf3641182bc 5eceae619a6f4fdeaa8ba6fafda4912a - - default default] Lock "039a9f5b-fc39-40c8-8416-4f18a87f9652" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:06:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:00.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:01 np0005465987 nova_compute[230713]: 2025-10-02 13:06:01.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:01 np0005465987 nova_compute[230713]: 2025-10-02 13:06:01.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:01.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:02.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:02 np0005465987 nova_compute[230713]: 2025-10-02 13:06:02.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:06:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2961042422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:06:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:06:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2961042422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:06:03 np0005465987 nova_compute[230713]: 2025-10-02 13:06:03.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:03.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:04.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:05.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:06:06 np0005465987 nova_compute[230713]: 2025-10-02 13:06:06.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:06.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:06:07 np0005465987 nova_compute[230713]: 2025-10-02 13:06:07.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:07.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:08.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:09.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:06:09 np0005465987 nova_compute[230713]: 2025-10-02 13:06:09.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:10.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:10 np0005465987 nova_compute[230713]: 2025-10-02 13:06:10.978 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410355.9771829, 039a9f5b-fc39-40c8-8416-4f18a87f9652 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:06:10 np0005465987 nova_compute[230713]: 2025-10-02 13:06:10.978 2 INFO nova.compute.manager [-] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] VM Stopped (Lifecycle Event)
Oct  2 09:06:11 np0005465987 nova_compute[230713]: 2025-10-02 13:06:11.009 2 DEBUG nova.compute.manager [None req-17672fd5-0ffd-417b-b361-b16bc5a0c205 - - - - - -] [instance: 039a9f5b-fc39-40c8-8416-4f18a87f9652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:06:11 np0005465987 nova_compute[230713]: 2025-10-02 13:06:11.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:11 np0005465987 nova_compute[230713]: 2025-10-02 13:06:11.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:11 np0005465987 nova_compute[230713]: 2025-10-02 13:06:11.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:11 np0005465987 podman[310635]: 2025-10-02 13:06:11.83378423 +0000 UTC m=+0.052563656 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:06:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:11.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:12.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:06:12 np0005465987 nova_compute[230713]: 2025-10-02 13:06:12.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:13.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:14.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:06:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:15.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:06:16 np0005465987 nova_compute[230713]: 2025-10-02 13:06:16.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:16.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:06:16 np0005465987 podman[310657]: 2025-10-02 13:06:16.845527549 +0000 UTC m=+0.059198726 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 09:06:16 np0005465987 podman[310656]: 2025-10-02 13:06:16.863567628 +0000 UTC m=+0.078155980 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 09:06:16 np0005465987 podman[310655]: 2025-10-02 13:06:16.900673495 +0000 UTC m=+0.111042862 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:06:17 np0005465987 nova_compute[230713]: 2025-10-02 13:06:17.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:17.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:18.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:06:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:19.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:06:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:20.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:21 np0005465987 nova_compute[230713]: 2025-10-02 13:06:21.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:21.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:22.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:22 np0005465987 nova_compute[230713]: 2025-10-02 13:06:22.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:24.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:24.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:26.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:26 np0005465987 nova_compute[230713]: 2025-10-02 13:06:26.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:26.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:06:27 np0005465987 nova_compute[230713]: 2025-10-02 13:06:27.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:06:27.993 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:06:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:06:27.993 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:06:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:06:27.993 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:06:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:28.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:28.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:29 np0005465987 nova_compute[230713]: 2025-10-02 13:06:29.960 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:30.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:30.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:30 np0005465987 nova_compute[230713]: 2025-10-02 13:06:30.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:31 np0005465987 nova_compute[230713]: 2025-10-02 13:06:31.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:32.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:32.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:32 np0005465987 nova_compute[230713]: 2025-10-02 13:06:32.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:34.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:06:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:34.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:06:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:06:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:06:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:06:35 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:06:35 np0005465987 nova_compute[230713]: 2025-10-02 13:06:35.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:35 np0005465987 nova_compute[230713]: 2025-10-02 13:06:35.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:06:35 np0005465987 nova_compute[230713]: 2025-10-02 13:06:35.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:06:35 np0005465987 nova_compute[230713]: 2025-10-02 13:06:35.985 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:06:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:36.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:36 np0005465987 nova_compute[230713]: 2025-10-02 13:06:36.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:36.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:06:36 np0005465987 nova_compute[230713]: 2025-10-02 13:06:36.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:37 np0005465987 nova_compute[230713]: 2025-10-02 13:06:37.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:38.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:38.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:38 np0005465987 nova_compute[230713]: 2025-10-02 13:06:38.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:38 np0005465987 nova_compute[230713]: 2025-10-02 13:06:38.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:38 np0005465987 nova_compute[230713]: 2025-10-02 13:06:38.991 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:38 np0005465987 nova_compute[230713]: 2025-10-02 13:06:38.991 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:38 np0005465987 nova_compute[230713]: 2025-10-02 13:06:38.992 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:38 np0005465987 nova_compute[230713]: 2025-10-02 13:06:38.992 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:06:38 np0005465987 nova_compute[230713]: 2025-10-02 13:06:38.992 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:06:39 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2940961265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:06:39 np0005465987 nova_compute[230713]: 2025-10-02 13:06:39.455 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:39 np0005465987 nova_compute[230713]: 2025-10-02 13:06:39.675 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:06:39 np0005465987 nova_compute[230713]: 2025-10-02 13:06:39.677 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4340MB free_disk=20.946632385253906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:06:39 np0005465987 nova_compute[230713]: 2025-10-02 13:06:39.678 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:39 np0005465987 nova_compute[230713]: 2025-10-02 13:06:39.678 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:39 np0005465987 nova_compute[230713]: 2025-10-02 13:06:39.898 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:06:39 np0005465987 nova_compute[230713]: 2025-10-02 13:06:39.898 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:06:39 np0005465987 nova_compute[230713]: 2025-10-02 13:06:39.924 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:40.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:06:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:06:40 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2993262296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:06:40 np0005465987 nova_compute[230713]: 2025-10-02 13:06:40.360 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:40 np0005465987 nova_compute[230713]: 2025-10-02 13:06:40.366 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:06:40 np0005465987 nova_compute[230713]: 2025-10-02 13:06:40.391 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:06:40 np0005465987 nova_compute[230713]: 2025-10-02 13:06:40.424 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:06:40 np0005465987 nova_compute[230713]: 2025-10-02 13:06:40.424 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:40.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:41 np0005465987 nova_compute[230713]: 2025-10-02 13:06:41.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:41 np0005465987 nova_compute[230713]: 2025-10-02 13:06:41.423 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:41 np0005465987 nova_compute[230713]: 2025-10-02 13:06:41.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:42.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:06:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:42.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:42 np0005465987 podman[310895]: 2025-10-02 13:06:42.862012371 +0000 UTC m=+0.071490420 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 09:06:43 np0005465987 nova_compute[230713]: 2025-10-02 13:06:43.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:43 np0005465987 nova_compute[230713]: 2025-10-02 13:06:43.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:43 np0005465987 nova_compute[230713]: 2025-10-02 13:06:43.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:06:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:44.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:44.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:06:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:06:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:06:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:46.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:46 np0005465987 nova_compute[230713]: 2025-10-02 13:06:46.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:46.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:06:47 np0005465987 podman[310972]: 2025-10-02 13:06:47.857232732 +0000 UTC m=+0.056560764 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:06:47 np0005465987 podman[310966]: 2025-10-02 13:06:47.878870939 +0000 UTC m=+0.084734869 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:06:47 np0005465987 podman[310965]: 2025-10-02 13:06:47.882453806 +0000 UTC m=+0.094910424 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:06:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:48.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:48 np0005465987 nova_compute[230713]: 2025-10-02 13:06:48.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:48.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:50.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:06:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:50.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:06:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:51 np0005465987 nova_compute[230713]: 2025-10-02 13:06:51.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:52.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:52.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:53 np0005465987 nova_compute[230713]: 2025-10-02 13:06:53.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:54.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:54.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:06:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:56.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:56 np0005465987 nova_compute[230713]: 2025-10-02 13:06:56.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:56.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:57 np0005465987 nova_compute[230713]: 2025-10-02 13:06:57.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:58.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:58 np0005465987 nova_compute[230713]: 2025-10-02 13:06:58.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:06:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:06:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:58.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:00.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:00.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:01 np0005465987 nova_compute[230713]: 2025-10-02 13:07:01.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:02.001 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:07:02 np0005465987 nova_compute[230713]: 2025-10-02 13:07:02.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:02 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:02.003 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:07:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:02.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:02.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:03 np0005465987 nova_compute[230713]: 2025-10-02 13:07:03.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:04.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:07:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:04.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:07:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:05.005 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:06.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:06 np0005465987 nova_compute[230713]: 2025-10-02 13:07:06.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:07:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:06.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.184 2 DEBUG nova.compute.manager [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.284 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.285 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.316 2 DEBUG nova.objects.instance [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lazy-loading 'pci_requests' on Instance uuid ededa679-1161-406f-bccd-e571b80d8f7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.337 2 DEBUG nova.virt.hardware [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.338 2 INFO nova.compute.claims [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.338 2 DEBUG nova.objects.instance [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lazy-loading 'resources' on Instance uuid ededa679-1161-406f-bccd-e571b80d8f7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.349 2 DEBUG nova.objects.instance [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lazy-loading 'numa_topology' on Instance uuid ededa679-1161-406f-bccd-e571b80d8f7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.363 2 DEBUG nova.objects.instance [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lazy-loading 'pci_devices' on Instance uuid ededa679-1161-406f-bccd-e571b80d8f7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.436 2 INFO nova.compute.resource_tracker [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updating resource usage from migration 1bfd5135-23f5-42f1-b10d-5b0477925fb3#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.437 2 DEBUG nova.compute.resource_tracker [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Starting to track incoming migration 1bfd5135-23f5-42f1-b10d-5b0477925fb3 with flavor cef129e5-cce4-4465-9674-03d3559e8a14 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.513 2 DEBUG oslo_concurrency.processutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:07:07 np0005465987 ovn_controller[129172]: 2025-10-02T13:07:07Z|00805|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct  2 09:07:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:07:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/433554995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.926 2 DEBUG oslo_concurrency.processutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.932 2 DEBUG nova.compute.provider_tree [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.957 2 DEBUG nova.scheduler.client.report [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.982 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:07 np0005465987 nova_compute[230713]: 2025-10-02 13:07:07.982 2 INFO nova.compute.manager [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Migrating#033[00m
Oct  2 09:07:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:07:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:08.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:07:08 np0005465987 nova_compute[230713]: 2025-10-02 13:07:08.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:08.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:10.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:10.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:10 np0005465987 systemd[1]: Created slice User Slice of UID 42436.
Oct  2 09:07:10 np0005465987 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct  2 09:07:10 np0005465987 systemd-logind[794]: New session 62 of user nova.
Oct  2 09:07:10 np0005465987 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct  2 09:07:10 np0005465987 systemd[1]: Starting User Manager for UID 42436...
Oct  2 09:07:10 np0005465987 systemd[311057]: Queued start job for default target Main User Target.
Oct  2 09:07:10 np0005465987 systemd[311057]: Created slice User Application Slice.
Oct  2 09:07:10 np0005465987 systemd[311057]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 09:07:10 np0005465987 systemd[311057]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 09:07:10 np0005465987 systemd[311057]: Reached target Paths.
Oct  2 09:07:10 np0005465987 systemd[311057]: Reached target Timers.
Oct  2 09:07:10 np0005465987 systemd[311057]: Starting D-Bus User Message Bus Socket...
Oct  2 09:07:10 np0005465987 systemd[311057]: Starting Create User's Volatile Files and Directories...
Oct  2 09:07:10 np0005465987 systemd[311057]: Listening on D-Bus User Message Bus Socket.
Oct  2 09:07:10 np0005465987 systemd[311057]: Reached target Sockets.
Oct  2 09:07:10 np0005465987 systemd[311057]: Finished Create User's Volatile Files and Directories.
Oct  2 09:07:10 np0005465987 systemd[311057]: Reached target Basic System.
Oct  2 09:07:10 np0005465987 systemd[311057]: Reached target Main User Target.
Oct  2 09:07:10 np0005465987 systemd[311057]: Startup finished in 119ms.
Oct  2 09:07:10 np0005465987 systemd[1]: Started User Manager for UID 42436.
Oct  2 09:07:10 np0005465987 systemd[1]: Started Session 62 of User nova.
Oct  2 09:07:11 np0005465987 systemd[1]: session-62.scope: Deactivated successfully.
Oct  2 09:07:11 np0005465987 systemd-logind[794]: Session 62 logged out. Waiting for processes to exit.
Oct  2 09:07:11 np0005465987 systemd-logind[794]: Removed session 62.
Oct  2 09:07:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:11 np0005465987 nova_compute[230713]: 2025-10-02 13:07:11.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:11 np0005465987 systemd-logind[794]: New session 64 of user nova.
Oct  2 09:07:11 np0005465987 systemd[1]: Started Session 64 of User nova.
Oct  2 09:07:11 np0005465987 systemd[1]: session-64.scope: Deactivated successfully.
Oct  2 09:07:11 np0005465987 systemd-logind[794]: Session 64 logged out. Waiting for processes to exit.
Oct  2 09:07:11 np0005465987 systemd-logind[794]: Removed session 64.
Oct  2 09:07:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:12.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:13 np0005465987 nova_compute[230713]: 2025-10-02 13:07:13.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:13 np0005465987 podman[311079]: 2025-10-02 13:07:13.899769693 +0000 UTC m=+0.105936273 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:07:13 np0005465987 nova_compute[230713]: 2025-10-02 13:07:13.947 2 DEBUG nova.compute.manager [req-7d65c751-cbbc-4a90-bc91-e8be03328b35 req-2e1c8910-d6c7-4abd-b6fa-9de1fef5c423 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-unplugged-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:13 np0005465987 nova_compute[230713]: 2025-10-02 13:07:13.948 2 DEBUG oslo_concurrency.lockutils [req-7d65c751-cbbc-4a90-bc91-e8be03328b35 req-2e1c8910-d6c7-4abd-b6fa-9de1fef5c423 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:13 np0005465987 nova_compute[230713]: 2025-10-02 13:07:13.948 2 DEBUG oslo_concurrency.lockutils [req-7d65c751-cbbc-4a90-bc91-e8be03328b35 req-2e1c8910-d6c7-4abd-b6fa-9de1fef5c423 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:13 np0005465987 nova_compute[230713]: 2025-10-02 13:07:13.948 2 DEBUG oslo_concurrency.lockutils [req-7d65c751-cbbc-4a90-bc91-e8be03328b35 req-2e1c8910-d6c7-4abd-b6fa-9de1fef5c423 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:13 np0005465987 nova_compute[230713]: 2025-10-02 13:07:13.948 2 DEBUG nova.compute.manager [req-7d65c751-cbbc-4a90-bc91-e8be03328b35 req-2e1c8910-d6c7-4abd-b6fa-9de1fef5c423 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] No waiting events found dispatching network-vif-unplugged-ccb82929-088d-402f-87fd-a1b429d52ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:07:13 np0005465987 nova_compute[230713]: 2025-10-02 13:07:13.949 2 WARNING nova.compute.manager [req-7d65c751-cbbc-4a90-bc91-e8be03328b35 req-2e1c8910-d6c7-4abd-b6fa-9de1fef5c423 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received unexpected event network-vif-unplugged-ccb82929-088d-402f-87fd-a1b429d52ff7 for instance with vm_state active and task_state resize_migrating.#033[00m
Oct  2 09:07:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:07:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:14.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:07:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:14.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:15 np0005465987 nova_compute[230713]: 2025-10-02 13:07:15.081 2 INFO nova.network.neutron [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updating port ccb82929-088d-402f-87fd-a1b429d52ff7 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 09:07:15 np0005465987 nova_compute[230713]: 2025-10-02 13:07:15.649 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquiring lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:07:15 np0005465987 nova_compute[230713]: 2025-10-02 13:07:15.650 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquired lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:07:15 np0005465987 nova_compute[230713]: 2025-10-02 13:07:15.650 2 DEBUG nova.network.neutron [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:07:15 np0005465987 nova_compute[230713]: 2025-10-02 13:07:15.779 2 DEBUG nova.compute.manager [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-changed-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:15 np0005465987 nova_compute[230713]: 2025-10-02 13:07:15.780 2 DEBUG nova.compute.manager [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Refreshing instance network info cache due to event network-changed-ccb82929-088d-402f-87fd-a1b429d52ff7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:07:15 np0005465987 nova_compute[230713]: 2025-10-02 13:07:15.780 2 DEBUG oslo_concurrency.lockutils [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:07:16 np0005465987 nova_compute[230713]: 2025-10-02 13:07:16.069 2 DEBUG nova.compute.manager [req-475c81d9-e001-4491-809f-83e303810757 req-222d8dc9-b656-481d-9d10-8ad968ae4c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:16 np0005465987 nova_compute[230713]: 2025-10-02 13:07:16.069 2 DEBUG oslo_concurrency.lockutils [req-475c81d9-e001-4491-809f-83e303810757 req-222d8dc9-b656-481d-9d10-8ad968ae4c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:16 np0005465987 nova_compute[230713]: 2025-10-02 13:07:16.070 2 DEBUG oslo_concurrency.lockutils [req-475c81d9-e001-4491-809f-83e303810757 req-222d8dc9-b656-481d-9d10-8ad968ae4c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:16 np0005465987 nova_compute[230713]: 2025-10-02 13:07:16.070 2 DEBUG oslo_concurrency.lockutils [req-475c81d9-e001-4491-809f-83e303810757 req-222d8dc9-b656-481d-9d10-8ad968ae4c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:16 np0005465987 nova_compute[230713]: 2025-10-02 13:07:16.070 2 DEBUG nova.compute.manager [req-475c81d9-e001-4491-809f-83e303810757 req-222d8dc9-b656-481d-9d10-8ad968ae4c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] No waiting events found dispatching network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:07:16 np0005465987 nova_compute[230713]: 2025-10-02 13:07:16.070 2 WARNING nova.compute.manager [req-475c81d9-e001-4491-809f-83e303810757 req-222d8dc9-b656-481d-9d10-8ad968ae4c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received unexpected event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 09:07:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:16.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:16 np0005465987 nova_compute[230713]: 2025-10-02 13:07:16.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:16.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:17 np0005465987 nova_compute[230713]: 2025-10-02 13:07:17.401 2 DEBUG nova.network.neutron [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updating instance_info_cache with network_info: [{"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:07:17 np0005465987 nova_compute[230713]: 2025-10-02 13:07:17.443 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Releasing lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:07:17 np0005465987 nova_compute[230713]: 2025-10-02 13:07:17.446 2 DEBUG oslo_concurrency.lockutils [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:07:17 np0005465987 nova_compute[230713]: 2025-10-02 13:07:17.446 2 DEBUG nova.network.neutron [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Refreshing network info cache for port ccb82929-088d-402f-87fd-a1b429d52ff7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:07:17 np0005465987 nova_compute[230713]: 2025-10-02 13:07:17.524 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct  2 09:07:17 np0005465987 nova_compute[230713]: 2025-10-02 13:07:17.526 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 09:07:17 np0005465987 nova_compute[230713]: 2025-10-02 13:07:17.526 2 INFO nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Creating image(s)#033[00m
Oct  2 09:07:17 np0005465987 nova_compute[230713]: 2025-10-02 13:07:17.563 2 DEBUG nova.storage.rbd_utils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] creating snapshot(nova-resize) on rbd image(ededa679-1161-406f-bccd-e571b80d8f7f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 09:07:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e400 e400: 3 total, 3 up, 3 in
Oct  2 09:07:17 np0005465987 nova_compute[230713]: 2025-10-02 13:07:17.966 2 DEBUG nova.objects.instance [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lazy-loading 'trusted_certs' on Instance uuid ededa679-1161-406f-bccd-e571b80d8f7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.072 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.072 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Ensure instance console log exists: /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.073 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.073 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.073 2 DEBUG oslo_concurrency.lockutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.076 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Start _get_guest_xml network_info=[{"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1276596428", "vif_mac": "fa:16:3e:25:9d:a2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.079 2 WARNING nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.084 2 DEBUG nova.virt.libvirt.host [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.085 2 DEBUG nova.virt.libvirt.host [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.088 2 DEBUG nova.virt.libvirt.host [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.088 2 DEBUG nova.virt.libvirt.host [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.089 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.089 2 DEBUG nova.virt.hardware [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.090 2 DEBUG nova.virt.hardware [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.090 2 DEBUG nova.virt.hardware [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.091 2 DEBUG nova.virt.hardware [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.091 2 DEBUG nova.virt.hardware [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.091 2 DEBUG nova.virt.hardware [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.091 2 DEBUG nova.virt.hardware [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.092 2 DEBUG nova.virt.hardware [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.092 2 DEBUG nova.virt.hardware [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.092 2 DEBUG nova.virt.hardware [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.092 2 DEBUG nova.virt.hardware [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.093 2 DEBUG nova.objects.instance [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lazy-loading 'vcpu_model' on Instance uuid ededa679-1161-406f-bccd-e571b80d8f7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:07:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:18.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.118 2 DEBUG oslo_concurrency.processutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:07:18 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1094018913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.543 2 DEBUG oslo_concurrency.processutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.579 2 DEBUG oslo_concurrency.processutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.646 2 DEBUG nova.network.neutron [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updated VIF entry in instance network info cache for port ccb82929-088d-402f-87fd-a1b429d52ff7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.647 2 DEBUG nova.network.neutron [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updating instance_info_cache with network_info: [{"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:07:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:18.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:18 np0005465987 nova_compute[230713]: 2025-10-02 13:07:18.661 2 DEBUG oslo_concurrency.lockutils [req-0417012a-6a40-468e-b7fb-c5e00f20fd17 req-923c8020-ba64-4797-82bc-a559b6844665 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:07:18 np0005465987 podman[311232]: 2025-10-02 13:07:18.869378861 +0000 UTC m=+0.080544474 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:07:18 np0005465987 podman[311233]: 2025-10-02 13:07:18.877542352 +0000 UTC m=+0.088319275 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:07:18 np0005465987 podman[311231]: 2025-10-02 13:07:18.881438238 +0000 UTC m=+0.096133948 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  2 09:07:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:07:18 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3034389863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.000 2 DEBUG oslo_concurrency.processutils [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.002 2 DEBUG nova.virt.libvirt.vif [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-84071708',display_name='tempest-TestNetworkAdvancedServerOps-server-84071708',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-84071708',id=207,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIX/+5SRLUhs0eDaLcolgjqYkVwsuk4D/uvy0QHY7DM2c673uNSh9aodkpDeRsWNUXML+zucDvvL1LRs9C6jNeVRJuMPEGHmaQ46I/VMjFKhJP1gLMaRGktNaEffxctaUQ==',key_name='tempest-TestNetworkAdvancedServerOps-1720569281',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:06:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-znqtzf0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:07:14Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=ededa679-1161-406f-bccd-e571b80d8f7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1276596428", "vif_mac": "fa:16:3e:25:9d:a2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.002 2 DEBUG nova.network.os_vif_util [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Converting VIF {"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1276596428", "vif_mac": "fa:16:3e:25:9d:a2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.003 2 DEBUG nova.network.os_vif_util [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.005 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  <uuid>ededa679-1161-406f-bccd-e571b80d8f7f</uuid>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  <name>instance-000000cf</name>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-84071708</nova:name>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 13:07:18</nova:creationTime>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <nova:user uuid="ffe4d737e4414fb3a3e358f8ca3f3e1e">tempest-TestNetworkAdvancedServerOps-1527846432-project-member</nova:user>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <nova:project uuid="08e102ae48244af2ab448a2e1ff757df">tempest-TestNetworkAdvancedServerOps-1527846432</nova:project>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <nova:port uuid="ccb82929-088d-402f-87fd-a1b429d52ff7">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <system>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <entry name="serial">ededa679-1161-406f-bccd-e571b80d8f7f</entry>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <entry name="uuid">ededa679-1161-406f-bccd-e571b80d8f7f</entry>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    </system>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  <os>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  </os>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  <features>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  </features>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  </clock>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  <devices>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/ededa679-1161-406f-bccd-e571b80d8f7f_disk">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/ededa679-1161-406f-bccd-e571b80d8f7f_disk.config">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:25:9d:a2"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <target dev="tapccb82929-08"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    </interface>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f/console.log" append="off"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    </serial>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <video>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    </video>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    </rng>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 09:07:19 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 09:07:19 np0005465987 nova_compute[230713]:  </devices>
Oct  2 09:07:19 np0005465987 nova_compute[230713]: </domain>
Oct  2 09:07:19 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.006 2 DEBUG nova.virt.libvirt.vif [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-84071708',display_name='tempest-TestNetworkAdvancedServerOps-server-84071708',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-84071708',id=207,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIX/+5SRLUhs0eDaLcolgjqYkVwsuk4D/uvy0QHY7DM2c673uNSh9aodkpDeRsWNUXML+zucDvvL1LRs9C6jNeVRJuMPEGHmaQ46I/VMjFKhJP1gLMaRGktNaEffxctaUQ==',key_name='tempest-TestNetworkAdvancedServerOps-1720569281',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:06:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-znqtzf0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:07:14Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=ededa679-1161-406f-bccd-e571b80d8f7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1276596428", "vif_mac": "fa:16:3e:25:9d:a2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.007 2 DEBUG nova.network.os_vif_util [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Converting VIF {"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1276596428", "vif_mac": "fa:16:3e:25:9d:a2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.007 2 DEBUG nova.network.os_vif_util [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.007 2 DEBUG os_vif [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.008 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.009 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.011 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapccb82929-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.011 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapccb82929-08, col_values=(('external_ids', {'iface-id': 'ccb82929-088d-402f-87fd-a1b429d52ff7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:9d:a2', 'vm-uuid': 'ededa679-1161-406f-bccd-e571b80d8f7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 NetworkManager[44910]: <info>  [1759410439.0137] manager: (tapccb82929-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.022 2 INFO os_vif [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08')#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.080 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.080 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.080 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] No VIF found with MAC fa:16:3e:25:9d:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.081 2 INFO nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Using config drive#033[00m
Oct  2 09:07:19 np0005465987 kernel: tapccb82929-08: entered promiscuous mode
Oct  2 09:07:19 np0005465987 NetworkManager[44910]: <info>  [1759410439.1741] manager: (tapccb82929-08): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 ovn_controller[129172]: 2025-10-02T13:07:19Z|00806|binding|INFO|Claiming lport ccb82929-088d-402f-87fd-a1b429d52ff7 for this chassis.
Oct  2 09:07:19 np0005465987 ovn_controller[129172]: 2025-10-02T13:07:19Z|00807|binding|INFO|ccb82929-088d-402f-87fd-a1b429d52ff7: Claiming fa:16:3e:25:9d:a2 10.100.0.5
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 NetworkManager[44910]: <info>  [1759410439.2395] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Oct  2 09:07:19 np0005465987 NetworkManager[44910]: <info>  [1759410439.2401] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.240 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:9d:a2 10.100.0.5'], port_security=['fa:16:3e:25:9d:a2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ededa679-1161-406f-bccd-e571b80d8f7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f0130049-f161-49c4-b94d-f41f73f65259', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=916b0cf7-14c0-4414-a8be-ed0d8198832b, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ccb82929-088d-402f-87fd-a1b429d52ff7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.241 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ccb82929-088d-402f-87fd-a1b429d52ff7 in datapath 6ac71ae6-e349-4379-83eb-f06b6ed86426 bound to our chassis#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.242 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ac71ae6-e349-4379-83eb-f06b6ed86426#033[00m
Oct  2 09:07:19 np0005465987 systemd-udevd[311330]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:07:19 np0005465987 systemd-machined[188335]: New machine qemu-91-instance-000000cf.
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.254 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[420c2bac-2cf0-499c-9523-a05131a6448e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.254 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ac71ae6-e1 in ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.256 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ac71ae6-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.256 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9bdb0549-28e4-416e-8794-b78ca2c29b6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.257 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[298c4281-f898-4cba-b225-f4e7153e83d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 NetworkManager[44910]: <info>  [1759410439.2617] device (tapccb82929-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:07:19 np0005465987 NetworkManager[44910]: <info>  [1759410439.2626] device (tapccb82929-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.267 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[035b0c5c-2ebf-4303-8aa3-da968f136f9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.293 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5de87e01-3740-42d0-a332-d77fdd1e6c8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 systemd[1]: Started Virtual Machine qemu-91-instance-000000cf.
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.328 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b20e73-e68c-4135-8d30-a5d1e089d2ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 NetworkManager[44910]: <info>  [1759410439.3411] manager: (tap6ac71ae6-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/377)
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.340 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6d82e4f8-1845-4970-baaf-c2a1abc761a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.367 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[12fa143f-03d0-4358-87cf-12fd0737babd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.370 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[e0cea6c2-3e93-4919-af59-9e4e92dca203]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 NetworkManager[44910]: <info>  [1759410439.3921] device (tap6ac71ae6-e0): carrier: link connected
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.398 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[250d4113-c274-4ae3-99cb-2c12467e0251]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.414 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4399d927-632f-4bce-b175-1b97099f60fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ac71ae6-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:84:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 850796, 'reachable_time': 30553, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311363, 'error': None, 'target': 'ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.437 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2df1b12c-dbf9-48f7-96c2-4c8b054ebaf5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:8441'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 850796, 'tstamp': 850796}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311364, 'error': None, 'target': 'ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 ovn_controller[129172]: 2025-10-02T13:07:19Z|00808|binding|INFO|Setting lport ccb82929-088d-402f-87fd-a1b429d52ff7 ovn-installed in OVS
Oct  2 09:07:19 np0005465987 ovn_controller[129172]: 2025-10-02T13:07:19Z|00809|binding|INFO|Setting lport ccb82929-088d-402f-87fd-a1b429d52ff7 up in Southbound
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.453 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ae6a4c-cae2-43d1-aa14-bba9a208af79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ac71ae6-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:84:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 850796, 'reachable_time': 30553, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311365, 'error': None, 'target': 'ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.484 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[86a6f7b7-acb6-4239-af7d-bb8d37d91de4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.544 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a51bd9bb-8eff-4920-8d37-51574df710a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.546 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ac71ae6-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.546 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.546 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ac71ae6-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 kernel: tap6ac71ae6-e0: entered promiscuous mode
Oct  2 09:07:19 np0005465987 NetworkManager[44910]: <info>  [1759410439.5483] manager: (tap6ac71ae6-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.550 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ac71ae6-e0, col_values=(('external_ids', {'iface-id': '0f9dbaf2-40b8-4571-bd96-64d17a764b41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 ovn_controller[129172]: 2025-10-02T13:07:19Z|00810|binding|INFO|Releasing lport 0f9dbaf2-40b8-4571-bd96-64d17a764b41 from this chassis (sb_readonly=0)
Oct  2 09:07:19 np0005465987 nova_compute[230713]: 2025-10-02 13:07:19.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.563 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ac71ae6-e349-4379-83eb-f06b6ed86426.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ac71ae6-e349-4379-83eb-f06b6ed86426.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.564 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[076e43ed-caf0-4d51-96d4-85bf38e10377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.564 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-6ac71ae6-e349-4379-83eb-f06b6ed86426
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/6ac71ae6-e349-4379-83eb-f06b6ed86426.pid.haproxy
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 6ac71ae6-e349-4379-83eb-f06b6ed86426
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:07:19 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:19.565 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'env', 'PROCESS_TAG=haproxy-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ac71ae6-e349-4379-83eb-f06b6ed86426.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:07:19 np0005465987 podman[311439]: 2025-10-02 13:07:19.920932841 +0000 UTC m=+0.046634016 container create 2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 09:07:19 np0005465987 systemd[1]: Started libpod-conmon-2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647.scope.
Oct  2 09:07:19 np0005465987 podman[311439]: 2025-10-02 13:07:19.895336647 +0000 UTC m=+0.021037852 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:07:19 np0005465987 systemd[1]: Started libcrun container.
Oct  2 09:07:19 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6589b92586b948fbaa578c0273eccabc6cd60fdc7ec34f105d7a60a486da69f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:07:20 np0005465987 podman[311439]: 2025-10-02 13:07:20.019156844 +0000 UTC m=+0.144858029 container init 2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 09:07:20 np0005465987 podman[311439]: 2025-10-02 13:07:20.024498579 +0000 UTC m=+0.150199754 container start 2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:07:20 np0005465987 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[311455]: [NOTICE]   (311459) : New worker (311461) forked
Oct  2 09:07:20 np0005465987 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[311455]: [NOTICE]   (311459) : Loading success.
Oct  2 09:07:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:20.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.260 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410440.2600114, ededa679-1161-406f-bccd-e571b80d8f7f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.261 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.262 2 DEBUG nova.compute.manager [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.265 2 INFO nova.virt.libvirt.driver [-] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Instance running successfully.#033[00m
Oct  2 09:07:20 np0005465987 virtqemud[230442]: argument unsupported: QEMU guest agent is not configured
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.268 2 DEBUG nova.virt.libvirt.guest [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.268 2 DEBUG nova.virt.libvirt.driver [None req-f15bd1e4-3bf6-4e4b-bae8-dce40de191a8 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.297 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.303 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.370 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.371 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410440.2625828, ededa679-1161-406f-bccd-e571b80d8f7f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.371 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] VM Started (Lifecycle Event)#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.418 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.422 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.641 2 DEBUG nova.compute.manager [req-550ecdc5-4550-49bf-ab8a-310343b063ea req-a1f06c9e-4106-4237-bab9-7d9a8c8d753b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.641 2 DEBUG oslo_concurrency.lockutils [req-550ecdc5-4550-49bf-ab8a-310343b063ea req-a1f06c9e-4106-4237-bab9-7d9a8c8d753b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.641 2 DEBUG oslo_concurrency.lockutils [req-550ecdc5-4550-49bf-ab8a-310343b063ea req-a1f06c9e-4106-4237-bab9-7d9a8c8d753b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.641 2 DEBUG oslo_concurrency.lockutils [req-550ecdc5-4550-49bf-ab8a-310343b063ea req-a1f06c9e-4106-4237-bab9-7d9a8c8d753b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.642 2 DEBUG nova.compute.manager [req-550ecdc5-4550-49bf-ab8a-310343b063ea req-a1f06c9e-4106-4237-bab9-7d9a8c8d753b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] No waiting events found dispatching network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:07:20 np0005465987 nova_compute[230713]: 2025-10-02 13:07:20.642 2 WARNING nova.compute.manager [req-550ecdc5-4550-49bf-ab8a-310343b063ea req-a1f06c9e-4106-4237-bab9-7d9a8c8d753b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received unexpected event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 for instance with vm_state resized and task_state None.#033[00m
Oct  2 09:07:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:20.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:21 np0005465987 systemd[1]: Stopping User Manager for UID 42436...
Oct  2 09:07:21 np0005465987 systemd[311057]: Activating special unit Exit the Session...
Oct  2 09:07:21 np0005465987 systemd[311057]: Stopped target Main User Target.
Oct  2 09:07:21 np0005465987 systemd[311057]: Stopped target Basic System.
Oct  2 09:07:21 np0005465987 systemd[311057]: Stopped target Paths.
Oct  2 09:07:21 np0005465987 systemd[311057]: Stopped target Sockets.
Oct  2 09:07:21 np0005465987 systemd[311057]: Stopped target Timers.
Oct  2 09:07:21 np0005465987 systemd[311057]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  2 09:07:21 np0005465987 systemd[311057]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 09:07:21 np0005465987 systemd[311057]: Closed D-Bus User Message Bus Socket.
Oct  2 09:07:21 np0005465987 systemd[311057]: Stopped Create User's Volatile Files and Directories.
Oct  2 09:07:21 np0005465987 systemd[311057]: Removed slice User Application Slice.
Oct  2 09:07:21 np0005465987 systemd[311057]: Reached target Shutdown.
Oct  2 09:07:21 np0005465987 systemd[311057]: Finished Exit the Session.
Oct  2 09:07:21 np0005465987 systemd[311057]: Reached target Exit the Session.
Oct  2 09:07:21 np0005465987 systemd[1]: user@42436.service: Deactivated successfully.
Oct  2 09:07:21 np0005465987 systemd[1]: Stopped User Manager for UID 42436.
Oct  2 09:07:21 np0005465987 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct  2 09:07:21 np0005465987 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct  2 09:07:21 np0005465987 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct  2 09:07:21 np0005465987 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct  2 09:07:21 np0005465987 systemd[1]: Removed slice User Slice of UID 42436.
Oct  2 09:07:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:22.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:22.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:22 np0005465987 nova_compute[230713]: 2025-10-02 13:07:22.770 2 DEBUG nova.compute.manager [req-66336729-4d1d-4b7f-a70e-7378baa81b3c req-1477125e-6ae3-4d52-b580-4d55d32bc011 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:22 np0005465987 nova_compute[230713]: 2025-10-02 13:07:22.770 2 DEBUG oslo_concurrency.lockutils [req-66336729-4d1d-4b7f-a70e-7378baa81b3c req-1477125e-6ae3-4d52-b580-4d55d32bc011 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:22 np0005465987 nova_compute[230713]: 2025-10-02 13:07:22.770 2 DEBUG oslo_concurrency.lockutils [req-66336729-4d1d-4b7f-a70e-7378baa81b3c req-1477125e-6ae3-4d52-b580-4d55d32bc011 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:22 np0005465987 nova_compute[230713]: 2025-10-02 13:07:22.771 2 DEBUG oslo_concurrency.lockutils [req-66336729-4d1d-4b7f-a70e-7378baa81b3c req-1477125e-6ae3-4d52-b580-4d55d32bc011 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:22 np0005465987 nova_compute[230713]: 2025-10-02 13:07:22.771 2 DEBUG nova.compute.manager [req-66336729-4d1d-4b7f-a70e-7378baa81b3c req-1477125e-6ae3-4d52-b580-4d55d32bc011 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] No waiting events found dispatching network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:07:22 np0005465987 nova_compute[230713]: 2025-10-02 13:07:22.771 2 WARNING nova.compute.manager [req-66336729-4d1d-4b7f-a70e-7378baa81b3c req-1477125e-6ae3-4d52-b580-4d55d32bc011 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received unexpected event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 for instance with vm_state resized and task_state None.#033[00m
Oct  2 09:07:23 np0005465987 nova_compute[230713]: 2025-10-02 13:07:23.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:24 np0005465987 nova_compute[230713]: 2025-10-02 13:07:24.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:24.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:24.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:26.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:26.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:27 np0005465987 ovn_controller[129172]: 2025-10-02T13:07:27Z|00811|binding|INFO|Releasing lport 0f9dbaf2-40b8-4571-bd96-64d17a764b41 from this chassis (sb_readonly=0)
Oct  2 09:07:27 np0005465987 nova_compute[230713]: 2025-10-02 13:07:27.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:27.993 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:27.994 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:27.994 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:28.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:28 np0005465987 nova_compute[230713]: 2025-10-02 13:07:28.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:28.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e401 e401: 3 total, 3 up, 3 in
Oct  2 09:07:29 np0005465987 nova_compute[230713]: 2025-10-02 13:07:29.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:30.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:30.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:31 np0005465987 nova_compute[230713]: 2025-10-02 13:07:31.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:31 np0005465987 nova_compute[230713]: 2025-10-02 13:07:31.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:31 np0005465987 ovn_controller[129172]: 2025-10-02T13:07:31Z|00812|binding|INFO|Releasing lport 0f9dbaf2-40b8-4571-bd96-64d17a764b41 from this chassis (sb_readonly=0)
Oct  2 09:07:32 np0005465987 nova_compute[230713]: 2025-10-02 13:07:32.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:32.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:32.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:33 np0005465987 nova_compute[230713]: 2025-10-02 13:07:33.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:33 np0005465987 ovn_controller[129172]: 2025-10-02T13:07:33Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:25:9d:a2 10.100.0.5
Oct  2 09:07:34 np0005465987 nova_compute[230713]: 2025-10-02 13:07:34.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:34.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:34.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:35 np0005465987 nova_compute[230713]: 2025-10-02 13:07:35.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:35 np0005465987 nova_compute[230713]: 2025-10-02 13:07:35.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:07:35 np0005465987 nova_compute[230713]: 2025-10-02 13:07:35.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:07:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e402 e402: 3 total, 3 up, 3 in
Oct  2 09:07:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:36.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:36 np0005465987 nova_compute[230713]: 2025-10-02 13:07:36.215 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:07:36 np0005465987 nova_compute[230713]: 2025-10-02 13:07:36.215 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:07:36 np0005465987 nova_compute[230713]: 2025-10-02 13:07:36.216 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:07:36 np0005465987 nova_compute[230713]: 2025-10-02 13:07:36.216 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ededa679-1161-406f-bccd-e571b80d8f7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:07:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:36.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:37 np0005465987 nova_compute[230713]: 2025-10-02 13:07:37.528 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updating instance_info_cache with network_info: [{"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:07:37 np0005465987 nova_compute[230713]: 2025-10-02 13:07:37.543 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:07:37 np0005465987 nova_compute[230713]: 2025-10-02 13:07:37.543 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:07:37 np0005465987 nova_compute[230713]: 2025-10-02 13:07:37.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:38.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:38 np0005465987 nova_compute[230713]: 2025-10-02 13:07:38.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:38.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:39 np0005465987 nova_compute[230713]: 2025-10-02 13:07:39.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:39 np0005465987 nova_compute[230713]: 2025-10-02 13:07:39.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:39 np0005465987 nova_compute[230713]: 2025-10-02 13:07:39.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:40.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:40 np0005465987 nova_compute[230713]: 2025-10-02 13:07:40.311 2 INFO nova.compute.manager [None req-3c061f97-6d19-4039-b12f-a86a5c877094 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Get console output#033[00m
Oct  2 09:07:40 np0005465987 nova_compute[230713]: 2025-10-02 13:07:40.316 14392 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:07:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:40.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:40 np0005465987 nova_compute[230713]: 2025-10-02 13:07:40.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:40 np0005465987 nova_compute[230713]: 2025-10-02 13:07:40.996 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:40 np0005465987 nova_compute[230713]: 2025-10-02 13:07:40.997 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:40 np0005465987 nova_compute[230713]: 2025-10-02 13:07:40.997 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:40 np0005465987 nova_compute[230713]: 2025-10-02 13:07:40.997 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:07:40 np0005465987 nova_compute[230713]: 2025-10-02 13:07:40.998 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:07:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:07:41 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1075869113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:07:41 np0005465987 nova_compute[230713]: 2025-10-02 13:07:41.437 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:07:41 np0005465987 nova_compute[230713]: 2025-10-02 13:07:41.556 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:07:41 np0005465987 nova_compute[230713]: 2025-10-02 13:07:41.557 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:07:41 np0005465987 nova_compute[230713]: 2025-10-02 13:07:41.723 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:07:41 np0005465987 nova_compute[230713]: 2025-10-02 13:07:41.724 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4123MB free_disk=20.94268035888672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:07:41 np0005465987 nova_compute[230713]: 2025-10-02 13:07:41.725 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:41 np0005465987 nova_compute[230713]: 2025-10-02 13:07:41.725 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:41 np0005465987 nova_compute[230713]: 2025-10-02 13:07:41.887 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance ededa679-1161-406f-bccd-e571b80d8f7f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:07:41 np0005465987 nova_compute[230713]: 2025-10-02 13:07:41.888 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:07:41 np0005465987 nova_compute[230713]: 2025-10-02 13:07:41.888 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:07:42 np0005465987 nova_compute[230713]: 2025-10-02 13:07:42.074 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:07:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:42.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:07:42 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1506125174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:07:42 np0005465987 nova_compute[230713]: 2025-10-02 13:07:42.586 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:07:42 np0005465987 nova_compute[230713]: 2025-10-02 13:07:42.591 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:07:42 np0005465987 nova_compute[230713]: 2025-10-02 13:07:42.611 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:07:42 np0005465987 nova_compute[230713]: 2025-10-02 13:07:42.646 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:07:42 np0005465987 nova_compute[230713]: 2025-10-02 13:07:42.646 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:07:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:42.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.027 2 DEBUG nova.compute.manager [req-a1e1b4fc-de52-44d5-9fde-ad823e61b110 req-fc5d4993-305f-4c74-bea8-5b418f4c2a31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-changed-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.028 2 DEBUG nova.compute.manager [req-a1e1b4fc-de52-44d5-9fde-ad823e61b110 req-fc5d4993-305f-4c74-bea8-5b418f4c2a31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Refreshing instance network info cache due to event network-changed-ccb82929-088d-402f-87fd-a1b429d52ff7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.028 2 DEBUG oslo_concurrency.lockutils [req-a1e1b4fc-de52-44d5-9fde-ad823e61b110 req-fc5d4993-305f-4c74-bea8-5b418f4c2a31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.028 2 DEBUG oslo_concurrency.lockutils [req-a1e1b4fc-de52-44d5-9fde-ad823e61b110 req-fc5d4993-305f-4c74-bea8-5b418f4c2a31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.028 2 DEBUG nova.network.neutron [req-a1e1b4fc-de52-44d5-9fde-ad823e61b110 req-fc5d4993-305f-4c74-bea8-5b418f4c2a31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Refreshing network info cache for port ccb82929-088d-402f-87fd-a1b429d52ff7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.154 2 DEBUG oslo_concurrency.lockutils [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.154 2 DEBUG oslo_concurrency.lockutils [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.155 2 DEBUG oslo_concurrency.lockutils [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.155 2 DEBUG oslo_concurrency.lockutils [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.155 2 DEBUG oslo_concurrency.lockutils [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.156 2 INFO nova.compute.manager [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Terminating instance#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.157 2 DEBUG nova.compute.manager [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:43 np0005465987 kernel: tapccb82929-08 (unregistering): left promiscuous mode
Oct  2 09:07:43 np0005465987 NetworkManager[44910]: <info>  [1759410463.2074] device (tapccb82929-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:07:43 np0005465987 ovn_controller[129172]: 2025-10-02T13:07:43Z|00813|binding|INFO|Releasing lport ccb82929-088d-402f-87fd-a1b429d52ff7 from this chassis (sb_readonly=0)
Oct  2 09:07:43 np0005465987 ovn_controller[129172]: 2025-10-02T13:07:43Z|00814|binding|INFO|Setting lport ccb82929-088d-402f-87fd-a1b429d52ff7 down in Southbound
Oct  2 09:07:43 np0005465987 ovn_controller[129172]: 2025-10-02T13:07:43Z|00815|binding|INFO|Removing iface tapccb82929-08 ovn-installed in OVS
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.239 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:9d:a2 10.100.0.5'], port_security=['fa:16:3e:25:9d:a2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ededa679-1161-406f-bccd-e571b80d8f7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f0130049-f161-49c4-b94d-f41f73f65259', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=916b0cf7-14c0-4414-a8be-ed0d8198832b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=ccb82929-088d-402f-87fd-a1b429d52ff7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.240 138325 INFO neutron.agent.ovn.metadata.agent [-] Port ccb82929-088d-402f-87fd-a1b429d52ff7 in datapath 6ac71ae6-e349-4379-83eb-f06b6ed86426 unbound from our chassis#033[00m
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.242 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ac71ae6-e349-4379-83eb-f06b6ed86426, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.244 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a9da323b-c51a-48f1-9ba5-79017cda4532]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.244 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426 namespace which is not needed anymore#033[00m
Oct  2 09:07:43 np0005465987 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000cf.scope: Deactivated successfully.
Oct  2 09:07:43 np0005465987 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000cf.scope: Consumed 14.350s CPU time.
Oct  2 09:07:43 np0005465987 systemd-machined[188335]: Machine qemu-91-instance-000000cf terminated.
Oct  2 09:07:43 np0005465987 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[311455]: [NOTICE]   (311459) : haproxy version is 2.8.14-c23fe91
Oct  2 09:07:43 np0005465987 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[311455]: [NOTICE]   (311459) : path to executable is /usr/sbin/haproxy
Oct  2 09:07:43 np0005465987 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[311455]: [WARNING]  (311459) : Exiting Master process...
Oct  2 09:07:43 np0005465987 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[311455]: [ALERT]    (311459) : Current worker (311461) exited with code 143 (Terminated)
Oct  2 09:07:43 np0005465987 neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426[311455]: [WARNING]  (311459) : All workers exited. Exiting... (0)
Oct  2 09:07:43 np0005465987 systemd[1]: libpod-2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647.scope: Deactivated successfully.
Oct  2 09:07:43 np0005465987 podman[311543]: 2025-10-02 13:07:43.368517746 +0000 UTC m=+0.043088010 container died 2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.402 2 INFO nova.virt.libvirt.driver [-] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Instance destroyed successfully.#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.403 2 DEBUG nova.objects.instance [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'resources' on Instance uuid ededa679-1161-406f-bccd-e571b80d8f7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:07:43 np0005465987 systemd[1]: var-lib-containers-storage-overlay-6589b92586b948fbaa578c0273eccabc6cd60fdc7ec34f105d7a60a486da69f7-merged.mount: Deactivated successfully.
Oct  2 09:07:43 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647-userdata-shm.mount: Deactivated successfully.
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.423 2 DEBUG nova.virt.libvirt.vif [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-84071708',display_name='tempest-TestNetworkAdvancedServerOps-server-84071708',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-84071708',id=207,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIX/+5SRLUhs0eDaLcolgjqYkVwsuk4D/uvy0QHY7DM2c673uNSh9aodkpDeRsWNUXML+zucDvvL1LRs9C6jNeVRJuMPEGHmaQ46I/VMjFKhJP1gLMaRGktNaEffxctaUQ==',key_name='tempest-TestNetworkAdvancedServerOps-1720569281',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:07:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-znqtzf0g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:07:29Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=ededa679-1161-406f-bccd-e571b80d8f7f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.424 2 DEBUG nova.network.os_vif_util [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.424 2 DEBUG nova.network.os_vif_util [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.425 2 DEBUG os_vif [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:07:43 np0005465987 podman[311543]: 2025-10-02 13:07:43.425874401 +0000 UTC m=+0.100444665 container cleanup 2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccb82929-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.434 2 INFO os_vif [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:9d:a2,bridge_name='br-int',has_traffic_filtering=True,id=ccb82929-088d-402f-87fd-a1b429d52ff7,network=Network(6ac71ae6-e349-4379-83eb-f06b6ed86426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccb82929-08')#033[00m
Oct  2 09:07:43 np0005465987 systemd[1]: libpod-conmon-2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647.scope: Deactivated successfully.
Oct  2 09:07:43 np0005465987 podman[311580]: 2025-10-02 13:07:43.497890054 +0000 UTC m=+0.048860437 container remove 2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.504 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2e4670bd-4fab-4f98-af45-cc942a624e17]: (4, ('Thu Oct  2 01:07:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426 (2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647)\n2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647\nThu Oct  2 01:07:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426 (2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647)\n2aaf9140ca2e0165a56f7aa5833334a5708c6f3cadb20132197258a71e4f3647\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.507 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[92637a15-7118-4fef-a31d-56c6571655d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.508 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ac71ae6-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:43 np0005465987 kernel: tap6ac71ae6-e0: left promiscuous mode
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.529 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[196f3df2-2a0a-45c9-8d84-9e9fd9aa3d84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.563 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[65ff0282-b378-474a-8adc-fe9c33ae2c26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.568 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[56913a57-ef28-4c76-bd93-6946c2f76474]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.588 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bf44c4c3-96a5-488d-a1c7-d1c28f2b965a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 850789, 'reachable_time': 18921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311611, 'error': None, 'target': 'ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:43 np0005465987 systemd[1]: run-netns-ovnmeta\x2d6ac71ae6\x2de349\x2d4379\x2d83eb\x2df06b6ed86426.mount: Deactivated successfully.
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.592 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ac71ae6-e349-4379-83eb-f06b6ed86426 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:07:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:07:43.592 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[31fd3325-2f59-45d0-9df5-150e72e7da8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.613 2 DEBUG nova.compute.manager [req-8e43800f-555f-4f75-86d8-13858de56a2c req-8e1a7ee7-2293-447b-abf7-7c9c63f97b50 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-unplugged-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.614 2 DEBUG oslo_concurrency.lockutils [req-8e43800f-555f-4f75-86d8-13858de56a2c req-8e1a7ee7-2293-447b-abf7-7c9c63f97b50 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.614 2 DEBUG oslo_concurrency.lockutils [req-8e43800f-555f-4f75-86d8-13858de56a2c req-8e1a7ee7-2293-447b-abf7-7c9c63f97b50 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.614 2 DEBUG oslo_concurrency.lockutils [req-8e43800f-555f-4f75-86d8-13858de56a2c req-8e1a7ee7-2293-447b-abf7-7c9c63f97b50 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.615 2 DEBUG nova.compute.manager [req-8e43800f-555f-4f75-86d8-13858de56a2c req-8e1a7ee7-2293-447b-abf7-7c9c63f97b50 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] No waiting events found dispatching network-vif-unplugged-ccb82929-088d-402f-87fd-a1b429d52ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.615 2 DEBUG nova.compute.manager [req-8e43800f-555f-4f75-86d8-13858de56a2c req-8e1a7ee7-2293-447b-abf7-7c9c63f97b50 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-unplugged-ccb82929-088d-402f-87fd-a1b429d52ff7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:07:43 np0005465987 nova_compute[230713]: 2025-10-02 13:07:43.647 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:44 np0005465987 nova_compute[230713]: 2025-10-02 13:07:44.034 2 INFO nova.virt.libvirt.driver [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Deleting instance files /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f_del#033[00m
Oct  2 09:07:44 np0005465987 nova_compute[230713]: 2025-10-02 13:07:44.035 2 INFO nova.virt.libvirt.driver [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Deletion of /var/lib/nova/instances/ededa679-1161-406f-bccd-e571b80d8f7f_del complete#033[00m
Oct  2 09:07:44 np0005465987 nova_compute[230713]: 2025-10-02 13:07:44.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:44 np0005465987 nova_compute[230713]: 2025-10-02 13:07:44.098 2 INFO nova.compute.manager [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:07:44 np0005465987 nova_compute[230713]: 2025-10-02 13:07:44.099 2 DEBUG oslo.service.loopingcall [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:07:44 np0005465987 nova_compute[230713]: 2025-10-02 13:07:44.099 2 DEBUG nova.compute.manager [-] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:07:44 np0005465987 nova_compute[230713]: 2025-10-02 13:07:44.099 2 DEBUG nova.network.neutron [-] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:07:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:44.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:44.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:44 np0005465987 podman[311614]: 2025-10-02 13:07:44.838598543 +0000 UTC m=+0.060196193 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:07:44 np0005465987 nova_compute[230713]: 2025-10-02 13:07:44.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:44 np0005465987 nova_compute[230713]: 2025-10-02 13:07:44.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:07:45 np0005465987 nova_compute[230713]: 2025-10-02 13:07:45.385 2 DEBUG nova.network.neutron [req-a1e1b4fc-de52-44d5-9fde-ad823e61b110 req-fc5d4993-305f-4c74-bea8-5b418f4c2a31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updated VIF entry in instance network info cache for port ccb82929-088d-402f-87fd-a1b429d52ff7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:07:45 np0005465987 nova_compute[230713]: 2025-10-02 13:07:45.386 2 DEBUG nova.network.neutron [req-a1e1b4fc-de52-44d5-9fde-ad823e61b110 req-fc5d4993-305f-4c74-bea8-5b418f4c2a31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updating instance_info_cache with network_info: [{"id": "ccb82929-088d-402f-87fd-a1b429d52ff7", "address": "fa:16:3e:25:9d:a2", "network": {"id": "6ac71ae6-e349-4379-83eb-f06b6ed86426", "bridge": "br-int", "label": "tempest-network-smoke--1276596428", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccb82929-08", "ovs_interfaceid": "ccb82929-088d-402f-87fd-a1b429d52ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:07:45 np0005465987 nova_compute[230713]: 2025-10-02 13:07:45.433 2 DEBUG oslo_concurrency.lockutils [req-a1e1b4fc-de52-44d5-9fde-ad823e61b110 req-fc5d4993-305f-4c74-bea8-5b418f4c2a31 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ededa679-1161-406f-bccd-e571b80d8f7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:07:45 np0005465987 nova_compute[230713]: 2025-10-02 13:07:45.694 2 DEBUG nova.network.neutron [-] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:07:45 np0005465987 nova_compute[230713]: 2025-10-02 13:07:45.723 2 INFO nova.compute.manager [-] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Took 1.62 seconds to deallocate network for instance.#033[00m
Oct  2 09:07:45 np0005465987 nova_compute[230713]: 2025-10-02 13:07:45.750 2 DEBUG nova.compute.manager [req-0c3cc233-b671-4679-bb2f-dd769a71da40 req-c5e35595-3499-469d-8782-2e2d15407e95 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-deleted-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:45 np0005465987 nova_compute[230713]: 2025-10-02 13:07:45.779 2 DEBUG oslo_concurrency.lockutils [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:45 np0005465987 nova_compute[230713]: 2025-10-02 13:07:45.779 2 DEBUG oslo_concurrency.lockutils [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:45 np0005465987 nova_compute[230713]: 2025-10-02 13:07:45.834 2 DEBUG oslo_concurrency.processutils [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:07:46 np0005465987 nova_compute[230713]: 2025-10-02 13:07:46.046 2 DEBUG nova.compute.manager [req-b7bed82f-5a78-49b7-aa66-7ebcce1ababc req-8099aea0-3bbc-4abb-ab77-4f432464bb0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:46 np0005465987 nova_compute[230713]: 2025-10-02 13:07:46.047 2 DEBUG oslo_concurrency.lockutils [req-b7bed82f-5a78-49b7-aa66-7ebcce1ababc req-8099aea0-3bbc-4abb-ab77-4f432464bb0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:46 np0005465987 nova_compute[230713]: 2025-10-02 13:07:46.047 2 DEBUG oslo_concurrency.lockutils [req-b7bed82f-5a78-49b7-aa66-7ebcce1ababc req-8099aea0-3bbc-4abb-ab77-4f432464bb0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:46 np0005465987 nova_compute[230713]: 2025-10-02 13:07:46.047 2 DEBUG oslo_concurrency.lockutils [req-b7bed82f-5a78-49b7-aa66-7ebcce1ababc req-8099aea0-3bbc-4abb-ab77-4f432464bb0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:46 np0005465987 nova_compute[230713]: 2025-10-02 13:07:46.048 2 DEBUG nova.compute.manager [req-b7bed82f-5a78-49b7-aa66-7ebcce1ababc req-8099aea0-3bbc-4abb-ab77-4f432464bb0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] No waiting events found dispatching network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:07:46 np0005465987 nova_compute[230713]: 2025-10-02 13:07:46.048 2 WARNING nova.compute.manager [req-b7bed82f-5a78-49b7-aa66-7ebcce1ababc req-8099aea0-3bbc-4abb-ab77-4f432464bb0d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Received unexpected event network-vif-plugged-ccb82929-088d-402f-87fd-a1b429d52ff7 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:07:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:46.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:07:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3887397663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:07:46 np0005465987 nova_compute[230713]: 2025-10-02 13:07:46.322 2 DEBUG oslo_concurrency.processutils [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:07:46 np0005465987 nova_compute[230713]: 2025-10-02 13:07:46.331 2 DEBUG nova.compute.provider_tree [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:07:46 np0005465987 nova_compute[230713]: 2025-10-02 13:07:46.370 2 DEBUG nova.scheduler.client.report [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:07:46 np0005465987 nova_compute[230713]: 2025-10-02 13:07:46.394 2 DEBUG oslo_concurrency.lockutils [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:07:46 np0005465987 nova_compute[230713]: 2025-10-02 13:07:46.418 2 INFO nova.scheduler.client.report [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Deleted allocations for instance ededa679-1161-406f-bccd-e571b80d8f7f
Oct  2 09:07:46 np0005465987 nova_compute[230713]: 2025-10-02 13:07:46.471 2 DEBUG oslo_concurrency.lockutils [None req-3945c6b1-4ba8-4db3-a7d5-050afcc8eb1e ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ededa679-1161-406f-bccd-e571b80d8f7f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:07:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:46.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:07:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:07:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:07:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:07:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:07:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:07:46 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:07:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:07:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:48.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:07:48 np0005465987 nova_compute[230713]: 2025-10-02 13:07:48.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:07:48 np0005465987 nova_compute[230713]: 2025-10-02 13:07:48.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:07:48 np0005465987 nova_compute[230713]: 2025-10-02 13:07:48.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:07:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:48.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:49 np0005465987 podman[311913]: 2025-10-02 13:07:49.871582507 +0000 UTC m=+0.075392295 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:07:49 np0005465987 podman[311914]: 2025-10-02 13:07:49.879833861 +0000 UTC m=+0.077679937 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:07:49 np0005465987 podman[311912]: 2025-10-02 13:07:49.927770501 +0000 UTC m=+0.132060192 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:07:50 np0005465987 nova_compute[230713]: 2025-10-02 13:07:50.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:07:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:50.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:50 np0005465987 nova_compute[230713]: 2025-10-02 13:07:50.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:07:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:50.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:52.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:53 np0005465987 nova_compute[230713]: 2025-10-02 13:07:53.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:07:53 np0005465987 nova_compute[230713]: 2025-10-02 13:07:53.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:07:53 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:07:53 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:07:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:54.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:54.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:56.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:56.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:58.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:58 np0005465987 nova_compute[230713]: 2025-10-02 13:07:58.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:07:58 np0005465987 nova_compute[230713]: 2025-10-02 13:07:58.400 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410463.3994725, ededa679-1161-406f-bccd-e571b80d8f7f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:07:58 np0005465987 nova_compute[230713]: 2025-10-02 13:07:58.401 2 INFO nova.compute.manager [-] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] VM Stopped (Lifecycle Event)
Oct  2 09:07:58 np0005465987 nova_compute[230713]: 2025-10-02 13:07:58.422 2 DEBUG nova.compute.manager [None req-c4ab7f97-9fa9-45e0-b595-0809895195a5 - - - - - -] [instance: ededa679-1161-406f-bccd-e571b80d8f7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:07:58 np0005465987 nova_compute[230713]: 2025-10-02 13:07:58.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:07:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:07:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:58.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:00.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:00.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:02.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:02.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.236 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Acquiring lock "fc398644-66fc-44e3-9a6a-7389f5a542b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.236 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.251 2 DEBUG nova.compute.manager [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.343 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.343 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.351 2 DEBUG nova.virt.hardware [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.351 2 INFO nova.compute.claims [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Claim successful on node compute-1.ctlplane.example.com
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.436 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:03.636 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 09:08:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:03.637 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 09:08:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:08:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2014427386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.863 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.870 2 DEBUG nova.compute.provider_tree [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.898 2 DEBUG nova.scheduler.client.report [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.925 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.925 2 DEBUG nova.compute.manager [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.975 2 DEBUG nova.compute.manager [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.976 2 DEBUG nova.network.neutron [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:08:03 np0005465987 nova_compute[230713]: 2025-10-02 13:08:03.995 2 INFO nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.013 2 DEBUG nova.compute.manager [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.102 2 DEBUG nova.compute.manager [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.103 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.104 2 INFO nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Creating image(s)
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.134 2 DEBUG nova.storage.rbd_utils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] rbd image fc398644-66fc-44e3-9a6a-7389f5a542b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.166 2 DEBUG nova.storage.rbd_utils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] rbd image fc398644-66fc-44e3-9a6a-7389f5a542b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:08:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:04.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.196 2 DEBUG nova.storage.rbd_utils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] rbd image fc398644-66fc-44e3-9a6a-7389f5a542b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.199 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.232 2 DEBUG nova.policy [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '29c8a28c5bdd4feb9412127428bf0c3b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '60bfd415ee154615b20dd99528061614', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.270 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.271 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.272 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.272 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.295 2 DEBUG nova.storage.rbd_utils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] rbd image fc398644-66fc-44e3-9a6a-7389f5a542b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.298 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 fc398644-66fc-44e3-9a6a-7389f5a542b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.629 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 fc398644-66fc-44e3-9a6a-7389f5a542b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.720 2 DEBUG nova.storage.rbd_utils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] resizing rbd image fc398644-66fc-44e3-9a6a-7389f5a542b8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 09:08:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:04.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.835 2 DEBUG nova.objects.instance [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lazy-loading 'migration_context' on Instance uuid fc398644-66fc-44e3-9a6a-7389f5a542b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.923 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.923 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Ensure instance console log exists: /var/lib/nova/instances/fc398644-66fc-44e3-9a6a-7389f5a542b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.923 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.924 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:04 np0005465987 nova_compute[230713]: 2025-10-02 13:08:04.924 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:05 np0005465987 nova_compute[230713]: 2025-10-02 13:08:05.076 2 DEBUG nova.network.neutron [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Successfully created port: 0d204e32-92c1-4d57-9011-267d812ddfe9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:08:05 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:05.639 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:06 np0005465987 nova_compute[230713]: 2025-10-02 13:08:06.166 2 DEBUG nova.network.neutron [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Successfully updated port: 0d204e32-92c1-4d57-9011-267d812ddfe9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:08:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:06.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:06 np0005465987 nova_compute[230713]: 2025-10-02 13:08:06.187 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Acquiring lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:08:06 np0005465987 nova_compute[230713]: 2025-10-02 13:08:06.188 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Acquired lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:08:06 np0005465987 nova_compute[230713]: 2025-10-02 13:08:06.188 2 DEBUG nova.network.neutron [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:08:06 np0005465987 nova_compute[230713]: 2025-10-02 13:08:06.277 2 DEBUG nova.compute.manager [req-3c2b95a6-ee6a-4473-bfd8-651aecc548ff req-178f821d-f6a8-449f-88b4-57d8ffe7e371 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Received event network-changed-0d204e32-92c1-4d57-9011-267d812ddfe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:06 np0005465987 nova_compute[230713]: 2025-10-02 13:08:06.278 2 DEBUG nova.compute.manager [req-3c2b95a6-ee6a-4473-bfd8-651aecc548ff req-178f821d-f6a8-449f-88b4-57d8ffe7e371 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Refreshing instance network info cache due to event network-changed-0d204e32-92c1-4d57-9011-267d812ddfe9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:08:06 np0005465987 nova_compute[230713]: 2025-10-02 13:08:06.279 2 DEBUG oslo_concurrency.lockutils [req-3c2b95a6-ee6a-4473-bfd8-651aecc548ff req-178f821d-f6a8-449f-88b4-57d8ffe7e371 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:08:06 np0005465987 nova_compute[230713]: 2025-10-02 13:08:06.327 2 DEBUG nova.network.neutron [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:08:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:06.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.204 2 DEBUG nova.network.neutron [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Updating instance_info_cache with network_info: [{"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.240 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Releasing lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.240 2 DEBUG nova.compute.manager [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Instance network_info: |[{"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.241 2 DEBUG oslo_concurrency.lockutils [req-3c2b95a6-ee6a-4473-bfd8-651aecc548ff req-178f821d-f6a8-449f-88b4-57d8ffe7e371 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.241 2 DEBUG nova.network.neutron [req-3c2b95a6-ee6a-4473-bfd8-651aecc548ff req-178f821d-f6a8-449f-88b4-57d8ffe7e371 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Refreshing network info cache for port 0d204e32-92c1-4d57-9011-267d812ddfe9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.247 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Start _get_guest_xml network_info=[{"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.251 2 WARNING nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.256 2 DEBUG nova.virt.libvirt.host [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.257 2 DEBUG nova.virt.libvirt.host [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.265 2 DEBUG nova.virt.libvirt.host [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.266 2 DEBUG nova.virt.libvirt.host [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.268 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.268 2 DEBUG nova.virt.hardware [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.269 2 DEBUG nova.virt.hardware [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.270 2 DEBUG nova.virt.hardware [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.271 2 DEBUG nova.virt.hardware [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.271 2 DEBUG nova.virt.hardware [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.272 2 DEBUG nova.virt.hardware [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.273 2 DEBUG nova.virt.hardware [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.273 2 DEBUG nova.virt.hardware [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.274 2 DEBUG nova.virt.hardware [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.274 2 DEBUG nova.virt.hardware [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.275 2 DEBUG nova.virt.hardware [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.280 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:08:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3638219158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.725 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.756 2 DEBUG nova.storage.rbd_utils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] rbd image fc398644-66fc-44e3-9a6a-7389f5a542b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:07 np0005465987 nova_compute[230713]: 2025-10-02 13:08:07.760 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:08:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1576955163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:08:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:08.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.191 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.193 2 DEBUG nova.virt.libvirt.vif [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-619026020',display_name='tempest-TestSnapshotPattern-server-619026020',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-619026020',id=208,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMla0NYshozjIhqPshVDB7V9QQUdVkTPcrZs6EBQpt1OKq+5dB8xVPlwumBL6FU6d1oBZUn9yPH7sBT9aKh0ThWjhX3hBNPhKbMCSDMfEKO24D2SpIWzAY4pSc/pr8RlQ==',key_name='tempest-TestSnapshotPattern-747682339',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='60bfd415ee154615b20dd99528061614',ramdisk_id='',reservation_id='r-ohg1oi4b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-400150385',owner_user_name='tempest-TestSnapshotPattern-400150385-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:04Z,user_data=None,user_id='29c8a28c5bdd4feb9412127428bf0c3b',uuid=fc398644-66fc-44e3-9a6a-7389f5a542b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.193 2 DEBUG nova.network.os_vif_util [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Converting VIF {"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.194 2 DEBUG nova.network.os_vif_util [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:55:18,bridge_name='br-int',has_traffic_filtering=True,id=0d204e32-92c1-4d57-9011-267d812ddfe9,network=Network(14e748fc-2d6a-4d69-b120-68c995d49660),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d204e32-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.195 2 DEBUG nova.objects.instance [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc398644-66fc-44e3-9a6a-7389f5a542b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.211 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  <uuid>fc398644-66fc-44e3-9a6a-7389f5a542b8</uuid>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  <name>instance-000000d0</name>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestSnapshotPattern-server-619026020</nova:name>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 13:08:07</nova:creationTime>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <nova:user uuid="29c8a28c5bdd4feb9412127428bf0c3b">tempest-TestSnapshotPattern-400150385-project-member</nova:user>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <nova:project uuid="60bfd415ee154615b20dd99528061614">tempest-TestSnapshotPattern-400150385</nova:project>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <nova:port uuid="0d204e32-92c1-4d57-9011-267d812ddfe9">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <system>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <entry name="serial">fc398644-66fc-44e3-9a6a-7389f5a542b8</entry>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <entry name="uuid">fc398644-66fc-44e3-9a6a-7389f5a542b8</entry>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    </system>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  <os>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  </os>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  <features>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  </features>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  </clock>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  <devices>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/fc398644-66fc-44e3-9a6a-7389f5a542b8_disk">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/fc398644-66fc-44e3-9a6a-7389f5a542b8_disk.config">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:08:55:18"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <target dev="tap0d204e32-92"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    </interface>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/fc398644-66fc-44e3-9a6a-7389f5a542b8/console.log" append="off"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    </serial>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <video>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    </video>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    </rng>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 09:08:08 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 09:08:08 np0005465987 nova_compute[230713]:  </devices>
Oct  2 09:08:08 np0005465987 nova_compute[230713]: </domain>
Oct  2 09:08:08 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.213 2 DEBUG nova.compute.manager [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Preparing to wait for external event network-vif-plugged-0d204e32-92c1-4d57-9011-267d812ddfe9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.213 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Acquiring lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.213 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.214 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.214 2 DEBUG nova.virt.libvirt.vif [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-619026020',display_name='tempest-TestSnapshotPattern-server-619026020',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-619026020',id=208,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMla0NYshozjIhqPshVDB7V9QQUdVkTPcrZs6EBQpt1OKq+5dB8xVPlwumBL6FU6d1oBZUn9yPH7sBT9aKh0ThWjhX3hBNPhKbMCSDMfEKO24D2SpIWzAY4pSc/pr8RlQ==',key_name='tempest-TestSnapshotPattern-747682339',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='60bfd415ee154615b20dd99528061614',ramdisk_id='',reservation_id='r-ohg1oi4b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-400150385',owner_user_name='tempest-TestSnapshotPattern-400150385-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:04Z,user_data=None,user_id='29c8a28c5bdd4feb9412127428bf0c3b',uuid=fc398644-66fc-44e3-9a6a-7389f5a542b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.215 2 DEBUG nova.network.os_vif_util [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Converting VIF {"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.215 2 DEBUG nova.network.os_vif_util [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:55:18,bridge_name='br-int',has_traffic_filtering=True,id=0d204e32-92c1-4d57-9011-267d812ddfe9,network=Network(14e748fc-2d6a-4d69-b120-68c995d49660),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d204e32-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.216 2 DEBUG os_vif [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:55:18,bridge_name='br-int',has_traffic_filtering=True,id=0d204e32-92c1-4d57-9011-267d812ddfe9,network=Network(14e748fc-2d6a-4d69-b120-68c995d49660),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d204e32-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d204e32-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.222 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d204e32-92, col_values=(('external_ids', {'iface-id': '0d204e32-92c1-4d57-9011-267d812ddfe9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:55:18', 'vm-uuid': 'fc398644-66fc-44e3-9a6a-7389f5a542b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:08 np0005465987 NetworkManager[44910]: <info>  [1759410488.2241] manager: (tap0d204e32-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.230 2 INFO os_vif [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:55:18,bridge_name='br-int',has_traffic_filtering=True,id=0d204e32-92c1-4d57-9011-267d812ddfe9,network=Network(14e748fc-2d6a-4d69-b120-68c995d49660),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d204e32-92')#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.285 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.286 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.286 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] No VIF found with MAC fa:16:3e:08:55:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.287 2 INFO nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Using config drive#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.310 2 DEBUG nova.storage.rbd_utils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] rbd image fc398644-66fc-44e3-9a6a-7389f5a542b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.573 2 DEBUG nova.network.neutron [req-3c2b95a6-ee6a-4473-bfd8-651aecc548ff req-178f821d-f6a8-449f-88b4-57d8ffe7e371 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Updated VIF entry in instance network info cache for port 0d204e32-92c1-4d57-9011-267d812ddfe9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.573 2 DEBUG nova.network.neutron [req-3c2b95a6-ee6a-4473-bfd8-651aecc548ff req-178f821d-f6a8-449f-88b4-57d8ffe7e371 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Updating instance_info_cache with network_info: [{"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.595 2 DEBUG oslo_concurrency.lockutils [req-3c2b95a6-ee6a-4473-bfd8-651aecc548ff req-178f821d-f6a8-449f-88b4-57d8ffe7e371 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:08:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:08.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.782 2 INFO nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Creating config drive at /var/lib/nova/instances/fc398644-66fc-44e3-9a6a-7389f5a542b8/disk.config#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.787 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc398644-66fc-44e3-9a6a-7389f5a542b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp637vg9o6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.928 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc398644-66fc-44e3-9a6a-7389f5a542b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp637vg9o6" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.953 2 DEBUG nova.storage.rbd_utils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] rbd image fc398644-66fc-44e3-9a6a-7389f5a542b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:08 np0005465987 nova_compute[230713]: 2025-10-02 13:08:08.956 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc398644-66fc-44e3-9a6a-7389f5a542b8/disk.config fc398644-66fc-44e3-9a6a-7389f5a542b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:09 np0005465987 nova_compute[230713]: 2025-10-02 13:08:09.117 2 DEBUG oslo_concurrency.processutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc398644-66fc-44e3-9a6a-7389f5a542b8/disk.config fc398644-66fc-44e3-9a6a-7389f5a542b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:09 np0005465987 nova_compute[230713]: 2025-10-02 13:08:09.118 2 INFO nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Deleting local config drive /var/lib/nova/instances/fc398644-66fc-44e3-9a6a-7389f5a542b8/disk.config because it was imported into RBD.#033[00m
Oct  2 09:08:09 np0005465987 kernel: tap0d204e32-92: entered promiscuous mode
Oct  2 09:08:09 np0005465987 NetworkManager[44910]: <info>  [1759410489.1714] manager: (tap0d204e32-92): new Tun device (/org/freedesktop/NetworkManager/Devices/380)
Oct  2 09:08:09 np0005465987 systemd-udevd[312344]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:08:09 np0005465987 ovn_controller[129172]: 2025-10-02T13:08:09Z|00816|binding|INFO|Claiming lport 0d204e32-92c1-4d57-9011-267d812ddfe9 for this chassis.
Oct  2 09:08:09 np0005465987 ovn_controller[129172]: 2025-10-02T13:08:09Z|00817|binding|INFO|0d204e32-92c1-4d57-9011-267d812ddfe9: Claiming fa:16:3e:08:55:18 10.100.0.6
Oct  2 09:08:09 np0005465987 nova_compute[230713]: 2025-10-02 13:08:09.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:09 np0005465987 NetworkManager[44910]: <info>  [1759410489.2154] device (tap0d204e32-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:08:09 np0005465987 NetworkManager[44910]: <info>  [1759410489.2162] device (tap0d204e32-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.214 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:55:18 10.100.0.6'], port_security=['fa:16:3e:08:55:18 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'fc398644-66fc-44e3-9a6a-7389f5a542b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e748fc-2d6a-4d69-b120-68c995d49660', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '60bfd415ee154615b20dd99528061614', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6863a9d1-67a6-432a-b497-906487ecb0c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93cd313a-87a8-4538-814c-550ca73d3eca, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0d204e32-92c1-4d57-9011-267d812ddfe9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.216 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0d204e32-92c1-4d57-9011-267d812ddfe9 in datapath 14e748fc-2d6a-4d69-b120-68c995d49660 bound to our chassis#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.218 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e748fc-2d6a-4d69-b120-68c995d49660#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.230 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8848af17-85f2-446a-9157-eec3b4271759]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.231 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14e748fc-21 in ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:08:09 np0005465987 systemd-machined[188335]: New machine qemu-92-instance-000000d0.
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.233 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14e748fc-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.233 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8836fbfc-93f4-4447-ad41-111e86d2d483]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.233 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cff8befb-7ae4-4464-b095-ee1d03269644]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.244 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[f17c751f-ac1e-461c-b9bf-c059e9aa801e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 systemd[1]: Started Virtual Machine qemu-92-instance-000000d0.
Oct  2 09:08:09 np0005465987 nova_compute[230713]: 2025-10-02 13:08:09.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:09 np0005465987 ovn_controller[129172]: 2025-10-02T13:08:09Z|00818|binding|INFO|Setting lport 0d204e32-92c1-4d57-9011-267d812ddfe9 ovn-installed in OVS
Oct  2 09:08:09 np0005465987 ovn_controller[129172]: 2025-10-02T13:08:09Z|00819|binding|INFO|Setting lport 0d204e32-92c1-4d57-9011-267d812ddfe9 up in Southbound
Oct  2 09:08:09 np0005465987 nova_compute[230713]: 2025-10-02 13:08:09.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.270 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fa296de7-7cdb-4e33-ae99-fdb30032c8e0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.296 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[b4af9a7d-03ca-46b1-ad59-2bfa2230b998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 NetworkManager[44910]: <info>  [1759410489.3026] manager: (tap14e748fc-20): new Veth device (/org/freedesktop/NetworkManager/Devices/381)
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.301 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e959ace6-93f9-42f9-9c63-8e453ab52df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.339 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d5793a-4f20-4973-b90e-bf6bcf1ebce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.342 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[692f526a-48c5-46b3-af79-274157d968a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 NetworkManager[44910]: <info>  [1759410489.3643] device (tap14e748fc-20): carrier: link connected
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.368 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[b46083fe-3480-4191-9d49-57a07bb41e3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.382 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7e367368-9fde-4c34-9ccf-b13b0019a976]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e748fc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:1c:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855793, 'reachable_time': 23997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312380, 'error': None, 'target': 'ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.395 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1a946bc7-969d-46f0-86e2-167c1dc990ef]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:1c79'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 855793, 'tstamp': 855793}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312381, 'error': None, 'target': 'ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.409 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[eb90dc10-6d7c-4ea4-aaff-ee807213dabc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e748fc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:1c:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855793, 'reachable_time': 23997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312382, 'error': None, 'target': 'ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.438 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f2bfdb87-8ced-49a1-9913-285c6ec424a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.508 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9f363e7d-74f1-4fcc-bd18-24b48600b1fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.510 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e748fc-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.510 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.511 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e748fc-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:09 np0005465987 NetworkManager[44910]: <info>  [1759410489.5141] manager: (tap14e748fc-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Oct  2 09:08:09 np0005465987 kernel: tap14e748fc-20: entered promiscuous mode
Oct  2 09:08:09 np0005465987 nova_compute[230713]: 2025-10-02 13:08:09.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.520 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e748fc-20, col_values=(('external_ids', {'iface-id': '9fb48fb0-3960-4f8b-94a5-d7518fe00fe8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:09 np0005465987 ovn_controller[129172]: 2025-10-02T13:08:09Z|00820|binding|INFO|Releasing lport 9fb48fb0-3960-4f8b-94a5-d7518fe00fe8 from this chassis (sb_readonly=0)
Oct  2 09:08:09 np0005465987 nova_compute[230713]: 2025-10-02 13:08:09.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.525 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14e748fc-2d6a-4d69-b120-68c995d49660.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14e748fc-2d6a-4d69-b120-68c995d49660.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.526 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[36d9c0c5-55c4-4acc-85f5-f9cbfb3244b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.527 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-14e748fc-2d6a-4d69-b120-68c995d49660
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/14e748fc-2d6a-4d69-b120-68c995d49660.pid.haproxy
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 14e748fc-2d6a-4d69-b120-68c995d49660
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:08:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:09.529 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660', 'env', 'PROCESS_TAG=haproxy-14e748fc-2d6a-4d69-b120-68c995d49660', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14e748fc-2d6a-4d69-b120-68c995d49660.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:08:09 np0005465987 nova_compute[230713]: 2025-10-02 13:08:09.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:09 np0005465987 nova_compute[230713]: 2025-10-02 13:08:09.599 2 DEBUG nova.compute.manager [req-2d679bb4-ced1-4e2e-90cb-1f58dac81398 req-2f9eb5d0-2d81-4570-8164-23b5f84cb70a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Received event network-vif-plugged-0d204e32-92c1-4d57-9011-267d812ddfe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:09 np0005465987 nova_compute[230713]: 2025-10-02 13:08:09.600 2 DEBUG oslo_concurrency.lockutils [req-2d679bb4-ced1-4e2e-90cb-1f58dac81398 req-2f9eb5d0-2d81-4570-8164-23b5f84cb70a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:09 np0005465987 nova_compute[230713]: 2025-10-02 13:08:09.601 2 DEBUG oslo_concurrency.lockutils [req-2d679bb4-ced1-4e2e-90cb-1f58dac81398 req-2f9eb5d0-2d81-4570-8164-23b5f84cb70a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:09 np0005465987 nova_compute[230713]: 2025-10-02 13:08:09.601 2 DEBUG oslo_concurrency.lockutils [req-2d679bb4-ced1-4e2e-90cb-1f58dac81398 req-2f9eb5d0-2d81-4570-8164-23b5f84cb70a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:09 np0005465987 nova_compute[230713]: 2025-10-02 13:08:09.602 2 DEBUG nova.compute.manager [req-2d679bb4-ced1-4e2e-90cb-1f58dac81398 req-2f9eb5d0-2d81-4570-8164-23b5f84cb70a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Processing event network-vif-plugged-0d204e32-92c1-4d57-9011-267d812ddfe9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:08:09 np0005465987 podman[312456]: 2025-10-02 13:08:09.899311463 +0000 UTC m=+0.059202066 container create 3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 09:08:09 np0005465987 systemd[1]: Started libpod-conmon-3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488.scope.
Oct  2 09:08:09 np0005465987 podman[312456]: 2025-10-02 13:08:09.865828195 +0000 UTC m=+0.025718828 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:08:09 np0005465987 systemd[1]: Started libcrun container.
Oct  2 09:08:09 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6b01ab29539512aa647866d57aa75e18107850bffd32e8c4cb2b240e7f1462/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:08:09 np0005465987 podman[312456]: 2025-10-02 13:08:09.990562837 +0000 UTC m=+0.150453470 container init 3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 09:08:09 np0005465987 podman[312456]: 2025-10-02 13:08:09.995882881 +0000 UTC m=+0.155773494 container start 3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:08:10 np0005465987 neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660[312471]: [NOTICE]   (312475) : New worker (312477) forked
Oct  2 09:08:10 np0005465987 neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660[312471]: [NOTICE]   (312475) : Loading success.
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.171 2 DEBUG nova.compute.manager [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.172 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410490.171635, fc398644-66fc-44e3-9a6a-7389f5a542b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.172 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] VM Started (Lifecycle Event)#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.178 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.181 2 INFO nova.virt.libvirt.driver [-] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Instance spawned successfully.#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.182 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:08:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:10.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.230 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.235 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.265 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.265 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.266 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.266 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.267 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.267 2 DEBUG nova.virt.libvirt.driver [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.273 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.274 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410490.1716912, fc398644-66fc-44e3-9a6a-7389f5a542b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.274 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.323 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.327 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410490.175029, fc398644-66fc-44e3-9a6a-7389f5a542b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.327 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.380 2 INFO nova.compute.manager [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Took 6.28 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.381 2 DEBUG nova.compute.manager [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.397 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.400 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.690 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:08:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:10.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.750 2 INFO nova.compute.manager [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Took 7.44 seconds to build instance.#033[00m
Oct  2 09:08:10 np0005465987 nova_compute[230713]: 2025-10-02 13:08:10.955 2 DEBUG oslo_concurrency.lockutils [None req-c0290d57-c871-4e51-b9c3-79c964f6a3c6 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:11 np0005465987 nova_compute[230713]: 2025-10-02 13:08:11.809 2 DEBUG nova.compute.manager [req-de06ece9-e40b-4ca6-a3ee-c586cf9edfd6 req-f6d270fc-37f2-43c7-a8cd-95a2c86b7ec1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Received event network-vif-plugged-0d204e32-92c1-4d57-9011-267d812ddfe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:11 np0005465987 nova_compute[230713]: 2025-10-02 13:08:11.810 2 DEBUG oslo_concurrency.lockutils [req-de06ece9-e40b-4ca6-a3ee-c586cf9edfd6 req-f6d270fc-37f2-43c7-a8cd-95a2c86b7ec1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:11 np0005465987 nova_compute[230713]: 2025-10-02 13:08:11.811 2 DEBUG oslo_concurrency.lockutils [req-de06ece9-e40b-4ca6-a3ee-c586cf9edfd6 req-f6d270fc-37f2-43c7-a8cd-95a2c86b7ec1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:11 np0005465987 nova_compute[230713]: 2025-10-02 13:08:11.811 2 DEBUG oslo_concurrency.lockutils [req-de06ece9-e40b-4ca6-a3ee-c586cf9edfd6 req-f6d270fc-37f2-43c7-a8cd-95a2c86b7ec1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:11 np0005465987 nova_compute[230713]: 2025-10-02 13:08:11.811 2 DEBUG nova.compute.manager [req-de06ece9-e40b-4ca6-a3ee-c586cf9edfd6 req-f6d270fc-37f2-43c7-a8cd-95a2c86b7ec1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] No waiting events found dispatching network-vif-plugged-0d204e32-92c1-4d57-9011-267d812ddfe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:08:11 np0005465987 nova_compute[230713]: 2025-10-02 13:08:11.811 2 WARNING nova.compute.manager [req-de06ece9-e40b-4ca6-a3ee-c586cf9edfd6 req-f6d270fc-37f2-43c7-a8cd-95a2c86b7ec1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Received unexpected event network-vif-plugged-0d204e32-92c1-4d57-9011-267d812ddfe9 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:08:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:12.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:12.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:13 np0005465987 nova_compute[230713]: 2025-10-02 13:08:13.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:14.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:14.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:15 np0005465987 podman[312486]: 2025-10-02 13:08:15.833665206 +0000 UTC m=+0.052871735 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:08:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:16 np0005465987 nova_compute[230713]: 2025-10-02 13:08:16.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:16 np0005465987 NetworkManager[44910]: <info>  [1759410496.1778] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Oct  2 09:08:16 np0005465987 NetworkManager[44910]: <info>  [1759410496.1786] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Oct  2 09:08:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:16.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:16 np0005465987 nova_compute[230713]: 2025-10-02 13:08:16.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:16 np0005465987 ovn_controller[129172]: 2025-10-02T13:08:16Z|00821|binding|INFO|Releasing lport 9fb48fb0-3960-4f8b-94a5-d7518fe00fe8 from this chassis (sb_readonly=0)
Oct  2 09:08:16 np0005465987 nova_compute[230713]: 2025-10-02 13:08:16.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:16.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:17 np0005465987 nova_compute[230713]: 2025-10-02 13:08:17.016 2 DEBUG nova.compute.manager [req-79a5d8c1-f73d-4708-90b6-d2e4aef8a54c req-e508d895-af5d-4f4a-ba09-da2ffc6a0c0a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Received event network-changed-0d204e32-92c1-4d57-9011-267d812ddfe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:17 np0005465987 nova_compute[230713]: 2025-10-02 13:08:17.017 2 DEBUG nova.compute.manager [req-79a5d8c1-f73d-4708-90b6-d2e4aef8a54c req-e508d895-af5d-4f4a-ba09-da2ffc6a0c0a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Refreshing instance network info cache due to event network-changed-0d204e32-92c1-4d57-9011-267d812ddfe9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:08:17 np0005465987 nova_compute[230713]: 2025-10-02 13:08:17.017 2 DEBUG oslo_concurrency.lockutils [req-79a5d8c1-f73d-4708-90b6-d2e4aef8a54c req-e508d895-af5d-4f4a-ba09-da2ffc6a0c0a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:08:17 np0005465987 nova_compute[230713]: 2025-10-02 13:08:17.017 2 DEBUG oslo_concurrency.lockutils [req-79a5d8c1-f73d-4708-90b6-d2e4aef8a54c req-e508d895-af5d-4f4a-ba09-da2ffc6a0c0a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:08:17 np0005465987 nova_compute[230713]: 2025-10-02 13:08:17.017 2 DEBUG nova.network.neutron [req-79a5d8c1-f73d-4708-90b6-d2e4aef8a54c req-e508d895-af5d-4f4a-ba09-da2ffc6a0c0a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Refreshing network info cache for port 0d204e32-92c1-4d57-9011-267d812ddfe9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:08:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:18.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:18 np0005465987 nova_compute[230713]: 2025-10-02 13:08:18.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:18 np0005465987 nova_compute[230713]: 2025-10-02 13:08:18.482 2 DEBUG nova.network.neutron [req-79a5d8c1-f73d-4708-90b6-d2e4aef8a54c req-e508d895-af5d-4f4a-ba09-da2ffc6a0c0a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Updated VIF entry in instance network info cache for port 0d204e32-92c1-4d57-9011-267d812ddfe9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:08:18 np0005465987 nova_compute[230713]: 2025-10-02 13:08:18.483 2 DEBUG nova.network.neutron [req-79a5d8c1-f73d-4708-90b6-d2e4aef8a54c req-e508d895-af5d-4f4a-ba09-da2ffc6a0c0a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Updating instance_info_cache with network_info: [{"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:08:18 np0005465987 nova_compute[230713]: 2025-10-02 13:08:18.561 2 DEBUG oslo_concurrency.lockutils [req-79a5d8c1-f73d-4708-90b6-d2e4aef8a54c req-e508d895-af5d-4f4a-ba09-da2ffc6a0c0a d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:08:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:18.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:20.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:20.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:20 np0005465987 podman[312508]: 2025-10-02 13:08:20.836508073 +0000 UTC m=+0.046397979 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:08:20 np0005465987 podman[312509]: 2025-10-02 13:08:20.846616667 +0000 UTC m=+0.052565676 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 09:08:20 np0005465987 podman[312507]: 2025-10-02 13:08:20.86368675 +0000 UTC m=+0.077181073 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 09:08:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:22.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:22 np0005465987 ovn_controller[129172]: 2025-10-02T13:08:22Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:55:18 10.100.0.6
Oct  2 09:08:22 np0005465987 ovn_controller[129172]: 2025-10-02T13:08:22Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:55:18 10.100.0.6
Oct  2 09:08:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:22.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:23 np0005465987 nova_compute[230713]: 2025-10-02 13:08:23.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:08:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:24.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:24.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:26.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:08:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:26.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:08:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:27.994 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:27.995 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:08:27.995 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:08:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:28.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:08:28 np0005465987 nova_compute[230713]: 2025-10-02 13:08:28.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:08:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:28.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:30.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:08:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:30.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:08:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:31 np0005465987 nova_compute[230713]: 2025-10-02 13:08:31.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:31 np0005465987 nova_compute[230713]: 2025-10-02 13:08:31.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:32.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:32.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:33 np0005465987 nova_compute[230713]: 2025-10-02 13:08:33.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:08:33 np0005465987 nova_compute[230713]: 2025-10-02 13:08:33.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:08:33 np0005465987 nova_compute[230713]: 2025-10-02 13:08:33.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 09:08:33 np0005465987 nova_compute[230713]: 2025-10-02 13:08:33.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 09:08:33 np0005465987 nova_compute[230713]: 2025-10-02 13:08:33.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:33 np0005465987 nova_compute[230713]: 2025-10-02 13:08:33.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 09:08:34 np0005465987 nova_compute[230713]: 2025-10-02 13:08:34.175 2 DEBUG nova.compute.manager [None req-b78887dd-cf7b-4c60-ba02-a565d8e16991 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:34.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:34 np0005465987 nova_compute[230713]: 2025-10-02 13:08:34.247 2 INFO nova.compute.manager [None req-b78887dd-cf7b-4c60-ba02-a565d8e16991 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] instance snapshotting#033[00m
Oct  2 09:08:34 np0005465987 nova_compute[230713]: 2025-10-02 13:08:34.513 2 INFO nova.virt.libvirt.driver [None req-b78887dd-cf7b-4c60-ba02-a565d8e16991 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Beginning live snapshot process#033[00m
Oct  2 09:08:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:34.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:34 np0005465987 nova_compute[230713]: 2025-10-02 13:08:34.858 2 DEBUG nova.virt.libvirt.imagebackend [None req-b78887dd-cf7b-4c60-ba02-a565d8e16991 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] No parent info for c2d0c2bc-fe21-4689-86ae-d6728c15874c; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 09:08:35 np0005465987 nova_compute[230713]: 2025-10-02 13:08:35.117 2 DEBUG nova.storage.rbd_utils [None req-b78887dd-cf7b-4c60-ba02-a565d8e16991 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] creating snapshot(2f685a3bc9f446f9b458d9103860aa4a) on rbd image(fc398644-66fc-44e3-9a6a-7389f5a542b8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e403 e403: 3 total, 3 up, 3 in
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.797231) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410515797315, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2087, "num_deletes": 252, "total_data_size": 4940479, "memory_usage": 5002768, "flush_reason": "Manual Compaction"}
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410515827196, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 3236545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77366, "largest_seqno": 79447, "table_properties": {"data_size": 3227977, "index_size": 5253, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17984, "raw_average_key_size": 20, "raw_value_size": 3210826, "raw_average_value_size": 3656, "num_data_blocks": 228, "num_entries": 878, "num_filter_entries": 878, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410336, "oldest_key_time": 1759410336, "file_creation_time": 1759410515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 30050 microseconds, and 14918 cpu microseconds.
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.827278) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 3236545 bytes OK
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.827316) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.829592) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.829672) EVENT_LOG_v1 {"time_micros": 1759410515829662, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.829737) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 4931258, prev total WAL file size 4931258, number of live WAL files 2.
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.831303) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(3160KB)], [159(10210KB)]
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410515831336, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 13692220, "oldest_snapshot_seqno": -1}
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 9957 keys, 11731989 bytes, temperature: kUnknown
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410515903568, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 11731989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11669423, "index_size": 36597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24901, "raw_key_size": 262523, "raw_average_key_size": 26, "raw_value_size": 11496749, "raw_average_value_size": 1154, "num_data_blocks": 1386, "num_entries": 9957, "num_filter_entries": 9957, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759410515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.903928) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 11731989 bytes
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.905259) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.0 rd, 162.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.0 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(7.9) write-amplify(3.6) OK, records in: 10482, records dropped: 525 output_compression: NoCompression
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.905281) EVENT_LOG_v1 {"time_micros": 1759410515905272, "job": 102, "event": "compaction_finished", "compaction_time_micros": 72441, "compaction_time_cpu_micros": 27092, "output_level": 6, "num_output_files": 1, "total_output_size": 11731989, "num_input_records": 10482, "num_output_records": 9957, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410515906493, "job": 102, "event": "table_file_deletion", "file_number": 161}
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410515908677, "job": 102, "event": "table_file_deletion", "file_number": 159}
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.831179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.908806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.908811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.908812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.908814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:35 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:08:35.908815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:35 np0005465987 nova_compute[230713]: 2025-10-02 13:08:35.924 2 DEBUG nova.storage.rbd_utils [None req-b78887dd-cf7b-4c60-ba02-a565d8e16991 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] cloning vms/fc398644-66fc-44e3-9a6a-7389f5a542b8_disk@2f685a3bc9f446f9b458d9103860aa4a to images/a1241e22-2f30-42e3-8072-a21ad0ab0f69 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 09:08:35 np0005465987 nova_compute[230713]: 2025-10-02 13:08:35.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:35 np0005465987 nova_compute[230713]: 2025-10-02 13:08:35.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:08:35 np0005465987 nova_compute[230713]: 2025-10-02 13:08:35.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:08:35 np0005465987 nova_compute[230713]: 2025-10-02 13:08:35.983 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:08:35 np0005465987 nova_compute[230713]: 2025-10-02 13:08:35.984 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:08:35 np0005465987 nova_compute[230713]: 2025-10-02 13:08:35.984 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:08:35 np0005465987 nova_compute[230713]: 2025-10-02 13:08:35.984 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fc398644-66fc-44e3-9a6a-7389f5a542b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:08:36 np0005465987 nova_compute[230713]: 2025-10-02 13:08:36.045 2 DEBUG nova.storage.rbd_utils [None req-b78887dd-cf7b-4c60-ba02-a565d8e16991 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] flattening images/a1241e22-2f30-42e3-8072-a21ad0ab0f69 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 09:08:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:36.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:36 np0005465987 nova_compute[230713]: 2025-10-02 13:08:36.483 2 DEBUG nova.storage.rbd_utils [None req-b78887dd-cf7b-4c60-ba02-a565d8e16991 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] removing snapshot(2f685a3bc9f446f9b458d9103860aa4a) on rbd image(fc398644-66fc-44e3-9a6a-7389f5a542b8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 09:08:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:36.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e404 e404: 3 total, 3 up, 3 in
Oct  2 09:08:36 np0005465987 nova_compute[230713]: 2025-10-02 13:08:36.841 2 DEBUG nova.storage.rbd_utils [None req-b78887dd-cf7b-4c60-ba02-a565d8e16991 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] creating snapshot(snap) on rbd image(a1241e22-2f30-42e3-8072-a21ad0ab0f69) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 09:08:37 np0005465987 nova_compute[230713]: 2025-10-02 13:08:37.319 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Updating instance_info_cache with network_info: [{"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:08:37 np0005465987 nova_compute[230713]: 2025-10-02 13:08:37.340 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:08:37 np0005465987 nova_compute[230713]: 2025-10-02 13:08:37.340 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:08:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e405 e405: 3 total, 3 up, 3 in
Oct  2 09:08:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:38.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:38 np0005465987 nova_compute[230713]: 2025-10-02 13:08:38.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:08:38 np0005465987 nova_compute[230713]: 2025-10-02 13:08:38.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:08:38 np0005465987 nova_compute[230713]: 2025-10-02 13:08:38.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 09:08:38 np0005465987 nova_compute[230713]: 2025-10-02 13:08:38.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 09:08:38 np0005465987 nova_compute[230713]: 2025-10-02 13:08:38.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:38 np0005465987 nova_compute[230713]: 2025-10-02 13:08:38.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 09:08:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:38.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:39 np0005465987 nova_compute[230713]: 2025-10-02 13:08:39.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:39 np0005465987 nova_compute[230713]: 2025-10-02 13:08:39.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:40 np0005465987 nova_compute[230713]: 2025-10-02 13:08:40.055 2 INFO nova.virt.libvirt.driver [None req-b78887dd-cf7b-4c60-ba02-a565d8e16991 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Snapshot image upload complete#033[00m
Oct  2 09:08:40 np0005465987 nova_compute[230713]: 2025-10-02 13:08:40.056 2 INFO nova.compute.manager [None req-b78887dd-cf7b-4c60-ba02-a565d8e16991 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Took 5.81 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 09:08:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:08:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:40.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:08:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:40.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:41 np0005465987 nova_compute[230713]: 2025-10-02 13:08:41.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:41 np0005465987 nova_compute[230713]: 2025-10-02 13:08:41.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:41 np0005465987 nova_compute[230713]: 2025-10-02 13:08:41.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:42 np0005465987 nova_compute[230713]: 2025-10-02 13:08:42.006 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:42 np0005465987 nova_compute[230713]: 2025-10-02 13:08:42.008 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:42 np0005465987 nova_compute[230713]: 2025-10-02 13:08:42.009 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:42 np0005465987 nova_compute[230713]: 2025-10-02 13:08:42.009 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:08:42 np0005465987 nova_compute[230713]: 2025-10-02 13:08:42.010 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:42.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:08:42 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/726771028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:08:42 np0005465987 nova_compute[230713]: 2025-10-02 13:08:42.458 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:42 np0005465987 nova_compute[230713]: 2025-10-02 13:08:42.702 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:08:42 np0005465987 nova_compute[230713]: 2025-10-02 13:08:42.703 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:08:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:42.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:42 np0005465987 nova_compute[230713]: 2025-10-02 13:08:42.862 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:08:42 np0005465987 nova_compute[230713]: 2025-10-02 13:08:42.863 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4103MB free_disk=20.897117614746094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:08:42 np0005465987 nova_compute[230713]: 2025-10-02 13:08:42.864 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:42 np0005465987 nova_compute[230713]: 2025-10-02 13:08:42.864 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:43 np0005465987 nova_compute[230713]: 2025-10-02 13:08:43.047 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance fc398644-66fc-44e3-9a6a-7389f5a542b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:08:43 np0005465987 nova_compute[230713]: 2025-10-02 13:08:43.048 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:08:43 np0005465987 nova_compute[230713]: 2025-10-02 13:08:43.048 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:08:43 np0005465987 nova_compute[230713]: 2025-10-02 13:08:43.087 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:43 np0005465987 nova_compute[230713]: 2025-10-02 13:08:43.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:08:43 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3315371559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:08:43 np0005465987 nova_compute[230713]: 2025-10-02 13:08:43.545 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:43 np0005465987 nova_compute[230713]: 2025-10-02 13:08:43.549 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:08:43 np0005465987 nova_compute[230713]: 2025-10-02 13:08:43.602 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:08:44 np0005465987 nova_compute[230713]: 2025-10-02 13:08:44.020 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:08:44 np0005465987 nova_compute[230713]: 2025-10-02 13:08:44.021 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:08:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:44.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:08:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:44.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e406 e406: 3 total, 3 up, 3 in
Oct  2 09:08:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:46.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:46 np0005465987 ovn_controller[129172]: 2025-10-02T13:08:46Z|00822|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct  2 09:08:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:46.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:46 np0005465987 podman[312756]: 2025-10-02 13:08:46.842172819 +0000 UTC m=+0.053549764 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:08:47 np0005465987 nova_compute[230713]: 2025-10-02 13:08:47.021 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:47 np0005465987 nova_compute[230713]: 2025-10-02 13:08:47.021 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:08:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:48.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:48 np0005465987 nova_compute[230713]: 2025-10-02 13:08:48.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:48.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:50.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:50.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:51 np0005465987 podman[312776]: 2025-10-02 13:08:51.866353936 +0000 UTC m=+0.081940943 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 09:08:51 np0005465987 podman[312777]: 2025-10-02 13:08:51.889519613 +0000 UTC m=+0.101029750 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:08:51 np0005465987 podman[312775]: 2025-10-02 13:08:51.899544456 +0000 UTC m=+0.121049363 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct  2 09:08:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:52.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:52.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:53 np0005465987 nova_compute[230713]: 2025-10-02 13:08:53.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:08:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e407 e407: 3 total, 3 up, 3 in
Oct  2 09:08:53 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 09:08:53 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:08:53 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:08:53 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 09:08:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:54.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:54.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 09:08:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:08:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:08:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:08:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:08:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1808770692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:08:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:08:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1808770692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:08:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:08:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:56.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:08:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:56.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:08:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:58.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:08:58 np0005465987 nova_compute[230713]: 2025-10-02 13:08:58.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:08:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:58.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:59 np0005465987 nova_compute[230713]: 2025-10-02 13:08:59.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:00.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:09:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:00.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:09:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:09:01 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:09:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:02.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:09:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:02.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:09:03 np0005465987 nova_compute[230713]: 2025-10-02 13:09:03.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:04 np0005465987 nova_compute[230713]: 2025-10-02 13:09:04.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:04.079 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:09:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:04.080 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:09:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:09:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:04.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:09:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:04.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:06.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e408 e408: 3 total, 3 up, 3 in
Oct  2 09:09:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:09:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:06.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:09:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:09:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:08.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:09:08 np0005465987 nova_compute[230713]: 2025-10-02 13:09:08.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:08.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:09.082 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:10.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:10.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e409 e409: 3 total, 3 up, 3 in
Oct  2 09:09:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:12.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:12.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:13 np0005465987 nova_compute[230713]: 2025-10-02 13:09:13.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:09:13 np0005465987 nova_compute[230713]: 2025-10-02 13:09:13.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:09:13 np0005465987 nova_compute[230713]: 2025-10-02 13:09:13.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 09:09:13 np0005465987 nova_compute[230713]: 2025-10-02 13:09:13.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 09:09:13 np0005465987 nova_compute[230713]: 2025-10-02 13:09:13.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:13 np0005465987 nova_compute[230713]: 2025-10-02 13:09:13.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 09:09:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:09:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:14.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:09:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:14.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:16.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:16.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:17 np0005465987 podman[313020]: 2025-10-02 13:09:17.838871982 +0000 UTC m=+0.052276468 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 09:09:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:18.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:18 np0005465987 nova_compute[230713]: 2025-10-02 13:09:18.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:09:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:18.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:20.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:20.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:22.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:22.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:22 np0005465987 podman[313040]: 2025-10-02 13:09:22.847355232 +0000 UTC m=+0.057906740 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct  2 09:09:22 np0005465987 podman[313039]: 2025-10-02 13:09:22.864686762 +0000 UTC m=+0.081406398 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
Oct  2 09:09:22 np0005465987 podman[313041]: 2025-10-02 13:09:22.875489405 +0000 UTC m=+0.083504005 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:09:23 np0005465987 nova_compute[230713]: 2025-10-02 13:09:23.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:09:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:09:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:24.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:09:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:09:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:24.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:09:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:26.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:26.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:27.995 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:27.995 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:27.996 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:09:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:28.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:09:28 np0005465987 nova_compute[230713]: 2025-10-02 13:09:28.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:09:28 np0005465987 nova_compute[230713]: 2025-10-02 13:09:28.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:28 np0005465987 nova_compute[230713]: 2025-10-02 13:09:28.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 09:09:28 np0005465987 nova_compute[230713]: 2025-10-02 13:09:28.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 09:09:28 np0005465987 nova_compute[230713]: 2025-10-02 13:09:28.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 09:09:28 np0005465987 nova_compute[230713]: 2025-10-02 13:09:28.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:09:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:28.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:09:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:30.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:30.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:32 np0005465987 nova_compute[230713]: 2025-10-02 13:09:32.106 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:09:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:32.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:09:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:32.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:33 np0005465987 nova_compute[230713]: 2025-10-02 13:09:33.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:33 np0005465987 nova_compute[230713]: 2025-10-02 13:09:33.993 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:09:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:34.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:09:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:34.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:36 np0005465987 ovn_controller[129172]: 2025-10-02T13:09:36Z|00823|binding|INFO|Releasing lport 9fb48fb0-3960-4f8b-94a5-d7518fe00fe8 from this chassis (sb_readonly=0)
Oct  2 09:09:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:36.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:36 np0005465987 nova_compute[230713]: 2025-10-02 13:09:36.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:36.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:37 np0005465987 nova_compute[230713]: 2025-10-02 13:09:37.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:37 np0005465987 nova_compute[230713]: 2025-10-02 13:09:37.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:09:37 np0005465987 nova_compute[230713]: 2025-10-02 13:09:37.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:09:38 np0005465987 nova_compute[230713]: 2025-10-02 13:09:38.192 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:38 np0005465987 nova_compute[230713]: 2025-10-02 13:09:38.192 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:38 np0005465987 nova_compute[230713]: 2025-10-02 13:09:38.192 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:09:38 np0005465987 nova_compute[230713]: 2025-10-02 13:09:38.193 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fc398644-66fc-44e3-9a6a-7389f5a542b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:09:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:38.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:09:38 np0005465987 nova_compute[230713]: 2025-10-02 13:09:38.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:38.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:39 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e410 e410: 3 total, 3 up, 3 in
Oct  2 09:09:39 np0005465987 nova_compute[230713]: 2025-10-02 13:09:39.407 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Updating instance_info_cache with network_info: [{"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:40 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e411 e411: 3 total, 3 up, 3 in
Oct  2 09:09:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:40.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:40 np0005465987 nova_compute[230713]: 2025-10-02 13:09:40.568 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:40 np0005465987 nova_compute[230713]: 2025-10-02 13:09:40.569 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:09:40 np0005465987 nova_compute[230713]: 2025-10-02 13:09:40.569 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:40 np0005465987 nova_compute[230713]: 2025-10-02 13:09:40.569 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:40.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e412 e412: 3 total, 3 up, 3 in
Oct  2 09:09:41 np0005465987 nova_compute[230713]: 2025-10-02 13:09:41.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:42.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:42.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:42 np0005465987 nova_compute[230713]: 2025-10-02 13:09:42.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.010 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.011 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.012 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.012 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.012 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:09:43 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3380021634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.478 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.589 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.590 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.730 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.731 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4057MB free_disk=20.93640899658203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.731 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.732 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.822 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance fc398644-66fc-44e3-9a6a-7389f5a542b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.822 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.822 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:09:43 np0005465987 nova_compute[230713]: 2025-10-02 13:09:43.883 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:09:44 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2231093600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:09:44 np0005465987 nova_compute[230713]: 2025-10-02 13:09:44.306 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:44 np0005465987 nova_compute[230713]: 2025-10-02 13:09:44.312 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:09:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:44.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:44 np0005465987 nova_compute[230713]: 2025-10-02 13:09:44.352 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:09:44 np0005465987 nova_compute[230713]: 2025-10-02 13:09:44.354 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:09:44 np0005465987 nova_compute[230713]: 2025-10-02 13:09:44.354 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:44.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e413 e413: 3 total, 3 up, 3 in
Oct  2 09:09:45 np0005465987 nova_compute[230713]: 2025-10-02 13:09:45.356 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:45 np0005465987 nova_compute[230713]: 2025-10-02 13:09:45.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:45.918 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:09:45 np0005465987 nova_compute[230713]: 2025-10-02 13:09:45.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:45.919 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:09:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:46.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:09:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:46.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:09:46 np0005465987 nova_compute[230713]: 2025-10-02 13:09:46.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:46 np0005465987 nova_compute[230713]: 2025-10-02 13:09:46.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:09:47 np0005465987 nova_compute[230713]: 2025-10-02 13:09:47.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:48.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:48 np0005465987 nova_compute[230713]: 2025-10-02 13:09:48.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:48 np0005465987 podman[313147]: 2025-10-02 13:09:48.861782918 +0000 UTC m=+0.080175255 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 09:09:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:48.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:50 np0005465987 nova_compute[230713]: 2025-10-02 13:09:50.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:09:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:50.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:09:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e414 e414: 3 total, 3 up, 3 in
Oct  2 09:09:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:50.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e415 e415: 3 total, 3 up, 3 in
Oct  2 09:09:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.062 2 DEBUG nova.compute.manager [req-8c5edba0-bac2-4da6-97e9-7134c0caa29e req-ff0f941f-e061-485f-aeb6-40f53821198d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Received event network-changed-0d204e32-92c1-4d57-9011-267d812ddfe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.062 2 DEBUG nova.compute.manager [req-8c5edba0-bac2-4da6-97e9-7134c0caa29e req-ff0f941f-e061-485f-aeb6-40f53821198d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Refreshing instance network info cache due to event network-changed-0d204e32-92c1-4d57-9011-267d812ddfe9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.063 2 DEBUG oslo_concurrency.lockutils [req-8c5edba0-bac2-4da6-97e9-7134c0caa29e req-ff0f941f-e061-485f-aeb6-40f53821198d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.063 2 DEBUG oslo_concurrency.lockutils [req-8c5edba0-bac2-4da6-97e9-7134c0caa29e req-ff0f941f-e061-485f-aeb6-40f53821198d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.063 2 DEBUG nova.network.neutron [req-8c5edba0-bac2-4da6-97e9-7134c0caa29e req-ff0f941f-e061-485f-aeb6-40f53821198d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Refreshing network info cache for port 0d204e32-92c1-4d57-9011-267d812ddfe9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.283 2 DEBUG oslo_concurrency.lockutils [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Acquiring lock "fc398644-66fc-44e3-9a6a-7389f5a542b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.284 2 DEBUG oslo_concurrency.lockutils [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.284 2 DEBUG oslo_concurrency.lockutils [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Acquiring lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.285 2 DEBUG oslo_concurrency.lockutils [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.285 2 DEBUG oslo_concurrency.lockutils [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.286 2 INFO nova.compute.manager [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Terminating instance#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.287 2 DEBUG nova.compute.manager [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:09:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:52.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:52 np0005465987 kernel: tap0d204e32-92 (unregistering): left promiscuous mode
Oct  2 09:09:52 np0005465987 NetworkManager[44910]: <info>  [1759410592.3602] device (tap0d204e32-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:52 np0005465987 ovn_controller[129172]: 2025-10-02T13:09:52Z|00824|binding|INFO|Releasing lport 0d204e32-92c1-4d57-9011-267d812ddfe9 from this chassis (sb_readonly=0)
Oct  2 09:09:52 np0005465987 ovn_controller[129172]: 2025-10-02T13:09:52Z|00825|binding|INFO|Setting lport 0d204e32-92c1-4d57-9011-267d812ddfe9 down in Southbound
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:52 np0005465987 ovn_controller[129172]: 2025-10-02T13:09:52Z|00826|binding|INFO|Removing iface tap0d204e32-92 ovn-installed in OVS
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.440 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:55:18 10.100.0.6'], port_security=['fa:16:3e:08:55:18 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'fc398644-66fc-44e3-9a6a-7389f5a542b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e748fc-2d6a-4d69-b120-68c995d49660', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '60bfd415ee154615b20dd99528061614', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6863a9d1-67a6-432a-b497-906487ecb0c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93cd313a-87a8-4538-814c-550ca73d3eca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=0d204e32-92c1-4d57-9011-267d812ddfe9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.441 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 0d204e32-92c1-4d57-9011-267d812ddfe9 in datapath 14e748fc-2d6a-4d69-b120-68c995d49660 unbound from our chassis#033[00m
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.442 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14e748fc-2d6a-4d69-b120-68c995d49660, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.443 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[19007319-b6ae-4b3d-b4fa-62b73220aee8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.444 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660 namespace which is not needed anymore#033[00m
Oct  2 09:09:52 np0005465987 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000d0.scope: Deactivated successfully.
Oct  2 09:09:52 np0005465987 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000d0.scope: Consumed 16.700s CPU time.
Oct  2 09:09:52 np0005465987 systemd-machined[188335]: Machine qemu-92-instance-000000d0 terminated.
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.526 2 INFO nova.virt.libvirt.driver [-] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Instance destroyed successfully.#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.527 2 DEBUG nova.objects.instance [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lazy-loading 'resources' on Instance uuid fc398644-66fc-44e3-9a6a-7389f5a542b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.585 2 DEBUG nova.virt.libvirt.vif [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:08:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-619026020',display_name='tempest-TestSnapshotPattern-server-619026020',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-619026020',id=208,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMla0NYshozjIhqPshVDB7V9QQUdVkTPcrZs6EBQpt1OKq+5dB8xVPlwumBL6FU6d1oBZUn9yPH7sBT9aKh0ThWjhX3hBNPhKbMCSDMfEKO24D2SpIWzAY4pSc/pr8RlQ==',key_name='tempest-TestSnapshotPattern-747682339',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:08:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='60bfd415ee154615b20dd99528061614',ramdisk_id='',reservation_id='r-ohg1oi4b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-400150385',owner_user_name='tempest-TestSnapshotPattern-400150385-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:08:40Z,user_data=None,user_id='29c8a28c5bdd4feb9412127428bf0c3b',uuid=fc398644-66fc-44e3-9a6a-7389f5a542b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.585 2 DEBUG nova.network.os_vif_util [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Converting VIF {"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.586 2 DEBUG nova.network.os_vif_util [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:55:18,bridge_name='br-int',has_traffic_filtering=True,id=0d204e32-92c1-4d57-9011-267d812ddfe9,network=Network(14e748fc-2d6a-4d69-b120-68c995d49660),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d204e32-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.587 2 DEBUG os_vif [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:55:18,bridge_name='br-int',has_traffic_filtering=True,id=0d204e32-92c1-4d57-9011-267d812ddfe9,network=Network(14e748fc-2d6a-4d69-b120-68c995d49660),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d204e32-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:09:52 np0005465987 neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660[312471]: [NOTICE]   (312475) : haproxy version is 2.8.14-c23fe91
Oct  2 09:09:52 np0005465987 neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660[312471]: [NOTICE]   (312475) : path to executable is /usr/sbin/haproxy
Oct  2 09:09:52 np0005465987 neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660[312471]: [WARNING]  (312475) : Exiting Master process...
Oct  2 09:09:52 np0005465987 neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660[312471]: [WARNING]  (312475) : Exiting Master process...
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.588 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d204e32-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:52 np0005465987 neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660[312471]: [ALERT]    (312475) : Current worker (312477) exited with code 143 (Terminated)
Oct  2 09:09:52 np0005465987 neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660[312471]: [WARNING]  (312475) : All workers exited. Exiting... (0)
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:52 np0005465987 systemd[1]: libpod-3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488.scope: Deactivated successfully.
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.594 2 INFO os_vif [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:55:18,bridge_name='br-int',has_traffic_filtering=True,id=0d204e32-92c1-4d57-9011-267d812ddfe9,network=Network(14e748fc-2d6a-4d69-b120-68c995d49660),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d204e32-92')#033[00m
Oct  2 09:09:52 np0005465987 podman[313197]: 2025-10-02 13:09:52.600205894 +0000 UTC m=+0.050724586 container died 3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:09:52 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488-userdata-shm.mount: Deactivated successfully.
Oct  2 09:09:52 np0005465987 systemd[1]: var-lib-containers-storage-overlay-dd6b01ab29539512aa647866d57aa75e18107850bffd32e8c4cb2b240e7f1462-merged.mount: Deactivated successfully.
Oct  2 09:09:52 np0005465987 podman[313197]: 2025-10-02 13:09:52.63696564 +0000 UTC m=+0.087484352 container cleanup 3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:09:52 np0005465987 systemd[1]: libpod-conmon-3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488.scope: Deactivated successfully.
Oct  2 09:09:52 np0005465987 podman[313246]: 2025-10-02 13:09:52.700040221 +0000 UTC m=+0.038346221 container remove 3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.705 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cae7f249-d511-4de5-a770-451ec1d834c5]: (4, ('Thu Oct  2 01:09:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660 (3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488)\n3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488\nThu Oct  2 01:09:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660 (3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488)\n3fbfeeba0bc7016ff87c1ef72ed3f86b9f62fa3d948ba09efd5be81cb6232488\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.706 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[969c9f9d-9788-4126-9d46-668ca3f80c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.707 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e748fc-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:52 np0005465987 kernel: tap14e748fc-20: left promiscuous mode
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.713 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[756f14ac-a754-47a2-ab59-300415788e22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.740 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0bf7f7-82ba-4369-924e-66b042dcf59e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.741 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[46cbd021-0333-4f20-925d-6ae42481edbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.755 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab0f909-ebc1-4613-a445-2cbdcd0c7e71]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855786, 'reachable_time': 31151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313261, 'error': None, 'target': 'ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:52 np0005465987 systemd[1]: run-netns-ovnmeta\x2d14e748fc\x2d2d6a\x2d4d69\x2db120\x2d68c995d49660.mount: Deactivated successfully.
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.758 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14e748fc-2d6a-4d69-b120-68c995d49660 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  2 09:09:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:52.758 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f801ba-9e08-4228-9ff7-4e2706419b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.796 2 DEBUG nova.compute.manager [req-77ba6f5b-5634-44cb-85b6-f0d8845a39e9 req-fe0ffd33-e0cd-42fe-942c-ab764bac280b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Received event network-vif-unplugged-0d204e32-92c1-4d57-9011-267d812ddfe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.797 2 DEBUG oslo_concurrency.lockutils [req-77ba6f5b-5634-44cb-85b6-f0d8845a39e9 req-fe0ffd33-e0cd-42fe-942c-ab764bac280b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.797 2 DEBUG oslo_concurrency.lockutils [req-77ba6f5b-5634-44cb-85b6-f0d8845a39e9 req-fe0ffd33-e0cd-42fe-942c-ab764bac280b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.798 2 DEBUG oslo_concurrency.lockutils [req-77ba6f5b-5634-44cb-85b6-f0d8845a39e9 req-fe0ffd33-e0cd-42fe-942c-ab764bac280b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.799 2 DEBUG nova.compute.manager [req-77ba6f5b-5634-44cb-85b6-f0d8845a39e9 req-fe0ffd33-e0cd-42fe-942c-ab764bac280b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] No waiting events found dispatching network-vif-unplugged-0d204e32-92c1-4d57-9011-267d812ddfe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:09:52 np0005465987 nova_compute[230713]: 2025-10-02 13:09:52.799 2 DEBUG nova.compute.manager [req-77ba6f5b-5634-44cb-85b6-f0d8845a39e9 req-fe0ffd33-e0cd-42fe-942c-ab764bac280b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Received event network-vif-unplugged-0d204e32-92c1-4d57-9011-267d812ddfe9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 09:09:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:52.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.094 2 INFO nova.virt.libvirt.driver [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Deleting instance files /var/lib/nova/instances/fc398644-66fc-44e3-9a6a-7389f5a542b8_del
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.094 2 INFO nova.virt.libvirt.driver [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Deletion of /var/lib/nova/instances/fc398644-66fc-44e3-9a6a-7389f5a542b8_del complete
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.155 2 INFO nova.compute.manager [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Took 0.87 seconds to destroy the instance on the hypervisor.
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.156 2 DEBUG oslo.service.loopingcall [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.156 2 DEBUG nova.compute.manager [-] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.156 2 DEBUG nova.network.neutron [-] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.695 2 DEBUG nova.network.neutron [req-8c5edba0-bac2-4da6-97e9-7134c0caa29e req-ff0f941f-e061-485f-aeb6-40f53821198d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Updated VIF entry in instance network info cache for port 0d204e32-92c1-4d57-9011-267d812ddfe9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.696 2 DEBUG nova.network.neutron [req-8c5edba0-bac2-4da6-97e9-7134c0caa29e req-ff0f941f-e061-485f-aeb6-40f53821198d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Updating instance_info_cache with network_info: [{"id": "0d204e32-92c1-4d57-9011-267d812ddfe9", "address": "fa:16:3e:08:55:18", "network": {"id": "14e748fc-2d6a-4d69-b120-68c995d49660", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1401412803-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "60bfd415ee154615b20dd99528061614", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d204e32-92", "ovs_interfaceid": "0d204e32-92c1-4d57-9011-267d812ddfe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.715 2 DEBUG oslo_concurrency.lockutils [req-8c5edba0-bac2-4da6-97e9-7134c0caa29e req-ff0f941f-e061-485f-aeb6-40f53821198d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-fc398644-66fc-44e3-9a6a-7389f5a542b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:09:53 np0005465987 podman[313264]: 2025-10-02 13:09:53.85191936 +0000 UTC m=+0.064825389 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:09:53 np0005465987 podman[313265]: 2025-10-02 13:09:53.85193529 +0000 UTC m=+0.059143524 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:09:53 np0005465987 podman[313263]: 2025-10-02 13:09:53.87958543 +0000 UTC m=+0.097437203 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.967 2 DEBUG nova.network.neutron [-] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.977 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.977 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  2 09:09:53 np0005465987 nova_compute[230713]: 2025-10-02 13:09:53.995 2 INFO nova.compute.manager [-] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Took 0.84 seconds to deallocate network for instance.
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.034 2 DEBUG oslo_concurrency.lockutils [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.034 2 DEBUG oslo_concurrency.lockutils [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.093 2 DEBUG oslo_concurrency.processutils [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.187 2 DEBUG nova.compute.manager [req-2d04fc4b-4b5b-454f-894d-b0e48d18e82e req-2ff7325a-9e64-441b-a2ef-4a89be7404c5 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Received event network-vif-deleted-0d204e32-92c1-4d57-9011-267d812ddfe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:09:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:09:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:54.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:09:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:09:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2302425358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.522 2 DEBUG oslo_concurrency.processutils [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.527 2 DEBUG nova.compute.provider_tree [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.552 2 DEBUG nova.scheduler.client.report [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.574 2 DEBUG oslo_concurrency.lockutils [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.604 2 INFO nova.scheduler.client.report [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Deleted allocations for instance fc398644-66fc-44e3-9a6a-7389f5a542b8
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.763 2 DEBUG oslo_concurrency.lockutils [None req-6f8f35fc-b5f4-440e-acaf-257bfe1707fd 29c8a28c5bdd4feb9412127428bf0c3b 60bfd415ee154615b20dd99528061614 - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:09:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:54.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.892 2 DEBUG nova.compute.manager [req-1a00a5ae-6022-4c6f-bc17-e856c97f90fc req-8cc77a86-4016-4e18-824a-44c2df0797be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Received event network-vif-plugged-0d204e32-92c1-4d57-9011-267d812ddfe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.893 2 DEBUG oslo_concurrency.lockutils [req-1a00a5ae-6022-4c6f-bc17-e856c97f90fc req-8cc77a86-4016-4e18-824a-44c2df0797be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.893 2 DEBUG oslo_concurrency.lockutils [req-1a00a5ae-6022-4c6f-bc17-e856c97f90fc req-8cc77a86-4016-4e18-824a-44c2df0797be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.893 2 DEBUG oslo_concurrency.lockutils [req-1a00a5ae-6022-4c6f-bc17-e856c97f90fc req-8cc77a86-4016-4e18-824a-44c2df0797be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "fc398644-66fc-44e3-9a6a-7389f5a542b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.893 2 DEBUG nova.compute.manager [req-1a00a5ae-6022-4c6f-bc17-e856c97f90fc req-8cc77a86-4016-4e18-824a-44c2df0797be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] No waiting events found dispatching network-vif-plugged-0d204e32-92c1-4d57-9011-267d812ddfe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:09:54 np0005465987 nova_compute[230713]: 2025-10-02 13:09:54.894 2 WARNING nova.compute.manager [req-1a00a5ae-6022-4c6f-bc17-e856c97f90fc req-8cc77a86-4016-4e18-824a-44c2df0797be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Received unexpected event network-vif-plugged-0d204e32-92c1-4d57-9011-267d812ddfe9 for instance with vm_state deleted and task_state None.
Oct  2 09:09:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:09:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1640839144' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:09:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:09:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1640839144' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:09:55 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:09:55.921 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:09:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:56.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:56.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:57 np0005465987 nova_compute[230713]: 2025-10-02 13:09:57.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:09:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:58.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:58 np0005465987 nova_compute[230713]: 2025-10-02 13:09:58.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:09:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:09:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:09:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:58.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:10:00 np0005465987 nova_compute[230713]: 2025-10-02 13:10:00.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:10:00 np0005465987 nova_compute[230713]: 2025-10-02 13:10:00.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:10:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:10:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:00.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:10:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:10:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:00.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:10:01 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 09:10:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 e416: 3 total, 3 up, 3 in
Oct  2 09:10:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:10:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:10:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:10:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:02.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:02 np0005465987 nova_compute[230713]: 2025-10-02 13:10:02.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:10:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:02.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:03 np0005465987 nova_compute[230713]: 2025-10-02 13:10:03.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:10:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:04.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:04.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:06.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:06.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:06 np0005465987 nova_compute[230713]: 2025-10-02 13:10:06.987 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:10:06 np0005465987 nova_compute[230713]: 2025-10-02 13:10:06.987 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  2 09:10:07 np0005465987 nova_compute[230713]: 2025-10-02 13:10:07.003 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  2 09:10:07 np0005465987 nova_compute[230713]: 2025-10-02 13:10:07.525 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410592.524524, fc398644-66fc-44e3-9a6a-7389f5a542b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:10:07 np0005465987 nova_compute[230713]: 2025-10-02 13:10:07.526 2 INFO nova.compute.manager [-] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] VM Stopped (Lifecycle Event)
Oct  2 09:10:07 np0005465987 nova_compute[230713]: 2025-10-02 13:10:07.551 2 DEBUG nova.compute.manager [None req-c7bdfe99-0a36-405c-ae5f-2f94c4824018 - - - - - -] [instance: fc398644-66fc-44e3-9a6a-7389f5a542b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:10:07 np0005465987 nova_compute[230713]: 2025-10-02 13:10:07.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:10:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:08.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:10:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:10:08 np0005465987 nova_compute[230713]: 2025-10-02 13:10:08.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:08.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:10:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:10.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:10:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:10.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:12.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:12 np0005465987 nova_compute[230713]: 2025-10-02 13:10:12.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:12.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:13 np0005465987 nova_compute[230713]: 2025-10-02 13:10:13.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:14.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:14.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:14 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.062686) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410616062927, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1473, "num_deletes": 259, "total_data_size": 3051928, "memory_usage": 3099744, "flush_reason": "Manual Compaction"}
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410616075433, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 1999361, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79452, "largest_seqno": 80920, "table_properties": {"data_size": 1993082, "index_size": 3481, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13921, "raw_average_key_size": 20, "raw_value_size": 1980139, "raw_average_value_size": 2886, "num_data_blocks": 152, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410516, "oldest_key_time": 1759410516, "file_creation_time": 1759410616, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 12602 microseconds, and 4972 cpu microseconds.
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.075475) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 1999361 bytes OK
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.075493) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.077200) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.077211) EVENT_LOG_v1 {"time_micros": 1759410616077208, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.077228) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 3044994, prev total WAL file size 3044994, number of live WAL files 2.
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.078091) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303130' seq:72057594037927935, type:22 .. '6C6F676D0033323631' seq:0, type:0; will stop at (end)
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(1952KB)], [162(11MB)]
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410616078182, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 13731350, "oldest_snapshot_seqno": -1}
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 10109 keys, 13596387 bytes, temperature: kUnknown
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410616158240, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 13596387, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13530471, "index_size": 39563, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 266798, "raw_average_key_size": 26, "raw_value_size": 13352806, "raw_average_value_size": 1320, "num_data_blocks": 1509, "num_entries": 10109, "num_filter_entries": 10109, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759410616, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.158650) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 13596387 bytes
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.160113) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.1 rd, 169.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.2 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(13.7) write-amplify(6.8) OK, records in: 10643, records dropped: 534 output_compression: NoCompression
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.160136) EVENT_LOG_v1 {"time_micros": 1759410616160125, "job": 104, "event": "compaction_finished", "compaction_time_micros": 80269, "compaction_time_cpu_micros": 33258, "output_level": 6, "num_output_files": 1, "total_output_size": 13596387, "num_input_records": 10643, "num_output_records": 10109, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410616160865, "job": 104, "event": "table_file_deletion", "file_number": 164}
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410616162914, "job": 104, "event": "table_file_deletion", "file_number": 162}
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.077935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.162950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.162954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.162956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.162958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:10:16.162960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:10:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:16.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:10:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:16.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:17 np0005465987 nova_compute[230713]: 2025-10-02 13:10:17.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:18.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:18 np0005465987 nova_compute[230713]: 2025-10-02 13:10:18.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:18.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:19 np0005465987 podman[313529]: 2025-10-02 13:10:19.851575014 +0000 UTC m=+0.066006491 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:10:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:20.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:20.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:22.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:22 np0005465987 nova_compute[230713]: 2025-10-02 13:10:22.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:10:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:22.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:10:23 np0005465987 nova_compute[230713]: 2025-10-02 13:10:23.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:24.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:24 np0005465987 podman[313550]: 2025-10-02 13:10:24.846481696 +0000 UTC m=+0.057634484 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  2 09:10:24 np0005465987 podman[313551]: 2025-10-02 13:10:24.874590568 +0000 UTC m=+0.078658463 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:10:24 np0005465987 podman[313549]: 2025-10-02 13:10:24.875605156 +0000 UTC m=+0.091444740 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct  2 09:10:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:24.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:26.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:10:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:26.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:10:27 np0005465987 nova_compute[230713]: 2025-10-02 13:10:27.538 2 DEBUG nova.virt.libvirt.driver [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Creating tmpfile /var/lib/nova/instances/tmp4_a5vvlk to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct  2 09:10:27 np0005465987 nova_compute[230713]: 2025-10-02 13:10:27.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:27 np0005465987 nova_compute[230713]: 2025-10-02 13:10:27.647 2 DEBUG nova.compute.manager [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=17408,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp4_a5vvlk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct  2 09:10:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:27.997 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:27.997 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:27.997 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:28.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:28 np0005465987 nova_compute[230713]: 2025-10-02 13:10:28.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:28 np0005465987 nova_compute[230713]: 2025-10-02 13:10:28.498 2 DEBUG nova.compute.manager [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=17408,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp4_a5vvlk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6087853f-327c-46b2-baa6-c24854d98b97',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct  2 09:10:28 np0005465987 nova_compute[230713]: 2025-10-02 13:10:28.564 2 DEBUG oslo_concurrency.lockutils [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquiring lock "refresh_cache-6087853f-327c-46b2-baa6-c24854d98b97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:28 np0005465987 nova_compute[230713]: 2025-10-02 13:10:28.565 2 DEBUG oslo_concurrency.lockutils [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquired lock "refresh_cache-6087853f-327c-46b2-baa6-c24854d98b97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:28 np0005465987 nova_compute[230713]: 2025-10-02 13:10:28.565 2 DEBUG nova.network.neutron [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:10:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:28.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:30.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:30.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:32.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.932 2 DEBUG nova.network.neutron [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Updating instance_info_cache with network_info: [{"id": "994b5877-4440-4c86-be4e-fd60053ca206", "address": "fa:16:3e:80:bc:0d", "network": {"id": "75dc16ae-c86f-409d-a774-fc174172fb9a", "bridge": "br-int", "label": "tempest-network-smoke--1179677419", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994b5877-44", "ovs_interfaceid": "994b5877-4440-4c86-be4e-fd60053ca206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:32.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.973 2 DEBUG oslo_concurrency.lockutils [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Releasing lock "refresh_cache-6087853f-327c-46b2-baa6-c24854d98b97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.975 2 DEBUG nova.virt.libvirt.driver [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=17408,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp4_a5vvlk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6087853f-327c-46b2-baa6-c24854d98b97',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.976 2 DEBUG nova.virt.libvirt.driver [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Creating instance directory: /var/lib/nova/instances/6087853f-327c-46b2-baa6-c24854d98b97 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.976 2 DEBUG nova.virt.libvirt.driver [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Ensure instance console log exists: /var/lib/nova/instances/6087853f-327c-46b2-baa6-c24854d98b97/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.976 2 DEBUG nova.virt.libvirt.driver [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.977 2 DEBUG nova.virt.libvirt.vif [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1574812382',display_name='tempest-TestNetworkAdvancedServerOps-server-1574812382',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1574812382',id=211,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCSYCMEZ6q/lvfSzWU/Eg4JQitDnhXCQauYJ5tidoiUmnnwGO4dpjXw3f4qFvlN8HpQGCKccR0Wgvw5uEOFmQ9+8YrZuneTD88dhlCtVTonVS+twysO7TRzsLhP5B6Zj9g==',key_name='tempest-TestNetworkAdvancedServerOps-842167042',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:10:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-bs5w7gov',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:10:02Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=6087853f-327c-46b2-baa6-c24854d98b97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "994b5877-4440-4c86-be4e-fd60053ca206", "address": "fa:16:3e:80:bc:0d", "network": {"id": "75dc16ae-c86f-409d-a774-fc174172fb9a", "bridge": "br-int", "label": "tempest-network-smoke--1179677419", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap994b5877-44", "ovs_interfaceid": "994b5877-4440-4c86-be4e-fd60053ca206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.978 2 DEBUG nova.network.os_vif_util [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Converting VIF {"id": "994b5877-4440-4c86-be4e-fd60053ca206", "address": "fa:16:3e:80:bc:0d", "network": {"id": "75dc16ae-c86f-409d-a774-fc174172fb9a", "bridge": "br-int", "label": "tempest-network-smoke--1179677419", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap994b5877-44", "ovs_interfaceid": "994b5877-4440-4c86-be4e-fd60053ca206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.979 2 DEBUG nova.network.os_vif_util [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:bc:0d,bridge_name='br-int',has_traffic_filtering=True,id=994b5877-4440-4c86-be4e-fd60053ca206,network=Network(75dc16ae-c86f-409d-a774-fc174172fb9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994b5877-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.979 2 DEBUG os_vif [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:bc:0d,bridge_name='br-int',has_traffic_filtering=True,id=994b5877-4440-4c86-be4e-fd60053ca206,network=Network(75dc16ae-c86f-409d-a774-fc174172fb9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994b5877-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.980 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.980 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.983 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap994b5877-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap994b5877-44, col_values=(('external_ids', {'iface-id': '994b5877-4440-4c86-be4e-fd60053ca206', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:bc:0d', 'vm-uuid': '6087853f-327c-46b2-baa6-c24854d98b97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:32 np0005465987 NetworkManager[44910]: <info>  [1759410632.9864] manager: (tap994b5877-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.995 2 INFO os_vif [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:bc:0d,bridge_name='br-int',has_traffic_filtering=True,id=994b5877-4440-4c86-be4e-fd60053ca206,network=Network(75dc16ae-c86f-409d-a774-fc174172fb9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994b5877-44')#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.996 2 DEBUG nova.virt.libvirt.driver [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct  2 09:10:32 np0005465987 nova_compute[230713]: 2025-10-02 13:10:32.996 2 DEBUG nova.compute.manager [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=17408,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp4_a5vvlk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6087853f-327c-46b2-baa6-c24854d98b97',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct  2 09:10:33 np0005465987 nova_compute[230713]: 2025-10-02 13:10:33.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:33 np0005465987 nova_compute[230713]: 2025-10-02 13:10:33.975 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:34.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:34.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:34 np0005465987 nova_compute[230713]: 2025-10-02 13:10:34.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:35.498 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:10:35 np0005465987 nova_compute[230713]: 2025-10-02 13:10:35.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:35 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:35.499 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:10:35 np0005465987 nova_compute[230713]: 2025-10-02 13:10:35.929 2 DEBUG nova.network.neutron [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Port 994b5877-4440-4c86-be4e-fd60053ca206 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct  2 09:10:35 np0005465987 nova_compute[230713]: 2025-10-02 13:10:35.931 2 DEBUG nova.compute.manager [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=17408,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp4_a5vvlk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='6087853f-327c-46b2-baa6-c24854d98b97',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct  2 09:10:36 np0005465987 systemd[1]: Starting libvirt proxy daemon...
Oct  2 09:10:36 np0005465987 systemd[1]: Started libvirt proxy daemon.
Oct  2 09:10:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:36 np0005465987 kernel: tap994b5877-44: entered promiscuous mode
Oct  2 09:10:36 np0005465987 NetworkManager[44910]: <info>  [1759410636.2295] manager: (tap994b5877-44): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Oct  2 09:10:36 np0005465987 ovn_controller[129172]: 2025-10-02T13:10:36Z|00827|binding|INFO|Claiming lport 994b5877-4440-4c86-be4e-fd60053ca206 for this additional chassis.
Oct  2 09:10:36 np0005465987 ovn_controller[129172]: 2025-10-02T13:10:36Z|00828|binding|INFO|994b5877-4440-4c86-be4e-fd60053ca206: Claiming fa:16:3e:80:bc:0d 10.100.0.7
Oct  2 09:10:36 np0005465987 nova_compute[230713]: 2025-10-02 13:10:36.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:36 np0005465987 nova_compute[230713]: 2025-10-02 13:10:36.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:36 np0005465987 nova_compute[230713]: 2025-10-02 13:10:36.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:36 np0005465987 nova_compute[230713]: 2025-10-02 13:10:36.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:36 np0005465987 NetworkManager[44910]: <info>  [1759410636.2585] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Oct  2 09:10:36 np0005465987 NetworkManager[44910]: <info>  [1759410636.2596] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Oct  2 09:10:36 np0005465987 systemd-udevd[313649]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:10:36 np0005465987 systemd-machined[188335]: New machine qemu-93-instance-000000d3.
Oct  2 09:10:36 np0005465987 NetworkManager[44910]: <info>  [1759410636.2847] device (tap994b5877-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:10:36 np0005465987 NetworkManager[44910]: <info>  [1759410636.2857] device (tap994b5877-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:10:36 np0005465987 systemd[1]: Started Virtual Machine qemu-93-instance-000000d3.
Oct  2 09:10:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:36.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:36 np0005465987 nova_compute[230713]: 2025-10-02 13:10:36.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:36 np0005465987 nova_compute[230713]: 2025-10-02 13:10:36.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:36 np0005465987 nova_compute[230713]: 2025-10-02 13:10:36.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:36 np0005465987 ovn_controller[129172]: 2025-10-02T13:10:36Z|00829|binding|INFO|Setting lport 994b5877-4440-4c86-be4e-fd60053ca206 ovn-installed in OVS
Oct  2 09:10:36 np0005465987 nova_compute[230713]: 2025-10-02 13:10:36.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:36.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:37 np0005465987 nova_compute[230713]: 2025-10-02 13:10:37.420 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410637.4203184, 6087853f-327c-46b2-baa6-c24854d98b97 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:10:37 np0005465987 nova_compute[230713]: 2025-10-02 13:10:37.421 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] VM Started (Lifecycle Event)#033[00m
Oct  2 09:10:37 np0005465987 nova_compute[230713]: 2025-10-02 13:10:37.442 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:10:37 np0005465987 nova_compute[230713]: 2025-10-02 13:10:37.848 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410637.848476, 6087853f-327c-46b2-baa6-c24854d98b97 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:10:37 np0005465987 nova_compute[230713]: 2025-10-02 13:10:37.849 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:10:37 np0005465987 nova_compute[230713]: 2025-10-02 13:10:37.910 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:10:37 np0005465987 nova_compute[230713]: 2025-10-02 13:10:37.914 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:10:37 np0005465987 nova_compute[230713]: 2025-10-02 13:10:37.952 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct  2 09:10:37 np0005465987 nova_compute[230713]: 2025-10-02 13:10:37.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:10:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:38.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:10:38 np0005465987 nova_compute[230713]: 2025-10-02 13:10:38.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:38 np0005465987 ovn_controller[129172]: 2025-10-02T13:10:38Z|00830|binding|INFO|Claiming lport 994b5877-4440-4c86-be4e-fd60053ca206 for this chassis.
Oct  2 09:10:38 np0005465987 ovn_controller[129172]: 2025-10-02T13:10:38Z|00831|binding|INFO|994b5877-4440-4c86-be4e-fd60053ca206: Claiming fa:16:3e:80:bc:0d 10.100.0.7
Oct  2 09:10:38 np0005465987 ovn_controller[129172]: 2025-10-02T13:10:38Z|00832|binding|INFO|Setting lport 994b5877-4440-4c86-be4e-fd60053ca206 up in Southbound
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.808 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:bc:0d 10.100.0.7'], port_security=['fa:16:3e:80:bc:0d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6087853f-327c-46b2-baa6-c24854d98b97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75dc16ae-c86f-409d-a774-fc174172fb9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '11', 'neutron:security_group_ids': '46f4bf8b-f955-4599-aca9-099d0db9d91d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22de97a1-eaec-475f-a946-6832b4d8dec3, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=994b5877-4440-4c86-be4e-fd60053ca206) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.811 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 994b5877-4440-4c86-be4e-fd60053ca206 in datapath 75dc16ae-c86f-409d-a774-fc174172fb9a bound to our chassis#033[00m
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.813 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 75dc16ae-c86f-409d-a774-fc174172fb9a#033[00m
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.829 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[de393d44-d056-4c4a-b425-ba351b028cf6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.830 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap75dc16ae-c1 in ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.833 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap75dc16ae-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.833 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dec79c5b-d45a-4141-8097-7bf66503d877]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.835 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a597e0-44c6-4ee6-a598-7249d41e01b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.853 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[e40f3ec8-6e9b-49a0-a562-8485716592b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.879 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d5436470-1d3a-448f-baec-f2d7038c7e4d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.918 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[49feb409-c58e-4986-8af8-66cd706639ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:38 np0005465987 systemd-udevd[313652]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:10:38 np0005465987 NetworkManager[44910]: <info>  [1759410638.9300] manager: (tap75dc16ae-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/389)
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.928 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[be911111-6c32-43ad-bf70-1db326e883b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:38 np0005465987 nova_compute[230713]: 2025-10-02 13:10:38.940 2 INFO nova.compute.manager [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Post operation of migration started#033[00m
Oct  2 09:10:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:38.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.967 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a03d04e5-4b10-4eae-905b-5c323da44923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:38.971 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe7299d-2fa4-4776-a9fa-7d509b69383d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:38 np0005465987 NetworkManager[44910]: <info>  [1759410638.9966] device (tap75dc16ae-c0): carrier: link connected
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.006 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[6d04dbc3-43c9-4172-bba2-eecdc8177832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.028 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d81306-9f89-4c5f-99c2-d7059e644b9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75dc16ae-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:bb:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870757, 'reachable_time': 42266, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313725, 'error': None, 'target': 'ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.045 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[62dc844f-5148-40e1-80c3-8798cc909407]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:bb70'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870757, 'tstamp': 870757}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313726, 'error': None, 'target': 'ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.064 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2a9a8dfb-fb2d-4de6-912e-b816e7f2ac51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75dc16ae-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:bb:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870757, 'reachable_time': 42266, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313727, 'error': None, 'target': 'ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.098 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0421d5cf-0a49-40f0-a957-b41f2b09cfe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.170 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7246c2e1-f370-44f8-afaa-33692048fde6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.172 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75dc16ae-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.172 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.173 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75dc16ae-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:39 np0005465987 nova_compute[230713]: 2025-10-02 13:10:39.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:39 np0005465987 kernel: tap75dc16ae-c0: entered promiscuous mode
Oct  2 09:10:39 np0005465987 nova_compute[230713]: 2025-10-02 13:10:39.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:39 np0005465987 NetworkManager[44910]: <info>  [1759410639.1799] manager: (tap75dc16ae-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.180 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap75dc16ae-c0, col_values=(('external_ids', {'iface-id': 'e0f330cf-7f46-48ea-8216-823cfdc753a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:39 np0005465987 nova_compute[230713]: 2025-10-02 13:10:39.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:39 np0005465987 ovn_controller[129172]: 2025-10-02T13:10:39Z|00833|binding|INFO|Releasing lport e0f330cf-7f46-48ea-8216-823cfdc753a5 from this chassis (sb_readonly=0)
Oct  2 09:10:39 np0005465987 nova_compute[230713]: 2025-10-02 13:10:39.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.207 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/75dc16ae-c86f-409d-a774-fc174172fb9a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/75dc16ae-c86f-409d-a774-fc174172fb9a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:10:39 np0005465987 nova_compute[230713]: 2025-10-02 13:10:39.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.208 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[68193b3b-d71f-40f1-8708-a7807bab1103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.209 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-75dc16ae-c86f-409d-a774-fc174172fb9a
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/75dc16ae-c86f-409d-a774-fc174172fb9a.pid.haproxy
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 75dc16ae-c86f-409d-a774-fc174172fb9a
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:10:39 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:39.210 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a', 'env', 'PROCESS_TAG=haproxy-75dc16ae-c86f-409d-a774-fc174172fb9a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/75dc16ae-c86f-409d-a774-fc174172fb9a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:10:39 np0005465987 podman[313760]: 2025-10-02 13:10:39.653197667 +0000 UTC m=+0.066347889 container create d360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:10:39 np0005465987 systemd[1]: Started libpod-conmon-d360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf.scope.
Oct  2 09:10:39 np0005465987 podman[313760]: 2025-10-02 13:10:39.621167119 +0000 UTC m=+0.034317421 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:10:39 np0005465987 systemd[1]: Started libcrun container.
Oct  2 09:10:39 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7c3d48064bd1e36acec7176662afb2ac7c20e59f60df289cb7ecb355c8d74d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:10:39 np0005465987 podman[313760]: 2025-10-02 13:10:39.761527944 +0000 UTC m=+0.174678176 container init d360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 09:10:39 np0005465987 podman[313760]: 2025-10-02 13:10:39.767029783 +0000 UTC m=+0.180179995 container start d360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:10:39 np0005465987 neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a[313775]: [NOTICE]   (313779) : New worker (313781) forked
Oct  2 09:10:39 np0005465987 neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a[313775]: [NOTICE]   (313779) : Loading success.
Oct  2 09:10:39 np0005465987 nova_compute[230713]: 2025-10-02 13:10:39.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:39 np0005465987 nova_compute[230713]: 2025-10-02 13:10:39.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:10:39 np0005465987 nova_compute[230713]: 2025-10-02 13:10:39.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:10:39 np0005465987 nova_compute[230713]: 2025-10-02 13:10:39.998 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:10:39 np0005465987 nova_compute[230713]: 2025-10-02 13:10:39.999 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:40 np0005465987 nova_compute[230713]: 2025-10-02 13:10:40.094 2 DEBUG oslo_concurrency.lockutils [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquiring lock "refresh_cache-6087853f-327c-46b2-baa6-c24854d98b97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:40 np0005465987 nova_compute[230713]: 2025-10-02 13:10:40.094 2 DEBUG oslo_concurrency.lockutils [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquired lock "refresh_cache-6087853f-327c-46b2-baa6-c24854d98b97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:40 np0005465987 nova_compute[230713]: 2025-10-02 13:10:40.095 2 DEBUG nova.network.neutron [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:10:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:40.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:40.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:41 np0005465987 nova_compute[230713]: 2025-10-02 13:10:41.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:42 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:10:42 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1909700081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:10:42 np0005465987 nova_compute[230713]: 2025-10-02 13:10:42.291 2 DEBUG nova.network.neutron [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Updating instance_info_cache with network_info: [{"id": "994b5877-4440-4c86-be4e-fd60053ca206", "address": "fa:16:3e:80:bc:0d", "network": {"id": "75dc16ae-c86f-409d-a774-fc174172fb9a", "bridge": "br-int", "label": "tempest-network-smoke--1179677419", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994b5877-44", "ovs_interfaceid": "994b5877-4440-4c86-be4e-fd60053ca206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:42 np0005465987 nova_compute[230713]: 2025-10-02 13:10:42.316 2 DEBUG oslo_concurrency.lockutils [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Releasing lock "refresh_cache-6087853f-327c-46b2-baa6-c24854d98b97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:42 np0005465987 nova_compute[230713]: 2025-10-02 13:10:42.333 2 DEBUG oslo_concurrency.lockutils [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:42 np0005465987 nova_compute[230713]: 2025-10-02 13:10:42.333 2 DEBUG oslo_concurrency.lockutils [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:42 np0005465987 nova_compute[230713]: 2025-10-02 13:10:42.334 2 DEBUG oslo_concurrency.lockutils [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:42 np0005465987 nova_compute[230713]: 2025-10-02 13:10:42.339 2 INFO nova.virt.libvirt.driver [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct  2 09:10:42 np0005465987 virtqemud[230442]: Domain id=93 name='instance-000000d3' uuid=6087853f-327c-46b2-baa6-c24854d98b97 is tainted: custom-monitor
Oct  2 09:10:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:10:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:42.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:10:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:42.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:42 np0005465987 nova_compute[230713]: 2025-10-02 13:10:42.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:42 np0005465987 nova_compute[230713]: 2025-10-02 13:10:42.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:43 np0005465987 nova_compute[230713]: 2025-10-02 13:10:43.347 2 INFO nova.virt.libvirt.driver [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct  2 09:10:43 np0005465987 nova_compute[230713]: 2025-10-02 13:10:43.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:43 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:43.501 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:44 np0005465987 nova_compute[230713]: 2025-10-02 13:10:44.353 2 INFO nova.virt.libvirt.driver [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct  2 09:10:44 np0005465987 nova_compute[230713]: 2025-10-02 13:10:44.357 2 DEBUG nova.compute.manager [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:10:44 np0005465987 nova_compute[230713]: 2025-10-02 13:10:44.377 2 DEBUG nova.objects.instance [None req-28a50335-b302-408e-9e69-1cdb07e8615e 67f7dad14d084636b43800f8e31de2b3 6c41a7573a694f9fb3ae10771b2fa09f - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 09:10:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:10:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:44.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:10:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:44.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:44 np0005465987 nova_compute[230713]: 2025-10-02 13:10:44.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:44 np0005465987 nova_compute[230713]: 2025-10-02 13:10:44.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:44 np0005465987 nova_compute[230713]: 2025-10-02 13:10:44.994 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:44 np0005465987 nova_compute[230713]: 2025-10-02 13:10:44.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:44 np0005465987 nova_compute[230713]: 2025-10-02 13:10:44.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:44 np0005465987 nova_compute[230713]: 2025-10-02 13:10:44.996 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:10:44 np0005465987 nova_compute[230713]: 2025-10-02 13:10:44.996 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:10:45 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1665672065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.439 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.501 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000d3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.501 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000d3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.659 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.660 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4112MB free_disk=20.942699432373047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.660 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.661 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.705 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Applying migration context for instance 6087853f-327c-46b2-baa6-c24854d98b97 as it has an incoming, in-progress migration f09d8e34-4c5a-4003-aebf-40ce1a8562af. Migration status is running _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.706 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.719 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.745 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 6087853f-327c-46b2-baa6-c24854d98b97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.746 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.746 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.761 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.775 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.775 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.788 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.813 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:10:45 np0005465987 nova_compute[230713]: 2025-10-02 13:10:45.847 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:10:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2737764232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:10:46 np0005465987 nova_compute[230713]: 2025-10-02 13:10:46.267 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:46 np0005465987 nova_compute[230713]: 2025-10-02 13:10:46.273 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:10:46 np0005465987 nova_compute[230713]: 2025-10-02 13:10:46.293 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:10:46 np0005465987 nova_compute[230713]: 2025-10-02 13:10:46.316 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:10:46 np0005465987 nova_compute[230713]: 2025-10-02 13:10:46.317 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:10:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:46.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:10:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:10:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:46.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:10:47 np0005465987 nova_compute[230713]: 2025-10-02 13:10:47.209 2 INFO nova.compute.manager [None req-266b7b7c-233d-4778-bc69-55f3ee65439c ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Get console output#033[00m
Oct  2 09:10:47 np0005465987 nova_compute[230713]: 2025-10-02 13:10:47.215 14392 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:10:47 np0005465987 nova_compute[230713]: 2025-10-02 13:10:47.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.305 2 DEBUG nova.compute.manager [req-1cf62d64-b599-4864-91b5-a5c404a8554a req-826a50f7-9fab-4a1c-b7a1-057e4643a40b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Received event network-changed-994b5877-4440-4c86-be4e-fd60053ca206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.305 2 DEBUG nova.compute.manager [req-1cf62d64-b599-4864-91b5-a5c404a8554a req-826a50f7-9fab-4a1c-b7a1-057e4643a40b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Refreshing instance network info cache due to event network-changed-994b5877-4440-4c86-be4e-fd60053ca206. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.305 2 DEBUG oslo_concurrency.lockutils [req-1cf62d64-b599-4864-91b5-a5c404a8554a req-826a50f7-9fab-4a1c-b7a1-057e4643a40b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-6087853f-327c-46b2-baa6-c24854d98b97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.306 2 DEBUG oslo_concurrency.lockutils [req-1cf62d64-b599-4864-91b5-a5c404a8554a req-826a50f7-9fab-4a1c-b7a1-057e4643a40b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-6087853f-327c-46b2-baa6-c24854d98b97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.306 2 DEBUG nova.network.neutron [req-1cf62d64-b599-4864-91b5-a5c404a8554a req-826a50f7-9fab-4a1c-b7a1-057e4643a40b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Refreshing network info cache for port 994b5877-4440-4c86-be4e-fd60053ca206 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.341 2 DEBUG oslo_concurrency.lockutils [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "6087853f-327c-46b2-baa6-c24854d98b97" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.341 2 DEBUG oslo_concurrency.lockutils [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6087853f-327c-46b2-baa6-c24854d98b97" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.341 2 DEBUG oslo_concurrency.lockutils [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "6087853f-327c-46b2-baa6-c24854d98b97-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.342 2 DEBUG oslo_concurrency.lockutils [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6087853f-327c-46b2-baa6-c24854d98b97-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.342 2 DEBUG oslo_concurrency.lockutils [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6087853f-327c-46b2-baa6-c24854d98b97-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.343 2 INFO nova.compute.manager [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Terminating instance#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.344 2 DEBUG nova.compute.manager [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:10:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:48.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:48 np0005465987 kernel: tap994b5877-44 (unregistering): left promiscuous mode
Oct  2 09:10:48 np0005465987 NetworkManager[44910]: <info>  [1759410648.4193] device (tap994b5877-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:10:48 np0005465987 ovn_controller[129172]: 2025-10-02T13:10:48Z|00834|binding|INFO|Releasing lport 994b5877-4440-4c86-be4e-fd60053ca206 from this chassis (sb_readonly=0)
Oct  2 09:10:48 np0005465987 ovn_controller[129172]: 2025-10-02T13:10:48Z|00835|binding|INFO|Setting lport 994b5877-4440-4c86-be4e-fd60053ca206 down in Southbound
Oct  2 09:10:48 np0005465987 ovn_controller[129172]: 2025-10-02T13:10:48Z|00836|binding|INFO|Removing iface tap994b5877-44 ovn-installed in OVS
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.478 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:bc:0d 10.100.0.7'], port_security=['fa:16:3e:80:bc:0d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6087853f-327c-46b2-baa6-c24854d98b97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75dc16ae-c86f-409d-a774-fc174172fb9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '13', 'neutron:security_group_ids': '46f4bf8b-f955-4599-aca9-099d0db9d91d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22de97a1-eaec-475f-a946-6832b4d8dec3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=994b5877-4440-4c86-be4e-fd60053ca206) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.480 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 994b5877-4440-4c86-be4e-fd60053ca206 in datapath 75dc16ae-c86f-409d-a774-fc174172fb9a unbound from our chassis#033[00m
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.481 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 75dc16ae-c86f-409d-a774-fc174172fb9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.482 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d15dfd66-da75-45b0-b2e5-8c75057e426b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.482 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a namespace which is not needed anymore#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:48 np0005465987 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000d3.scope: Deactivated successfully.
Oct  2 09:10:48 np0005465987 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000d3.scope: Consumed 1.870s CPU time.
Oct  2 09:10:48 np0005465987 systemd-machined[188335]: Machine qemu-93-instance-000000d3 terminated.
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.582 2 INFO nova.virt.libvirt.driver [-] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Instance destroyed successfully.#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.583 2 DEBUG nova.objects.instance [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'resources' on Instance uuid 6087853f-327c-46b2-baa6-c24854d98b97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:10:48 np0005465987 neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a[313775]: [NOTICE]   (313779) : haproxy version is 2.8.14-c23fe91
Oct  2 09:10:48 np0005465987 neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a[313775]: [NOTICE]   (313779) : path to executable is /usr/sbin/haproxy
Oct  2 09:10:48 np0005465987 neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a[313775]: [WARNING]  (313779) : Exiting Master process...
Oct  2 09:10:48 np0005465987 neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a[313775]: [WARNING]  (313779) : Exiting Master process...
Oct  2 09:10:48 np0005465987 neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a[313775]: [ALERT]    (313779) : Current worker (313781) exited with code 143 (Terminated)
Oct  2 09:10:48 np0005465987 neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a[313775]: [WARNING]  (313779) : All workers exited. Exiting... (0)
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.608 2 DEBUG nova.virt.libvirt.vif [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:09:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1574812382',display_name='tempest-TestNetworkAdvancedServerOps-server-1574812382',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1574812382',id=211,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCSYCMEZ6q/lvfSzWU/Eg4JQitDnhXCQauYJ5tidoiUmnnwGO4dpjXw3f4qFvlN8HpQGCKccR0Wgvw5uEOFmQ9+8YrZuneTD88dhlCtVTonVS+twysO7TRzsLhP5B6Zj9g==',key_name='tempest-TestNetworkAdvancedServerOps-842167042',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:10:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-bs5w7gov',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:10:44Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=6087853f-327c-46b2-baa6-c24854d98b97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "994b5877-4440-4c86-be4e-fd60053ca206", "address": "fa:16:3e:80:bc:0d", "network": {"id": "75dc16ae-c86f-409d-a774-fc174172fb9a", "bridge": "br-int", "label": "tempest-network-smoke--1179677419", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994b5877-44", "ovs_interfaceid": "994b5877-4440-4c86-be4e-fd60053ca206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.608 2 DEBUG nova.network.os_vif_util [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "994b5877-4440-4c86-be4e-fd60053ca206", "address": "fa:16:3e:80:bc:0d", "network": {"id": "75dc16ae-c86f-409d-a774-fc174172fb9a", "bridge": "br-int", "label": "tempest-network-smoke--1179677419", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994b5877-44", "ovs_interfaceid": "994b5877-4440-4c86-be4e-fd60053ca206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:10:48 np0005465987 systemd[1]: libpod-d360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf.scope: Deactivated successfully.
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.610 2 DEBUG nova.network.os_vif_util [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:bc:0d,bridge_name='br-int',has_traffic_filtering=True,id=994b5877-4440-4c86-be4e-fd60053ca206,network=Network(75dc16ae-c86f-409d-a774-fc174172fb9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994b5877-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.610 2 DEBUG os_vif [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:bc:0d,bridge_name='br-int',has_traffic_filtering=True,id=994b5877-4440-4c86-be4e-fd60053ca206,network=Network(75dc16ae-c86f-409d-a774-fc174172fb9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994b5877-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap994b5877-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:48 np0005465987 podman[313860]: 2025-10-02 13:10:48.617013768 +0000 UTC m=+0.046630035 container died d360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.616 2 INFO os_vif [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:bc:0d,bridge_name='br-int',has_traffic_filtering=True,id=994b5877-4440-4c86-be4e-fd60053ca206,network=Network(75dc16ae-c86f-409d-a774-fc174172fb9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994b5877-44')#033[00m
Oct  2 09:10:48 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf-userdata-shm.mount: Deactivated successfully.
Oct  2 09:10:48 np0005465987 systemd[1]: var-lib-containers-storage-overlay-8d7c3d48064bd1e36acec7176662afb2ac7c20e59f60df289cb7ecb355c8d74d-merged.mount: Deactivated successfully.
Oct  2 09:10:48 np0005465987 podman[313860]: 2025-10-02 13:10:48.653656952 +0000 UTC m=+0.083273209 container cleanup d360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:10:48 np0005465987 systemd[1]: libpod-conmon-d360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf.scope: Deactivated successfully.
Oct  2 09:10:48 np0005465987 podman[313914]: 2025-10-02 13:10:48.776644236 +0000 UTC m=+0.102725396 container remove d360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.782 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[64c365fd-95bf-4c87-95ae-66f78a73b51b]: (4, ('Thu Oct  2 01:10:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a (d360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf)\nd360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf\nThu Oct  2 01:10:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a (d360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf)\nd360dc3dbfc7250f4f8d02651a03acbc210acb0cc65e0e4d9b7555cb453856bf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.784 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[78f5cdb0-9fe0-4cd6-891e-6b47234ed243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.785 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75dc16ae-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:48 np0005465987 kernel: tap75dc16ae-c0: left promiscuous mode
Oct  2 09:10:48 np0005465987 nova_compute[230713]: 2025-10-02 13:10:48.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.802 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e98fd5e9-08e4-4cd4-b858-d2361341402b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.826 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[39fc58c9-5f7b-49c8-99d7-0fdf55a82198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.827 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[71b709d8-23ba-442e-862e-415c3abe84fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.841 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad540e8-f2ac-4057-b4e3-782f11736461]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870748, 'reachable_time': 27031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313930, 'error': None, 'target': 'ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.843 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-75dc16ae-c86f-409d-a774-fc174172fb9a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:10:48 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:10:48.843 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[434fcf24-f34e-4a2f-98b9-989c0875b7f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:48 np0005465987 systemd[1]: run-netns-ovnmeta\x2d75dc16ae\x2dc86f\x2d409d\x2da774\x2dfc174172fb9a.mount: Deactivated successfully.
Oct  2 09:10:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:48.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:49 np0005465987 nova_compute[230713]: 2025-10-02 13:10:49.340 2 INFO nova.virt.libvirt.driver [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Deleting instance files /var/lib/nova/instances/6087853f-327c-46b2-baa6-c24854d98b97_del#033[00m
Oct  2 09:10:49 np0005465987 nova_compute[230713]: 2025-10-02 13:10:49.342 2 INFO nova.virt.libvirt.driver [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Deletion of /var/lib/nova/instances/6087853f-327c-46b2-baa6-c24854d98b97_del complete#033[00m
Oct  2 09:10:49 np0005465987 nova_compute[230713]: 2025-10-02 13:10:49.384 2 INFO nova.compute.manager [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Took 1.04 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:10:49 np0005465987 nova_compute[230713]: 2025-10-02 13:10:49.385 2 DEBUG oslo.service.loopingcall [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:10:49 np0005465987 nova_compute[230713]: 2025-10-02 13:10:49.391 2 DEBUG nova.compute.manager [-] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:10:49 np0005465987 nova_compute[230713]: 2025-10-02 13:10:49.392 2 DEBUG nova.network.neutron [-] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.317 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.318 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.330 2 DEBUG nova.network.neutron [req-1cf62d64-b599-4864-91b5-a5c404a8554a req-826a50f7-9fab-4a1c-b7a1-057e4643a40b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Updated VIF entry in instance network info cache for port 994b5877-4440-4c86-be4e-fd60053ca206. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.330 2 DEBUG nova.network.neutron [req-1cf62d64-b599-4864-91b5-a5c404a8554a req-826a50f7-9fab-4a1c-b7a1-057e4643a40b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Updating instance_info_cache with network_info: [{"id": "994b5877-4440-4c86-be4e-fd60053ca206", "address": "fa:16:3e:80:bc:0d", "network": {"id": "75dc16ae-c86f-409d-a774-fc174172fb9a", "bridge": "br-int", "label": "tempest-network-smoke--1179677419", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994b5877-44", "ovs_interfaceid": "994b5877-4440-4c86-be4e-fd60053ca206", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.353 2 DEBUG oslo_concurrency.lockutils [req-1cf62d64-b599-4864-91b5-a5c404a8554a req-826a50f7-9fab-4a1c-b7a1-057e4643a40b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-6087853f-327c-46b2-baa6-c24854d98b97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.404 2 DEBUG nova.compute.manager [req-8558aeb1-e72a-4cfd-a18a-21c9ed6e7357 req-585d8c87-f9b9-4a19-9e01-df5189a1895c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Received event network-vif-unplugged-994b5877-4440-4c86-be4e-fd60053ca206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.404 2 DEBUG oslo_concurrency.lockutils [req-8558aeb1-e72a-4cfd-a18a-21c9ed6e7357 req-585d8c87-f9b9-4a19-9e01-df5189a1895c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6087853f-327c-46b2-baa6-c24854d98b97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.405 2 DEBUG oslo_concurrency.lockutils [req-8558aeb1-e72a-4cfd-a18a-21c9ed6e7357 req-585d8c87-f9b9-4a19-9e01-df5189a1895c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6087853f-327c-46b2-baa6-c24854d98b97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.405 2 DEBUG oslo_concurrency.lockutils [req-8558aeb1-e72a-4cfd-a18a-21c9ed6e7357 req-585d8c87-f9b9-4a19-9e01-df5189a1895c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6087853f-327c-46b2-baa6-c24854d98b97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.405 2 DEBUG nova.compute.manager [req-8558aeb1-e72a-4cfd-a18a-21c9ed6e7357 req-585d8c87-f9b9-4a19-9e01-df5189a1895c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] No waiting events found dispatching network-vif-unplugged-994b5877-4440-4c86-be4e-fd60053ca206 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.405 2 DEBUG nova.compute.manager [req-8558aeb1-e72a-4cfd-a18a-21c9ed6e7357 req-585d8c87-f9b9-4a19-9e01-df5189a1895c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Received event network-vif-unplugged-994b5877-4440-4c86-be4e-fd60053ca206 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.406 2 DEBUG nova.compute.manager [req-8558aeb1-e72a-4cfd-a18a-21c9ed6e7357 req-585d8c87-f9b9-4a19-9e01-df5189a1895c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Received event network-vif-plugged-994b5877-4440-4c86-be4e-fd60053ca206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.406 2 DEBUG oslo_concurrency.lockutils [req-8558aeb1-e72a-4cfd-a18a-21c9ed6e7357 req-585d8c87-f9b9-4a19-9e01-df5189a1895c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "6087853f-327c-46b2-baa6-c24854d98b97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.406 2 DEBUG oslo_concurrency.lockutils [req-8558aeb1-e72a-4cfd-a18a-21c9ed6e7357 req-585d8c87-f9b9-4a19-9e01-df5189a1895c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6087853f-327c-46b2-baa6-c24854d98b97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.406 2 DEBUG oslo_concurrency.lockutils [req-8558aeb1-e72a-4cfd-a18a-21c9ed6e7357 req-585d8c87-f9b9-4a19-9e01-df5189a1895c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "6087853f-327c-46b2-baa6-c24854d98b97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.406 2 DEBUG nova.compute.manager [req-8558aeb1-e72a-4cfd-a18a-21c9ed6e7357 req-585d8c87-f9b9-4a19-9e01-df5189a1895c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] No waiting events found dispatching network-vif-plugged-994b5877-4440-4c86-be4e-fd60053ca206 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.406 2 WARNING nova.compute.manager [req-8558aeb1-e72a-4cfd-a18a-21c9ed6e7357 req-585d8c87-f9b9-4a19-9e01-df5189a1895c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Received unexpected event network-vif-plugged-994b5877-4440-4c86-be4e-fd60053ca206 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:10:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:50.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.486 2 DEBUG nova.network.neutron [-] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.514 2 INFO nova.compute.manager [-] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Took 1.12 seconds to deallocate network for instance.#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.561 2 DEBUG oslo_concurrency.lockutils [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.562 2 DEBUG oslo_concurrency.lockutils [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.576 2 DEBUG nova.compute.manager [req-a67c4521-4ea8-48df-a20b-1df26d603b98 req-801501e0-4b1c-428e-8639-e0a0ac33dcd2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Received event network-vif-deleted-994b5877-4440-4c86-be4e-fd60053ca206 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:50 np0005465987 nova_compute[230713]: 2025-10-02 13:10:50.615 2 DEBUG oslo_concurrency.processutils [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:50 np0005465987 podman[313943]: 2025-10-02 13:10:50.850082561 +0000 UTC m=+0.067002888 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:10:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:10:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:50.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:10:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:10:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1642908140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:10:51 np0005465987 nova_compute[230713]: 2025-10-02 13:10:51.095 2 DEBUG oslo_concurrency.processutils [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:51 np0005465987 nova_compute[230713]: 2025-10-02 13:10:51.102 2 DEBUG nova.compute.provider_tree [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:10:51 np0005465987 nova_compute[230713]: 2025-10-02 13:10:51.122 2 DEBUG nova.scheduler.client.report [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:10:51 np0005465987 nova_compute[230713]: 2025-10-02 13:10:51.143 2 DEBUG oslo_concurrency.lockutils [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:51 np0005465987 nova_compute[230713]: 2025-10-02 13:10:51.170 2 INFO nova.scheduler.client.report [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Deleted allocations for instance 6087853f-327c-46b2-baa6-c24854d98b97#033[00m
Oct  2 09:10:51 np0005465987 nova_compute[230713]: 2025-10-02 13:10:51.225 2 DEBUG oslo_concurrency.lockutils [None req-55992032-d574-426a-9489-90d481627f89 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "6087853f-327c-46b2-baa6-c24854d98b97" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:52.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:52.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:53 np0005465987 nova_compute[230713]: 2025-10-02 13:10:53.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:53 np0005465987 nova_compute[230713]: 2025-10-02 13:10:53.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:54 np0005465987 nova_compute[230713]: 2025-10-02 13:10:54.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:54.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:54 np0005465987 nova_compute[230713]: 2025-10-02 13:10:54.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:54.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:55 np0005465987 podman[313975]: 2025-10-02 13:10:55.854670337 +0000 UTC m=+0.063737919 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct  2 09:10:55 np0005465987 podman[313974]: 2025-10-02 13:10:55.862617363 +0000 UTC m=+0.084263957 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:10:55 np0005465987 podman[313976]: 2025-10-02 13:10:55.866637511 +0000 UTC m=+0.083319060 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:10:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:10:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:56.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:10:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:56.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:10:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828257001' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:10:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:10:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828257001' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:10:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:10:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:58.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:10:58 np0005465987 nova_compute[230713]: 2025-10-02 13:10:58.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:58 np0005465987 nova_compute[230713]: 2025-10-02 13:10:58.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:10:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:58.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:00.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:11:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:00.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:11:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:11:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:02.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:11:03 np0005465987 nova_compute[230713]: 2025-10-02 13:11:03.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:03 np0005465987 nova_compute[230713]: 2025-10-02 13:11:03.579 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410648.5777674, 6087853f-327c-46b2-baa6-c24854d98b97 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:11:03 np0005465987 nova_compute[230713]: 2025-10-02 13:11:03.579 2 INFO nova.compute.manager [-] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:11:03 np0005465987 nova_compute[230713]: 2025-10-02 13:11:03.604 2 DEBUG nova.compute.manager [None req-7a752411-c075-4691-af3b-43e9afaf5734 - - - - - -] [instance: 6087853f-327c-46b2-baa6-c24854d98b97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:03 np0005465987 nova_compute[230713]: 2025-10-02 13:11:03.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:03 np0005465987 nova_compute[230713]: 2025-10-02 13:11:03.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:04.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:11:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:04.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:11:05 np0005465987 nova_compute[230713]: 2025-10-02 13:11:05.766 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:05 np0005465987 nova_compute[230713]: 2025-10-02 13:11:05.766 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:05 np0005465987 nova_compute[230713]: 2025-10-02 13:11:05.864 2 DEBUG nova.compute.manager [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:11:05 np0005465987 nova_compute[230713]: 2025-10-02 13:11:05.992 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:05 np0005465987 nova_compute[230713]: 2025-10-02 13:11:05.993 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:06 np0005465987 nova_compute[230713]: 2025-10-02 13:11:06.001 2 DEBUG nova.virt.hardware [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:11:06 np0005465987 nova_compute[230713]: 2025-10-02 13:11:06.001 2 INFO nova.compute.claims [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:11:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:06 np0005465987 nova_compute[230713]: 2025-10-02 13:11:06.198 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:06.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:11:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2083019403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:11:06 np0005465987 nova_compute[230713]: 2025-10-02 13:11:06.616 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:06 np0005465987 nova_compute[230713]: 2025-10-02 13:11:06.622 2 DEBUG nova.compute.provider_tree [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:11:06 np0005465987 nova_compute[230713]: 2025-10-02 13:11:06.641 2 DEBUG nova.scheduler.client.report [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:11:06 np0005465987 nova_compute[230713]: 2025-10-02 13:11:06.664 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:06 np0005465987 nova_compute[230713]: 2025-10-02 13:11:06.665 2 DEBUG nova.compute.manager [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:11:06 np0005465987 nova_compute[230713]: 2025-10-02 13:11:06.785 2 DEBUG nova.compute.manager [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:11:06 np0005465987 nova_compute[230713]: 2025-10-02 13:11:06.785 2 DEBUG nova.network.neutron [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:11:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e417 e417: 3 total, 3 up, 3 in
Oct  2 09:11:06 np0005465987 nova_compute[230713]: 2025-10-02 13:11:06.862 2 INFO nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:11:06 np0005465987 nova_compute[230713]: 2025-10-02 13:11:06.965 2 DEBUG nova.policy [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ffe4d737e4414fb3a3e358f8ca3f3e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '08e102ae48244af2ab448a2e1ff757df', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:11:06 np0005465987 nova_compute[230713]: 2025-10-02 13:11:06.968 2 DEBUG nova.compute.manager [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:11:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:06.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.226 2 DEBUG nova.compute.manager [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.227 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.228 2 INFO nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Creating image(s)#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.255 2 DEBUG nova.storage.rbd_utils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.282 2 DEBUG nova.storage.rbd_utils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.310 2 DEBUG nova.storage.rbd_utils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.315 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.379 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.380 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.380 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.381 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.411 2 DEBUG nova.storage.rbd_utils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.416 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.707 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.782 2 DEBUG nova.network.neutron [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Successfully created port: 16a02f62-168d-4c03-b86a-4e634bf4c15d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.790 2 DEBUG nova.storage.rbd_utils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] resizing rbd image ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.925 2 DEBUG nova.objects.instance [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'migration_context' on Instance uuid ad7157fd-646c-48d6-8c7e-5ee0f80efd5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.944 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.944 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Ensure instance console log exists: /var/lib/nova/instances/ad7157fd-646c-48d6-8c7e-5ee0f80efd5f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.945 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.946 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:07 np0005465987 nova_compute[230713]: 2025-10-02 13:11:07.946 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:08.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:08 np0005465987 nova_compute[230713]: 2025-10-02 13:11:08.470 2 DEBUG nova.network.neutron [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Successfully updated port: 16a02f62-168d-4c03-b86a-4e634bf4c15d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:11:08 np0005465987 nova_compute[230713]: 2025-10-02 13:11:08.515 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:11:08 np0005465987 nova_compute[230713]: 2025-10-02 13:11:08.516 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:11:08 np0005465987 nova_compute[230713]: 2025-10-02 13:11:08.516 2 DEBUG nova.network.neutron [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:11:08 np0005465987 nova_compute[230713]: 2025-10-02 13:11:08.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:08 np0005465987 nova_compute[230713]: 2025-10-02 13:11:08.575 2 DEBUG nova.compute.manager [req-0f649687-0418-47c3-abf3-6916bb14de91 req-ed8d3655-9986-4710-80e2-0b72b7738f81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Received event network-changed-16a02f62-168d-4c03-b86a-4e634bf4c15d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:08 np0005465987 nova_compute[230713]: 2025-10-02 13:11:08.576 2 DEBUG nova.compute.manager [req-0f649687-0418-47c3-abf3-6916bb14de91 req-ed8d3655-9986-4710-80e2-0b72b7738f81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Refreshing instance network info cache due to event network-changed-16a02f62-168d-4c03-b86a-4e634bf4c15d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:11:08 np0005465987 nova_compute[230713]: 2025-10-02 13:11:08.576 2 DEBUG oslo_concurrency.lockutils [req-0f649687-0418-47c3-abf3-6916bb14de91 req-ed8d3655-9986-4710-80e2-0b72b7738f81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:11:08 np0005465987 nova_compute[230713]: 2025-10-02 13:11:08.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:08 np0005465987 nova_compute[230713]: 2025-10-02 13:11:08.673 2 DEBUG nova.network.neutron [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:11:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:11:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:11:08 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:11:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:08.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.488 2 DEBUG nova.network.neutron [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Updating instance_info_cache with network_info: [{"id": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "address": "fa:16:3e:cb:88:85", "network": {"id": "c9ec7d8f-d4ca-4f72-91c4-eabed5abb534", "bridge": "br-int", "label": "tempest-network-smoke--185090166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16a02f62-16", "ovs_interfaceid": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.512 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.512 2 DEBUG nova.compute.manager [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Instance network_info: |[{"id": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "address": "fa:16:3e:cb:88:85", "network": {"id": "c9ec7d8f-d4ca-4f72-91c4-eabed5abb534", "bridge": "br-int", "label": "tempest-network-smoke--185090166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16a02f62-16", "ovs_interfaceid": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.513 2 DEBUG oslo_concurrency.lockutils [req-0f649687-0418-47c3-abf3-6916bb14de91 req-ed8d3655-9986-4710-80e2-0b72b7738f81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.513 2 DEBUG nova.network.neutron [req-0f649687-0418-47c3-abf3-6916bb14de91 req-ed8d3655-9986-4710-80e2-0b72b7738f81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Refreshing network info cache for port 16a02f62-168d-4c03-b86a-4e634bf4c15d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.516 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Start _get_guest_xml network_info=[{"id": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "address": "fa:16:3e:cb:88:85", "network": {"id": "c9ec7d8f-d4ca-4f72-91c4-eabed5abb534", "bridge": "br-int", "label": "tempest-network-smoke--185090166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16a02f62-16", "ovs_interfaceid": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.521 2 WARNING nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.526 2 DEBUG nova.virt.libvirt.host [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.527 2 DEBUG nova.virt.libvirt.host [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.530 2 DEBUG nova.virt.libvirt.host [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.530 2 DEBUG nova.virt.libvirt.host [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.531 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.531 2 DEBUG nova.virt.hardware [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.532 2 DEBUG nova.virt.hardware [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.532 2 DEBUG nova.virt.hardware [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.532 2 DEBUG nova.virt.hardware [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.532 2 DEBUG nova.virt.hardware [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.533 2 DEBUG nova.virt.hardware [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.533 2 DEBUG nova.virt.hardware [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.533 2 DEBUG nova.virt.hardware [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.533 2 DEBUG nova.virt.hardware [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.533 2 DEBUG nova.virt.hardware [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.534 2 DEBUG nova.virt.hardware [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.536 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:11:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1728221769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:11:09 np0005465987 nova_compute[230713]: 2025-10-02 13:11:09.997 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.035 2 DEBUG nova.storage.rbd_utils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.039 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:11:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:10.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:11:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:11:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2338560981' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.472 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.474 2 DEBUG nova.virt.libvirt.vif [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383436651',display_name='tempest-TestNetworkAdvancedServerOps-server-383436651',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383436651',id=213,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMXl1Hgr2gWH+LKTd/OVUGfIlI7FjzKSN922u/H2k/LzYchmprRCXIRdXyhc6kupEVCtzfJGfzM3R2ZahWieNVO2Bs1y1rxrEci/0cGjGdoRGch1PXj0ypX33zv1zLXs6w==',key_name='tempest-TestNetworkAdvancedServerOps-86181458',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-m8iqo8ux',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:11:07Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=ad7157fd-646c-48d6-8c7e-5ee0f80efd5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "address": "fa:16:3e:cb:88:85", "network": {"id": "c9ec7d8f-d4ca-4f72-91c4-eabed5abb534", "bridge": "br-int", "label": "tempest-network-smoke--185090166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16a02f62-16", "ovs_interfaceid": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.474 2 DEBUG nova.network.os_vif_util [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "address": "fa:16:3e:cb:88:85", "network": {"id": "c9ec7d8f-d4ca-4f72-91c4-eabed5abb534", "bridge": "br-int", "label": "tempest-network-smoke--185090166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16a02f62-16", "ovs_interfaceid": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.475 2 DEBUG nova.network.os_vif_util [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:88:85,bridge_name='br-int',has_traffic_filtering=True,id=16a02f62-168d-4c03-b86a-4e634bf4c15d,network=Network(c9ec7d8f-d4ca-4f72-91c4-eabed5abb534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16a02f62-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.476 2 DEBUG nova.objects.instance [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'pci_devices' on Instance uuid ad7157fd-646c-48d6-8c7e-5ee0f80efd5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.491 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  <uuid>ad7157fd-646c-48d6-8c7e-5ee0f80efd5f</uuid>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  <name>instance-000000d5</name>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-383436651</nova:name>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 13:11:09</nova:creationTime>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <nova:user uuid="ffe4d737e4414fb3a3e358f8ca3f3e1e">tempest-TestNetworkAdvancedServerOps-1527846432-project-member</nova:user>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <nova:project uuid="08e102ae48244af2ab448a2e1ff757df">tempest-TestNetworkAdvancedServerOps-1527846432</nova:project>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <nova:port uuid="16a02f62-168d-4c03-b86a-4e634bf4c15d">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <system>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <entry name="serial">ad7157fd-646c-48d6-8c7e-5ee0f80efd5f</entry>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <entry name="uuid">ad7157fd-646c-48d6-8c7e-5ee0f80efd5f</entry>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    </system>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  <os>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  </os>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  <features>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  </features>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  </clock>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  <devices>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk.config">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:cb:88:85"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <target dev="tap16a02f62-16"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    </interface>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/ad7157fd-646c-48d6-8c7e-5ee0f80efd5f/console.log" append="off"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    </serial>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <video>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    </video>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    </rng>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 09:11:10 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 09:11:10 np0005465987 nova_compute[230713]:  </devices>
Oct  2 09:11:10 np0005465987 nova_compute[230713]: </domain>
Oct  2 09:11:10 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.492 2 DEBUG nova.compute.manager [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Preparing to wait for external event network-vif-plugged-16a02f62-168d-4c03-b86a-4e634bf4c15d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.493 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.493 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.493 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.494 2 DEBUG nova.virt.libvirt.vif [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383436651',display_name='tempest-TestNetworkAdvancedServerOps-server-383436651',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383436651',id=213,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMXl1Hgr2gWH+LKTd/OVUGfIlI7FjzKSN922u/H2k/LzYchmprRCXIRdXyhc6kupEVCtzfJGfzM3R2ZahWieNVO2Bs1y1rxrEci/0cGjGdoRGch1PXj0ypX33zv1zLXs6w==',key_name='tempest-TestNetworkAdvancedServerOps-86181458',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-m8iqo8ux',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:11:07Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=ad7157fd-646c-48d6-8c7e-5ee0f80efd5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "address": "fa:16:3e:cb:88:85", "network": {"id": "c9ec7d8f-d4ca-4f72-91c4-eabed5abb534", "bridge": "br-int", "label": "tempest-network-smoke--185090166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16a02f62-16", "ovs_interfaceid": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.494 2 DEBUG nova.network.os_vif_util [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "address": "fa:16:3e:cb:88:85", "network": {"id": "c9ec7d8f-d4ca-4f72-91c4-eabed5abb534", "bridge": "br-int", "label": "tempest-network-smoke--185090166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16a02f62-16", "ovs_interfaceid": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.495 2 DEBUG nova.network.os_vif_util [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:88:85,bridge_name='br-int',has_traffic_filtering=True,id=16a02f62-168d-4c03-b86a-4e634bf4c15d,network=Network(c9ec7d8f-d4ca-4f72-91c4-eabed5abb534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16a02f62-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.496 2 DEBUG os_vif [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:88:85,bridge_name='br-int',has_traffic_filtering=True,id=16a02f62-168d-4c03-b86a-4e634bf4c15d,network=Network(c9ec7d8f-d4ca-4f72-91c4-eabed5abb534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16a02f62-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.497 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.501 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16a02f62-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.501 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap16a02f62-16, col_values=(('external_ids', {'iface-id': '16a02f62-168d-4c03-b86a-4e634bf4c15d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:88:85', 'vm-uuid': 'ad7157fd-646c-48d6-8c7e-5ee0f80efd5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:10 np0005465987 NetworkManager[44910]: <info>  [1759410670.5044] manager: (tap16a02f62-16): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.514 2 INFO os_vif [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:88:85,bridge_name='br-int',has_traffic_filtering=True,id=16a02f62-168d-4c03-b86a-4e634bf4c15d,network=Network(c9ec7d8f-d4ca-4f72-91c4-eabed5abb534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16a02f62-16')#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.584 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.585 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.585 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No VIF found with MAC fa:16:3e:cb:88:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.586 2 INFO nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Using config drive#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.623 2 DEBUG nova.storage.rbd_utils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.707 2 DEBUG nova.network.neutron [req-0f649687-0418-47c3-abf3-6916bb14de91 req-ed8d3655-9986-4710-80e2-0b72b7738f81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Updated VIF entry in instance network info cache for port 16a02f62-168d-4c03-b86a-4e634bf4c15d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.708 2 DEBUG nova.network.neutron [req-0f649687-0418-47c3-abf3-6916bb14de91 req-ed8d3655-9986-4710-80e2-0b72b7738f81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Updating instance_info_cache with network_info: [{"id": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "address": "fa:16:3e:cb:88:85", "network": {"id": "c9ec7d8f-d4ca-4f72-91c4-eabed5abb534", "bridge": "br-int", "label": "tempest-network-smoke--185090166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16a02f62-16", "ovs_interfaceid": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:10 np0005465987 nova_compute[230713]: 2025-10-02 13:11:10.726 2 DEBUG oslo_concurrency.lockutils [req-0f649687-0418-47c3-abf3-6916bb14de91 req-ed8d3655-9986-4710-80e2-0b72b7738f81 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:11:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:11:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:10.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.052 2 INFO nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Creating config drive at /var/lib/nova/instances/ad7157fd-646c-48d6-8c7e-5ee0f80efd5f/disk.config#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.056 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ad7157fd-646c-48d6-8c7e-5ee0f80efd5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8w1zpmjl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.190 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ad7157fd-646c-48d6-8c7e-5ee0f80efd5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8w1zpmjl" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.221 2 DEBUG nova.storage.rbd_utils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.225 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ad7157fd-646c-48d6-8c7e-5ee0f80efd5f/disk.config ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.478 2 DEBUG oslo_concurrency.processutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ad7157fd-646c-48d6-8c7e-5ee0f80efd5f/disk.config ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.480 2 INFO nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Deleting local config drive /var/lib/nova/instances/ad7157fd-646c-48d6-8c7e-5ee0f80efd5f/disk.config because it was imported into RBD.#033[00m
Oct  2 09:11:11 np0005465987 kernel: tap16a02f62-16: entered promiscuous mode
Oct  2 09:11:11 np0005465987 NetworkManager[44910]: <info>  [1759410671.5813] manager: (tap16a02f62-16): new Tun device (/org/freedesktop/NetworkManager/Devices/392)
Oct  2 09:11:11 np0005465987 ovn_controller[129172]: 2025-10-02T13:11:11Z|00837|binding|INFO|Claiming lport 16a02f62-168d-4c03-b86a-4e634bf4c15d for this chassis.
Oct  2 09:11:11 np0005465987 ovn_controller[129172]: 2025-10-02T13:11:11Z|00838|binding|INFO|16a02f62-168d-4c03-b86a-4e634bf4c15d: Claiming fa:16:3e:cb:88:85 10.100.0.10
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.603 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:88:85 10.100.0.10'], port_security=['fa:16:3e:cb:88:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ad7157fd-646c-48d6-8c7e-5ee0f80efd5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '2', 'neutron:security_group_ids': '70538dbb-149a-49d9-b664-81755ab8b49e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e3670b9-59be-4977-90e5-dea383192678, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=16a02f62-168d-4c03-b86a-4e634bf4c15d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.604 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 16a02f62-168d-4c03-b86a-4e634bf4c15d in datapath c9ec7d8f-d4ca-4f72-91c4-eabed5abb534 bound to our chassis#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.605 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c9ec7d8f-d4ca-4f72-91c4-eabed5abb534#033[00m
Oct  2 09:11:11 np0005465987 systemd-udevd[314498]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:11:11 np0005465987 systemd-machined[188335]: New machine qemu-94-instance-000000d5.
Oct  2 09:11:11 np0005465987 NetworkManager[44910]: <info>  [1759410671.6218] device (tap16a02f62-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:11:11 np0005465987 NetworkManager[44910]: <info>  [1759410671.6229] device (tap16a02f62-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.626 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bba8d8dc-50b5-42f4-82df-646ea3e187fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.627 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc9ec7d8f-d1 in ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.629 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc9ec7d8f-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.630 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[78251ed0-de1d-4c24-bf60-3fa4e6a02246]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.631 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[44556edf-4342-4d21-8345-9e646596b9c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 systemd[1]: Started Virtual Machine qemu-94-instance-000000d5.
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.648 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[ca19f0f7-1c99-4abe-ae93-e62a4cc84c2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 ovn_controller[129172]: 2025-10-02T13:11:11Z|00839|binding|INFO|Setting lport 16a02f62-168d-4c03-b86a-4e634bf4c15d ovn-installed in OVS
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.663 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c00cb0-9331-4c93-976a-adcb5b224e11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 ovn_controller[129172]: 2025-10-02T13:11:11Z|00840|binding|INFO|Setting lport 16a02f62-168d-4c03-b86a-4e634bf4c15d up in Southbound
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.697 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb16ed4-f1ea-41b2-95bc-fa055ea80813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 systemd-udevd[314501]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:11:11 np0005465987 NetworkManager[44910]: <info>  [1759410671.7036] manager: (tapc9ec7d8f-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/393)
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.701 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[80a64999-38ca-44b5-a6ce-3f36596480e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.734 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[7f76f7d7-2782-423d-8e93-9f511f48aaff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.737 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8ced15-19a6-43c1-89a8-5ea4e6d16b64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 NetworkManager[44910]: <info>  [1759410671.7583] device (tapc9ec7d8f-d0): carrier: link connected
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.766 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ca16a5-6ce7-4134-85c7-0753b8f87c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.783 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d8889a5b-7a03-4028-862e-4717d7c63a93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9ec7d8f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:4e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874033, 'reachable_time': 19681, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314531, 'error': None, 'target': 'ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.796 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6d7651-92d6-4ddc-8b01-5c2808237ad1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:4ea8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874033, 'tstamp': 874033}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314532, 'error': None, 'target': 'ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.812 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d1d460-7db9-4e94-9888-6ff52d092375]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9ec7d8f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:4e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874033, 'reachable_time': 19681, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314533, 'error': None, 'target': 'ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.836 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[59588f7d-4ffa-4014-8c24-3b3504fb073f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.885 2 DEBUG nova.compute.manager [req-186296f0-6f18-4424-9479-e4948c38ba28 req-37f99e63-1d86-4ba8-b93a-5ab4fd10060d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Received event network-vif-plugged-16a02f62-168d-4c03-b86a-4e634bf4c15d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.885 2 DEBUG oslo_concurrency.lockutils [req-186296f0-6f18-4424-9479-e4948c38ba28 req-37f99e63-1d86-4ba8-b93a-5ab4fd10060d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.886 2 DEBUG oslo_concurrency.lockutils [req-186296f0-6f18-4424-9479-e4948c38ba28 req-37f99e63-1d86-4ba8-b93a-5ab4fd10060d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.886 2 DEBUG oslo_concurrency.lockutils [req-186296f0-6f18-4424-9479-e4948c38ba28 req-37f99e63-1d86-4ba8-b93a-5ab4fd10060d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.886 2 DEBUG nova.compute.manager [req-186296f0-6f18-4424-9479-e4948c38ba28 req-37f99e63-1d86-4ba8-b93a-5ab4fd10060d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Processing event network-vif-plugged-16a02f62-168d-4c03-b86a-4e634bf4c15d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.888 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb24b50-d7d3-4d1e-842a-efc842fc4eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.889 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9ec7d8f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.889 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.890 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9ec7d8f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:11 np0005465987 NetworkManager[44910]: <info>  [1759410671.8921] manager: (tapc9ec7d8f-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Oct  2 09:11:11 np0005465987 kernel: tapc9ec7d8f-d0: entered promiscuous mode
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.895 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc9ec7d8f-d0, col_values=(('external_ids', {'iface-id': 'a139fbf1-335f-4843-b08e-1841f3a3c567'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:11 np0005465987 ovn_controller[129172]: 2025-10-02T13:11:11Z|00841|binding|INFO|Releasing lport a139fbf1-335f-4843-b08e-1841f3a3c567 from this chassis (sb_readonly=0)
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.899 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c9ec7d8f-d4ca-4f72-91c4-eabed5abb534.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c9ec7d8f-d4ca-4f72-91c4-eabed5abb534.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.900 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4611a305-2a6d-40fd-ae39-3562c70c2246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.901 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/c9ec7d8f-d4ca-4f72-91c4-eabed5abb534.pid.haproxy
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID c9ec7d8f-d4ca-4f72-91c4-eabed5abb534
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:11:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:11.901 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534', 'env', 'PROCESS_TAG=haproxy-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c9ec7d8f-d4ca-4f72-91c4-eabed5abb534.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:11:11 np0005465987 nova_compute[230713]: 2025-10-02 13:11:11.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:12 np0005465987 podman[314565]: 2025-10-02 13:11:12.248445888 +0000 UTC m=+0.051797356 container create acfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:11:12 np0005465987 systemd[1]: Started libpod-conmon-acfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d.scope.
Oct  2 09:11:12 np0005465987 systemd[1]: Started libcrun container.
Oct  2 09:11:12 np0005465987 podman[314565]: 2025-10-02 13:11:12.221270141 +0000 UTC m=+0.024621619 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:11:12 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/673bdae2aafd9acbae79184cfd2d807929b153bf4dd12f05f930abe8e73242ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:11:12 np0005465987 podman[314565]: 2025-10-02 13:11:12.333465083 +0000 UTC m=+0.136816551 container init acfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:11:12 np0005465987 podman[314565]: 2025-10-02 13:11:12.340242657 +0000 UTC m=+0.143594105 container start acfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:11:12 np0005465987 neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534[314598]: [NOTICE]   (314602) : New worker (314619) forked
Oct  2 09:11:12 np0005465987 neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534[314598]: [NOTICE]   (314602) : Loading success.
Oct  2 09:11:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:12.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.863 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410672.8634915, ad7157fd-646c-48d6-8c7e-5ee0f80efd5f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.864 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] VM Started (Lifecycle Event)#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.866 2 DEBUG nova.compute.manager [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.869 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.873 2 INFO nova.virt.libvirt.driver [-] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Instance spawned successfully.#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.873 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.902 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.908 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.912 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.912 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.913 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.913 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.914 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.915 2 DEBUG nova.virt.libvirt.driver [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.946 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.946 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410672.8642588, ad7157fd-646c-48d6-8c7e-5ee0f80efd5f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.946 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.975 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.978 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410672.8687584, ad7157fd-646c-48d6-8c7e-5ee0f80efd5f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:11:12 np0005465987 nova_compute[230713]: 2025-10-02 13:11:12.978 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:11:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:12.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.002 2 INFO nova.compute.manager [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Took 5.78 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.003 2 DEBUG nova.compute.manager [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.005 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.011 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.046 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.078 2 INFO nova.compute.manager [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Took 7.12 seconds to build instance.#033[00m
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.099 2 DEBUG oslo_concurrency.lockutils [None req-bf869a78-d1c9-4e2c-9c7f-c04e071f6bea ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.993 2 DEBUG nova.compute.manager [req-9bd1def2-e797-49d1-a0ad-7d4acbcf76c8 req-76f98fa3-9bb4-4ffa-95af-faa6e5105ff1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Received event network-vif-plugged-16a02f62-168d-4c03-b86a-4e634bf4c15d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.994 2 DEBUG oslo_concurrency.lockutils [req-9bd1def2-e797-49d1-a0ad-7d4acbcf76c8 req-76f98fa3-9bb4-4ffa-95af-faa6e5105ff1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.994 2 DEBUG oslo_concurrency.lockutils [req-9bd1def2-e797-49d1-a0ad-7d4acbcf76c8 req-76f98fa3-9bb4-4ffa-95af-faa6e5105ff1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.994 2 DEBUG oslo_concurrency.lockutils [req-9bd1def2-e797-49d1-a0ad-7d4acbcf76c8 req-76f98fa3-9bb4-4ffa-95af-faa6e5105ff1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.995 2 DEBUG nova.compute.manager [req-9bd1def2-e797-49d1-a0ad-7d4acbcf76c8 req-76f98fa3-9bb4-4ffa-95af-faa6e5105ff1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] No waiting events found dispatching network-vif-plugged-16a02f62-168d-4c03-b86a-4e634bf4c15d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:11:13 np0005465987 nova_compute[230713]: 2025-10-02 13:11:13.995 2 WARNING nova.compute.manager [req-9bd1def2-e797-49d1-a0ad-7d4acbcf76c8 req-76f98fa3-9bb4-4ffa-95af-faa6e5105ff1 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Received unexpected event network-vif-plugged-16a02f62-168d-4c03-b86a-4e634bf4c15d for instance with vm_state active and task_state None.#033[00m
Oct  2 09:11:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:14.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:11:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:15.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:11:15 np0005465987 nova_compute[230713]: 2025-10-02 13:11:15.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:11:15 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:11:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:16.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:17.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:17.800 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:11:17 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:17.802 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:11:17 np0005465987 nova_compute[230713]: 2025-10-02 13:11:17.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:18.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:18 np0005465987 NetworkManager[44910]: <info>  [1759410678.4821] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Oct  2 09:11:18 np0005465987 NetworkManager[44910]: <info>  [1759410678.4840] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Oct  2 09:11:18 np0005465987 nova_compute[230713]: 2025-10-02 13:11:18.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:18 np0005465987 nova_compute[230713]: 2025-10-02 13:11:18.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:18 np0005465987 nova_compute[230713]: 2025-10-02 13:11:18.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:18 np0005465987 ovn_controller[129172]: 2025-10-02T13:11:18Z|00842|binding|INFO|Releasing lport a139fbf1-335f-4843-b08e-1841f3a3c567 from this chassis (sb_readonly=0)
Oct  2 09:11:18 np0005465987 nova_compute[230713]: 2025-10-02 13:11:18.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:18 np0005465987 nova_compute[230713]: 2025-10-02 13:11:18.840 2 DEBUG nova.compute.manager [req-71c13872-617f-413a-86be-016df841f975 req-7b93bb9b-a2cd-42a8-9b90-926fd5cfb912 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Received event network-changed-16a02f62-168d-4c03-b86a-4e634bf4c15d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:18 np0005465987 nova_compute[230713]: 2025-10-02 13:11:18.841 2 DEBUG nova.compute.manager [req-71c13872-617f-413a-86be-016df841f975 req-7b93bb9b-a2cd-42a8-9b90-926fd5cfb912 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Refreshing instance network info cache due to event network-changed-16a02f62-168d-4c03-b86a-4e634bf4c15d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:11:18 np0005465987 nova_compute[230713]: 2025-10-02 13:11:18.841 2 DEBUG oslo_concurrency.lockutils [req-71c13872-617f-413a-86be-016df841f975 req-7b93bb9b-a2cd-42a8-9b90-926fd5cfb912 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:11:18 np0005465987 nova_compute[230713]: 2025-10-02 13:11:18.841 2 DEBUG oslo_concurrency.lockutils [req-71c13872-617f-413a-86be-016df841f975 req-7b93bb9b-a2cd-42a8-9b90-926fd5cfb912 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:11:18 np0005465987 nova_compute[230713]: 2025-10-02 13:11:18.842 2 DEBUG nova.network.neutron [req-71c13872-617f-413a-86be-016df841f975 req-7b93bb9b-a2cd-42a8-9b90-926fd5cfb912 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Refreshing network info cache for port 16a02f62-168d-4c03-b86a-4e634bf4c15d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:11:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:19.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:20.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:20 np0005465987 nova_compute[230713]: 2025-10-02 13:11:20.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:21.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:21 np0005465987 podman[314688]: 2025-10-02 13:11:21.83979774 +0000 UTC m=+0.054321455 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 09:11:21 np0005465987 nova_compute[230713]: 2025-10-02 13:11:21.938 2 DEBUG nova.network.neutron [req-71c13872-617f-413a-86be-016df841f975 req-7b93bb9b-a2cd-42a8-9b90-926fd5cfb912 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Updated VIF entry in instance network info cache for port 16a02f62-168d-4c03-b86a-4e634bf4c15d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:11:21 np0005465987 nova_compute[230713]: 2025-10-02 13:11:21.939 2 DEBUG nova.network.neutron [req-71c13872-617f-413a-86be-016df841f975 req-7b93bb9b-a2cd-42a8-9b90-926fd5cfb912 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Updating instance_info_cache with network_info: [{"id": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "address": "fa:16:3e:cb:88:85", "network": {"id": "c9ec7d8f-d4ca-4f72-91c4-eabed5abb534", "bridge": "br-int", "label": "tempest-network-smoke--185090166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16a02f62-16", "ovs_interfaceid": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:21 np0005465987 nova_compute[230713]: 2025-10-02 13:11:21.970 2 DEBUG oslo_concurrency.lockutils [req-71c13872-617f-413a-86be-016df841f975 req-7b93bb9b-a2cd-42a8-9b90-926fd5cfb912 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:11:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:11:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:22.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:11:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:23.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:23 np0005465987 nova_compute[230713]: 2025-10-02 13:11:23.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:24.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:25.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:25 np0005465987 nova_compute[230713]: 2025-10-02 13:11:25.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:26 np0005465987 ovn_controller[129172]: 2025-10-02T13:11:26Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:88:85 10.100.0.10
Oct  2 09:11:26 np0005465987 ovn_controller[129172]: 2025-10-02T13:11:26Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:88:85 10.100.0.10
Oct  2 09:11:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:26.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:26 np0005465987 podman[314709]: 2025-10-02 13:11:26.842621727 +0000 UTC m=+0.056149363 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:11:26 np0005465987 podman[314708]: 2025-10-02 13:11:26.869762553 +0000 UTC m=+0.086230779 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:11:26 np0005465987 podman[314707]: 2025-10-02 13:11:26.905601245 +0000 UTC m=+0.125452112 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:11:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:27.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:27.804 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:11:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2021659220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:11:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:11:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2021659220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:11:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:27.997 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:27.998 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:27.998 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:11:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:28.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:11:28 np0005465987 nova_compute[230713]: 2025-10-02 13:11:28.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:29.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:29 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e418 e418: 3 total, 3 up, 3 in
Oct  2 09:11:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:11:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:30.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:11:30 np0005465987 nova_compute[230713]: 2025-10-02 13:11:30.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:31.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:31 np0005465987 nova_compute[230713]: 2025-10-02 13:11:31.815 2 INFO nova.compute.manager [None req-b7eb3567-fe62-4b77-ace9-2d49a4105630 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Get console output#033[00m
Oct  2 09:11:31 np0005465987 nova_compute[230713]: 2025-10-02 13:11:31.820 14392 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:11:32 np0005465987 nova_compute[230713]: 2025-10-02 13:11:32.297 2 INFO nova.compute.manager [None req-9f490b01-4984-47db-84c1-dc5198293fd5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Pausing#033[00m
Oct  2 09:11:32 np0005465987 nova_compute[230713]: 2025-10-02 13:11:32.299 2 DEBUG nova.objects.instance [None req-9f490b01-4984-47db-84c1-dc5198293fd5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'flavor' on Instance uuid ad7157fd-646c-48d6-8c7e-5ee0f80efd5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:11:32 np0005465987 nova_compute[230713]: 2025-10-02 13:11:32.343 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410692.3435297, ad7157fd-646c-48d6-8c7e-5ee0f80efd5f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:11:32 np0005465987 nova_compute[230713]: 2025-10-02 13:11:32.344 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:11:32 np0005465987 nova_compute[230713]: 2025-10-02 13:11:32.346 2 DEBUG nova.compute.manager [None req-9f490b01-4984-47db-84c1-dc5198293fd5 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:32 np0005465987 nova_compute[230713]: 2025-10-02 13:11:32.389 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:32 np0005465987 nova_compute[230713]: 2025-10-02 13:11:32.392 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:11:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:11:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:32.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:11:32 np0005465987 nova_compute[230713]: 2025-10-02 13:11:32.579 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct  2 09:11:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:33.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:33 np0005465987 nova_compute[230713]: 2025-10-02 13:11:33.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:34.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:34 np0005465987 nova_compute[230713]: 2025-10-02 13:11:34.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:34 np0005465987 nova_compute[230713]: 2025-10-02 13:11:34.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:35.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:35 np0005465987 nova_compute[230713]: 2025-10-02 13:11:35.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:36 np0005465987 nova_compute[230713]: 2025-10-02 13:11:36.470 2 INFO nova.compute.manager [None req-e8584e7e-06a7-43af-ba69-d8e0d2456546 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Get console output#033[00m
Oct  2 09:11:36 np0005465987 nova_compute[230713]: 2025-10-02 13:11:36.475 14392 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:11:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:11:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:36.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:11:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:11:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:37.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:11:38 np0005465987 nova_compute[230713]: 2025-10-02 13:11:38.134 2 INFO nova.compute.manager [None req-34555cde-d59d-44a8-b6b2-76fc65d61b54 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Unpausing#033[00m
Oct  2 09:11:38 np0005465987 nova_compute[230713]: 2025-10-02 13:11:38.135 2 DEBUG nova.objects.instance [None req-34555cde-d59d-44a8-b6b2-76fc65d61b54 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'flavor' on Instance uuid ad7157fd-646c-48d6-8c7e-5ee0f80efd5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:11:38 np0005465987 nova_compute[230713]: 2025-10-02 13:11:38.168 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410698.167834, ad7157fd-646c-48d6-8c7e-5ee0f80efd5f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:11:38 np0005465987 nova_compute[230713]: 2025-10-02 13:11:38.168 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:11:38 np0005465987 virtqemud[230442]: argument unsupported: QEMU guest agent is not configured
Oct  2 09:11:38 np0005465987 nova_compute[230713]: 2025-10-02 13:11:38.172 2 DEBUG nova.virt.libvirt.guest [None req-34555cde-d59d-44a8-b6b2-76fc65d61b54 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 09:11:38 np0005465987 nova_compute[230713]: 2025-10-02 13:11:38.173 2 DEBUG nova.compute.manager [None req-34555cde-d59d-44a8-b6b2-76fc65d61b54 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:38 np0005465987 nova_compute[230713]: 2025-10-02 13:11:38.199 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:38 np0005465987 nova_compute[230713]: 2025-10-02 13:11:38.202 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:11:38 np0005465987 nova_compute[230713]: 2025-10-02 13:11:38.233 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Oct  2 09:11:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:38.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:38 np0005465987 nova_compute[230713]: 2025-10-02 13:11:38.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:11:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:39.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:11:39 np0005465987 nova_compute[230713]: 2025-10-02 13:11:39.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:40 np0005465987 nova_compute[230713]: 2025-10-02 13:11:40.327 2 INFO nova.compute.manager [None req-592b20df-b98a-4b55-a8a5-2870a905dc37 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Get console output#033[00m
Oct  2 09:11:40 np0005465987 nova_compute[230713]: 2025-10-02 13:11:40.331 14392 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:11:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:40.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:40 np0005465987 nova_compute[230713]: 2025-10-02 13:11:40.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:40 np0005465987 nova_compute[230713]: 2025-10-02 13:11:40.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:40 np0005465987 nova_compute[230713]: 2025-10-02 13:11:40.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:11:40 np0005465987 nova_compute[230713]: 2025-10-02 13:11:40.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:11:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:41.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e419 e419: 3 total, 3 up, 3 in
Oct  2 09:11:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.169 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.169 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.169 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.169 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ad7157fd-646c-48d6-8c7e-5ee0f80efd5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.286 2 DEBUG nova.compute.manager [req-d38e98ce-9137-4c07-a915-0abbe31b8057 req-eb37eef5-e8e8-4572-9830-02194e7a8aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Received event network-changed-16a02f62-168d-4c03-b86a-4e634bf4c15d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.286 2 DEBUG nova.compute.manager [req-d38e98ce-9137-4c07-a915-0abbe31b8057 req-eb37eef5-e8e8-4572-9830-02194e7a8aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Refreshing instance network info cache due to event network-changed-16a02f62-168d-4c03-b86a-4e634bf4c15d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.286 2 DEBUG oslo_concurrency.lockutils [req-d38e98ce-9137-4c07-a915-0abbe31b8057 req-eb37eef5-e8e8-4572-9830-02194e7a8aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.288 2 DEBUG oslo_concurrency.lockutils [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.288 2 DEBUG oslo_concurrency.lockutils [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.288 2 DEBUG oslo_concurrency.lockutils [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.289 2 DEBUG oslo_concurrency.lockutils [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.289 2 DEBUG oslo_concurrency.lockutils [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.290 2 INFO nova.compute.manager [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Terminating instance#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.290 2 DEBUG nova.compute.manager [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:11:41 np0005465987 kernel: tap16a02f62-16 (unregistering): left promiscuous mode
Oct  2 09:11:41 np0005465987 NetworkManager[44910]: <info>  [1759410701.7268] device (tap16a02f62-16): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:11:41 np0005465987 ovn_controller[129172]: 2025-10-02T13:11:41Z|00843|binding|INFO|Releasing lport 16a02f62-168d-4c03-b86a-4e634bf4c15d from this chassis (sb_readonly=0)
Oct  2 09:11:41 np0005465987 ovn_controller[129172]: 2025-10-02T13:11:41Z|00844|binding|INFO|Setting lport 16a02f62-168d-4c03-b86a-4e634bf4c15d down in Southbound
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:41 np0005465987 ovn_controller[129172]: 2025-10-02T13:11:41Z|00845|binding|INFO|Removing iface tap16a02f62-16 ovn-installed in OVS
Oct  2 09:11:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:41.753 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:88:85 10.100.0.10'], port_security=['fa:16:3e:cb:88:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ad7157fd-646c-48d6-8c7e-5ee0f80efd5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '4', 'neutron:security_group_ids': '70538dbb-149a-49d9-b664-81755ab8b49e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e3670b9-59be-4977-90e5-dea383192678, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=16a02f62-168d-4c03-b86a-4e634bf4c15d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:11:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:41.755 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 16a02f62-168d-4c03-b86a-4e634bf4c15d in datapath c9ec7d8f-d4ca-4f72-91c4-eabed5abb534 unbound from our chassis#033[00m
Oct  2 09:11:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:41.756 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c9ec7d8f-d4ca-4f72-91c4-eabed5abb534, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:11:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:41.758 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba9c6fb-e269-47f8-84eb-8ee7788ea536]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:41 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:41.759 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534 namespace which is not needed anymore#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:41 np0005465987 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000d5.scope: Deactivated successfully.
Oct  2 09:11:41 np0005465987 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000d5.scope: Consumed 13.420s CPU time.
Oct  2 09:11:41 np0005465987 systemd-machined[188335]: Machine qemu-94-instance-000000d5 terminated.
Oct  2 09:11:41 np0005465987 neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534[314598]: [NOTICE]   (314602) : haproxy version is 2.8.14-c23fe91
Oct  2 09:11:41 np0005465987 neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534[314598]: [NOTICE]   (314602) : path to executable is /usr/sbin/haproxy
Oct  2 09:11:41 np0005465987 neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534[314598]: [WARNING]  (314602) : Exiting Master process...
Oct  2 09:11:41 np0005465987 neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534[314598]: [ALERT]    (314602) : Current worker (314619) exited with code 143 (Terminated)
Oct  2 09:11:41 np0005465987 neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534[314598]: [WARNING]  (314602) : All workers exited. Exiting... (0)
Oct  2 09:11:41 np0005465987 systemd[1]: libpod-acfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d.scope: Deactivated successfully.
Oct  2 09:11:41 np0005465987 podman[314796]: 2025-10-02 13:11:41.888647489 +0000 UTC m=+0.041114175 container died acfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:41 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d-userdata-shm.mount: Deactivated successfully.
Oct  2 09:11:41 np0005465987 systemd[1]: var-lib-containers-storage-overlay-673bdae2aafd9acbae79184cfd2d807929b153bf4dd12f05f930abe8e73242ec-merged.mount: Deactivated successfully.
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.931 2 INFO nova.virt.libvirt.driver [-] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Instance destroyed successfully.#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.932 2 DEBUG nova.objects.instance [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'resources' on Instance uuid ad7157fd-646c-48d6-8c7e-5ee0f80efd5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:11:41 np0005465987 podman[314796]: 2025-10-02 13:11:41.94917217 +0000 UTC m=+0.101638846 container cleanup acfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 09:11:41 np0005465987 systemd[1]: libpod-conmon-acfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d.scope: Deactivated successfully.
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.964 2 DEBUG nova.virt.libvirt.vif [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-383436651',display_name='tempest-TestNetworkAdvancedServerOps-server-383436651',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-383436651',id=213,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMXl1Hgr2gWH+LKTd/OVUGfIlI7FjzKSN922u/H2k/LzYchmprRCXIRdXyhc6kupEVCtzfJGfzM3R2ZahWieNVO2Bs1y1rxrEci/0cGjGdoRGch1PXj0ypX33zv1zLXs6w==',key_name='tempest-TestNetworkAdvancedServerOps-86181458',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:11:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-m8iqo8ux',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:11:38Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=ad7157fd-646c-48d6-8c7e-5ee0f80efd5f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "address": "fa:16:3e:cb:88:85", "network": {"id": "c9ec7d8f-d4ca-4f72-91c4-eabed5abb534", "bridge": "br-int", "label": "tempest-network-smoke--185090166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16a02f62-16", "ovs_interfaceid": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.965 2 DEBUG nova.network.os_vif_util [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "address": "fa:16:3e:cb:88:85", "network": {"id": "c9ec7d8f-d4ca-4f72-91c4-eabed5abb534", "bridge": "br-int", "label": "tempest-network-smoke--185090166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16a02f62-16", "ovs_interfaceid": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.966 2 DEBUG nova.network.os_vif_util [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:88:85,bridge_name='br-int',has_traffic_filtering=True,id=16a02f62-168d-4c03-b86a-4e634bf4c15d,network=Network(c9ec7d8f-d4ca-4f72-91c4-eabed5abb534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16a02f62-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.966 2 DEBUG os_vif [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:88:85,bridge_name='br-int',has_traffic_filtering=True,id=16a02f62-168d-4c03-b86a-4e634bf4c15d,network=Network(c9ec7d8f-d4ca-4f72-91c4-eabed5abb534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16a02f62-16') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.968 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16a02f62-16, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:41 np0005465987 nova_compute[230713]: 2025-10-02 13:11:41.977 2 INFO os_vif [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:88:85,bridge_name='br-int',has_traffic_filtering=True,id=16a02f62-168d-4c03-b86a-4e634bf4c15d,network=Network(c9ec7d8f-d4ca-4f72-91c4-eabed5abb534),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16a02f62-16')#033[00m
Oct  2 09:11:42 np0005465987 podman[314839]: 2025-10-02 13:11:42.023178946 +0000 UTC m=+0.044756964 container remove acfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:11:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:42.031 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[37d88a4b-7990-4e10-9b04-6567980ba1bf]: (4, ('Thu Oct  2 01:11:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534 (acfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d)\nacfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d\nThu Oct  2 01:11:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534 (acfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d)\nacfa6203a7f4ff68dea7f99e1b5dc70c802db913f587af75b021a97209f2879d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:42.033 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[882d5926-05e7-40ff-b6d3-b70649b54a31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:42.034 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9ec7d8f-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:42 np0005465987 kernel: tapc9ec7d8f-d0: left promiscuous mode
Oct  2 09:11:42 np0005465987 nova_compute[230713]: 2025-10-02 13:11:42.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:42 np0005465987 nova_compute[230713]: 2025-10-02 13:11:42.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:42 np0005465987 nova_compute[230713]: 2025-10-02 13:11:42.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:42.051 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[96626e54-5915-4a37-8987-a1f535f6d85a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:42.094 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa11260-d976-4379-a07d-3e18a0bbf73a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:42.095 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5c780b49-55fd-438b-9973-7af4bf16a905]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:42.110 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[70e93be6-c31c-4aee-98a3-cf00bdcb34fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874026, 'reachable_time': 44816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314873, 'error': None, 'target': 'ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:42 np0005465987 systemd[1]: run-netns-ovnmeta\x2dc9ec7d8f\x2dd4ca\x2d4f72\x2d91c4\x2deabed5abb534.mount: Deactivated successfully.
Oct  2 09:11:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:42.114 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c9ec7d8f-d4ca-4f72-91c4-eabed5abb534 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:11:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:11:42.114 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[d2463059-2b85-4d0f-b1a1-ee50ca3a5c18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:11:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:42.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:11:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:43.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.062 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Updating instance_info_cache with network_info: [{"id": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "address": "fa:16:3e:cb:88:85", "network": {"id": "c9ec7d8f-d4ca-4f72-91c4-eabed5abb534", "bridge": "br-int", "label": "tempest-network-smoke--185090166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16a02f62-16", "ovs_interfaceid": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.084 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.085 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.085 2 DEBUG oslo_concurrency.lockutils [req-d38e98ce-9137-4c07-a915-0abbe31b8057 req-eb37eef5-e8e8-4572-9830-02194e7a8aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.085 2 DEBUG nova.network.neutron [req-d38e98ce-9137-4c07-a915-0abbe31b8057 req-eb37eef5-e8e8-4572-9830-02194e7a8aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Refreshing network info cache for port 16a02f62-168d-4c03-b86a-4e634bf4c15d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.404 2 DEBUG nova.compute.manager [req-16f10ac6-5e5b-43a0-9c54-89a83d16a94f req-ba9a1c3c-2cca-4c9e-827a-b770e431c46c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Received event network-vif-unplugged-16a02f62-168d-4c03-b86a-4e634bf4c15d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.404 2 DEBUG oslo_concurrency.lockutils [req-16f10ac6-5e5b-43a0-9c54-89a83d16a94f req-ba9a1c3c-2cca-4c9e-827a-b770e431c46c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.405 2 DEBUG oslo_concurrency.lockutils [req-16f10ac6-5e5b-43a0-9c54-89a83d16a94f req-ba9a1c3c-2cca-4c9e-827a-b770e431c46c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.405 2 DEBUG oslo_concurrency.lockutils [req-16f10ac6-5e5b-43a0-9c54-89a83d16a94f req-ba9a1c3c-2cca-4c9e-827a-b770e431c46c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.406 2 DEBUG nova.compute.manager [req-16f10ac6-5e5b-43a0-9c54-89a83d16a94f req-ba9a1c3c-2cca-4c9e-827a-b770e431c46c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] No waiting events found dispatching network-vif-unplugged-16a02f62-168d-4c03-b86a-4e634bf4c15d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.406 2 DEBUG nova.compute.manager [req-16f10ac6-5e5b-43a0-9c54-89a83d16a94f req-ba9a1c3c-2cca-4c9e-827a-b770e431c46c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Received event network-vif-unplugged-16a02f62-168d-4c03-b86a-4e634bf4c15d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.406 2 DEBUG nova.compute.manager [req-16f10ac6-5e5b-43a0-9c54-89a83d16a94f req-ba9a1c3c-2cca-4c9e-827a-b770e431c46c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Received event network-vif-plugged-16a02f62-168d-4c03-b86a-4e634bf4c15d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.407 2 DEBUG oslo_concurrency.lockutils [req-16f10ac6-5e5b-43a0-9c54-89a83d16a94f req-ba9a1c3c-2cca-4c9e-827a-b770e431c46c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.407 2 DEBUG oslo_concurrency.lockutils [req-16f10ac6-5e5b-43a0-9c54-89a83d16a94f req-ba9a1c3c-2cca-4c9e-827a-b770e431c46c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.407 2 DEBUG oslo_concurrency.lockutils [req-16f10ac6-5e5b-43a0-9c54-89a83d16a94f req-ba9a1c3c-2cca-4c9e-827a-b770e431c46c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.407 2 DEBUG nova.compute.manager [req-16f10ac6-5e5b-43a0-9c54-89a83d16a94f req-ba9a1c3c-2cca-4c9e-827a-b770e431c46c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] No waiting events found dispatching network-vif-plugged-16a02f62-168d-4c03-b86a-4e634bf4c15d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.408 2 WARNING nova.compute.manager [req-16f10ac6-5e5b-43a0-9c54-89a83d16a94f req-ba9a1c3c-2cca-4c9e-827a-b770e431c46c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Received unexpected event network-vif-plugged-16a02f62-168d-4c03-b86a-4e634bf4c15d for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.980 2 INFO nova.virt.libvirt.driver [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Deleting instance files /var/lib/nova/instances/ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_del#033[00m
Oct  2 09:11:43 np0005465987 nova_compute[230713]: 2025-10-02 13:11:43.981 2 INFO nova.virt.libvirt.driver [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Deletion of /var/lib/nova/instances/ad7157fd-646c-48d6-8c7e-5ee0f80efd5f_del complete#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.028 2 INFO nova.compute.manager [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Took 2.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.029 2 DEBUG oslo.service.loopingcall [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.030 2 DEBUG nova.compute.manager [-] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.030 2 DEBUG nova.network.neutron [-] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.196 2 DEBUG nova.network.neutron [req-d38e98ce-9137-4c07-a915-0abbe31b8057 req-eb37eef5-e8e8-4572-9830-02194e7a8aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Updated VIF entry in instance network info cache for port 16a02f62-168d-4c03-b86a-4e634bf4c15d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.197 2 DEBUG nova.network.neutron [req-d38e98ce-9137-4c07-a915-0abbe31b8057 req-eb37eef5-e8e8-4572-9830-02194e7a8aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Updating instance_info_cache with network_info: [{"id": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "address": "fa:16:3e:cb:88:85", "network": {"id": "c9ec7d8f-d4ca-4f72-91c4-eabed5abb534", "bridge": "br-int", "label": "tempest-network-smoke--185090166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16a02f62-16", "ovs_interfaceid": "16a02f62-168d-4c03-b86a-4e634bf4c15d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.222 2 DEBUG oslo_concurrency.lockutils [req-d38e98ce-9137-4c07-a915-0abbe31b8057 req-eb37eef5-e8e8-4572-9830-02194e7a8aa2 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:11:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:11:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:44.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.613 2 DEBUG nova.network.neutron [-] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.634 2 INFO nova.compute.manager [-] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Took 0.60 seconds to deallocate network for instance.#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.691 2 DEBUG oslo_concurrency.lockutils [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.692 2 DEBUG oslo_concurrency.lockutils [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.752 2 DEBUG nova.compute.manager [req-b6f37465-f9db-450e-93e0-d5faefb9f122 req-f9bb41af-a0e9-477b-8407-56fa947bd002 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Received event network-vif-deleted-16a02f62-168d-4c03-b86a-4e634bf4c15d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.842 2 DEBUG oslo_concurrency.processutils [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:44 np0005465987 nova_compute[230713]: 2025-10-02 13:11:44.984 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:45.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:11:45 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3033729653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.305 2 DEBUG oslo_concurrency.processutils [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.311 2 DEBUG nova.compute.provider_tree [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.340 2 DEBUG nova.scheduler.client.report [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.369 2 DEBUG oslo_concurrency.lockutils [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.372 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.372 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.373 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.373 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.425 2 INFO nova.scheduler.client.report [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Deleted allocations for instance ad7157fd-646c-48d6-8c7e-5ee0f80efd5f#033[00m
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.491 2 DEBUG oslo_concurrency.lockutils [None req-285e0809-8ce5-4185-9f6c-37e0b085d343 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "ad7157fd-646c-48d6-8c7e-5ee0f80efd5f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:45 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:11:45 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1865546156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.802 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.965 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.967 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4324MB free_disk=20.95449447631836GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.968 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:45 np0005465987 nova_compute[230713]: 2025-10-02 13:11:45.968 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:46 np0005465987 nova_compute[230713]: 2025-10-02 13:11:46.025 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:11:46 np0005465987 nova_compute[230713]: 2025-10-02 13:11:46.025 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:11:46 np0005465987 nova_compute[230713]: 2025-10-02 13:11:46.049 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:11:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/183916156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:11:46 np0005465987 nova_compute[230713]: 2025-10-02 13:11:46.456 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:46 np0005465987 nova_compute[230713]: 2025-10-02 13:11:46.462 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:11:46 np0005465987 nova_compute[230713]: 2025-10-02 13:11:46.488 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:11:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:46.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:46 np0005465987 nova_compute[230713]: 2025-10-02 13:11:46.526 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:11:46 np0005465987 nova_compute[230713]: 2025-10-02 13:11:46.526 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:46 np0005465987 nova_compute[230713]: 2025-10-02 13:11:46.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:47.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:47 np0005465987 nova_compute[230713]: 2025-10-02 13:11:47.526 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:48 np0005465987 nova_compute[230713]: 2025-10-02 13:11:48.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:48 np0005465987 nova_compute[230713]: 2025-10-02 13:11:48.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:48.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:48 np0005465987 nova_compute[230713]: 2025-10-02 13:11:48.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:49.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:49 np0005465987 nova_compute[230713]: 2025-10-02 13:11:49.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:49 np0005465987 nova_compute[230713]: 2025-10-02 13:11:49.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:11:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.003000080s ======
Oct  2 09:11:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:50.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Oct  2 09:11:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:51.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:51 np0005465987 nova_compute[230713]: 2025-10-02 13:11:51.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:52.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:52 np0005465987 podman[314943]: 2025-10-02 13:11:52.864550757 +0000 UTC m=+0.065541597 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:11:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:53.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:53 np0005465987 nova_compute[230713]: 2025-10-02 13:11:53.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.816835) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410713816909, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1248, "num_deletes": 252, "total_data_size": 2610300, "memory_usage": 2647040, "flush_reason": "Manual Compaction"}
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410713827079, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 1720803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80925, "largest_seqno": 82168, "table_properties": {"data_size": 1715425, "index_size": 2773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12118, "raw_average_key_size": 20, "raw_value_size": 1704411, "raw_average_value_size": 2835, "num_data_blocks": 123, "num_entries": 601, "num_filter_entries": 601, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410617, "oldest_key_time": 1759410617, "file_creation_time": 1759410713, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 10275 microseconds, and 5651 cpu microseconds.
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.827118) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 1720803 bytes OK
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.827137) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.828462) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.828477) EVENT_LOG_v1 {"time_micros": 1759410713828472, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.828495) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2604372, prev total WAL file size 2604372, number of live WAL files 2.
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.829576) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(1680KB)], [165(12MB)]
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410713829668, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 15317190, "oldest_snapshot_seqno": -1}
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 10189 keys, 13364420 bytes, temperature: kUnknown
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410713930580, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 13364420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13298063, "index_size": 39748, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 269252, "raw_average_key_size": 26, "raw_value_size": 13119142, "raw_average_value_size": 1287, "num_data_blocks": 1512, "num_entries": 10189, "num_filter_entries": 10189, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759410713, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.930844) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13364420 bytes
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.932505) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.7 rd, 132.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 13.0 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(16.7) write-amplify(7.8) OK, records in: 10710, records dropped: 521 output_compression: NoCompression
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.932549) EVENT_LOG_v1 {"time_micros": 1759410713932529, "job": 106, "event": "compaction_finished", "compaction_time_micros": 100970, "compaction_time_cpu_micros": 60646, "output_level": 6, "num_output_files": 1, "total_output_size": 13364420, "num_input_records": 10710, "num_output_records": 10189, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410713933074, "job": 106, "event": "table_file_deletion", "file_number": 167}
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410713935481, "job": 106, "event": "table_file_deletion", "file_number": 165}
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.829387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.935572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.935577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.935579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.935581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:11:53 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:11:53.935582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:11:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:54.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:55.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:11:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4142249881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:11:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:11:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4142249881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:11:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:56.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:56 np0005465987 nova_compute[230713]: 2025-10-02 13:11:56.928 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410701.9270072, ad7157fd-646c-48d6-8c7e-5ee0f80efd5f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:11:56 np0005465987 nova_compute[230713]: 2025-10-02 13:11:56.928 2 INFO nova.compute.manager [-] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:11:56 np0005465987 nova_compute[230713]: 2025-10-02 13:11:56.961 2 DEBUG nova.compute.manager [None req-08670383-8025-49b8-9795-8184843ecc27 - - - - - -] [instance: ad7157fd-646c-48d6-8c7e-5ee0f80efd5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:56 np0005465987 nova_compute[230713]: 2025-10-02 13:11:56.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:11:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:57.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:11:57 np0005465987 podman[314962]: 2025-10-02 13:11:57.864070115 +0000 UTC m=+0.078832538 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 09:11:57 np0005465987 podman[314963]: 2025-10-02 13:11:57.866104711 +0000 UTC m=+0.072133617 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:11:57 np0005465987 podman[314964]: 2025-10-02 13:11:57.897597405 +0000 UTC m=+0.099551841 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:11:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:11:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:58.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:11:58 np0005465987 nova_compute[230713]: 2025-10-02 13:11:58.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:11:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:59.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:00.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:01.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:01 np0005465987 nova_compute[230713]: 2025-10-02 13:12:01.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:12:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:02.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:12:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:03.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:03 np0005465987 nova_compute[230713]: 2025-10-02 13:12:03.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:12:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:04.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:12:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:05.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:06.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:06 np0005465987 nova_compute[230713]: 2025-10-02 13:12:06.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:07.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:12:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:08.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:12:08 np0005465987 nova_compute[230713]: 2025-10-02 13:12:08.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:09.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:10.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:11.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:12 np0005465987 nova_compute[230713]: 2025-10-02 13:12:12.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:12.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:13.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:13 np0005465987 nova_compute[230713]: 2025-10-02 13:12:13.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:12:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:14.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:12:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:12:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:15.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:12:16 np0005465987 podman[315199]: 2025-10-02 13:12:16.047169628 +0000 UTC m=+0.065053525 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 09:12:16 np0005465987 podman[315199]: 2025-10-02 13:12:16.152052301 +0000 UTC m=+0.169936178 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 09:12:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:16.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:17 np0005465987 nova_compute[230713]: 2025-10-02 13:12:17.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:17.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:12:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:12:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:12:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:12:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:18.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:18 np0005465987 nova_compute[230713]: 2025-10-02 13:12:18.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:12:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:12:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:12:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:19.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:12:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:20.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:12:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e420 e420: 3 total, 3 up, 3 in
Oct  2 09:12:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:21.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:22 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e421 e421: 3 total, 3 up, 3 in
Oct  2 09:12:22 np0005465987 nova_compute[230713]: 2025-10-02 13:12:22.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:22.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:23.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:23 np0005465987 nova_compute[230713]: 2025-10-02 13:12:23.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:23 np0005465987 podman[315449]: 2025-10-02 13:12:23.838515588 +0000 UTC m=+0.054774207 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:12:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:24.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:25 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:12:25 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:12:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:25.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:12:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:26.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:12:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:27.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:27 np0005465987 nova_compute[230713]: 2025-10-02 13:12:27.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:27 np0005465987 nova_compute[230713]: 2025-10-02 13:12:27.861 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:27 np0005465987 nova_compute[230713]: 2025-10-02 13:12:27.861 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:27 np0005465987 nova_compute[230713]: 2025-10-02 13:12:27.879 2 DEBUG nova.compute.manager [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:12:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:27.998 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:27.999 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:27 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:27.999 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.003 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.003 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.010 2 DEBUG nova.virt.hardware [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.010 2 INFO nova.compute.claims [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.104 2 DEBUG oslo_concurrency.processutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:28 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:12:28 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4154291980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.509 2 DEBUG oslo_concurrency.processutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.515 2 DEBUG nova.compute.provider_tree [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.543 2 DEBUG nova.scheduler.client.report [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:12:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:28.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.591 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.592 2 DEBUG nova.compute.manager [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.646 2 INFO nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.648 2 DEBUG nova.compute.manager [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.649 2 DEBUG nova.network.neutron [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.664 2 DEBUG nova.compute.manager [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.721 2 INFO nova.virt.block_device [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Booting with volume snapshot aa979228-d578-4819-bd72-9a5d1ac2ab6f at /dev/vda#033[00m
Oct  2 09:12:28 np0005465987 podman[315541]: 2025-10-02 13:12:28.839403933 +0000 UTC m=+0.054400006 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:12:28 np0005465987 podman[315540]: 2025-10-02 13:12:28.847322327 +0000 UTC m=+0.060520482 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:12:28 np0005465987 podman[315539]: 2025-10-02 13:12:28.866477767 +0000 UTC m=+0.086097576 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:12:28 np0005465987 nova_compute[230713]: 2025-10-02 13:12:28.968 2 DEBUG nova.policy [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c10de71fef00497981b8b7cec6a3fff3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fbbc6cb494464fd9b31f64c1ad75fa6b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:12:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:29.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:30 np0005465987 nova_compute[230713]: 2025-10-02 13:12:30.095 2 DEBUG nova.network.neutron [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Successfully created port: 3f7597d9-9a35-472b-adcd-575ddade339a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:12:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:30.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:30 np0005465987 nova_compute[230713]: 2025-10-02 13:12:30.947 2 DEBUG nova.network.neutron [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Successfully updated port: 3f7597d9-9a35-472b-adcd-575ddade339a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:12:30 np0005465987 nova_compute[230713]: 2025-10-02 13:12:30.971 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "refresh_cache-b66520bc-5f0e-4a9c-80bd-531f2a57dc67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:12:30 np0005465987 nova_compute[230713]: 2025-10-02 13:12:30.972 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquired lock "refresh_cache-b66520bc-5f0e-4a9c-80bd-531f2a57dc67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:12:30 np0005465987 nova_compute[230713]: 2025-10-02 13:12:30.972 2 DEBUG nova.network.neutron [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:12:31 np0005465987 nova_compute[230713]: 2025-10-02 13:12:31.072 2 DEBUG nova.compute.manager [req-9cbc883d-8336-4b04-bf2f-095b3456a23b req-be48d59b-71a7-4d93-b0e9-422e801c17ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Received event network-changed-3f7597d9-9a35-472b-adcd-575ddade339a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:12:31 np0005465987 nova_compute[230713]: 2025-10-02 13:12:31.073 2 DEBUG nova.compute.manager [req-9cbc883d-8336-4b04-bf2f-095b3456a23b req-be48d59b-71a7-4d93-b0e9-422e801c17ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Refreshing instance network info cache due to event network-changed-3f7597d9-9a35-472b-adcd-575ddade339a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:12:31 np0005465987 nova_compute[230713]: 2025-10-02 13:12:31.073 2 DEBUG oslo_concurrency.lockutils [req-9cbc883d-8336-4b04-bf2f-095b3456a23b req-be48d59b-71a7-4d93-b0e9-422e801c17ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-b66520bc-5f0e-4a9c-80bd-531f2a57dc67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:12:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:12:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:31.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:12:31 np0005465987 nova_compute[230713]: 2025-10-02 13:12:31.152 2 DEBUG nova.network.neutron [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:12:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:31 np0005465987 nova_compute[230713]: 2025-10-02 13:12:31.823 2 DEBUG nova.network.neutron [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Updating instance_info_cache with network_info: [{"id": "3f7597d9-9a35-472b-adcd-575ddade339a", "address": "fa:16:3e:ae:25:12", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f7597d9-9a", "ovs_interfaceid": "3f7597d9-9a35-472b-adcd-575ddade339a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:12:31 np0005465987 nova_compute[230713]: 2025-10-02 13:12:31.892 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Releasing lock "refresh_cache-b66520bc-5f0e-4a9c-80bd-531f2a57dc67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:12:31 np0005465987 nova_compute[230713]: 2025-10-02 13:12:31.893 2 DEBUG nova.compute.manager [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Instance network_info: |[{"id": "3f7597d9-9a35-472b-adcd-575ddade339a", "address": "fa:16:3e:ae:25:12", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f7597d9-9a", "ovs_interfaceid": "3f7597d9-9a35-472b-adcd-575ddade339a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:12:31 np0005465987 nova_compute[230713]: 2025-10-02 13:12:31.893 2 DEBUG oslo_concurrency.lockutils [req-9cbc883d-8336-4b04-bf2f-095b3456a23b req-be48d59b-71a7-4d93-b0e9-422e801c17ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-b66520bc-5f0e-4a9c-80bd-531f2a57dc67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:12:31 np0005465987 nova_compute[230713]: 2025-10-02 13:12:31.894 2 DEBUG nova.network.neutron [req-9cbc883d-8336-4b04-bf2f-095b3456a23b req-be48d59b-71a7-4d93-b0e9-422e801c17ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Refreshing network info cache for port 3f7597d9-9a35-472b-adcd-575ddade339a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:12:32 np0005465987 nova_compute[230713]: 2025-10-02 13:12:32.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:12:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:32.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:12:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:12:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:33.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.291 2 DEBUG os_brick.utils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.292 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.302 1357 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.303 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee7adad-7dda-4e95-a7e0-9253871b7b3b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.306 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.318 1357 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.318 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[fe293d5b-c9c4-48b4-9746-8c7d68532e22]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a46b55b0f33d', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.319 1357 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.328 1357 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.328 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[a63041a2-def1-4478-9754-e367453ad864]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.329 1357 DEBUG oslo.privsep.daemon [-] privsep: reply[801d7635-5c18-455f-a264-421e61997835]: (4, '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.330 2 DEBUG oslo_concurrency.processutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.360 2 DEBUG oslo_concurrency.processutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.362 2 DEBUG os_brick.initiator.connectors.lightos [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.362 2 DEBUG os_brick.initiator.connectors.lightos [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.362 2 DEBUG os_brick.initiator.connectors.lightos [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.362 2 DEBUG os_brick.utils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a46b55b0f33d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '1113ba57-9df9-4fb0-a2ed-0ca31cebd82a', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.363 2 DEBUG nova.virt.block_device [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Updating existing volume attachment record: 262bed8e-b429-414f-a538-1eb85633f691 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.885 2 DEBUG nova.network.neutron [req-9cbc883d-8336-4b04-bf2f-095b3456a23b req-be48d59b-71a7-4d93-b0e9-422e801c17ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Updated VIF entry in instance network info cache for port 3f7597d9-9a35-472b-adcd-575ddade339a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.885 2 DEBUG nova.network.neutron [req-9cbc883d-8336-4b04-bf2f-095b3456a23b req-be48d59b-71a7-4d93-b0e9-422e801c17ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Updating instance_info_cache with network_info: [{"id": "3f7597d9-9a35-472b-adcd-575ddade339a", "address": "fa:16:3e:ae:25:12", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f7597d9-9a", "ovs_interfaceid": "3f7597d9-9a35-472b-adcd-575ddade339a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:12:33 np0005465987 nova_compute[230713]: 2025-10-02 13:12:33.979 2 DEBUG oslo_concurrency.lockutils [req-9cbc883d-8336-4b04-bf2f-095b3456a23b req-be48d59b-71a7-4d93-b0e9-422e801c17ff d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-b66520bc-5f0e-4a9c-80bd-531f2a57dc67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:12:34 np0005465987 ovn_controller[129172]: 2025-10-02T13:12:34Z|00846|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct  2 09:12:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:12:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:34.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:12:34 np0005465987 nova_compute[230713]: 2025-10-02 13:12:34.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:35.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.299 2 DEBUG nova.compute.manager [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.302 2 DEBUG nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.303 2 INFO nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Creating image(s)#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.303 2 DEBUG nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.304 2 DEBUG nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Ensure instance console log exists: /var/lib/nova/instances/b66520bc-5f0e-4a9c-80bd-531f2a57dc67/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.305 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.305 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.306 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.311 2 DEBUG nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Start _get_guest_xml network_info=[{"id": "3f7597d9-9a35-472b-adcd-575ddade339a", "address": "fa:16:3e:ae:25:12", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f7597d9-9a", "ovs_interfaceid": "3f7597d9-9a35-472b-adcd-575ddade339a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-10-02T13:12:20Z,direct_url=<?>,disk_format='qcow2',id=f1af818e-71fa-4f13-bb73-d787866237a6,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-957026105',owner='fbbc6cb494464fd9b31f64c1ad75fa6b',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-10-02T13:12:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'device_type': 'disk', 'attachment_id': '262bed8e-b429-414f-a538-1eb85633f691', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6e153e1f-2b24-48a4-97ce-843802b3972c', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6e153e1f-2b24-48a4-97ce-843802b3972c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b66520bc-5f0e-4a9c-80bd-531f2a57dc67', 'attached_at': '', 'detached_at': '', 'volume_id': '6e153e1f-2b24-48a4-97ce-843802b3972c', 'serial': '6e153e1f-2b24-48a4-97ce-843802b3972c'}, 'guest_format': None, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.317 2 WARNING nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.322 2 DEBUG nova.virt.libvirt.host [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.323 2 DEBUG nova.virt.libvirt.host [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.326 2 DEBUG nova.virt.libvirt.host [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.326 2 DEBUG nova.virt.libvirt.host [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.328 2 DEBUG nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.329 2 DEBUG nova.virt.hardware [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-10-02T13:12:20Z,direct_url=<?>,disk_format='qcow2',id=f1af818e-71fa-4f13-bb73-d787866237a6,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-957026105',owner='fbbc6cb494464fd9b31f64c1ad75fa6b',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-10-02T13:12:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.329 2 DEBUG nova.virt.hardware [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.330 2 DEBUG nova.virt.hardware [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.330 2 DEBUG nova.virt.hardware [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.331 2 DEBUG nova.virt.hardware [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.331 2 DEBUG nova.virt.hardware [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.331 2 DEBUG nova.virt.hardware [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.332 2 DEBUG nova.virt.hardware [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.332 2 DEBUG nova.virt.hardware [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.333 2 DEBUG nova.virt.hardware [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.333 2 DEBUG nova.virt.hardware [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.372 2 DEBUG nova.storage.rbd_utils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] rbd image b66520bc-5f0e-4a9c-80bd-531f2a57dc67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.378 2 DEBUG oslo_concurrency.processutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:35 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:12:35 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2533678381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.814 2 DEBUG oslo_concurrency.processutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.836 2 DEBUG nova.virt.libvirt.vif [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1292515515',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1292515515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1292515515',id=217,image_ref='f1af818e-71fa-4f13-bb73-d787866237a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOIuVO5CxuTphSvUaPc8gFKei3KARwNihLWkEtWKSgQIliJdSQna+ccMewG8nJloc5ljPBcBu4Q1dKBUMIVwScako/6SXqbjCzqIzn1MO7mNgaJQGmHgG71n/ZRoq/R8ow==',key_name='tempest-keypair-1943707167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbbc6cb494464fd9b31f64c1ad75fa6b',ramdisk_id='',reservation_id='r-88s0gqkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1200415020',image_owner_user_name='tempest-TestVolumeBootPattern-1200415020-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1200415020',owner_user_name='tempest-TestVolumeBootPattern-1200415020-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:12:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c10de71fef00497981b8b7cec6a3fff3',uuid=b66520bc-5f0e-4a9c-80bd-531f2a57dc67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building'
) vif={"id": "3f7597d9-9a35-472b-adcd-575ddade339a", "address": "fa:16:3e:ae:25:12", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f7597d9-9a", "ovs_interfaceid": "3f7597d9-9a35-472b-adcd-575ddade339a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.837 2 DEBUG nova.network.os_vif_util [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converting VIF {"id": "3f7597d9-9a35-472b-adcd-575ddade339a", "address": "fa:16:3e:ae:25:12", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f7597d9-9a", "ovs_interfaceid": "3f7597d9-9a35-472b-adcd-575ddade339a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.838 2 DEBUG nova.network.os_vif_util [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:25:12,bridge_name='br-int',has_traffic_filtering=True,id=3f7597d9-9a35-472b-adcd-575ddade339a,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f7597d9-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.839 2 DEBUG nova.objects.instance [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lazy-loading 'pci_devices' on Instance uuid b66520bc-5f0e-4a9c-80bd-531f2a57dc67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.872 2 DEBUG nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  <uuid>b66520bc-5f0e-4a9c-80bd-531f2a57dc67</uuid>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  <name>instance-000000d9</name>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-1292515515</nova:name>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 13:12:35</nova:creationTime>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <nova:user uuid="c10de71fef00497981b8b7cec6a3fff3">tempest-TestVolumeBootPattern-1200415020-project-member</nova:user>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <nova:project uuid="fbbc6cb494464fd9b31f64c1ad75fa6b">tempest-TestVolumeBootPattern-1200415020</nova:project>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="f1af818e-71fa-4f13-bb73-d787866237a6"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <nova:port uuid="3f7597d9-9a35-472b-adcd-575ddade339a">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <system>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <entry name="serial">b66520bc-5f0e-4a9c-80bd-531f2a57dc67</entry>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <entry name="uuid">b66520bc-5f0e-4a9c-80bd-531f2a57dc67</entry>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    </system>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  <os>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  </os>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  <features>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  </features>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  </clock>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  <devices>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/b66520bc-5f0e-4a9c-80bd-531f2a57dc67_disk.config">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="volumes/volume-6e153e1f-2b24-48a4-97ce-843802b3972c">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <serial>6e153e1f-2b24-48a4-97ce-843802b3972c</serial>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:ae:25:12"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <target dev="tap3f7597d9-9a"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    </interface>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/b66520bc-5f0e-4a9c-80bd-531f2a57dc67/console.log" append="off"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    </serial>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <video>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    </video>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <input type="keyboard" bus="usb"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    </rng>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 09:12:35 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 09:12:35 np0005465987 nova_compute[230713]:  </devices>
Oct  2 09:12:35 np0005465987 nova_compute[230713]: </domain>
Oct  2 09:12:35 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.873 2 DEBUG nova.compute.manager [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Preparing to wait for external event network-vif-plugged-3f7597d9-9a35-472b-adcd-575ddade339a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.874 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.874 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.874 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.875 2 DEBUG nova.virt.libvirt.vif [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1292515515',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1292515515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1292515515',id=217,image_ref='f1af818e-71fa-4f13-bb73-d787866237a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOIuVO5CxuTphSvUaPc8gFKei3KARwNihLWkEtWKSgQIliJdSQna+ccMewG8nJloc5ljPBcBu4Q1dKBUMIVwScako/6SXqbjCzqIzn1MO7mNgaJQGmHgG71n/ZRoq/R8ow==',key_name='tempest-keypair-1943707167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbbc6cb494464fd9b31f64c1ad75fa6b',ramdisk_id='',reservation_id='r-88s0gqkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1200415020',image_owner_user_name='tempest-TestVolumeBootPattern-1200415020-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1200415020',owner_user_name='tempest-TestVolumeBootPattern-1200415020-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:12:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c10de71fef00497981b8b7cec6a3fff3',uuid=b66520bc-5f0e-4a9c-80bd-531f2a57dc67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state=
'building') vif={"id": "3f7597d9-9a35-472b-adcd-575ddade339a", "address": "fa:16:3e:ae:25:12", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f7597d9-9a", "ovs_interfaceid": "3f7597d9-9a35-472b-adcd-575ddade339a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.875 2 DEBUG nova.network.os_vif_util [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converting VIF {"id": "3f7597d9-9a35-472b-adcd-575ddade339a", "address": "fa:16:3e:ae:25:12", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f7597d9-9a", "ovs_interfaceid": "3f7597d9-9a35-472b-adcd-575ddade339a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.876 2 DEBUG nova.network.os_vif_util [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:25:12,bridge_name='br-int',has_traffic_filtering=True,id=3f7597d9-9a35-472b-adcd-575ddade339a,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f7597d9-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.876 2 DEBUG os_vif [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:25:12,bridge_name='br-int',has_traffic_filtering=True,id=3f7597d9-9a35-472b-adcd-575ddade339a,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f7597d9-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.880 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f7597d9-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.881 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3f7597d9-9a, col_values=(('external_ids', {'iface-id': '3f7597d9-9a35-472b-adcd-575ddade339a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:25:12', 'vm-uuid': 'b66520bc-5f0e-4a9c-80bd-531f2a57dc67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:35 np0005465987 NetworkManager[44910]: <info>  [1759410755.8833] manager: (tap3f7597d9-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:35 np0005465987 nova_compute[230713]: 2025-10-02 13:12:35.892 2 INFO os_vif [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:25:12,bridge_name='br-int',has_traffic_filtering=True,id=3f7597d9-9a35-472b-adcd-575ddade339a,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f7597d9-9a')#033[00m
Oct  2 09:12:36 np0005465987 nova_compute[230713]: 2025-10-02 13:12:36.026 2 DEBUG nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:12:36 np0005465987 nova_compute[230713]: 2025-10-02 13:12:36.027 2 DEBUG nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:12:36 np0005465987 nova_compute[230713]: 2025-10-02 13:12:36.028 2 DEBUG nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] No VIF found with MAC fa:16:3e:ae:25:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:12:36 np0005465987 nova_compute[230713]: 2025-10-02 13:12:36.029 2 INFO nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Using config drive#033[00m
Oct  2 09:12:36 np0005465987 nova_compute[230713]: 2025-10-02 13:12:36.062 2 DEBUG nova.storage.rbd_utils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] rbd image b66520bc-5f0e-4a9c-80bd-531f2a57dc67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:12:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:36.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:36 np0005465987 nova_compute[230713]: 2025-10-02 13:12:36.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:37 np0005465987 nova_compute[230713]: 2025-10-02 13:12:37.061 2 INFO nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Creating config drive at /var/lib/nova/instances/b66520bc-5f0e-4a9c-80bd-531f2a57dc67/disk.config#033[00m
Oct  2 09:12:37 np0005465987 nova_compute[230713]: 2025-10-02 13:12:37.070 2 DEBUG oslo_concurrency.processutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b66520bc-5f0e-4a9c-80bd-531f2a57dc67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8hlbjf8_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:37.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:37 np0005465987 nova_compute[230713]: 2025-10-02 13:12:37.208 2 DEBUG oslo_concurrency.processutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b66520bc-5f0e-4a9c-80bd-531f2a57dc67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8hlbjf8_" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:37 np0005465987 nova_compute[230713]: 2025-10-02 13:12:37.236 2 DEBUG nova.storage.rbd_utils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] rbd image b66520bc-5f0e-4a9c-80bd-531f2a57dc67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:12:37 np0005465987 nova_compute[230713]: 2025-10-02 13:12:37.240 2 DEBUG oslo_concurrency.processutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b66520bc-5f0e-4a9c-80bd-531f2a57dc67/disk.config b66520bc-5f0e-4a9c-80bd-531f2a57dc67_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:38 np0005465987 nova_compute[230713]: 2025-10-02 13:12:38.103 2 DEBUG oslo_concurrency.processutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b66520bc-5f0e-4a9c-80bd-531f2a57dc67/disk.config b66520bc-5f0e-4a9c-80bd-531f2a57dc67_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.863s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:38 np0005465987 nova_compute[230713]: 2025-10-02 13:12:38.105 2 INFO nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Deleting local config drive /var/lib/nova/instances/b66520bc-5f0e-4a9c-80bd-531f2a57dc67/disk.config because it was imported into RBD.#033[00m
Oct  2 09:12:38 np0005465987 kernel: tap3f7597d9-9a: entered promiscuous mode
Oct  2 09:12:38 np0005465987 NetworkManager[44910]: <info>  [1759410758.1641] manager: (tap3f7597d9-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/398)
Oct  2 09:12:38 np0005465987 nova_compute[230713]: 2025-10-02 13:12:38.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:38 np0005465987 nova_compute[230713]: 2025-10-02 13:12:38.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:38 np0005465987 ovn_controller[129172]: 2025-10-02T13:12:38Z|00847|binding|INFO|Claiming lport 3f7597d9-9a35-472b-adcd-575ddade339a for this chassis.
Oct  2 09:12:38 np0005465987 ovn_controller[129172]: 2025-10-02T13:12:38Z|00848|binding|INFO|3f7597d9-9a35-472b-adcd-575ddade339a: Claiming fa:16:3e:ae:25:12 10.100.0.5
Oct  2 09:12:38 np0005465987 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  2 09:12:38 np0005465987 NetworkManager[44910]: <info>  [1759410758.1887] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Oct  2 09:12:38 np0005465987 NetworkManager[44910]: <info>  [1759410758.1905] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Oct  2 09:12:38 np0005465987 nova_compute[230713]: 2025-10-02 13:12:38.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.190 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:25:12 10.100.0.5'], port_security=['fa:16:3e:ae:25:12 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b66520bc-5f0e-4a9c-80bd-531f2a57dc67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-150508fb-9217-4982-8468-977a3b53121a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbbc6cb494464fd9b31f64c1ad75fa6b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7bb440d4-ed26-40f9-a642-6d2e6ed6510f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d5e391d-23a7-4f5a-8146-0f24141a74f2, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=3f7597d9-9a35-472b-adcd-575ddade339a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.192 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 3f7597d9-9a35-472b-adcd-575ddade339a in datapath 150508fb-9217-4982-8468-977a3b53121a bound to our chassis#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.193 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 150508fb-9217-4982-8468-977a3b53121a#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.204 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5a01a37b-fb96-43ff-b269-ea2280b6d27e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.204 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap150508fb-91 in ovnmeta-150508fb-9217-4982-8468-977a3b53121a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:12:38 np0005465987 systemd-machined[188335]: New machine qemu-95-instance-000000d9.
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.207 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap150508fb-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.207 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[8e988cf5-1c8c-4f66-ac52-d3bdb1269225]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.207 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6f29f1-c391-4bc8-a3b7-b3cf749821c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 systemd-udevd[315725]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.218 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d4449d-f9e1-4684-b70b-bf3ca4c3ad88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 NetworkManager[44910]: <info>  [1759410758.2310] device (tap3f7597d9-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:12:38 np0005465987 NetworkManager[44910]: <info>  [1759410758.2334] device (tap3f7597d9-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:12:38 np0005465987 systemd[1]: Started Virtual Machine qemu-95-instance-000000d9.
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.245 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3b39853b-2fdd-449b-9979-39e7e36b1f6f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.278 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[80372386-e312-4418-9aa4-20533507c729]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.288 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[69c5ba91-94b3-49d0-a6b1-fe58ad02916d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 NetworkManager[44910]: <info>  [1759410758.2904] manager: (tap150508fb-90): new Veth device (/org/freedesktop/NetworkManager/Devices/401)
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.319 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f78f44-ee79-495e-982b-eef2e1a8c851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.321 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[16c922fb-b73f-4474-8d0c-dc3151b86c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 NetworkManager[44910]: <info>  [1759410758.3452] device (tap150508fb-90): carrier: link connected
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.390 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[af3ee307-84b2-492d-95e1-a3b871c9fda8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 nova_compute[230713]: 2025-10-02 13:12:38.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:38 np0005465987 nova_compute[230713]: 2025-10-02 13:12:38.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:38 np0005465987 ovn_controller[129172]: 2025-10-02T13:12:38Z|00849|binding|INFO|Setting lport 3f7597d9-9a35-472b-adcd-575ddade339a ovn-installed in OVS
Oct  2 09:12:38 np0005465987 ovn_controller[129172]: 2025-10-02T13:12:38Z|00850|binding|INFO|Setting lport 3f7597d9-9a35-472b-adcd-575ddade339a up in Southbound
Oct  2 09:12:38 np0005465987 nova_compute[230713]: 2025-10-02 13:12:38.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.419 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[041d4f14-fffe-4f01-a956-0ef7dd9a1711]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap150508fb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:69:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 882692, 'reachable_time': 20801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315756, 'error': None, 'target': 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.436 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac9263b-f354-4a26-a2c6-577aa708fa1d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:6993'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 882692, 'tstamp': 882692}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315757, 'error': None, 'target': 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.455 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4db62217-a01f-4f39-8eb6-84f3595bef3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap150508fb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:69:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 882692, 'reachable_time': 20801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315758, 'error': None, 'target': 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.498 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6f61b5dd-364b-44a7-8956-8448497b02b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:38.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.587 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3e379992-c67b-42f7-97c1-ddc94e99367b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.589 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap150508fb-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.590 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.590 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap150508fb-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:38 np0005465987 nova_compute[230713]: 2025-10-02 13:12:38.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:38 np0005465987 NetworkManager[44910]: <info>  [1759410758.5925] manager: (tap150508fb-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Oct  2 09:12:38 np0005465987 kernel: tap150508fb-90: entered promiscuous mode
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.595 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap150508fb-90, col_values=(('external_ids', {'iface-id': '2a2f4068-0f5b-4d26-b914-4d32097d8b55'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:38 np0005465987 nova_compute[230713]: 2025-10-02 13:12:38.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:38 np0005465987 ovn_controller[129172]: 2025-10-02T13:12:38Z|00851|binding|INFO|Releasing lport 2a2f4068-0f5b-4d26-b914-4d32097d8b55 from this chassis (sb_readonly=0)
Oct  2 09:12:38 np0005465987 nova_compute[230713]: 2025-10-02 13:12:38.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.626 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/150508fb-9217-4982-8468-977a3b53121a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/150508fb-9217-4982-8468-977a3b53121a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.627 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1a6fdb-66ab-4658-a06b-1e7c7d74657d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.628 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-150508fb-9217-4982-8468-977a3b53121a
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/150508fb-9217-4982-8468-977a3b53121a.pid.haproxy
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 150508fb-9217-4982-8468-977a3b53121a
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:12:38 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:12:38.630 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'env', 'PROCESS_TAG=haproxy-150508fb-9217-4982-8468-977a3b53121a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/150508fb-9217-4982-8468-977a3b53121a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:12:38 np0005465987 nova_compute[230713]: 2025-10-02 13:12:38.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:39 np0005465987 podman[315832]: 2025-10-02 13:12:39.112113288 +0000 UTC m=+0.099007725 container create 93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:12:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:39.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:39 np0005465987 podman[315832]: 2025-10-02 13:12:39.056411897 +0000 UTC m=+0.043306354 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:12:39 np0005465987 systemd[1]: Started libpod-conmon-93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5.scope.
Oct  2 09:12:39 np0005465987 systemd[1]: Started libcrun container.
Oct  2 09:12:39 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e8b865cc810cc68f90ad1c95cb0c0d249d549146a38ce557c212e05be2b278/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.202 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410759.2019122, b66520bc-5f0e-4a9c-80bd-531f2a57dc67 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.202 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] VM Started (Lifecycle Event)#033[00m
Oct  2 09:12:39 np0005465987 podman[315832]: 2025-10-02 13:12:39.218252365 +0000 UTC m=+0.205146802 container init 93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:12:39 np0005465987 podman[315832]: 2025-10-02 13:12:39.223441166 +0000 UTC m=+0.210335603 container start 93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.223 2 DEBUG nova.compute.manager [req-402fa401-0b83-4fc3-98ee-334a1c691493 req-12b698a3-0e7a-4791-b0c9-c763c7fd7395 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Received event network-vif-plugged-3f7597d9-9a35-472b-adcd-575ddade339a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.224 2 DEBUG oslo_concurrency.lockutils [req-402fa401-0b83-4fc3-98ee-334a1c691493 req-12b698a3-0e7a-4791-b0c9-c763c7fd7395 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.224 2 DEBUG oslo_concurrency.lockutils [req-402fa401-0b83-4fc3-98ee-334a1c691493 req-12b698a3-0e7a-4791-b0c9-c763c7fd7395 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.225 2 DEBUG oslo_concurrency.lockutils [req-402fa401-0b83-4fc3-98ee-334a1c691493 req-12b698a3-0e7a-4791-b0c9-c763c7fd7395 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.225 2 DEBUG nova.compute.manager [req-402fa401-0b83-4fc3-98ee-334a1c691493 req-12b698a3-0e7a-4791-b0c9-c763c7fd7395 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Processing event network-vif-plugged-3f7597d9-9a35-472b-adcd-575ddade339a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.225 2 DEBUG nova.compute.manager [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.230 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.232 2 DEBUG nova.virt.libvirt.driver [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.235 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.237 2 INFO nova.virt.libvirt.driver [-] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Instance spawned successfully.#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.238 2 INFO nova.compute.manager [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Took 3.94 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.238 2 DEBUG nova.compute.manager [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:12:39 np0005465987 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[315847]: [NOTICE]   (315851) : New worker (315853) forked
Oct  2 09:12:39 np0005465987 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[315847]: [NOTICE]   (315851) : Loading success.
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.270 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.270 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410759.2027857, b66520bc-5f0e-4a9c-80bd-531f2a57dc67 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.270 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.296 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.300 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410759.2299743, b66520bc-5f0e-4a9c-80bd-531f2a57dc67 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.300 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.309 2 INFO nova.compute.manager [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Took 11.39 seconds to build instance.#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.315 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.318 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:12:39 np0005465987 nova_compute[230713]: 2025-10-02 13:12:39.322 2 DEBUG oslo_concurrency.lockutils [None req-5eab3ef6-2bf6-4be1-b93a-2dd88d99aaf4 c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:40.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:40 np0005465987 nova_compute[230713]: 2025-10-02 13:12:40.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:41.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.325 2 DEBUG nova.compute.manager [req-3f6beb60-2822-45af-a94c-96b7f64ff982 req-c7f546c9-1b78-4dc8-89f8-0705e076cf78 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Received event network-vif-plugged-3f7597d9-9a35-472b-adcd-575ddade339a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.326 2 DEBUG oslo_concurrency.lockutils [req-3f6beb60-2822-45af-a94c-96b7f64ff982 req-c7f546c9-1b78-4dc8-89f8-0705e076cf78 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.326 2 DEBUG oslo_concurrency.lockutils [req-3f6beb60-2822-45af-a94c-96b7f64ff982 req-c7f546c9-1b78-4dc8-89f8-0705e076cf78 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.326 2 DEBUG oslo_concurrency.lockutils [req-3f6beb60-2822-45af-a94c-96b7f64ff982 req-c7f546c9-1b78-4dc8-89f8-0705e076cf78 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.327 2 DEBUG nova.compute.manager [req-3f6beb60-2822-45af-a94c-96b7f64ff982 req-c7f546c9-1b78-4dc8-89f8-0705e076cf78 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] No waiting events found dispatching network-vif-plugged-3f7597d9-9a35-472b-adcd-575ddade339a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.327 2 WARNING nova.compute.manager [req-3f6beb60-2822-45af-a94c-96b7f64ff982 req-c7f546c9-1b78-4dc8-89f8-0705e076cf78 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Received unexpected event network-vif-plugged-3f7597d9-9a35-472b-adcd-575ddade339a for instance with vm_state active and task_state None.#033[00m
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.886 2 DEBUG nova.compute.manager [req-5980be8a-334a-4790-a4d4-44c4005e8227 req-e59fcf1e-a9ac-43da-a6af-f75d3182ef4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Received event network-changed-3f7597d9-9a35-472b-adcd-575ddade339a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.886 2 DEBUG nova.compute.manager [req-5980be8a-334a-4790-a4d4-44c4005e8227 req-e59fcf1e-a9ac-43da-a6af-f75d3182ef4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Refreshing instance network info cache due to event network-changed-3f7597d9-9a35-472b-adcd-575ddade339a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.887 2 DEBUG oslo_concurrency.lockutils [req-5980be8a-334a-4790-a4d4-44c4005e8227 req-e59fcf1e-a9ac-43da-a6af-f75d3182ef4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-b66520bc-5f0e-4a9c-80bd-531f2a57dc67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.887 2 DEBUG oslo_concurrency.lockutils [req-5980be8a-334a-4790-a4d4-44c4005e8227 req-e59fcf1e-a9ac-43da-a6af-f75d3182ef4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-b66520bc-5f0e-4a9c-80bd-531f2a57dc67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.887 2 DEBUG nova.network.neutron [req-5980be8a-334a-4790-a4d4-44c4005e8227 req-e59fcf1e-a9ac-43da-a6af-f75d3182ef4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Refreshing network info cache for port 3f7597d9-9a35-472b-adcd-575ddade339a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:12:41 np0005465987 nova_compute[230713]: 2025-10-02 13:12:41.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:12:42 np0005465987 nova_compute[230713]: 2025-10-02 13:12:42.158 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-b66520bc-5f0e-4a9c-80bd-531f2a57dc67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:12:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:42.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:43.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:43 np0005465987 nova_compute[230713]: 2025-10-02 13:12:43.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:43 np0005465987 nova_compute[230713]: 2025-10-02 13:12:43.948 2 DEBUG nova.network.neutron [req-5980be8a-334a-4790-a4d4-44c4005e8227 req-e59fcf1e-a9ac-43da-a6af-f75d3182ef4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Updated VIF entry in instance network info cache for port 3f7597d9-9a35-472b-adcd-575ddade339a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:12:43 np0005465987 nova_compute[230713]: 2025-10-02 13:12:43.949 2 DEBUG nova.network.neutron [req-5980be8a-334a-4790-a4d4-44c4005e8227 req-e59fcf1e-a9ac-43da-a6af-f75d3182ef4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Updating instance_info_cache with network_info: [{"id": "3f7597d9-9a35-472b-adcd-575ddade339a", "address": "fa:16:3e:ae:25:12", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f7597d9-9a", "ovs_interfaceid": "3f7597d9-9a35-472b-adcd-575ddade339a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:12:44 np0005465987 nova_compute[230713]: 2025-10-02 13:12:44.064 2 DEBUG oslo_concurrency.lockutils [req-5980be8a-334a-4790-a4d4-44c4005e8227 req-e59fcf1e-a9ac-43da-a6af-f75d3182ef4c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-b66520bc-5f0e-4a9c-80bd-531f2a57dc67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:12:44 np0005465987 nova_compute[230713]: 2025-10-02 13:12:44.065 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-b66520bc-5f0e-4a9c-80bd-531f2a57dc67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:12:44 np0005465987 nova_compute[230713]: 2025-10-02 13:12:44.065 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:12:44 np0005465987 nova_compute[230713]: 2025-10-02 13:12:44.065 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b66520bc-5f0e-4a9c-80bd-531f2a57dc67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:12:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:12:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:44.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:12:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:45.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:45 np0005465987 nova_compute[230713]: 2025-10-02 13:12:45.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:46 np0005465987 nova_compute[230713]: 2025-10-02 13:12:45.999 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Updating instance_info_cache with network_info: [{"id": "3f7597d9-9a35-472b-adcd-575ddade339a", "address": "fa:16:3e:ae:25:12", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f7597d9-9a", "ovs_interfaceid": "3f7597d9-9a35-472b-adcd-575ddade339a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:12:46 np0005465987 nova_compute[230713]: 2025-10-02 13:12:46.032 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-b66520bc-5f0e-4a9c-80bd-531f2a57dc67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:12:46 np0005465987 nova_compute[230713]: 2025-10-02 13:12:46.032 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:12:46 np0005465987 nova_compute[230713]: 2025-10-02 13:12:46.032 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:46 np0005465987 nova_compute[230713]: 2025-10-02 13:12:46.033 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:46 np0005465987 nova_compute[230713]: 2025-10-02 13:12:46.033 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:12:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:46.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:12:46 np0005465987 nova_compute[230713]: 2025-10-02 13:12:46.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:46 np0005465987 nova_compute[230713]: 2025-10-02 13:12:46.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:46 np0005465987 nova_compute[230713]: 2025-10-02 13:12:46.987 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:46 np0005465987 nova_compute[230713]: 2025-10-02 13:12:46.987 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:46 np0005465987 nova_compute[230713]: 2025-10-02 13:12:46.988 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:46 np0005465987 nova_compute[230713]: 2025-10-02 13:12:46.988 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:12:46 np0005465987 nova_compute[230713]: 2025-10-02 13:12:46.988 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:47.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:12:47 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1928115156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:12:47 np0005465987 nova_compute[230713]: 2025-10-02 13:12:47.413 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:47 np0005465987 nova_compute[230713]: 2025-10-02 13:12:47.487 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000d9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:47 np0005465987 nova_compute[230713]: 2025-10-02 13:12:47.487 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000d9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:47 np0005465987 nova_compute[230713]: 2025-10-02 13:12:47.654 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:12:47 np0005465987 nova_compute[230713]: 2025-10-02 13:12:47.655 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4119MB free_disk=20.942352294921875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:12:47 np0005465987 nova_compute[230713]: 2025-10-02 13:12:47.656 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:47 np0005465987 nova_compute[230713]: 2025-10-02 13:12:47.656 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:47 np0005465987 nova_compute[230713]: 2025-10-02 13:12:47.743 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance b66520bc-5f0e-4a9c-80bd-531f2a57dc67 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:12:47 np0005465987 nova_compute[230713]: 2025-10-02 13:12:47.744 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:12:47 np0005465987 nova_compute[230713]: 2025-10-02 13:12:47.744 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:12:47 np0005465987 nova_compute[230713]: 2025-10-02 13:12:47.851 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:12:48 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3033142157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:12:48 np0005465987 nova_compute[230713]: 2025-10-02 13:12:48.256 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:48 np0005465987 nova_compute[230713]: 2025-10-02 13:12:48.262 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:12:48 np0005465987 nova_compute[230713]: 2025-10-02 13:12:48.280 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:12:48 np0005465987 nova_compute[230713]: 2025-10-02 13:12:48.305 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:12:48 np0005465987 nova_compute[230713]: 2025-10-02 13:12:48.305 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:12:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:48.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:12:48 np0005465987 nova_compute[230713]: 2025-10-02 13:12:48.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:49.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:12:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:50.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:12:50 np0005465987 nova_compute[230713]: 2025-10-02 13:12:50.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:51.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:52 np0005465987 nova_compute[230713]: 2025-10-02 13:12:52.305 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:52 np0005465987 nova_compute[230713]: 2025-10-02 13:12:52.306 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:12:52 np0005465987 ovn_controller[129172]: 2025-10-02T13:12:52Z|00103|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.5
Oct  2 09:12:52 np0005465987 ovn_controller[129172]: 2025-10-02T13:12:52Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:ae:25:12 10.100.0.5
Oct  2 09:12:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:52.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:12:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:53.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:12:53 np0005465987 nova_compute[230713]: 2025-10-02 13:12:53.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:54.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:54 np0005465987 podman[315908]: 2025-10-02 13:12:54.84542334 +0000 UTC m=+0.060110850 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:12:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:12:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 59K writes, 225K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.04 MB/s#012Cumulative WAL: 59K writes, 22K syncs, 2.68 writes per sync, written: 0.22 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3818 writes, 14K keys, 3818 commit groups, 1.0 writes per commit group, ingest: 14.28 MB, 0.02 MB/s#012Interval WAL: 3818 writes, 1605 syncs, 2.38 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e08aff610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558e08aff610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction,
Oct  2 09:12:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:12:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3726214819' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:12:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:12:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3726214819' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:12:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:55.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:55 np0005465987 nova_compute[230713]: 2025-10-02 13:12:55.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:55 np0005465987 ovn_controller[129172]: 2025-10-02T13:12:55Z|00105|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.5
Oct  2 09:12:55 np0005465987 ovn_controller[129172]: 2025-10-02T13:12:55Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:ae:25:12 10.100.0.5
Oct  2 09:12:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:56.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:12:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:57.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:12:57 np0005465987 ovn_controller[129172]: 2025-10-02T13:12:57Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:25:12 10.100.0.5
Oct  2 09:12:57 np0005465987 ovn_controller[129172]: 2025-10-02T13:12:57Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:25:12 10.100.0.5
Oct  2 09:12:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:58.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:58 np0005465987 nova_compute[230713]: 2025-10-02 13:12:58.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:12:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:12:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:59.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:12:59 np0005465987 podman[315930]: 2025-10-02 13:12:59.835942464 +0000 UTC m=+0.053505631 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid)
Oct  2 09:12:59 np0005465987 podman[315931]: 2025-10-02 13:12:59.846739517 +0000 UTC m=+0.061286613 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:12:59 np0005465987 podman[315929]: 2025-10-02 13:12:59.905837589 +0000 UTC m=+0.123438397 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:13:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:00.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:00 np0005465987 nova_compute[230713]: 2025-10-02 13:13:00.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:01.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:02.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:03.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:03 np0005465987 nova_compute[230713]: 2025-10-02 13:13:03.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:04.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:05.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:05 np0005465987 nova_compute[230713]: 2025-10-02 13:13:05.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:06.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:07.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:07 np0005465987 nova_compute[230713]: 2025-10-02 13:13:07.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:08.081 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:13:08 np0005465987 nova_compute[230713]: 2025-10-02 13:13:08.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:08 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:08.082 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:13:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:08.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:08 np0005465987 nova_compute[230713]: 2025-10-02 13:13:08.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:09.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:10.085 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:10.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:10 np0005465987 nova_compute[230713]: 2025-10-02 13:13:10.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:11.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:12.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:13.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:13 np0005465987 nova_compute[230713]: 2025-10-02 13:13:13.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:14 np0005465987 ovn_controller[129172]: 2025-10-02T13:13:14Z|00852|binding|INFO|Releasing lport 2a2f4068-0f5b-4d26-b914-4d32097d8b55 from this chassis (sb_readonly=0)
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.460 2 DEBUG oslo_concurrency.lockutils [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.461 2 DEBUG oslo_concurrency.lockutils [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.461 2 DEBUG oslo_concurrency.lockutils [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.461 2 DEBUG oslo_concurrency.lockutils [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.462 2 DEBUG oslo_concurrency.lockutils [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.463 2 INFO nova.compute.manager [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Terminating instance#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.464 2 DEBUG nova.compute.manager [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:13:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:14.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:13:14 np0005465987 kernel: tap3f7597d9-9a (unregistering): left promiscuous mode
Oct  2 09:13:14 np0005465987 NetworkManager[44910]: <info>  [1759410794.6993] device (tap3f7597d9-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:13:14 np0005465987 ovn_controller[129172]: 2025-10-02T13:13:14Z|00853|binding|INFO|Releasing lport 3f7597d9-9a35-472b-adcd-575ddade339a from this chassis (sb_readonly=0)
Oct  2 09:13:14 np0005465987 ovn_controller[129172]: 2025-10-02T13:13:14Z|00854|binding|INFO|Setting lport 3f7597d9-9a35-472b-adcd-575ddade339a down in Southbound
Oct  2 09:13:14 np0005465987 ovn_controller[129172]: 2025-10-02T13:13:14Z|00855|binding|INFO|Removing iface tap3f7597d9-9a ovn-installed in OVS
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:14.715 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:25:12 10.100.0.5'], port_security=['fa:16:3e:ae:25:12 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b66520bc-5f0e-4a9c-80bd-531f2a57dc67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-150508fb-9217-4982-8468-977a3b53121a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbbc6cb494464fd9b31f64c1ad75fa6b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7bb440d4-ed26-40f9-a642-6d2e6ed6510f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d5e391d-23a7-4f5a-8146-0f24141a74f2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=3f7597d9-9a35-472b-adcd-575ddade339a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:13:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:14.716 138325 INFO neutron.agent.ovn.metadata.agent [-] Port 3f7597d9-9a35-472b-adcd-575ddade339a in datapath 150508fb-9217-4982-8468-977a3b53121a unbound from our chassis#033[00m
Oct  2 09:13:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:14.717 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 150508fb-9217-4982-8468-977a3b53121a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:13:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:14.719 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[48c4c737-71c5-4299-886a-88ce39043df9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:14.719 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-150508fb-9217-4982-8468-977a3b53121a namespace which is not needed anymore#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:14 np0005465987 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000d9.scope: Deactivated successfully.
Oct  2 09:13:14 np0005465987 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000d9.scope: Consumed 15.293s CPU time.
Oct  2 09:13:14 np0005465987 systemd-machined[188335]: Machine qemu-95-instance-000000d9 terminated.
Oct  2 09:13:14 np0005465987 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[315847]: [NOTICE]   (315851) : haproxy version is 2.8.14-c23fe91
Oct  2 09:13:14 np0005465987 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[315847]: [NOTICE]   (315851) : path to executable is /usr/sbin/haproxy
Oct  2 09:13:14 np0005465987 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[315847]: [WARNING]  (315851) : Exiting Master process...
Oct  2 09:13:14 np0005465987 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[315847]: [ALERT]    (315851) : Current worker (315853) exited with code 143 (Terminated)
Oct  2 09:13:14 np0005465987 neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a[315847]: [WARNING]  (315851) : All workers exited. Exiting... (0)
Oct  2 09:13:14 np0005465987 systemd[1]: libpod-93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5.scope: Deactivated successfully.
Oct  2 09:13:14 np0005465987 podman[316019]: 2025-10-02 13:13:14.848849369 +0000 UTC m=+0.045954137 container died 93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:13:14 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5-userdata-shm.mount: Deactivated successfully.
Oct  2 09:13:14 np0005465987 systemd[1]: var-lib-containers-storage-overlay-13e8b865cc810cc68f90ad1c95cb0c0d249d549146a38ce557c212e05be2b278-merged.mount: Deactivated successfully.
Oct  2 09:13:14 np0005465987 podman[316019]: 2025-10-02 13:13:14.896420509 +0000 UTC m=+0.093525267 container cleanup 93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:14 np0005465987 systemd[1]: libpod-conmon-93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5.scope: Deactivated successfully.
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.916 2 INFO nova.virt.libvirt.driver [-] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Instance destroyed successfully.#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.916 2 DEBUG nova.objects.instance [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lazy-loading 'resources' on Instance uuid b66520bc-5f0e-4a9c-80bd-531f2a57dc67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.933 2 DEBUG nova.virt.libvirt.vif [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1292515515',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1292515515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1292515515',id=217,image_ref='f1af818e-71fa-4f13-bb73-d787866237a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOIuVO5CxuTphSvUaPc8gFKei3KARwNihLWkEtWKSgQIliJdSQna+ccMewG8nJloc5ljPBcBu4Q1dKBUMIVwScako/6SXqbjCzqIzn1MO7mNgaJQGmHgG71n/ZRoq/R8ow==',key_name='tempest-keypair-1943707167',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:12:39Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fbbc6cb494464fd9b31f64c1ad75fa6b',ramdisk_id='',reservation_id='r-88s0gqkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1200415020',image_owner_user_name='tempest-TestVolumeBootPattern-1200415020-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1200415020',owner_user_name='tempest-TestVolumeBootPattern-1200415020-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:12:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c10de71fef00497981b8b7cec6a3fff3',uuid=b66520bc-5f0e-4a9c-80bd-531f2a57dc67,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": 
"3f7597d9-9a35-472b-adcd-575ddade339a", "address": "fa:16:3e:ae:25:12", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f7597d9-9a", "ovs_interfaceid": "3f7597d9-9a35-472b-adcd-575ddade339a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.935 2 DEBUG nova.network.os_vif_util [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converting VIF {"id": "3f7597d9-9a35-472b-adcd-575ddade339a", "address": "fa:16:3e:ae:25:12", "network": {"id": "150508fb-9217-4982-8468-977a3b53121a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1348951324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbbc6cb494464fd9b31f64c1ad75fa6b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f7597d9-9a", "ovs_interfaceid": "3f7597d9-9a35-472b-adcd-575ddade339a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.936 2 DEBUG nova.network.os_vif_util [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:25:12,bridge_name='br-int',has_traffic_filtering=True,id=3f7597d9-9a35-472b-adcd-575ddade339a,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f7597d9-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.937 2 DEBUG os_vif [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:25:12,bridge_name='br-int',has_traffic_filtering=True,id=3f7597d9-9a35-472b-adcd-575ddade339a,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f7597d9-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.939 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f7597d9-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.947 2 INFO os_vif [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:25:12,bridge_name='br-int',has_traffic_filtering=True,id=3f7597d9-9a35-472b-adcd-575ddade339a,network=Network(150508fb-9217-4982-8468-977a3b53121a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f7597d9-9a')#033[00m
Oct  2 09:13:14 np0005465987 podman[316055]: 2025-10-02 13:13:14.966055337 +0000 UTC m=+0.045968008 container remove 93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:13:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:14.972 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[153fe50a-8068-41f1-bc89-60a8214a38e9]: (4, ('Thu Oct  2 01:13:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a (93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5)\n93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5\nThu Oct  2 01:13:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-150508fb-9217-4982-8468-977a3b53121a (93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5)\n93e22ab28febb83070209257c423512bcdb93f710851c76660064a976ac266a5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:14.974 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3f17c9-05ac-47ea-9f48-df0869da9e08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:14.975 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap150508fb-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:14 np0005465987 kernel: tap150508fb-90: left promiscuous mode
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:14 np0005465987 nova_compute[230713]: 2025-10-02 13:13:14.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:14 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:14.994 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3e16a070-9dcd-45a6-bc76-54af6cccdb7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:15.028 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[86f43aa2-f42b-4fcf-935b-6a3e1cab629b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:15.029 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[23e01e0c-158c-4d14-b07a-706440c5c32a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:15.044 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6bea2d-60fb-4852-9009-d5225f3f5f2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 882684, 'reachable_time': 16701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316093, 'error': None, 'target': 'ovnmeta-150508fb-9217-4982-8468-977a3b53121a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:15 np0005465987 systemd[1]: run-netns-ovnmeta\x2d150508fb\x2d9217\x2d4982\x2d8468\x2d977a3b53121a.mount: Deactivated successfully.
Oct  2 09:13:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:15.048 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-150508fb-9217-4982-8468-977a3b53121a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:13:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:15.049 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[9769b12b-447b-4573-9019-a42f24f6fb58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:15 np0005465987 nova_compute[230713]: 2025-10-02 13:13:15.183 2 DEBUG nova.compute.manager [req-455598eb-fa11-441d-8b8f-385373944cee req-0791bd87-836a-44ed-98b2-0bb13cdb73b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Received event network-vif-unplugged-3f7597d9-9a35-472b-adcd-575ddade339a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:15 np0005465987 nova_compute[230713]: 2025-10-02 13:13:15.183 2 DEBUG oslo_concurrency.lockutils [req-455598eb-fa11-441d-8b8f-385373944cee req-0791bd87-836a-44ed-98b2-0bb13cdb73b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:15 np0005465987 nova_compute[230713]: 2025-10-02 13:13:15.184 2 DEBUG oslo_concurrency.lockutils [req-455598eb-fa11-441d-8b8f-385373944cee req-0791bd87-836a-44ed-98b2-0bb13cdb73b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:15 np0005465987 nova_compute[230713]: 2025-10-02 13:13:15.184 2 DEBUG oslo_concurrency.lockutils [req-455598eb-fa11-441d-8b8f-385373944cee req-0791bd87-836a-44ed-98b2-0bb13cdb73b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:15 np0005465987 nova_compute[230713]: 2025-10-02 13:13:15.184 2 DEBUG nova.compute.manager [req-455598eb-fa11-441d-8b8f-385373944cee req-0791bd87-836a-44ed-98b2-0bb13cdb73b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] No waiting events found dispatching network-vif-unplugged-3f7597d9-9a35-472b-adcd-575ddade339a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:13:15 np0005465987 nova_compute[230713]: 2025-10-02 13:13:15.185 2 DEBUG nova.compute.manager [req-455598eb-fa11-441d-8b8f-385373944cee req-0791bd87-836a-44ed-98b2-0bb13cdb73b0 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Received event network-vif-unplugged-3f7597d9-9a35-472b-adcd-575ddade339a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:13:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:15.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:13:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:16.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:13:17 np0005465987 nova_compute[230713]: 2025-10-02 13:13:17.161 2 INFO nova.virt.libvirt.driver [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Deleting instance files /var/lib/nova/instances/b66520bc-5f0e-4a9c-80bd-531f2a57dc67_del#033[00m
Oct  2 09:13:17 np0005465987 nova_compute[230713]: 2025-10-02 13:13:17.162 2 INFO nova.virt.libvirt.driver [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Deletion of /var/lib/nova/instances/b66520bc-5f0e-4a9c-80bd-531f2a57dc67_del complete#033[00m
Oct  2 09:13:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:17.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:17 np0005465987 nova_compute[230713]: 2025-10-02 13:13:17.222 2 INFO nova.compute.manager [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Took 2.76 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:13:17 np0005465987 nova_compute[230713]: 2025-10-02 13:13:17.225 2 DEBUG oslo.service.loopingcall [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:13:17 np0005465987 nova_compute[230713]: 2025-10-02 13:13:17.226 2 DEBUG nova.compute.manager [-] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:13:17 np0005465987 nova_compute[230713]: 2025-10-02 13:13:17.227 2 DEBUG nova.network.neutron [-] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:13:17 np0005465987 nova_compute[230713]: 2025-10-02 13:13:17.293 2 DEBUG nova.compute.manager [req-3e6797dd-9b47-4de5-8047-cada069c67b0 req-8db4e9cc-0542-48b1-8a1e-a630a51d9741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Received event network-vif-plugged-3f7597d9-9a35-472b-adcd-575ddade339a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:17 np0005465987 nova_compute[230713]: 2025-10-02 13:13:17.294 2 DEBUG oslo_concurrency.lockutils [req-3e6797dd-9b47-4de5-8047-cada069c67b0 req-8db4e9cc-0542-48b1-8a1e-a630a51d9741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:17 np0005465987 nova_compute[230713]: 2025-10-02 13:13:17.295 2 DEBUG oslo_concurrency.lockutils [req-3e6797dd-9b47-4de5-8047-cada069c67b0 req-8db4e9cc-0542-48b1-8a1e-a630a51d9741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:17 np0005465987 nova_compute[230713]: 2025-10-02 13:13:17.296 2 DEBUG oslo_concurrency.lockutils [req-3e6797dd-9b47-4de5-8047-cada069c67b0 req-8db4e9cc-0542-48b1-8a1e-a630a51d9741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:17 np0005465987 nova_compute[230713]: 2025-10-02 13:13:17.296 2 DEBUG nova.compute.manager [req-3e6797dd-9b47-4de5-8047-cada069c67b0 req-8db4e9cc-0542-48b1-8a1e-a630a51d9741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] No waiting events found dispatching network-vif-plugged-3f7597d9-9a35-472b-adcd-575ddade339a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:13:17 np0005465987 nova_compute[230713]: 2025-10-02 13:13:17.297 2 WARNING nova.compute.manager [req-3e6797dd-9b47-4de5-8047-cada069c67b0 req-8db4e9cc-0542-48b1-8a1e-a630a51d9741 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Received unexpected event network-vif-plugged-3f7597d9-9a35-472b-adcd-575ddade339a for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:13:18 np0005465987 nova_compute[230713]: 2025-10-02 13:13:18.499 2 DEBUG nova.network.neutron [-] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:13:18 np0005465987 nova_compute[230713]: 2025-10-02 13:13:18.523 2 INFO nova.compute.manager [-] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Took 1.30 seconds to deallocate network for instance.#033[00m
Oct  2 09:13:18 np0005465987 nova_compute[230713]: 2025-10-02 13:13:18.614 2 DEBUG nova.compute.manager [req-d7bb7a7a-6c2b-4c97-a143-2a75a4df8558 req-c0a54442-5fb4-4bbd-a1aa-12cb7f2c0008 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Received event network-vif-deleted-3f7597d9-9a35-472b-adcd-575ddade339a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:18.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:18 np0005465987 nova_compute[230713]: 2025-10-02 13:13:18.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:18 np0005465987 nova_compute[230713]: 2025-10-02 13:13:18.734 2 INFO nova.compute.manager [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Took 0.21 seconds to detach 1 volumes for instance.#033[00m
Oct  2 09:13:18 np0005465987 nova_compute[230713]: 2025-10-02 13:13:18.736 2 DEBUG nova.compute.manager [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Deleting volume: 6e153e1f-2b24-48a4-97ce-843802b3972c _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct  2 09:13:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:19.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:19 np0005465987 nova_compute[230713]: 2025-10-02 13:13:19.293 2 DEBUG oslo_concurrency.lockutils [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:19 np0005465987 nova_compute[230713]: 2025-10-02 13:13:19.293 2 DEBUG oslo_concurrency.lockutils [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:19 np0005465987 nova_compute[230713]: 2025-10-02 13:13:19.350 2 DEBUG oslo_concurrency.processutils [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/794214939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:19 np0005465987 nova_compute[230713]: 2025-10-02 13:13:19.792 2 DEBUG oslo_concurrency.processutils [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:19 np0005465987 nova_compute[230713]: 2025-10-02 13:13:19.800 2 DEBUG nova.compute.provider_tree [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:13:19 np0005465987 nova_compute[230713]: 2025-10-02 13:13:19.815 2 DEBUG nova.scheduler.client.report [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:13:19 np0005465987 nova_compute[230713]: 2025-10-02 13:13:19.839 2 DEBUG oslo_concurrency.lockutils [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:19 np0005465987 nova_compute[230713]: 2025-10-02 13:13:19.867 2 INFO nova.scheduler.client.report [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Deleted allocations for instance b66520bc-5f0e-4a9c-80bd-531f2a57dc67#033[00m
Oct  2 09:13:19 np0005465987 nova_compute[230713]: 2025-10-02 13:13:19.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:19 np0005465987 nova_compute[230713]: 2025-10-02 13:13:19.954 2 DEBUG oslo_concurrency.lockutils [None req-0a1b41d2-35c6-4680-b9ec-c3f646d3d2fe c10de71fef00497981b8b7cec6a3fff3 fbbc6cb494464fd9b31f64c1ad75fa6b - - default default] Lock "b66520bc-5f0e-4a9c-80bd-531f2a57dc67" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:13:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:20.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:13:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:21.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:22.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:23.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:23 np0005465987 nova_compute[230713]: 2025-10-02 13:13:23.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:13:23 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1955584302' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:13:23 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:13:23 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1955584302' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:13:23 np0005465987 nova_compute[230713]: 2025-10-02 13:13:23.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:24.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:24 np0005465987 nova_compute[230713]: 2025-10-02 13:13:24.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:25.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:25 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e422 e422: 3 total, 3 up, 3 in
Oct  2 09:13:25 np0005465987 podman[316248]: 2025-10-02 13:13:25.83847364 +0000 UTC m=+0.057579812 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:13:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:26 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:13:26 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:13:26 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:13:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:13:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:26.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:13:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:27.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:27.999 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:28.000 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:28.000 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:13:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:28.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:13:28 np0005465987 nova_compute[230713]: 2025-10-02 13:13:28.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:29.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:29 np0005465987 nova_compute[230713]: 2025-10-02 13:13:29.914 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410794.9127567, b66520bc-5f0e-4a9c-80bd-531f2a57dc67 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:13:29 np0005465987 nova_compute[230713]: 2025-10-02 13:13:29.914 2 INFO nova.compute.manager [-] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:13:29 np0005465987 nova_compute[230713]: 2025-10-02 13:13:29.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:29 np0005465987 nova_compute[230713]: 2025-10-02 13:13:29.959 2 DEBUG nova.compute.manager [None req-beb473cc-0a4e-464b-840a-d9de38e7aa22 - - - - - -] [instance: b66520bc-5f0e-4a9c-80bd-531f2a57dc67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:13:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:30.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:30 np0005465987 podman[316267]: 2025-10-02 13:13:30.863220973 +0000 UTC m=+0.086005733 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 09:13:30 np0005465987 podman[316268]: 2025-10-02 13:13:30.865237457 +0000 UTC m=+0.085169439 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2)
Oct  2 09:13:30 np0005465987 podman[316269]: 2025-10-02 13:13:30.870288074 +0000 UTC m=+0.087612256 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:13:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:31.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:13:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 16K writes, 83K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.16 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1611 writes, 7605 keys, 1611 commit groups, 1.0 writes per commit group, ingest: 16.03 MB, 0.03 MB/s#012Interval WAL: 1611 writes, 1611 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     57.3      1.78              0.31        53    0.034       0      0       0.0       0.0#012  L6      1/0   12.75 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2    117.5    100.7      5.31              1.57        52    0.102    391K    28K       0.0       0.0#012 Sum      1/0   12.75 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2     88.0     89.8      7.09              1.89       105    0.068    391K    28K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8    158.6    157.3      0.44              0.21        10    0.044     52K   2614       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    117.5    100.7      5.31              1.57        52    0.102    391K    28K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     57.3      1.78              0.31        52    0.034       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.100, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.62 GB write, 0.11 MB/s write, 0.61 GB read, 0.10 MB/s read, 7.1 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9644871f0#2 capacity: 304.00 MB usage: 67.85 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000506 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3877,65.03 MB,21.3902%) FilterBlock(105,1.05 MB,0.346169%) IndexBlock(105,1.78 MB,0.583885%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 09:13:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e423 e423: 3 total, 3 up, 3 in
Oct  2 09:13:31 np0005465987 nova_compute[230713]: 2025-10-02 13:13:31.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:13:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:13:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:13:32 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/785670362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:13:32 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:13:32 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/785670362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:13:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:32.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:33.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:33 np0005465987 nova_compute[230713]: 2025-10-02 13:13:33.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:34.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:34 np0005465987 nova_compute[230713]: 2025-10-02 13:13:34.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:34 np0005465987 nova_compute[230713]: 2025-10-02 13:13:34.982 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:13:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:35.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:13:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:36 np0005465987 nova_compute[230713]: 2025-10-02 13:13:36.185 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "8913130a-880a-4f54-969a-4c7b4d447c14" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:36 np0005465987 nova_compute[230713]: 2025-10-02 13:13:36.186 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:36 np0005465987 nova_compute[230713]: 2025-10-02 13:13:36.212 2 DEBUG nova.compute.manager [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:13:36 np0005465987 nova_compute[230713]: 2025-10-02 13:13:36.391 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:36 np0005465987 nova_compute[230713]: 2025-10-02 13:13:36.391 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:36 np0005465987 nova_compute[230713]: 2025-10-02 13:13:36.405 2 DEBUG nova.virt.hardware [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:13:36 np0005465987 nova_compute[230713]: 2025-10-02 13:13:36.405 2 INFO nova.compute.claims [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:13:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:36.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e424 e424: 3 total, 3 up, 3 in
Oct  2 09:13:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:37.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:37 np0005465987 nova_compute[230713]: 2025-10-02 13:13:37.474 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:37 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:37 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/902856131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:37 np0005465987 nova_compute[230713]: 2025-10-02 13:13:37.906 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:37 np0005465987 nova_compute[230713]: 2025-10-02 13:13:37.912 2 DEBUG nova.compute.provider_tree [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:13:37 np0005465987 nova_compute[230713]: 2025-10-02 13:13:37.945 2 DEBUG nova.scheduler.client.report [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:13:37 np0005465987 nova_compute[230713]: 2025-10-02 13:13:37.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:37 np0005465987 nova_compute[230713]: 2025-10-02 13:13:37.966 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:37 np0005465987 nova_compute[230713]: 2025-10-02 13:13:37.966 2 DEBUG nova.compute.manager [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.025 2 DEBUG nova.compute.manager [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.026 2 DEBUG nova.network.neutron [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.050 2 INFO nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.069 2 DEBUG nova.compute.manager [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.201 2 DEBUG nova.compute.manager [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.202 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.203 2 INFO nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Creating image(s)#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.236 2 DEBUG nova.storage.rbd_utils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.265 2 DEBUG nova.storage.rbd_utils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.293 2 DEBUG nova.storage.rbd_utils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.296 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.323 2 DEBUG nova.policy [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ffe4d737e4414fb3a3e358f8ca3f3e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '08e102ae48244af2ab448a2e1ff757df', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.359 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.360 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.361 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.361 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.391 2 DEBUG nova.storage.rbd_utils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.395 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 8913130a-880a-4f54-969a-4c7b4d447c14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:13:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:38.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.696 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 8913130a-880a-4f54-969a-4c7b4d447c14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.805 2 DEBUG nova.storage.rbd_utils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] resizing rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.936 2 DEBUG nova.objects.instance [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'migration_context' on Instance uuid 8913130a-880a-4f54-969a-4c7b4d447c14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.951 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.952 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Ensure instance console log exists: /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.953 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.954 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:38 np0005465987 nova_compute[230713]: 2025-10-02 13:13:38.955 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:39.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:39 np0005465987 nova_compute[230713]: 2025-10-02 13:13:39.755 2 DEBUG nova.network.neutron [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Successfully created port: c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:13:39 np0005465987 nova_compute[230713]: 2025-10-02 13:13:39.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:13:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:40.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:13:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 e425: 3 total, 3 up, 3 in
Oct  2 09:13:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:41.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:41 np0005465987 nova_compute[230713]: 2025-10-02 13:13:41.662 2 DEBUG nova.network.neutron [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Successfully updated port: c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:13:41 np0005465987 nova_compute[230713]: 2025-10-02 13:13:41.676 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:13:41 np0005465987 nova_compute[230713]: 2025-10-02 13:13:41.676 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:13:41 np0005465987 nova_compute[230713]: 2025-10-02 13:13:41.677 2 DEBUG nova.network.neutron [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:13:41 np0005465987 nova_compute[230713]: 2025-10-02 13:13:41.800 2 DEBUG nova.compute.manager [req-9d8a370f-db07-4386-8d8e-aa77b57037f3 req-6402a1d8-f832-4baa-8dd5-0a019c11a537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received event network-changed-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:41 np0005465987 nova_compute[230713]: 2025-10-02 13:13:41.801 2 DEBUG nova.compute.manager [req-9d8a370f-db07-4386-8d8e-aa77b57037f3 req-6402a1d8-f832-4baa-8dd5-0a019c11a537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Refreshing instance network info cache due to event network-changed-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:13:41 np0005465987 nova_compute[230713]: 2025-10-02 13:13:41.801 2 DEBUG oslo_concurrency.lockutils [req-9d8a370f-db07-4386-8d8e-aa77b57037f3 req-6402a1d8-f832-4baa-8dd5-0a019c11a537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:13:41 np0005465987 nova_compute[230713]: 2025-10-02 13:13:41.853 2 DEBUG nova.network.neutron [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:13:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:42.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:42 np0005465987 nova_compute[230713]: 2025-10-02 13:13:42.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:42 np0005465987 nova_compute[230713]: 2025-10-02 13:13:42.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:13:42 np0005465987 nova_compute[230713]: 2025-10-02 13:13:42.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:13:42 np0005465987 nova_compute[230713]: 2025-10-02 13:13:42.998 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 09:13:42 np0005465987 nova_compute[230713]: 2025-10-02 13:13:42.998 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:13:42 np0005465987 nova_compute[230713]: 2025-10-02 13:13:42.999 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:43.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.243 2 DEBUG nova.network.neutron [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Updating instance_info_cache with network_info: [{"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.262 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.263 2 DEBUG nova.compute.manager [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Instance network_info: |[{"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.263 2 DEBUG oslo_concurrency.lockutils [req-9d8a370f-db07-4386-8d8e-aa77b57037f3 req-6402a1d8-f832-4baa-8dd5-0a019c11a537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.264 2 DEBUG nova.network.neutron [req-9d8a370f-db07-4386-8d8e-aa77b57037f3 req-6402a1d8-f832-4baa-8dd5-0a019c11a537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Refreshing network info cache for port c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.266 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Start _get_guest_xml network_info=[{"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.271 2 WARNING nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.276 2 DEBUG nova.virt.libvirt.host [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.276 2 DEBUG nova.virt.libvirt.host [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.279 2 DEBUG nova.virt.libvirt.host [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.280 2 DEBUG nova.virt.libvirt.host [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.281 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.282 2 DEBUG nova.virt.hardware [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.282 2 DEBUG nova.virt.hardware [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.282 2 DEBUG nova.virt.hardware [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.283 2 DEBUG nova.virt.hardware [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.283 2 DEBUG nova.virt.hardware [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.283 2 DEBUG nova.virt.hardware [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.284 2 DEBUG nova.virt.hardware [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.284 2 DEBUG nova.virt.hardware [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.284 2 DEBUG nova.virt.hardware [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.284 2 DEBUG nova.virt.hardware [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.285 2 DEBUG nova.virt.hardware [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.288 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:13:43 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/707081822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.729 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.758 2 DEBUG nova.storage.rbd_utils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:13:43 np0005465987 nova_compute[230713]: 2025-10-02 13:13:43.762 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:44 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:13:44 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1761052138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.187 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.189 2 DEBUG nova.virt.libvirt.vif [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1144969198',display_name='tempest-TestNetworkAdvancedServerOps-server-1144969198',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1144969198',id=218,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISBzb758icLI7PGci4RqL0CU+8vvNYOvJAX7zkqNygfviXgPTJODUdmp02nluRyTp1PDwlyRaifnYPPffvW7eJO4xZE3i1JSR26PcFkqmAlAlzhasGxkMnuaKA33kRTzg==',key_name='tempest-TestNetworkAdvancedServerOps-977712776',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-v6acwu7y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:13:38Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=8913130a-880a-4f54-969a-4c7b4d447c14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.190 2 DEBUG nova.network.os_vif_util [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.190 2 DEBUG nova.network.os_vif_util [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.191 2 DEBUG nova.objects.instance [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'pci_devices' on Instance uuid 8913130a-880a-4f54-969a-4c7b4d447c14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.210 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  <uuid>8913130a-880a-4f54-969a-4c7b4d447c14</uuid>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  <name>instance-000000da</name>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1144969198</nova:name>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 13:13:43</nova:creationTime>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <nova:user uuid="ffe4d737e4414fb3a3e358f8ca3f3e1e">tempest-TestNetworkAdvancedServerOps-1527846432-project-member</nova:user>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <nova:project uuid="08e102ae48244af2ab448a2e1ff757df">tempest-TestNetworkAdvancedServerOps-1527846432</nova:project>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <nova:port uuid="c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <system>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <entry name="serial">8913130a-880a-4f54-969a-4c7b4d447c14</entry>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <entry name="uuid">8913130a-880a-4f54-969a-4c7b4d447c14</entry>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    </system>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  <os>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  </os>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  <features>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  </features>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  </clock>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  <devices>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/8913130a-880a-4f54-969a-4c7b4d447c14_disk">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/8913130a-880a-4f54-969a-4c7b4d447c14_disk.config">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:c3:bb:0f"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <target dev="tapc1ddd8d4-4e"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    </interface>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/console.log" append="off"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    </serial>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <video>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    </video>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    </rng>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 09:13:44 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 09:13:44 np0005465987 nova_compute[230713]:  </devices>
Oct  2 09:13:44 np0005465987 nova_compute[230713]: </domain>
Oct  2 09:13:44 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.212 2 DEBUG nova.compute.manager [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Preparing to wait for external event network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.212 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.212 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.213 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.213 2 DEBUG nova.virt.libvirt.vif [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1144969198',display_name='tempest-TestNetworkAdvancedServerOps-server-1144969198',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1144969198',id=218,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISBzb758icLI7PGci4RqL0CU+8vvNYOvJAX7zkqNygfviXgPTJODUdmp02nluRyTp1PDwlyRaifnYPPffvW7eJO4xZE3i1JSR26PcFkqmAlAlzhasGxkMnuaKA33kRTzg==',key_name='tempest-TestNetworkAdvancedServerOps-977712776',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-v6acwu7y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:13:38Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=8913130a-880a-4f54-969a-4c7b4d447c14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.214 2 DEBUG nova.network.os_vif_util [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.215 2 DEBUG nova.network.os_vif_util [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.215 2 DEBUG os_vif [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1ddd8d4-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1ddd8d4-4e, col_values=(('external_ids', {'iface-id': 'c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:bb:0f', 'vm-uuid': '8913130a-880a-4f54-969a-4c7b4d447c14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:44 np0005465987 NetworkManager[44910]: <info>  [1759410824.2239] manager: (tapc1ddd8d4-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.231 2 INFO os_vif [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e')#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.307 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.308 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.308 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No VIF found with MAC fa:16:3e:c3:bb:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.308 2 INFO nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Using config drive#033[00m
Oct  2 09:13:44 np0005465987 nova_compute[230713]: 2025-10-02 13:13:44.338 2 DEBUG nova.storage.rbd_utils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:13:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:44.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:45 np0005465987 nova_compute[230713]: 2025-10-02 13:13:45.214 2 INFO nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Creating config drive at /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/disk.config#033[00m
Oct  2 09:13:45 np0005465987 nova_compute[230713]: 2025-10-02 13:13:45.221 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwj7ghyta execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:45.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:45 np0005465987 nova_compute[230713]: 2025-10-02 13:13:45.362 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwj7ghyta" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:45 np0005465987 nova_compute[230713]: 2025-10-02 13:13:45.398 2 DEBUG nova.storage.rbd_utils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:13:45 np0005465987 nova_compute[230713]: 2025-10-02 13:13:45.402 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/disk.config 8913130a-880a-4f54-969a-4c7b4d447c14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:45 np0005465987 nova_compute[230713]: 2025-10-02 13:13:45.653 2 DEBUG oslo_concurrency.processutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/disk.config 8913130a-880a-4f54-969a-4c7b4d447c14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:45 np0005465987 nova_compute[230713]: 2025-10-02 13:13:45.654 2 INFO nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Deleting local config drive /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/disk.config because it was imported into RBD.#033[00m
Oct  2 09:13:45 np0005465987 kernel: tapc1ddd8d4-4e: entered promiscuous mode
Oct  2 09:13:45 np0005465987 NetworkManager[44910]: <info>  [1759410825.7182] manager: (tapc1ddd8d4-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Oct  2 09:13:45 np0005465987 ovn_controller[129172]: 2025-10-02T13:13:45Z|00856|binding|INFO|Claiming lport c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 for this chassis.
Oct  2 09:13:45 np0005465987 ovn_controller[129172]: 2025-10-02T13:13:45Z|00857|binding|INFO|c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83: Claiming fa:16:3e:c3:bb:0f 10.100.0.7
Oct  2 09:13:45 np0005465987 nova_compute[230713]: 2025-10-02 13:13:45.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:45 np0005465987 ovn_controller[129172]: 2025-10-02T13:13:45Z|00858|binding|INFO|Setting lport c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 ovn-installed in OVS
Oct  2 09:13:45 np0005465987 ovn_controller[129172]: 2025-10-02T13:13:45Z|00859|binding|INFO|Setting lport c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 up in Southbound
Oct  2 09:13:45 np0005465987 nova_compute[230713]: 2025-10-02 13:13:45.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.732 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:bb:0f 10.100.0.7'], port_security=['fa:16:3e:c3:bb:0f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8913130a-880a-4f54-969a-4c7b4d447c14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fa90689-867b-4efc-b402-c838a2efa5d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c876d23a-6966-4e21-9725-773e3dbffee1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1abdba69-d4c9-46b5-9115-0d23b68bff9f, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.733 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 in datapath 7fa90689-867b-4efc-b402-c838a2efa5d6 bound to our chassis#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.734 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fa90689-867b-4efc-b402-c838a2efa5d6#033[00m
Oct  2 09:13:45 np0005465987 nova_compute[230713]: 2025-10-02 13:13:45.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:45 np0005465987 systemd-udevd[316706]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.748 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7a99ee-48a6-4669-9c11-6356e2a80192]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.749 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7fa90689-81 in ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.750 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7fa90689-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.750 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[69977c56-c92b-4438-9541-4f1a6ec8489e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.751 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a8efb26d-4739-4c4d-83f1-e4f8f46c970d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:45 np0005465987 systemd-machined[188335]: New machine qemu-96-instance-000000da.
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.764 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ff8071-9313-4cf6-bad2-95f505bb8fd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:45 np0005465987 NetworkManager[44910]: <info>  [1759410825.7665] device (tapc1ddd8d4-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:13:45 np0005465987 NetworkManager[44910]: <info>  [1759410825.7680] device (tapc1ddd8d4-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:13:45 np0005465987 systemd[1]: Started Virtual Machine qemu-96-instance-000000da.
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.791 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c719f9d7-2cd6-43dc-8e7a-fbee5786006d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:45 np0005465987 nova_compute[230713]: 2025-10-02 13:13:45.795 2 DEBUG nova.network.neutron [req-9d8a370f-db07-4386-8d8e-aa77b57037f3 req-6402a1d8-f832-4baa-8dd5-0a019c11a537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Updated VIF entry in instance network info cache for port c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:13:45 np0005465987 nova_compute[230713]: 2025-10-02 13:13:45.796 2 DEBUG nova.network.neutron [req-9d8a370f-db07-4386-8d8e-aa77b57037f3 req-6402a1d8-f832-4baa-8dd5-0a019c11a537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Updating instance_info_cache with network_info: [{"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:13:45 np0005465987 nova_compute[230713]: 2025-10-02 13:13:45.817 2 DEBUG oslo_concurrency.lockutils [req-9d8a370f-db07-4386-8d8e-aa77b57037f3 req-6402a1d8-f832-4baa-8dd5-0a019c11a537 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.823 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[0413e4e7-0200-400d-8441-6c670216e342]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.828 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ed30cd45-6b98-4fe4-8a14-dee55da32524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:45 np0005465987 NetworkManager[44910]: <info>  [1759410825.8293] manager: (tap7fa90689-80): new Veth device (/org/freedesktop/NetworkManager/Devices/405)
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.865 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[720909a3-c62f-45b9-82be-b3f1d87caa46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.869 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c41b4d-9f8a-4a31-92c7-eb33dcead1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:45 np0005465987 NetworkManager[44910]: <info>  [1759410825.8936] device (tap7fa90689-80): carrier: link connected
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.901 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[5f995626-26a3-45dc-9b7d-73f23a5fdc1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.918 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[320389e0-2041-472a-b735-30526a3e6637]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fa90689-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:c0:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889446, 'reachable_time': 16696, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316740, 'error': None, 'target': 'ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.937 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd9a259-2b5b-4a0f-ad39-f6770dd8f5a0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:c00d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 889446, 'tstamp': 889446}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316741, 'error': None, 'target': 'ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.952 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4055e7d6-1c1b-47a8-b743-1051d5086635]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fa90689-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:c0:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889446, 'reachable_time': 16696, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316742, 'error': None, 'target': 'ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:45 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:45.983 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f72a81-0f98-4ed0-abef-84f0122dd723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:46.042 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[3750aac3-c2fe-4876-a7cb-3d2b515ce7a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:46.043 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fa90689-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:46.043 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:46.044 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fa90689-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:46 np0005465987 kernel: tap7fa90689-80: entered promiscuous mode
Oct  2 09:13:46 np0005465987 NetworkManager[44910]: <info>  [1759410826.0460] manager: (tap7fa90689-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:46.048 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fa90689-80, col_values=(('external_ids', {'iface-id': '27daa8ca-e509-4d68-a752-fe0f6eb24a78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:46.050 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7fa90689-867b-4efc-b402-c838a2efa5d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7fa90689-867b-4efc-b402-c838a2efa5d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:13:46 np0005465987 ovn_controller[129172]: 2025-10-02T13:13:46Z|00860|binding|INFO|Releasing lport 27daa8ca-e509-4d68-a752-fe0f6eb24a78 from this chassis (sb_readonly=0)
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:46.051 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c0effebe-b076-44e1-91ec-f3fff93fec69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:46.052 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-7fa90689-867b-4efc-b402-c838a2efa5d6
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/7fa90689-867b-4efc-b402-c838a2efa5d6.pid.haproxy
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 7fa90689-867b-4efc-b402-c838a2efa5d6
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:13:46 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:13:46.053 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6', 'env', 'PROCESS_TAG=haproxy-7fa90689-867b-4efc-b402-c838a2efa5d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7fa90689-867b-4efc-b402-c838a2efa5d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.063 2 DEBUG nova.compute.manager [req-ff67450b-5341-410c-bb1d-15f02d4a755a req-d41c61f5-6a7c-4ea9-9dc6-083931b5ab77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received event network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.064 2 DEBUG oslo_concurrency.lockutils [req-ff67450b-5341-410c-bb1d-15f02d4a755a req-d41c61f5-6a7c-4ea9-9dc6-083931b5ab77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.064 2 DEBUG oslo_concurrency.lockutils [req-ff67450b-5341-410c-bb1d-15f02d4a755a req-d41c61f5-6a7c-4ea9-9dc6-083931b5ab77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.065 2 DEBUG oslo_concurrency.lockutils [req-ff67450b-5341-410c-bb1d-15f02d4a755a req-d41c61f5-6a7c-4ea9-9dc6-083931b5ab77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.065 2 DEBUG nova.compute.manager [req-ff67450b-5341-410c-bb1d-15f02d4a755a req-d41c61f5-6a7c-4ea9-9dc6-083931b5ab77 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Processing event network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:46 np0005465987 podman[316816]: 2025-10-02 13:13:46.419194121 +0000 UTC m=+0.047691585 container create a310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 09:13:46 np0005465987 systemd[1]: Started libpod-conmon-a310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21.scope.
Oct  2 09:13:46 np0005465987 systemd[1]: Started libcrun container.
Oct  2 09:13:46 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc4187f3a2c05867fd0271909d44ad178b52ba3bbbc8efaeb03a4510f0ab65a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:13:46 np0005465987 podman[316816]: 2025-10-02 13:13:46.395074967 +0000 UTC m=+0.023572451 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:13:46 np0005465987 podman[316816]: 2025-10-02 13:13:46.501264536 +0000 UTC m=+0.129762000 container init a310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:13:46 np0005465987 podman[316816]: 2025-10-02 13:13:46.506689663 +0000 UTC m=+0.135187117 container start a310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 09:13:46 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[316832]: [NOTICE]   (316836) : New worker (316838) forked
Oct  2 09:13:46 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[316832]: [NOTICE]   (316836) : Loading success.
Oct  2 09:13:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:46.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.722 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410826.7221227, 8913130a-880a-4f54-969a-4c7b4d447c14 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.723 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] VM Started (Lifecycle Event)#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.725 2 DEBUG nova.compute.manager [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.731 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.734 2 INFO nova.virt.libvirt.driver [-] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Instance spawned successfully.#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.735 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.766 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.771 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.772 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.772 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.773 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.773 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.774 2 DEBUG nova.virt.libvirt.driver [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.777 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.825 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.825 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410826.7222424, 8913130a-880a-4f54-969a-4c7b4d447c14 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.826 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.853 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.856 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410826.7289274, 8913130a-880a-4f54-969a-4c7b4d447c14 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.857 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.860 2 INFO nova.compute.manager [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Took 8.66 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.860 2 DEBUG nova.compute.manager [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.890 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.894 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.925 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.941 2 INFO nova.compute.manager [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Took 10.67 seconds to build instance.#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.963 2 DEBUG oslo_concurrency.lockutils [None req-28ada335-7ec4-4a6d-9644-573f81fba5ca ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:46 np0005465987 nova_compute[230713]: 2025-10-02 13:13:46.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:13:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:47.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:13:48 np0005465987 nova_compute[230713]: 2025-10-02 13:13:48.146 2 DEBUG nova.compute.manager [req-9c08a667-60b0-4330-8cad-0e1938d841f7 req-840a7451-75f1-4ad1-88da-d8cdde2fc151 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received event network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:48 np0005465987 nova_compute[230713]: 2025-10-02 13:13:48.146 2 DEBUG oslo_concurrency.lockutils [req-9c08a667-60b0-4330-8cad-0e1938d841f7 req-840a7451-75f1-4ad1-88da-d8cdde2fc151 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:48 np0005465987 nova_compute[230713]: 2025-10-02 13:13:48.147 2 DEBUG oslo_concurrency.lockutils [req-9c08a667-60b0-4330-8cad-0e1938d841f7 req-840a7451-75f1-4ad1-88da-d8cdde2fc151 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:48 np0005465987 nova_compute[230713]: 2025-10-02 13:13:48.147 2 DEBUG oslo_concurrency.lockutils [req-9c08a667-60b0-4330-8cad-0e1938d841f7 req-840a7451-75f1-4ad1-88da-d8cdde2fc151 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:48 np0005465987 nova_compute[230713]: 2025-10-02 13:13:48.147 2 DEBUG nova.compute.manager [req-9c08a667-60b0-4330-8cad-0e1938d841f7 req-840a7451-75f1-4ad1-88da-d8cdde2fc151 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] No waiting events found dispatching network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:13:48 np0005465987 nova_compute[230713]: 2025-10-02 13:13:48.147 2 WARNING nova.compute.manager [req-9c08a667-60b0-4330-8cad-0e1938d841f7 req-840a7451-75f1-4ad1-88da-d8cdde2fc151 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received unexpected event network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:13:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:13:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:48.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:13:48 np0005465987 nova_compute[230713]: 2025-10-02 13:13:48.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:48 np0005465987 nova_compute[230713]: 2025-10-02 13:13:48.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:48 np0005465987 nova_compute[230713]: 2025-10-02 13:13:48.999 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:48.999 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.000 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.000 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.000 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:49.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1417398520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.427 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.641 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000da as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.642 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000da as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.796 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.797 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4088MB free_disk=20.967517852783203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.797 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.798 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.863 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 8913130a-880a-4f54-969a-4c7b4d447c14 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.864 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.864 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:13:49 np0005465987 nova_compute[230713]: 2025-10-02 13:13:49.918 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:50 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3328327207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:50 np0005465987 nova_compute[230713]: 2025-10-02 13:13:50.366 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:50 np0005465987 nova_compute[230713]: 2025-10-02 13:13:50.371 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:13:50 np0005465987 nova_compute[230713]: 2025-10-02 13:13:50.396 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:13:50 np0005465987 nova_compute[230713]: 2025-10-02 13:13:50.427 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:13:50 np0005465987 nova_compute[230713]: 2025-10-02 13:13:50.428 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:50.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:51.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:51 np0005465987 nova_compute[230713]: 2025-10-02 13:13:51.720 2 DEBUG nova.compute.manager [req-08365e86-4d84-471b-b7a8-26882b1cdc2e req-7ba0867b-04c4-4f92-a586-53a302ac0bc8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received event network-changed-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:51 np0005465987 nova_compute[230713]: 2025-10-02 13:13:51.720 2 DEBUG nova.compute.manager [req-08365e86-4d84-471b-b7a8-26882b1cdc2e req-7ba0867b-04c4-4f92-a586-53a302ac0bc8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Refreshing instance network info cache due to event network-changed-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:13:51 np0005465987 nova_compute[230713]: 2025-10-02 13:13:51.721 2 DEBUG oslo_concurrency.lockutils [req-08365e86-4d84-471b-b7a8-26882b1cdc2e req-7ba0867b-04c4-4f92-a586-53a302ac0bc8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:13:51 np0005465987 nova_compute[230713]: 2025-10-02 13:13:51.721 2 DEBUG oslo_concurrency.lockutils [req-08365e86-4d84-471b-b7a8-26882b1cdc2e req-7ba0867b-04c4-4f92-a586-53a302ac0bc8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:13:51 np0005465987 nova_compute[230713]: 2025-10-02 13:13:51.721 2 DEBUG nova.network.neutron [req-08365e86-4d84-471b-b7a8-26882b1cdc2e req-7ba0867b-04c4-4f92-a586-53a302ac0bc8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Refreshing network info cache for port c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:13:52 np0005465987 nova_compute[230713]: 2025-10-02 13:13:52.430 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:52 np0005465987 nova_compute[230713]: 2025-10-02 13:13:52.431 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:13:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:13:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:52.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:13:53 np0005465987 nova_compute[230713]: 2025-10-02 13:13:53.049 2 DEBUG nova.network.neutron [req-08365e86-4d84-471b-b7a8-26882b1cdc2e req-7ba0867b-04c4-4f92-a586-53a302ac0bc8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Updated VIF entry in instance network info cache for port c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:13:53 np0005465987 nova_compute[230713]: 2025-10-02 13:13:53.049 2 DEBUG nova.network.neutron [req-08365e86-4d84-471b-b7a8-26882b1cdc2e req-7ba0867b-04c4-4f92-a586-53a302ac0bc8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Updating instance_info_cache with network_info: [{"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:13:53 np0005465987 nova_compute[230713]: 2025-10-02 13:13:53.074 2 DEBUG oslo_concurrency.lockutils [req-08365e86-4d84-471b-b7a8-26882b1cdc2e req-7ba0867b-04c4-4f92-a586-53a302ac0bc8 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:13:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:53.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:53 np0005465987 nova_compute[230713]: 2025-10-02 13:13:53.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:54 np0005465987 nova_compute[230713]: 2025-10-02 13:13:54.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:13:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:54.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:13:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:13:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/189000789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:13:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:13:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/189000789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:13:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:55.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.170576) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410836170676, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1576, "num_deletes": 252, "total_data_size": 3479351, "memory_usage": 3530136, "flush_reason": "Manual Compaction"}
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410836186128, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 1431528, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82173, "largest_seqno": 83744, "table_properties": {"data_size": 1426258, "index_size": 2537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13959, "raw_average_key_size": 21, "raw_value_size": 1414718, "raw_average_value_size": 2153, "num_data_blocks": 112, "num_entries": 657, "num_filter_entries": 657, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410714, "oldest_key_time": 1759410714, "file_creation_time": 1759410836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 15732 microseconds, and 9203 cpu microseconds.
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.186314) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 1431528 bytes OK
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.186393) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.187756) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.187779) EVENT_LOG_v1 {"time_micros": 1759410836187772, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.187803) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 3472099, prev total WAL file size 3472099, number of live WAL files 2.
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.189909) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373632' seq:72057594037927935, type:22 .. '6D6772737461740033303133' seq:0, type:0; will stop at (end)
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(1397KB)], [168(12MB)]
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410836189981, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 14795948, "oldest_snapshot_seqno": -1}
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 10378 keys, 11784323 bytes, temperature: kUnknown
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410836253076, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 11784323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11719664, "index_size": 37602, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25989, "raw_key_size": 273479, "raw_average_key_size": 26, "raw_value_size": 11540367, "raw_average_value_size": 1112, "num_data_blocks": 1425, "num_entries": 10378, "num_filter_entries": 10378, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759410836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.253304) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 11784323 bytes
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.254401) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.3 rd, 186.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.7 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(18.6) write-amplify(8.2) OK, records in: 10846, records dropped: 468 output_compression: NoCompression
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.254415) EVENT_LOG_v1 {"time_micros": 1759410836254409, "job": 108, "event": "compaction_finished", "compaction_time_micros": 63162, "compaction_time_cpu_micros": 31344, "output_level": 6, "num_output_files": 1, "total_output_size": 11784323, "num_input_records": 10846, "num_output_records": 10378, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410836254704, "job": 108, "event": "table_file_deletion", "file_number": 170}
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410836257089, "job": 108, "event": "table_file_deletion", "file_number": 168}
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.189765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.257159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.257166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.257167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.257168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:13:56.257170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:56.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:56 np0005465987 podman[316893]: 2025-10-02 13:13:56.830790171 +0000 UTC m=+0.051631421 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct  2 09:13:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:57.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:58.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:58 np0005465987 nova_compute[230713]: 2025-10-02 13:13:58.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:59 np0005465987 nova_compute[230713]: 2025-10-02 13:13:59.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:13:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:59.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:14:00 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/922732795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:14:00 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:00Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:bb:0f 10.100.0.7
Oct  2 09:14:00 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:00Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:bb:0f 10.100.0.7
Oct  2 09:14:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:00.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:01.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:01 np0005465987 podman[316915]: 2025-10-02 13:14:01.852504631 +0000 UTC m=+0.053140761 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:14:01 np0005465987 podman[316914]: 2025-10-02 13:14:01.852666986 +0000 UTC m=+0.056710140 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid)
Oct  2 09:14:01 np0005465987 podman[316913]: 2025-10-02 13:14:01.90228939 +0000 UTC m=+0.109519270 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct  2 09:14:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:02.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:03.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:03 np0005465987 nova_compute[230713]: 2025-10-02 13:14:03.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:04 np0005465987 nova_compute[230713]: 2025-10-02 13:14:04.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:04.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:05.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:06.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:07 np0005465987 nova_compute[230713]: 2025-10-02 13:14:07.053 2 INFO nova.compute.manager [None req-e8fe5e52-7c9f-4fe5-af91-1415dc68bd6f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Get console output#033[00m
Oct  2 09:14:07 np0005465987 nova_compute[230713]: 2025-10-02 13:14:07.061 14392 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:14:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:07.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:08 np0005465987 nova_compute[230713]: 2025-10-02 13:14:08.261 2 INFO nova.compute.manager [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Rebuilding instance#033[00m
Oct  2 09:14:08 np0005465987 nova_compute[230713]: 2025-10-02 13:14:08.551 2 DEBUG nova.objects.instance [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8913130a-880a-4f54-969a-4c7b4d447c14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:08 np0005465987 nova_compute[230713]: 2025-10-02 13:14:08.577 2 DEBUG nova.compute.manager [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:08 np0005465987 nova_compute[230713]: 2025-10-02 13:14:08.619 2 DEBUG nova.objects.instance [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'pci_requests' on Instance uuid 8913130a-880a-4f54-969a-4c7b4d447c14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:08 np0005465987 nova_compute[230713]: 2025-10-02 13:14:08.634 2 DEBUG nova.objects.instance [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'pci_devices' on Instance uuid 8913130a-880a-4f54-969a-4c7b4d447c14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:08 np0005465987 nova_compute[230713]: 2025-10-02 13:14:08.644 2 DEBUG nova.objects.instance [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'resources' on Instance uuid 8913130a-880a-4f54-969a-4c7b4d447c14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:08 np0005465987 nova_compute[230713]: 2025-10-02 13:14:08.655 2 DEBUG nova.objects.instance [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'migration_context' on Instance uuid 8913130a-880a-4f54-969a-4c7b4d447c14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:08 np0005465987 nova_compute[230713]: 2025-10-02 13:14:08.669 2 DEBUG nova.objects.instance [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 09:14:08 np0005465987 nova_compute[230713]: 2025-10-02 13:14:08.673 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 09:14:08 np0005465987 nova_compute[230713]: 2025-10-02 13:14:08.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:08.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:09 np0005465987 nova_compute[230713]: 2025-10-02 13:14:09.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:09.088 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:14:09 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:09.089 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:14:09 np0005465987 nova_compute[230713]: 2025-10-02 13:14:09.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:09.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:10.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:10 np0005465987 kernel: tapc1ddd8d4-4e (unregistering): left promiscuous mode
Oct  2 09:14:10 np0005465987 NetworkManager[44910]: <info>  [1759410850.9457] device (tapc1ddd8d4-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:14:10 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:10Z|00861|binding|INFO|Releasing lport c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 from this chassis (sb_readonly=0)
Oct  2 09:14:10 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:10Z|00862|binding|INFO|Setting lport c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 down in Southbound
Oct  2 09:14:10 np0005465987 nova_compute[230713]: 2025-10-02 13:14:10.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:10 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:10Z|00863|binding|INFO|Removing iface tapc1ddd8d4-4e ovn-installed in OVS
Oct  2 09:14:10 np0005465987 nova_compute[230713]: 2025-10-02 13:14:10.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:10.970 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:bb:0f 10.100.0.7'], port_security=['fa:16:3e:c3:bb:0f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8913130a-880a-4f54-969a-4c7b4d447c14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fa90689-867b-4efc-b402-c838a2efa5d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c876d23a-6966-4e21-9725-773e3dbffee1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1abdba69-d4c9-46b5-9115-0d23b68bff9f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:14:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:10.971 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 in datapath 7fa90689-867b-4efc-b402-c838a2efa5d6 unbound from our chassis#033[00m
Oct  2 09:14:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:10.972 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7fa90689-867b-4efc-b402-c838a2efa5d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:14:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:10.975 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe45479-1ad9-45b1-b1d9-4c9ad4cf17a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:10 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:10.975 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6 namespace which is not needed anymore#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:10.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:11 np0005465987 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000da.scope: Deactivated successfully.
Oct  2 09:14:11 np0005465987 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000da.scope: Consumed 14.581s CPU time.
Oct  2 09:14:11 np0005465987 systemd-machined[188335]: Machine qemu-96-instance-000000da terminated.
Oct  2 09:14:11 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[316832]: [NOTICE]   (316836) : haproxy version is 2.8.14-c23fe91
Oct  2 09:14:11 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[316832]: [NOTICE]   (316836) : path to executable is /usr/sbin/haproxy
Oct  2 09:14:11 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[316832]: [WARNING]  (316836) : Exiting Master process...
Oct  2 09:14:11 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[316832]: [WARNING]  (316836) : Exiting Master process...
Oct  2 09:14:11 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[316832]: [ALERT]    (316836) : Current worker (316838) exited with code 143 (Terminated)
Oct  2 09:14:11 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[316832]: [WARNING]  (316836) : All workers exited. Exiting... (0)
Oct  2 09:14:11 np0005465987 systemd[1]: libpod-a310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21.scope: Deactivated successfully.
Oct  2 09:14:11 np0005465987 podman[316997]: 2025-10-02 13:14:11.128079973 +0000 UTC m=+0.050051708 container died a310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 09:14:11 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21-userdata-shm.mount: Deactivated successfully.
Oct  2 09:14:11 np0005465987 systemd[1]: var-lib-containers-storage-overlay-9cc4187f3a2c05867fd0271909d44ad178b52ba3bbbc8efaeb03a4510f0ab65a-merged.mount: Deactivated successfully.
Oct  2 09:14:11 np0005465987 podman[316997]: 2025-10-02 13:14:11.166817083 +0000 UTC m=+0.088788818 container cleanup a310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:14:11 np0005465987 systemd[1]: libpod-conmon-a310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21.scope: Deactivated successfully.
Oct  2 09:14:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.231 2 DEBUG nova.compute.manager [req-c7c454a8-7283-47e0-9bca-29060d03e1de req-c3a71cdf-4ed1-49dc-8c5c-e3fae5d52c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received event network-vif-unplugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.232 2 DEBUG oslo_concurrency.lockutils [req-c7c454a8-7283-47e0-9bca-29060d03e1de req-c3a71cdf-4ed1-49dc-8c5c-e3fae5d52c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.232 2 DEBUG oslo_concurrency.lockutils [req-c7c454a8-7283-47e0-9bca-29060d03e1de req-c3a71cdf-4ed1-49dc-8c5c-e3fae5d52c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.233 2 DEBUG oslo_concurrency.lockutils [req-c7c454a8-7283-47e0-9bca-29060d03e1de req-c3a71cdf-4ed1-49dc-8c5c-e3fae5d52c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.233 2 DEBUG nova.compute.manager [req-c7c454a8-7283-47e0-9bca-29060d03e1de req-c3a71cdf-4ed1-49dc-8c5c-e3fae5d52c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] No waiting events found dispatching network-vif-unplugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.233 2 WARNING nova.compute.manager [req-c7c454a8-7283-47e0-9bca-29060d03e1de req-c3a71cdf-4ed1-49dc-8c5c-e3fae5d52c34 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received unexpected event network-vif-unplugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 for instance with vm_state active and task_state rebuilding.#033[00m
Oct  2 09:14:11 np0005465987 podman[317023]: 2025-10-02 13:14:11.240234774 +0000 UTC m=+0.050692786 container remove a310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:14:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:11.248 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6675c3fe-5dd3-423d-9e2a-ab3895a2ed88]: (4, ('Thu Oct  2 01:14:11 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6 (a310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21)\na310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21\nThu Oct  2 01:14:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6 (a310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21)\na310b0b389242f9ca0f33f5f780562f6cf109dacf201f186158f48ee763afd21\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:11.250 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[dbddf6e6-2a65-4fe2-8e68-8ac9a27bf690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:11.251 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fa90689-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:11 np0005465987 kernel: tap7fa90689-80: left promiscuous mode
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:11.272 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[769b3caf-c2bc-4150-9a77-6aa5997a5717]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:11.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:11.298 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4cdcde-333f-4b9d-8168-d87f6ef4bab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:11.299 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[37845327-1aae-4c1c-81eb-d1c39585b310]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:11.315 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd6ac04-323b-45a0-8db8-05952b09e789]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889439, 'reachable_time': 36869, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317053, 'error': None, 'target': 'ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:11.321 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:14:11 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:11.321 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed162cd-6ce4-4735-bb46-9aa69a1fd4f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:11 np0005465987 systemd[1]: run-netns-ovnmeta\x2d7fa90689\x2d867b\x2d4efc\x2db402\x2dc838a2efa5d6.mount: Deactivated successfully.
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.688 2 INFO nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.695 2 INFO nova.virt.libvirt.driver [-] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Instance destroyed successfully.#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.701 2 INFO nova.virt.libvirt.driver [-] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Instance destroyed successfully.#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.702 2 DEBUG nova.virt.libvirt.vif [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1144969198',display_name='tempest-TestNetworkAdvancedServerOps-server-1144969198',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1144969198',id=218,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISBzb758icLI7PGci4RqL0CU+8vvNYOvJAX7zkqNygfviXgPTJODUdmp02nluRyTp1PDwlyRaifnYPPffvW7eJO4xZE3i1JSR26PcFkqmAlAlzhasGxkMnuaKA33kRTzg==',key_name='tempest-TestNetworkAdvancedServerOps-977712776',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:13:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-v6acwu7y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:14:07Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=8913130a-880a-4f54-969a-4c7b4d447c14,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.702 2 DEBUG nova.network.os_vif_util [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.704 2 DEBUG nova.network.os_vif_util [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.704 2 DEBUG os_vif [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.707 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1ddd8d4-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:14:11 np0005465987 nova_compute[230713]: 2025-10-02 13:14:11.719 2 INFO os_vif [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e')#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.193 2 INFO nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Deleting instance files /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14_del#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.194 2 INFO nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Deletion of /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14_del complete#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.313 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.314 2 INFO nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Creating image(s)#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.336 2 DEBUG nova.storage.rbd_utils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.360 2 DEBUG nova.storage.rbd_utils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.383 2 DEBUG nova.storage.rbd_utils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.386 2 DEBUG oslo_concurrency.processutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.449 2 DEBUG oslo_concurrency.processutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.450 2 DEBUG oslo_concurrency.lockutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.451 2 DEBUG oslo_concurrency.lockutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.451 2 DEBUG oslo_concurrency.lockutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "5133c8c7459ce4fa1cf043a638fc1b5c66ed8609" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.475 2 DEBUG nova.storage.rbd_utils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.478 2 DEBUG oslo_concurrency.processutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 8913130a-880a-4f54-969a-4c7b4d447c14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:12.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.859 2 DEBUG oslo_concurrency.processutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 8913130a-880a-4f54-969a-4c7b4d447c14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:12 np0005465987 nova_compute[230713]: 2025-10-02 13:14:12.923 2 DEBUG nova.storage.rbd_utils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] resizing rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.012 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.012 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Ensure instance console log exists: /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.013 2 DEBUG oslo_concurrency.lockutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.013 2 DEBUG oslo_concurrency.lockutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.013 2 DEBUG oslo_concurrency.lockutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.015 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Start _get_guest_xml network_info=[{"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.018 2 WARNING nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.029 2 DEBUG nova.virt.libvirt.host [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.029 2 DEBUG nova.virt.libvirt.host [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.032 2 DEBUG nova.virt.libvirt.host [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.032 2 DEBUG nova.virt.libvirt.host [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.033 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.033 2 DEBUG nova.virt.hardware [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:47Z,direct_url=<?>,disk_format='qcow2',id=db05f54c-61f8-42d6-a1e2-da3219a77b12,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:49Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.034 2 DEBUG nova.virt.hardware [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.034 2 DEBUG nova.virt.hardware [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.034 2 DEBUG nova.virt.hardware [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.034 2 DEBUG nova.virt.hardware [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.034 2 DEBUG nova.virt.hardware [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.034 2 DEBUG nova.virt.hardware [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.035 2 DEBUG nova.virt.hardware [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.035 2 DEBUG nova.virt.hardware [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.035 2 DEBUG nova.virt.hardware [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.035 2 DEBUG nova.virt.hardware [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.035 2 DEBUG nova.objects.instance [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8913130a-880a-4f54-969a-4c7b4d447c14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.050 2 DEBUG oslo_concurrency.processutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:13.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.307 2 DEBUG nova.compute.manager [req-57ece357-cc9a-4b98-a67c-e1a47887e60d req-aba9b590-a7f5-4635-90bb-f81cb0d82830 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received event network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.307 2 DEBUG oslo_concurrency.lockutils [req-57ece357-cc9a-4b98-a67c-e1a47887e60d req-aba9b590-a7f5-4635-90bb-f81cb0d82830 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.308 2 DEBUG oslo_concurrency.lockutils [req-57ece357-cc9a-4b98-a67c-e1a47887e60d req-aba9b590-a7f5-4635-90bb-f81cb0d82830 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.308 2 DEBUG oslo_concurrency.lockutils [req-57ece357-cc9a-4b98-a67c-e1a47887e60d req-aba9b590-a7f5-4635-90bb-f81cb0d82830 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.308 2 DEBUG nova.compute.manager [req-57ece357-cc9a-4b98-a67c-e1a47887e60d req-aba9b590-a7f5-4635-90bb-f81cb0d82830 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] No waiting events found dispatching network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.309 2 WARNING nova.compute.manager [req-57ece357-cc9a-4b98-a67c-e1a47887e60d req-aba9b590-a7f5-4635-90bb-f81cb0d82830 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received unexpected event network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  2 09:14:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:14:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1775144993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.478 2 DEBUG oslo_concurrency.processutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.502 2 DEBUG nova.storage.rbd_utils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.506 2 DEBUG oslo_concurrency.processutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:14:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3155891014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.937 2 DEBUG oslo_concurrency.processutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.939 2 DEBUG nova.virt.libvirt.vif [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1144969198',display_name='tempest-TestNetworkAdvancedServerOps-server-1144969198',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1144969198',id=218,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISBzb758icLI7PGci4RqL0CU+8vvNYOvJAX7zkqNygfviXgPTJODUdmp02nluRyTp1PDwlyRaifnYPPffvW7eJO4xZE3i1JSR26PcFkqmAlAlzhasGxkMnuaKA33kRTzg==',key_name='tempest-TestNetworkAdvancedServerOps-977712776',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:13:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-v6acwu7y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:14:12Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=8913130a-880a-4f54-969a-4c7b4d447c14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.939 2 DEBUG nova.network.os_vif_util [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.940 2 DEBUG nova.network.os_vif_util [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.942 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  <uuid>8913130a-880a-4f54-969a-4c7b4d447c14</uuid>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  <name>instance-000000da</name>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1144969198</nova:name>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 13:14:13</nova:creationTime>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <nova:user uuid="ffe4d737e4414fb3a3e358f8ca3f3e1e">tempest-TestNetworkAdvancedServerOps-1527846432-project-member</nova:user>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <nova:project uuid="08e102ae48244af2ab448a2e1ff757df">tempest-TestNetworkAdvancedServerOps-1527846432</nova:project>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="db05f54c-61f8-42d6-a1e2-da3219a77b12"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <nova:port uuid="c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <system>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <entry name="serial">8913130a-880a-4f54-969a-4c7b4d447c14</entry>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <entry name="uuid">8913130a-880a-4f54-969a-4c7b4d447c14</entry>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    </system>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  <os>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  </os>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  <features>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  </features>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  </clock>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  <devices>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/8913130a-880a-4f54-969a-4c7b4d447c14_disk">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/8913130a-880a-4f54-969a-4c7b4d447c14_disk.config">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:c3:bb:0f"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <target dev="tapc1ddd8d4-4e"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    </interface>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/console.log" append="off"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    </serial>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <video>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    </video>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    </rng>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 09:14:13 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 09:14:13 np0005465987 nova_compute[230713]:  </devices>
Oct  2 09:14:13 np0005465987 nova_compute[230713]: </domain>
Oct  2 09:14:13 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.944 2 DEBUG nova.virt.libvirt.vif [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1144969198',display_name='tempest-TestNetworkAdvancedServerOps-server-1144969198',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1144969198',id=218,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISBzb758icLI7PGci4RqL0CU+8vvNYOvJAX7zkqNygfviXgPTJODUdmp02nluRyTp1PDwlyRaifnYPPffvW7eJO4xZE3i1JSR26PcFkqmAlAlzhasGxkMnuaKA33kRTzg==',key_name='tempest-TestNetworkAdvancedServerOps-977712776',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:13:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-v6acwu7y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:14:12Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=8913130a-880a-4f54-969a-4c7b4d447c14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.945 2 DEBUG nova.network.os_vif_util [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.945 2 DEBUG nova.network.os_vif_util [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.946 2 DEBUG os_vif [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.946 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.947 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.949 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1ddd8d4-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.949 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1ddd8d4-4e, col_values=(('external_ids', {'iface-id': 'c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:bb:0f', 'vm-uuid': '8913130a-880a-4f54-969a-4c7b4d447c14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:13 np0005465987 NetworkManager[44910]: <info>  [1759410853.9519] manager: (tapc1ddd8d4-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:13 np0005465987 nova_compute[230713]: 2025-10-02 13:14:13.956 2 INFO os_vif [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e')#033[00m
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.017 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.017 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.017 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No VIF found with MAC fa:16:3e:c3:bb:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.018 2 INFO nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Using config drive#033[00m
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.045 2 DEBUG nova.storage.rbd_utils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.067 2 DEBUG nova.objects.instance [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8913130a-880a-4f54-969a-4c7b4d447c14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.141 2 DEBUG nova.objects.instance [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'keypairs' on Instance uuid 8913130a-880a-4f54-969a-4c7b4d447c14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.503 2 INFO nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Creating config drive at /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/disk.config#033[00m
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.508 2 DEBUG oslo_concurrency.processutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdy2svkjo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.646 2 DEBUG oslo_concurrency.processutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdy2svkjo" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.675 2 DEBUG nova.storage.rbd_utils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 8913130a-880a-4f54-969a-4c7b4d447c14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.679 2 DEBUG oslo_concurrency.processutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/disk.config 8913130a-880a-4f54-969a-4c7b4d447c14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:14.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.953 2 DEBUG oslo_concurrency.processutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/disk.config 8913130a-880a-4f54-969a-4c7b4d447c14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:14 np0005465987 nova_compute[230713]: 2025-10-02 13:14:14.954 2 INFO nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Deleting local config drive /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14/disk.config because it was imported into RBD.#033[00m
Oct  2 09:14:15 np0005465987 kernel: tapc1ddd8d4-4e: entered promiscuous mode
Oct  2 09:14:15 np0005465987 NetworkManager[44910]: <info>  [1759410855.0137] manager: (tapc1ddd8d4-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/408)
Oct  2 09:14:15 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:15Z|00864|binding|INFO|Claiming lport c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 for this chassis.
Oct  2 09:14:15 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:15Z|00865|binding|INFO|c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83: Claiming fa:16:3e:c3:bb:0f 10.100.0.7
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:15 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:15Z|00866|binding|INFO|Setting lport c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 ovn-installed in OVS
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:15 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:15Z|00867|binding|INFO|Setting lport c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 up in Southbound
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:15 np0005465987 systemd-udevd[317373]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.040 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:bb:0f 10.100.0.7'], port_security=['fa:16:3e:c3:bb:0f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8913130a-880a-4f54-969a-4c7b4d447c14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fa90689-867b-4efc-b402-c838a2efa5d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c876d23a-6966-4e21-9725-773e3dbffee1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1abdba69-d4c9-46b5-9115-0d23b68bff9f, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.042 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 in datapath 7fa90689-867b-4efc-b402-c838a2efa5d6 bound to our chassis#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.044 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fa90689-867b-4efc-b402-c838a2efa5d6#033[00m
Oct  2 09:14:15 np0005465987 NetworkManager[44910]: <info>  [1759410855.0545] device (tapc1ddd8d4-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:14:15 np0005465987 NetworkManager[44910]: <info>  [1759410855.0553] device (tapc1ddd8d4-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:14:15 np0005465987 systemd-machined[188335]: New machine qemu-97-instance-000000da.
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.058 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[15f5703a-20b0-4baa-9205-bbace94ee407]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.060 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7fa90689-81 in ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.062 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7fa90689-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.062 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ba116bba-1ca9-4266-b6d3-274a357bf942]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.063 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[95542dcd-a6b0-4ac6-9314-595f65da4201]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 systemd[1]: Started Virtual Machine qemu-97-instance-000000da.
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.073 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[12bfebf6-306a-47b0-a5fa-7c61c4e1ccaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.100 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[758c5456-3597-4c19-b621-3e0f329696d7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.131 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[b20ed7bc-8f50-43fe-ba6b-3eba136b4b50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 systemd-udevd[317378]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:14:15 np0005465987 NetworkManager[44910]: <info>  [1759410855.1382] manager: (tap7fa90689-80): new Veth device (/org/freedesktop/NetworkManager/Devices/409)
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.138 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c7497a8d-51d7-4ca0-9ea0-8a0e59cdff8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.171 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[0139ccc4-f88a-4a96-9132-3bbd2cc9002b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.174 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcd66c8-0042-4e2f-821c-0bafae65d2fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 NetworkManager[44910]: <info>  [1759410855.1968] device (tap7fa90689-80): carrier: link connected
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.203 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[81be631d-c5be-43d7-96fe-0004b1dd3637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.221 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2520d9d7-e70d-479c-964a-c5a067bfca7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fa90689-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:c0:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 892377, 'reachable_time': 41626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317408, 'error': None, 'target': 'ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.236 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[99959330-ed93-436b-92a8-60d704285b20]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:c00d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 892377, 'tstamp': 892377}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317409, 'error': None, 'target': 'ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.254 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[33cdb9a6-e4b0-4b85-a0fe-b859eb134014]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fa90689-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:c0:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 892377, 'reachable_time': 41626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317410, 'error': None, 'target': 'ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.283 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[113e69cd-0234-4032-a76c-e2ed42599e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:15.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.432 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[76a78a22-5f08-46cf-b3ef-79e063465cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.433 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fa90689-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.434 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.434 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fa90689-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:15 np0005465987 kernel: tap7fa90689-80: entered promiscuous mode
Oct  2 09:14:15 np0005465987 NetworkManager[44910]: <info>  [1759410855.4366] manager: (tap7fa90689-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.439 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fa90689-80, col_values=(('external_ids', {'iface-id': '27daa8ca-e509-4d68-a752-fe0f6eb24a78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:15 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:15Z|00868|binding|INFO|Releasing lport 27daa8ca-e509-4d68-a752-fe0f6eb24a78 from this chassis (sb_readonly=0)
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.442 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7fa90689-867b-4efc-b402-c838a2efa5d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7fa90689-867b-4efc-b402-c838a2efa5d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.442 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0b1b8cce-ae03-4b40-a2d2-c00f78d64704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.443 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-7fa90689-867b-4efc-b402-c838a2efa5d6
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/7fa90689-867b-4efc-b402-c838a2efa5d6.pid.haproxy
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID 7fa90689-867b-4efc-b402-c838a2efa5d6
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:14:15 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:15.444 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6', 'env', 'PROCESS_TAG=haproxy-7fa90689-867b-4efc-b402-c838a2efa5d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7fa90689-867b-4efc-b402-c838a2efa5d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:15 np0005465987 podman[317483]: 2025-10-02 13:14:15.830202797 +0000 UTC m=+0.045930196 container create 3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:14:15 np0005465987 systemd[1]: Started libpod-conmon-3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57.scope.
Oct  2 09:14:15 np0005465987 systemd[1]: Started libcrun container.
Oct  2 09:14:15 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a22826777baaeb3d0b8c3396912f23b9a78837b58bc419c24587ee04d297ee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:14:15 np0005465987 podman[317483]: 2025-10-02 13:14:15.895625181 +0000 UTC m=+0.111352600 container init 3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 09:14:15 np0005465987 podman[317483]: 2025-10-02 13:14:15.807507632 +0000 UTC m=+0.023235051 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:14:15 np0005465987 podman[317483]: 2025-10-02 13:14:15.90332909 +0000 UTC m=+0.119056489 container start 3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 09:14:15 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[317498]: [NOTICE]   (317502) : New worker (317504) forked
Oct  2 09:14:15 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[317498]: [NOTICE]   (317502) : Loading success.
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.991 2 DEBUG nova.virt.libvirt.host [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Removed pending event for 8913130a-880a-4f54-969a-4c7b4d447c14 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.992 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410855.9910018, 8913130a-880a-4f54-969a-4c7b4d447c14 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.992 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.994 2 DEBUG nova.compute.manager [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.995 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.999 2 INFO nova.virt.libvirt.driver [-] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Instance spawned successfully.#033[00m
Oct  2 09:14:15 np0005465987 nova_compute[230713]: 2025-10-02 13:14:15.999 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.054 2 DEBUG nova.compute.manager [req-8a22138a-4fbe-4c41-95d8-62345c70eecd req-5464dc09-e169-487c-8897-fd2d752a0f25 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received event network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.055 2 DEBUG oslo_concurrency.lockutils [req-8a22138a-4fbe-4c41-95d8-62345c70eecd req-5464dc09-e169-487c-8897-fd2d752a0f25 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.055 2 DEBUG oslo_concurrency.lockutils [req-8a22138a-4fbe-4c41-95d8-62345c70eecd req-5464dc09-e169-487c-8897-fd2d752a0f25 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.055 2 DEBUG oslo_concurrency.lockutils [req-8a22138a-4fbe-4c41-95d8-62345c70eecd req-5464dc09-e169-487c-8897-fd2d752a0f25 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.055 2 DEBUG nova.compute.manager [req-8a22138a-4fbe-4c41-95d8-62345c70eecd req-5464dc09-e169-487c-8897-fd2d752a0f25 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] No waiting events found dispatching network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.056 2 WARNING nova.compute.manager [req-8a22138a-4fbe-4c41-95d8-62345c70eecd req-5464dc09-e169-487c-8897-fd2d752a0f25 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received unexpected event network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.121 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.131 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.136 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.137 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.137 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.138 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.139 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.139 2 DEBUG nova.virt.libvirt.driver [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.188 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.189 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410855.9917746, 8913130a-880a-4f54-969a-4c7b4d447c14 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.189 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] VM Started (Lifecycle Event)#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.409 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.413 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.449 2 DEBUG nova.compute.manager [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.689 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 09:14:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:16.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.783 2 DEBUG oslo_concurrency.lockutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.784 2 DEBUG oslo_concurrency.lockutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:16 np0005465987 nova_compute[230713]: 2025-10-02 13:14:16.784 2 DEBUG nova.objects.instance [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 09:14:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:17.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:17 np0005465987 nova_compute[230713]: 2025-10-02 13:14:17.570 2 DEBUG oslo_concurrency.lockutils [None req-010d0eb4-89ea-49c5-ad0c-f7dd61fc9049 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:18.091 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:18 np0005465987 nova_compute[230713]: 2025-10-02 13:14:18.445 2 DEBUG nova.compute.manager [req-a80ca9dc-411f-42bb-9ec8-69d78d908901 req-563d316d-9f19-4686-b591-afc32ef4954c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received event network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:18 np0005465987 nova_compute[230713]: 2025-10-02 13:14:18.446 2 DEBUG oslo_concurrency.lockutils [req-a80ca9dc-411f-42bb-9ec8-69d78d908901 req-563d316d-9f19-4686-b591-afc32ef4954c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:18 np0005465987 nova_compute[230713]: 2025-10-02 13:14:18.446 2 DEBUG oslo_concurrency.lockutils [req-a80ca9dc-411f-42bb-9ec8-69d78d908901 req-563d316d-9f19-4686-b591-afc32ef4954c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:18 np0005465987 nova_compute[230713]: 2025-10-02 13:14:18.446 2 DEBUG oslo_concurrency.lockutils [req-a80ca9dc-411f-42bb-9ec8-69d78d908901 req-563d316d-9f19-4686-b591-afc32ef4954c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:18 np0005465987 nova_compute[230713]: 2025-10-02 13:14:18.446 2 DEBUG nova.compute.manager [req-a80ca9dc-411f-42bb-9ec8-69d78d908901 req-563d316d-9f19-4686-b591-afc32ef4954c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] No waiting events found dispatching network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:14:18 np0005465987 nova_compute[230713]: 2025-10-02 13:14:18.446 2 WARNING nova.compute.manager [req-a80ca9dc-411f-42bb-9ec8-69d78d908901 req-563d316d-9f19-4686-b591-afc32ef4954c d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received unexpected event network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:14:18 np0005465987 nova_compute[230713]: 2025-10-02 13:14:18.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:18.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:18 np0005465987 nova_compute[230713]: 2025-10-02 13:14:18.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:19.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:20.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:21.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:22.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:23.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:23 np0005465987 nova_compute[230713]: 2025-10-02 13:14:23.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:23 np0005465987 nova_compute[230713]: 2025-10-02 13:14:23.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:24.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:14:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:25.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:14:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:26.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:27.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:27 np0005465987 podman[317513]: 2025-10-02 13:14:27.852827715 +0000 UTC m=+0.052763741 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 09:14:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:28.001 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:28.001 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:28.001 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:28 np0005465987 nova_compute[230713]: 2025-10-02 13:14:28.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:28.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:28 np0005465987 nova_compute[230713]: 2025-10-02 13:14:28.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:14:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:29.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:14:30 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:30Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:bb:0f 10.100.0.7
Oct  2 09:14:30 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:30Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:bb:0f 10.100.0.7
Oct  2 09:14:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:30.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:31.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:32 np0005465987 podman[317559]: 2025-10-02 13:14:32.275554475 +0000 UTC m=+0.068628392 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:14:32 np0005465987 podman[317560]: 2025-10-02 13:14:32.314433119 +0000 UTC m=+0.095304275 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  2 09:14:32 np0005465987 podman[317558]: 2025-10-02 13:14:32.324762088 +0000 UTC m=+0.121402952 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:14:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:32.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:33.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:33 np0005465987 nova_compute[230713]: 2025-10-02 13:14:33.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:33 np0005465987 nova_compute[230713]: 2025-10-02 13:14:33.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:34 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:14:34 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:14:34 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:14:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:34.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:35.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:35 np0005465987 nova_compute[230713]: 2025-10-02 13:14:35.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:36.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:37.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:37 np0005465987 nova_compute[230713]: 2025-10-02 13:14:37.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:38 np0005465987 nova_compute[230713]: 2025-10-02 13:14:38.280 2 INFO nova.compute.manager [None req-dc748f88-a3e6-4f80-bf85-2574ae596d62 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Get console output#033[00m
Oct  2 09:14:38 np0005465987 nova_compute[230713]: 2025-10-02 13:14:38.287 14392 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:14:38 np0005465987 nova_compute[230713]: 2025-10-02 13:14:38.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:38.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:38 np0005465987 nova_compute[230713]: 2025-10-02 13:14:38.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:39.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:39 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:14:39 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.123 2 DEBUG nova.compute.manager [req-4caf3cea-3415-4354-8bdf-be4b93c5665c req-6b4f9f30-2ad3-4ac9-9d45-cd27fec12b1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received event network-changed-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.123 2 DEBUG nova.compute.manager [req-4caf3cea-3415-4354-8bdf-be4b93c5665c req-6b4f9f30-2ad3-4ac9-9d45-cd27fec12b1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Refreshing instance network info cache due to event network-changed-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.123 2 DEBUG oslo_concurrency.lockutils [req-4caf3cea-3415-4354-8bdf-be4b93c5665c req-6b4f9f30-2ad3-4ac9-9d45-cd27fec12b1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.123 2 DEBUG oslo_concurrency.lockutils [req-4caf3cea-3415-4354-8bdf-be4b93c5665c req-6b4f9f30-2ad3-4ac9-9d45-cd27fec12b1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.124 2 DEBUG nova.network.neutron [req-4caf3cea-3415-4354-8bdf-be4b93c5665c req-6b4f9f30-2ad3-4ac9-9d45-cd27fec12b1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Refreshing network info cache for port c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.424 2 DEBUG oslo_concurrency.lockutils [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "8913130a-880a-4f54-969a-4c7b4d447c14" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.425 2 DEBUG oslo_concurrency.lockutils [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.425 2 DEBUG oslo_concurrency.lockutils [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.425 2 DEBUG oslo_concurrency.lockutils [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.426 2 DEBUG oslo_concurrency.lockutils [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.427 2 INFO nova.compute.manager [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Terminating instance#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.428 2 DEBUG nova.compute.manager [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:14:40 np0005465987 kernel: tapc1ddd8d4-4e (unregistering): left promiscuous mode
Oct  2 09:14:40 np0005465987 NetworkManager[44910]: <info>  [1759410880.4799] device (tapc1ddd8d4-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:40Z|00869|binding|INFO|Releasing lport c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 from this chassis (sb_readonly=0)
Oct  2 09:14:40 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:40Z|00870|binding|INFO|Setting lport c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 down in Southbound
Oct  2 09:14:40 np0005465987 ovn_controller[129172]: 2025-10-02T13:14:40Z|00871|binding|INFO|Removing iface tapc1ddd8d4-4e ovn-installed in OVS
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.510 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:bb:0f 10.100.0.7'], port_security=['fa:16:3e:c3:bb:0f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8913130a-880a-4f54-969a-4c7b4d447c14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fa90689-867b-4efc-b402-c838a2efa5d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c876d23a-6966-4e21-9725-773e3dbffee1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1abdba69-d4c9-46b5-9115-0d23b68bff9f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.511 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 in datapath 7fa90689-867b-4efc-b402-c838a2efa5d6 unbound from our chassis#033[00m
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.513 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7fa90689-867b-4efc-b402-c838a2efa5d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.515 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[129ec7bf-e389-46a5-adc9-e141a6bfc8ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.516 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6 namespace which is not needed anymore#033[00m
Oct  2 09:14:40 np0005465987 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000da.scope: Deactivated successfully.
Oct  2 09:14:40 np0005465987 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000da.scope: Consumed 13.936s CPU time.
Oct  2 09:14:40 np0005465987 systemd-machined[188335]: Machine qemu-97-instance-000000da terminated.
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.668 2 INFO nova.virt.libvirt.driver [-] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Instance destroyed successfully.#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.669 2 DEBUG nova.objects.instance [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'resources' on Instance uuid 8913130a-880a-4f54-969a-4c7b4d447c14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:40 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[317498]: [NOTICE]   (317502) : haproxy version is 2.8.14-c23fe91
Oct  2 09:14:40 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[317498]: [NOTICE]   (317502) : path to executable is /usr/sbin/haproxy
Oct  2 09:14:40 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[317498]: [WARNING]  (317502) : Exiting Master process...
Oct  2 09:14:40 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[317498]: [WARNING]  (317502) : Exiting Master process...
Oct  2 09:14:40 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[317498]: [ALERT]    (317502) : Current worker (317504) exited with code 143 (Terminated)
Oct  2 09:14:40 np0005465987 neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6[317498]: [WARNING]  (317502) : All workers exited. Exiting... (0)
Oct  2 09:14:40 np0005465987 systemd[1]: libpod-3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57.scope: Deactivated successfully.
Oct  2 09:14:40 np0005465987 podman[317802]: 2025-10-02 13:14:40.68346719 +0000 UTC m=+0.046104500 container died 3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:14:40 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57-userdata-shm.mount: Deactivated successfully.
Oct  2 09:14:40 np0005465987 systemd[1]: var-lib-containers-storage-overlay-32a22826777baaeb3d0b8c3396912f23b9a78837b58bc419c24587ee04d297ee-merged.mount: Deactivated successfully.
Oct  2 09:14:40 np0005465987 podman[317802]: 2025-10-02 13:14:40.719416165 +0000 UTC m=+0.082053455 container cleanup 3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.731 2 DEBUG nova.virt.libvirt.vif [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1144969198',display_name='tempest-TestNetworkAdvancedServerOps-server-1144969198',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1144969198',id=218,image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISBzb758icLI7PGci4RqL0CU+8vvNYOvJAX7zkqNygfviXgPTJODUdmp02nluRyTp1PDwlyRaifnYPPffvW7eJO4xZE3i1JSR26PcFkqmAlAlzhasGxkMnuaKA33kRTzg==',key_name='tempest-TestNetworkAdvancedServerOps-977712776',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:14:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-v6acwu7y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='db05f54c-61f8-42d6-a1e2-da3219a77b12',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:14:16Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=8913130a-880a-4f54-969a-4c7b4d447c14,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.732 2 DEBUG nova.network.os_vif_util [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.733 2 DEBUG nova.network.os_vif_util [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.733 2 DEBUG os_vif [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1ddd8d4-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.741 2 INFO os_vif [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:bb:0f,bridge_name='br-int',has_traffic_filtering=True,id=c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83,network=Network(7fa90689-867b-4efc-b402-c838a2efa5d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ddd8d4-4e')#033[00m
Oct  2 09:14:40 np0005465987 systemd[1]: libpod-conmon-3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57.scope: Deactivated successfully.
Oct  2 09:14:40 np0005465987 podman[317842]: 2025-10-02 13:14:40.782978719 +0000 UTC m=+0.043771269 container remove 3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:14:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:40.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.789 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[05c00e2a-13b8-400b-89e4-a86b3b0e184d]: (4, ('Thu Oct  2 01:14:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6 (3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57)\n3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57\nThu Oct  2 01:14:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6 (3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57)\n3b018ebd2c75087d2999efbcb7a4829e687007f6a4f32313cf48f8f8fac8ca57\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.791 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2d02a6d4-e281-4d9f-b130-9288387a6e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.792 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fa90689-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005465987 kernel: tap7fa90689-80: left promiscuous mode
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.801 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6d10808d-7b86-4cc5-87e5-65a3b0295bf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.819 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[7be5571a-94c7-4969-8dec-80e21d6e719e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.820 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6202493f-7f14-4671-b45f-3a7e6287e6ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.836 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ce500d-a1db-40f1-a461-92f8734cf446]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 892369, 'reachable_time': 15789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317876, 'error': None, 'target': 'ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005465987 systemd[1]: run-netns-ovnmeta\x2d7fa90689\x2d867b\x2d4efc\x2db402\x2dc838a2efa5d6.mount: Deactivated successfully.
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.838 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7fa90689-867b-4efc-b402-c838a2efa5d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:14:40 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:14:40.838 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[09db4b55-ae0d-4a1d-9d26-62b6e03ec8dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.913 2 DEBUG nova.compute.manager [req-38060db4-7505-4b6d-a01d-0ebe855e6681 req-956c0bb7-4620-4326-bc3d-5814c770888d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received event network-vif-unplugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.914 2 DEBUG oslo_concurrency.lockutils [req-38060db4-7505-4b6d-a01d-0ebe855e6681 req-956c0bb7-4620-4326-bc3d-5814c770888d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.915 2 DEBUG oslo_concurrency.lockutils [req-38060db4-7505-4b6d-a01d-0ebe855e6681 req-956c0bb7-4620-4326-bc3d-5814c770888d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.915 2 DEBUG oslo_concurrency.lockutils [req-38060db4-7505-4b6d-a01d-0ebe855e6681 req-956c0bb7-4620-4326-bc3d-5814c770888d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.915 2 DEBUG nova.compute.manager [req-38060db4-7505-4b6d-a01d-0ebe855e6681 req-956c0bb7-4620-4326-bc3d-5814c770888d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] No waiting events found dispatching network-vif-unplugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:14:40 np0005465987 nova_compute[230713]: 2025-10-02 13:14:40.916 2 DEBUG nova.compute.manager [req-38060db4-7505-4b6d-a01d-0ebe855e6681 req-956c0bb7-4620-4326-bc3d-5814c770888d d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received event network-vif-unplugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:14:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:41 np0005465987 nova_compute[230713]: 2025-10-02 13:14:41.331 2 INFO nova.virt.libvirt.driver [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Deleting instance files /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14_del#033[00m
Oct  2 09:14:41 np0005465987 nova_compute[230713]: 2025-10-02 13:14:41.332 2 INFO nova.virt.libvirt.driver [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Deletion of /var/lib/nova/instances/8913130a-880a-4f54-969a-4c7b4d447c14_del complete#033[00m
Oct  2 09:14:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:41.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:41 np0005465987 nova_compute[230713]: 2025-10-02 13:14:41.450 2 INFO nova.compute.manager [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:14:41 np0005465987 nova_compute[230713]: 2025-10-02 13:14:41.450 2 DEBUG oslo.service.loopingcall [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:14:41 np0005465987 nova_compute[230713]: 2025-10-02 13:14:41.451 2 DEBUG nova.compute.manager [-] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:14:41 np0005465987 nova_compute[230713]: 2025-10-02 13:14:41.451 2 DEBUG nova.network.neutron [-] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:14:42 np0005465987 nova_compute[230713]: 2025-10-02 13:14:42.088 2 DEBUG nova.network.neutron [req-4caf3cea-3415-4354-8bdf-be4b93c5665c req-6b4f9f30-2ad3-4ac9-9d45-cd27fec12b1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Updated VIF entry in instance network info cache for port c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:14:42 np0005465987 nova_compute[230713]: 2025-10-02 13:14:42.088 2 DEBUG nova.network.neutron [req-4caf3cea-3415-4354-8bdf-be4b93c5665c req-6b4f9f30-2ad3-4ac9-9d45-cd27fec12b1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Updating instance_info_cache with network_info: [{"id": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "address": "fa:16:3e:c3:bb:0f", "network": {"id": "7fa90689-867b-4efc-b402-c838a2efa5d6", "bridge": "br-int", "label": "tempest-network-smoke--2023586431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ddd8d4-4e", "ovs_interfaceid": "c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:14:42 np0005465987 nova_compute[230713]: 2025-10-02 13:14:42.127 2 DEBUG oslo_concurrency.lockutils [req-4caf3cea-3415-4354-8bdf-be4b93c5665c req-6b4f9f30-2ad3-4ac9-9d45-cd27fec12b1b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:14:42 np0005465987 nova_compute[230713]: 2025-10-02 13:14:42.588 2 DEBUG nova.network.neutron [-] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:14:42 np0005465987 nova_compute[230713]: 2025-10-02 13:14:42.626 2 INFO nova.compute.manager [-] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Took 1.17 seconds to deallocate network for instance.#033[00m
Oct  2 09:14:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:42.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:42 np0005465987 nova_compute[230713]: 2025-10-02 13:14:42.899 2 DEBUG oslo_concurrency.lockutils [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:42 np0005465987 nova_compute[230713]: 2025-10-02 13:14:42.899 2 DEBUG oslo_concurrency.lockutils [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:42 np0005465987 nova_compute[230713]: 2025-10-02 13:14:42.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:42 np0005465987 nova_compute[230713]: 2025-10-02 13:14:42.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:14:42 np0005465987 nova_compute[230713]: 2025-10-02 13:14:42.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:14:42 np0005465987 nova_compute[230713]: 2025-10-02 13:14:42.987 2 DEBUG oslo_concurrency.processutils [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.155 2 DEBUG nova.compute.manager [req-82f98a46-d5f2-422c-862b-abfe1d42f9ac req-e435ac9f-873a-4bb7-8bf5-3ad8b3720506 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received event network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.155 2 DEBUG oslo_concurrency.lockutils [req-82f98a46-d5f2-422c-862b-abfe1d42f9ac req-e435ac9f-873a-4bb7-8bf5-3ad8b3720506 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.155 2 DEBUG oslo_concurrency.lockutils [req-82f98a46-d5f2-422c-862b-abfe1d42f9ac req-e435ac9f-873a-4bb7-8bf5-3ad8b3720506 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.156 2 DEBUG oslo_concurrency.lockutils [req-82f98a46-d5f2-422c-862b-abfe1d42f9ac req-e435ac9f-873a-4bb7-8bf5-3ad8b3720506 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.156 2 DEBUG nova.compute.manager [req-82f98a46-d5f2-422c-862b-abfe1d42f9ac req-e435ac9f-873a-4bb7-8bf5-3ad8b3720506 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] No waiting events found dispatching network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.156 2 WARNING nova.compute.manager [req-82f98a46-d5f2-422c-862b-abfe1d42f9ac req-e435ac9f-873a-4bb7-8bf5-3ad8b3720506 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received unexpected event network-vif-plugged-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.156 2 DEBUG nova.compute.manager [req-82f98a46-d5f2-422c-862b-abfe1d42f9ac req-e435ac9f-873a-4bb7-8bf5-3ad8b3720506 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Received event network-vif-deleted-c1ddd8d4-4e70-40d1-a971-3b8f1c3bed83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.210 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.211 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.211 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.211 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8913130a-880a-4f54-969a-4c7b4d447c14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:43.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:43 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:14:43 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1611495803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.458 2 DEBUG oslo_concurrency.processutils [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.464 2 DEBUG nova.compute.provider_tree [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.494 2 DEBUG nova.scheduler.client.report [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.780 2 DEBUG oslo_concurrency.lockutils [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:43 np0005465987 nova_compute[230713]: 2025-10-02 13:14:43.883 2 INFO nova.scheduler.client.report [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Deleted allocations for instance 8913130a-880a-4f54-969a-4c7b4d447c14#033[00m
Oct  2 09:14:44 np0005465987 nova_compute[230713]: 2025-10-02 13:14:44.030 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:14:44 np0005465987 nova_compute[230713]: 2025-10-02 13:14:44.037 2 DEBUG oslo_concurrency.lockutils [None req-5a9f05c4-cd5b-4005-b879-0872ded4638b ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "8913130a-880a-4f54-969a-4c7b4d447c14" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:44 np0005465987 nova_compute[230713]: 2025-10-02 13:14:44.407 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:14:44 np0005465987 nova_compute[230713]: 2025-10-02 13:14:44.482 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-8913130a-880a-4f54-969a-4c7b4d447c14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:14:44 np0005465987 nova_compute[230713]: 2025-10-02 13:14:44.483 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:14:44 np0005465987 nova_compute[230713]: 2025-10-02 13:14:44.483 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:44.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:45.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:45 np0005465987 nova_compute[230713]: 2025-10-02 13:14:45.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:46.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:46 np0005465987 nova_compute[230713]: 2025-10-02 13:14:46.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:47.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:47 np0005465987 nova_compute[230713]: 2025-10-02 13:14:47.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:48 np0005465987 nova_compute[230713]: 2025-10-02 13:14:48.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:48.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:48 np0005465987 nova_compute[230713]: 2025-10-02 13:14:48.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:48 np0005465987 nova_compute[230713]: 2025-10-02 13:14:48.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:49 np0005465987 nova_compute[230713]: 2025-10-02 13:14:49.087 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:49 np0005465987 nova_compute[230713]: 2025-10-02 13:14:49.088 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:49 np0005465987 nova_compute[230713]: 2025-10-02 13:14:49.088 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:49 np0005465987 nova_compute[230713]: 2025-10-02 13:14:49.088 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:14:49 np0005465987 nova_compute[230713]: 2025-10-02 13:14:49.089 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:49.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:14:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3112505108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:14:49 np0005465987 nova_compute[230713]: 2025-10-02 13:14:49.522 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:49 np0005465987 nova_compute[230713]: 2025-10-02 13:14:49.685 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:14:49 np0005465987 nova_compute[230713]: 2025-10-02 13:14:49.687 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4272MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:14:49 np0005465987 nova_compute[230713]: 2025-10-02 13:14:49.688 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:49 np0005465987 nova_compute[230713]: 2025-10-02 13:14:49.688 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:50 np0005465987 nova_compute[230713]: 2025-10-02 13:14:50.216 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:14:50 np0005465987 nova_compute[230713]: 2025-10-02 13:14:50.216 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:14:50 np0005465987 nova_compute[230713]: 2025-10-02 13:14:50.235 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:14:50 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3281219525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:14:50 np0005465987 nova_compute[230713]: 2025-10-02 13:14:50.677 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:50 np0005465987 nova_compute[230713]: 2025-10-02 13:14:50.683 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:14:50 np0005465987 nova_compute[230713]: 2025-10-02 13:14:50.713 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:14:50 np0005465987 nova_compute[230713]: 2025-10-02 13:14:50.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:50 np0005465987 nova_compute[230713]: 2025-10-02 13:14:50.776 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:14:50 np0005465987 nova_compute[230713]: 2025-10-02 13:14:50.776 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:50.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:51.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:52.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:53.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:53 np0005465987 nova_compute[230713]: 2025-10-02 13:14:53.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:54 np0005465987 nova_compute[230713]: 2025-10-02 13:14:54.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:54 np0005465987 nova_compute[230713]: 2025-10-02 13:14:54.777 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:54 np0005465987 nova_compute[230713]: 2025-10-02 13:14:54.778 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:14:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:54.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:54 np0005465987 nova_compute[230713]: 2025-10-02 13:14:54.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:55.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:55 np0005465987 nova_compute[230713]: 2025-10-02 13:14:55.665 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410880.664626, 8913130a-880a-4f54-969a-4c7b4d447c14 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:14:55 np0005465987 nova_compute[230713]: 2025-10-02 13:14:55.665 2 INFO nova.compute.manager [-] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:14:55 np0005465987 nova_compute[230713]: 2025-10-02 13:14:55.700 2 DEBUG nova.compute.manager [None req-fd853130-afbc-4fc0-a0d9-0562951dea25 - - - - - -] [instance: 8913130a-880a-4f54-969a-4c7b4d447c14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:55 np0005465987 nova_compute[230713]: 2025-10-02 13:14:55.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:56.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:57.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:58 np0005465987 nova_compute[230713]: 2025-10-02 13:14:58.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:58.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:14:58 np0005465987 podman[317949]: 2025-10-02 13:14:58.840407143 +0000 UTC m=+0.056766940 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:14:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:14:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:14:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:59.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:15:00 np0005465987 nova_compute[230713]: 2025-10-02 13:15:00.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:00.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:00 np0005465987 nova_compute[230713]: 2025-10-02 13:15:00.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:00 np0005465987 nova_compute[230713]: 2025-10-02 13:15:00.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:15:01 np0005465987 nova_compute[230713]: 2025-10-02 13:15:01.019 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:01.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:02.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:02 np0005465987 podman[317970]: 2025-10-02 13:15:02.847846042 +0000 UTC m=+0.060864661 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 09:15:02 np0005465987 podman[317968]: 2025-10-02 13:15:02.868899252 +0000 UTC m=+0.087729759 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:15:02 np0005465987 podman[317969]: 2025-10-02 13:15:02.869297804 +0000 UTC m=+0.086285591 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 09:15:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:03.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:03 np0005465987 nova_compute[230713]: 2025-10-02 13:15:03.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:04.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.052 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.052 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.053 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.053 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.054 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.054 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.054 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.114 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.115 2 WARNING nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.115 2 WARNING nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.115 2 WARNING nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/b205c32a62e786c93a2dbd49e2f9a3b4e1110882#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.115 2 INFO nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Removable base files: /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609 /var/lib/nova/instances/_base/b205c32a62e786c93a2dbd49e2f9a3b4e1110882#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.115 2 INFO nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.116 2 INFO nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/5133c8c7459ce4fa1cf043a638fc1b5c66ed8609#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.116 2 INFO nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/b205c32a62e786c93a2dbd49e2f9a3b4e1110882#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.116 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.116 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.116 2 DEBUG nova.virt.libvirt.imagecache [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct  2 09:15:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:15:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:05.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:15:05 np0005465987 nova_compute[230713]: 2025-10-02 13:15:05.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:06.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:07 np0005465987 nova_compute[230713]: 2025-10-02 13:15:07.093 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:07.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:07 np0005465987 nova_compute[230713]: 2025-10-02 13:15:07.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:07 np0005465987 nova_compute[230713]: 2025-10-02 13:15:07.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:15:07 np0005465987 nova_compute[230713]: 2025-10-02 13:15:07.986 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:15:08 np0005465987 nova_compute[230713]: 2025-10-02 13:15:08.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:15:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:08.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:15:08 np0005465987 nova_compute[230713]: 2025-10-02 13:15:08.980 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:09.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:10 np0005465987 nova_compute[230713]: 2025-10-02 13:15:10.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:15:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:10.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:15:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:15:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:11.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:15:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:15:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:12.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:15:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:13.253 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:15:13 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:13.254 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:15:13 np0005465987 nova_compute[230713]: 2025-10-02 13:15:13.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:13.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:13 np0005465987 nova_compute[230713]: 2025-10-02 13:15:13.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:14.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e426 e426: 3 total, 3 up, 3 in
Oct  2 09:15:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:15.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:15 np0005465987 nova_compute[230713]: 2025-10-02 13:15:15.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:15:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:16.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:15:17 np0005465987 nova_compute[230713]: 2025-10-02 13:15:17.298 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:17 np0005465987 nova_compute[230713]: 2025-10-02 13:15:17.298 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:17.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:17 np0005465987 nova_compute[230713]: 2025-10-02 13:15:17.398 2 DEBUG nova.compute.manager [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:15:18 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:18.255 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:18 np0005465987 nova_compute[230713]: 2025-10-02 13:15:18.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:15:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:18.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:15:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:19.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:19 np0005465987 nova_compute[230713]: 2025-10-02 13:15:19.623 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:19 np0005465987 nova_compute[230713]: 2025-10-02 13:15:19.623 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:19 np0005465987 nova_compute[230713]: 2025-10-02 13:15:19.631 2 DEBUG nova.virt.hardware [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:15:19 np0005465987 nova_compute[230713]: 2025-10-02 13:15:19.631 2 INFO nova.compute.claims [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:15:19 np0005465987 nova_compute[230713]: 2025-10-02 13:15:19.978 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:15:20 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3840151140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.427 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.433 2 DEBUG nova.compute.provider_tree [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.454 2 DEBUG nova.scheduler.client.report [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.502 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.503 2 DEBUG nova.compute.manager [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.562 2 DEBUG nova.compute.manager [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.562 2 DEBUG nova.network.neutron [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.588 2 INFO nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.616 2 DEBUG nova.compute.manager [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.728 2 DEBUG nova.compute.manager [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.730 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.730 2 INFO nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Creating image(s)#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.763 2 DEBUG nova.storage.rbd_utils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 0a899aee-b0ae-4350-9f56-446d42ef77d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.799 2 DEBUG nova.storage.rbd_utils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 0a899aee-b0ae-4350-9f56-446d42ef77d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.827 2 DEBUG nova.storage.rbd_utils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 0a899aee-b0ae-4350-9f56-446d42ef77d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.831 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:20.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.902 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.903 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.904 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.904 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "50c3d0e01c5fd68886c717f1fdd053015a0fe968" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.935 2 DEBUG nova.storage.rbd_utils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 0a899aee-b0ae-4350-9f56-446d42ef77d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:20 np0005465987 nova_compute[230713]: 2025-10-02 13:15:20.939 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 0a899aee-b0ae-4350-9f56-446d42ef77d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:21 np0005465987 nova_compute[230713]: 2025-10-02 13:15:21.124 2 DEBUG nova.policy [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ffe4d737e4414fb3a3e358f8ca3f3e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '08e102ae48244af2ab448a2e1ff757df', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.225191) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410921225223, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1077, "num_deletes": 251, "total_data_size": 2241817, "memory_usage": 2271096, "flush_reason": "Manual Compaction"}
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410921235352, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 1478566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83749, "largest_seqno": 84821, "table_properties": {"data_size": 1473836, "index_size": 2317, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10624, "raw_average_key_size": 19, "raw_value_size": 1464188, "raw_average_value_size": 2731, "num_data_blocks": 103, "num_entries": 536, "num_filter_entries": 536, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410837, "oldest_key_time": 1759410837, "file_creation_time": 1759410921, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 10190 microseconds, and 3920 cpu microseconds.
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.235380) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 1478566 bytes OK
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.235396) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.236844) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.236856) EVENT_LOG_v1 {"time_micros": 1759410921236852, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.236870) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 2236595, prev total WAL file size 2236595, number of live WAL files 2.
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.237600) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(1443KB)], [171(11MB)]
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410921237635, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 13262889, "oldest_snapshot_seqno": -1}
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 10395 keys, 11301663 bytes, temperature: kUnknown
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410921284364, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 11301663, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11237287, "index_size": 37255, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26053, "raw_key_size": 274533, "raw_average_key_size": 26, "raw_value_size": 11058132, "raw_average_value_size": 1063, "num_data_blocks": 1405, "num_entries": 10395, "num_filter_entries": 10395, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759410921, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.284574) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 11301663 bytes
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.285479) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 283.5 rd, 241.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 11.2 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(16.6) write-amplify(7.6) OK, records in: 10914, records dropped: 519 output_compression: NoCompression
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.285494) EVENT_LOG_v1 {"time_micros": 1759410921285487, "job": 110, "event": "compaction_finished", "compaction_time_micros": 46787, "compaction_time_cpu_micros": 24525, "output_level": 6, "num_output_files": 1, "total_output_size": 11301663, "num_input_records": 10914, "num_output_records": 10395, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410921285779, "job": 110, "event": "table_file_deletion", "file_number": 173}
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410921287825, "job": 110, "event": "table_file_deletion", "file_number": 171}
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.237498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.287872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.287876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.287877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.287878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:15:21 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:15:21.287880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:15:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:21.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:21 np0005465987 nova_compute[230713]: 2025-10-02 13:15:21.963 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/50c3d0e01c5fd68886c717f1fdd053015a0fe968 0a899aee-b0ae-4350-9f56-446d42ef77d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:22 np0005465987 nova_compute[230713]: 2025-10-02 13:15:22.056 2 DEBUG nova.storage.rbd_utils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] resizing rbd image 0a899aee-b0ae-4350-9f56-446d42ef77d2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:15:22 np0005465987 nova_compute[230713]: 2025-10-02 13:15:22.184 2 DEBUG nova.network.neutron [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Successfully created port: c7bc4d7b-3f13-4f9c-be12-6edcff4d424e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:15:22 np0005465987 nova_compute[230713]: 2025-10-02 13:15:22.193 2 DEBUG nova.objects.instance [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'migration_context' on Instance uuid 0a899aee-b0ae-4350-9f56-446d42ef77d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:15:22 np0005465987 nova_compute[230713]: 2025-10-02 13:15:22.212 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:15:22 np0005465987 nova_compute[230713]: 2025-10-02 13:15:22.212 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Ensure instance console log exists: /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:15:22 np0005465987 nova_compute[230713]: 2025-10-02 13:15:22.213 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:22 np0005465987 nova_compute[230713]: 2025-10-02 13:15:22.213 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:22 np0005465987 nova_compute[230713]: 2025-10-02 13:15:22.213 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:22.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:23 np0005465987 nova_compute[230713]: 2025-10-02 13:15:23.235 2 DEBUG nova.network.neutron [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Successfully updated port: c7bc4d7b-3f13-4f9c-be12-6edcff4d424e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:15:23 np0005465987 nova_compute[230713]: 2025-10-02 13:15:23.273 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:15:23 np0005465987 nova_compute[230713]: 2025-10-02 13:15:23.273 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:15:23 np0005465987 nova_compute[230713]: 2025-10-02 13:15:23.274 2 DEBUG nova.network.neutron [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:15:23 np0005465987 nova_compute[230713]: 2025-10-02 13:15:23.340 2 DEBUG nova.compute.manager [req-fc9e705b-3d39-468d-a0a4-82bc97a7a865 req-56db19ce-bb0e-41f4-908c-1fa31da5f6be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-changed-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:23 np0005465987 nova_compute[230713]: 2025-10-02 13:15:23.340 2 DEBUG nova.compute.manager [req-fc9e705b-3d39-468d-a0a4-82bc97a7a865 req-56db19ce-bb0e-41f4-908c-1fa31da5f6be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Refreshing instance network info cache due to event network-changed-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:15:23 np0005465987 nova_compute[230713]: 2025-10-02 13:15:23.341 2 DEBUG oslo_concurrency.lockutils [req-fc9e705b-3d39-468d-a0a4-82bc97a7a865 req-56db19ce-bb0e-41f4-908c-1fa31da5f6be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:15:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:23.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:23 np0005465987 nova_compute[230713]: 2025-10-02 13:15:23.490 2 DEBUG nova.network.neutron [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:15:23 np0005465987 nova_compute[230713]: 2025-10-02 13:15:23.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:24.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:24 np0005465987 nova_compute[230713]: 2025-10-02 13:15:24.900 2 DEBUG nova.network.neutron [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating instance_info_cache with network_info: [{"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:15:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:25.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:25 np0005465987 nova_compute[230713]: 2025-10-02 13:15:25.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.254 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.255 2 DEBUG nova.compute.manager [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Instance network_info: |[{"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.255 2 DEBUG oslo_concurrency.lockutils [req-fc9e705b-3d39-468d-a0a4-82bc97a7a865 req-56db19ce-bb0e-41f4-908c-1fa31da5f6be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.256 2 DEBUG nova.network.neutron [req-fc9e705b-3d39-468d-a0a4-82bc97a7a865 req-56db19ce-bb0e-41f4-908c-1fa31da5f6be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Refreshing network info cache for port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.259 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Start _get_guest_xml network_info=[{"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'image_id': 'c2d0c2bc-fe21-4689-86ae-d6728c15874c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.264 2 WARNING nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.274 2 DEBUG nova.virt.libvirt.host [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.275 2 DEBUG nova.virt.libvirt.host [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.279 2 DEBUG nova.virt.libvirt.host [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.280 2 DEBUG nova.virt.libvirt.host [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.281 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.281 2 DEBUG nova.virt.hardware [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T11:57:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef129e5-cce4-4465-9674-03d3559e8a14',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T11:57:41Z,direct_url=<?>,disk_format='qcow2',id=c2d0c2bc-fe21-4689-86ae-d6728c15874c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1533ac528d35434c826050eed402afba',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T11:57:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.282 2 DEBUG nova.virt.hardware [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.282 2 DEBUG nova.virt.hardware [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.283 2 DEBUG nova.virt.hardware [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.283 2 DEBUG nova.virt.hardware [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.283 2 DEBUG nova.virt.hardware [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.283 2 DEBUG nova.virt.hardware [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.284 2 DEBUG nova.virt.hardware [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.284 2 DEBUG nova.virt.hardware [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.284 2 DEBUG nova.virt.hardware [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.285 2 DEBUG nova.virt.hardware [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.289 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:15:26 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2265586687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.759 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.793 2 DEBUG nova.storage.rbd_utils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 0a899aee-b0ae-4350-9f56-446d42ef77d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:26 np0005465987 nova_compute[230713]: 2025-10-02 13:15:26.799 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:26.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:27 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:15:27 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3603674874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.247 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.249 2 DEBUG nova.virt.libvirt.vif [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:15:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1996915229',display_name='tempest-TestNetworkAdvancedServerOps-server-1996915229',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1996915229',id=221,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDRuo4KdXDZKQ3oIyyNSXE1rik8EaVzgz9MZA+kI2JiH48qHMFlLWYrWwJzNWAoOqZLFOIMkCOlFgIqDZkrdOzURcb4pwQu8clr+WCDZQoFdB5ndJaWWZy3VHIv+OtGSTw==',key_name='tempest-TestNetworkAdvancedServerOps-1841926110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-di1mdv1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:15:20Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=0a899aee-b0ae-4350-9f56-446d42ef77d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.249 2 DEBUG nova.network.os_vif_util [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.250 2 DEBUG nova.network.os_vif_util [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.250 2 DEBUG nova.objects.instance [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a899aee-b0ae-4350-9f56-446d42ef77d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.336 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  <uuid>0a899aee-b0ae-4350-9f56-446d42ef77d2</uuid>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  <name>instance-000000dd</name>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  <memory>131072</memory>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  <vcpu>1</vcpu>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  <metadata>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1996915229</nova:name>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <nova:creationTime>2025-10-02 13:15:26</nova:creationTime>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <nova:flavor name="m1.nano">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <nova:memory>128</nova:memory>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <nova:disk>1</nova:disk>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <nova:swap>0</nova:swap>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      </nova:flavor>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <nova:owner>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <nova:user uuid="ffe4d737e4414fb3a3e358f8ca3f3e1e">tempest-TestNetworkAdvancedServerOps-1527846432-project-member</nova:user>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <nova:project uuid="08e102ae48244af2ab448a2e1ff757df">tempest-TestNetworkAdvancedServerOps-1527846432</nova:project>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      </nova:owner>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <nova:root type="image" uuid="c2d0c2bc-fe21-4689-86ae-d6728c15874c"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <nova:ports>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <nova:port uuid="c7bc4d7b-3f13-4f9c-be12-6edcff4d424e">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        </nova:port>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      </nova:ports>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    </nova:instance>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  </metadata>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  <sysinfo type="smbios">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <system>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <entry name="serial">0a899aee-b0ae-4350-9f56-446d42ef77d2</entry>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <entry name="uuid">0a899aee-b0ae-4350-9f56-446d42ef77d2</entry>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    </system>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  </sysinfo>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  <os>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <boot dev="hd"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <smbios mode="sysinfo"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  </os>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  <features>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <acpi/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <apic/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <vmcoreinfo/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  </features>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  <clock offset="utc">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <timer name="hpet" present="no"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  </clock>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  <cpu mode="custom" match="exact">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <model>Nehalem</model>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  </cpu>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  <devices>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <disk type="network" device="disk">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/0a899aee-b0ae-4350-9f56-446d42ef77d2_disk">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <target dev="vda" bus="virtio"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <disk type="network" device="cdrom">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <driver type="raw" cache="none"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <source protocol="rbd" name="vms/0a899aee-b0ae-4350-9f56-446d42ef77d2_disk.config">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      </source>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <auth username="openstack">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:        <secret type="ceph" uuid="fd4c5763-22d1-50ea-ad0b-96a3dc3040b2"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      </auth>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <target dev="sda" bus="sata"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    </disk>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <interface type="ethernet">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <mac address="fa:16:3e:c7:1d:48"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <mtu size="1442"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <target dev="tapc7bc4d7b-3f"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    </interface>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <serial type="pty">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <log file="/var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/console.log" append="off"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    </serial>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <video>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <model type="virtio"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    </video>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <input type="tablet" bus="usb"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <rng model="virtio">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    </rng>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <controller type="usb" index="0"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    <memballoon model="virtio">
Oct  2 09:15:27 np0005465987 nova_compute[230713]:      <stats period="10"/>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:    </memballoon>
Oct  2 09:15:27 np0005465987 nova_compute[230713]:  </devices>
Oct  2 09:15:27 np0005465987 nova_compute[230713]: </domain>
Oct  2 09:15:27 np0005465987 nova_compute[230713]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.337 2 DEBUG nova.compute.manager [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Preparing to wait for external event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.338 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.338 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.338 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.339 2 DEBUG nova.virt.libvirt.vif [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:15:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1996915229',display_name='tempest-TestNetworkAdvancedServerOps-server-1996915229',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1996915229',id=221,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDRuo4KdXDZKQ3oIyyNSXE1rik8EaVzgz9MZA+kI2JiH48qHMFlLWYrWwJzNWAoOqZLFOIMkCOlFgIqDZkrdOzURcb4pwQu8clr+WCDZQoFdB5ndJaWWZy3VHIv+OtGSTw==',key_name='tempest-TestNetworkAdvancedServerOps-1841926110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-di1mdv1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:15:20Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=0a899aee-b0ae-4350-9f56-446d42ef77d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.340 2 DEBUG nova.network.os_vif_util [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.341 2 DEBUG nova.network.os_vif_util [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.341 2 DEBUG os_vif [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7bc4d7b-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc7bc4d7b-3f, col_values=(('external_ids', {'iface-id': 'c7bc4d7b-3f13-4f9c-be12-6edcff4d424e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:1d:48', 'vm-uuid': '0a899aee-b0ae-4350-9f56-446d42ef77d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:27 np0005465987 NetworkManager[44910]: <info>  [1759410927.3510] manager: (tapc7bc4d7b-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.358 2 INFO os_vif [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f')#033[00m
Oct  2 09:15:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:15:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:27.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.442 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.442 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.443 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] No VIF found with MAC fa:16:3e:c7:1d:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.443 2 INFO nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Using config drive#033[00m
Oct  2 09:15:27 np0005465987 nova_compute[230713]: 2025-10-02 13:15:27.470 2 DEBUG nova.storage.rbd_utils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 0a899aee-b0ae-4350-9f56-446d42ef77d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.001 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.002 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.002 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.230 2 INFO nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Creating config drive at /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/disk.config#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.234 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphlxqrvwu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.366 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphlxqrvwu" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.402 2 DEBUG nova.storage.rbd_utils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] rbd image 0a899aee-b0ae-4350-9f56-446d42ef77d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.406 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/disk.config 0a899aee-b0ae-4350-9f56-446d42ef77d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.432 2 DEBUG nova.network.neutron [req-fc9e705b-3d39-468d-a0a4-82bc97a7a865 req-56db19ce-bb0e-41f4-908c-1fa31da5f6be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updated VIF entry in instance network info cache for port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.433 2 DEBUG nova.network.neutron [req-fc9e705b-3d39-468d-a0a4-82bc97a7a865 req-56db19ce-bb0e-41f4-908c-1fa31da5f6be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating instance_info_cache with network_info: [{"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.479 2 DEBUG oslo_concurrency.lockutils [req-fc9e705b-3d39-468d-a0a4-82bc97a7a865 req-56db19ce-bb0e-41f4-908c-1fa31da5f6be d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.553 2 DEBUG oslo_concurrency.processutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/disk.config 0a899aee-b0ae-4350-9f56-446d42ef77d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.554 2 INFO nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Deleting local config drive /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/disk.config because it was imported into RBD.#033[00m
Oct  2 09:15:28 np0005465987 kernel: tapc7bc4d7b-3f: entered promiscuous mode
Oct  2 09:15:28 np0005465987 NetworkManager[44910]: <info>  [1759410928.5948] manager: (tapc7bc4d7b-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Oct  2 09:15:28 np0005465987 ovn_controller[129172]: 2025-10-02T13:15:28Z|00872|binding|INFO|Claiming lport c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for this chassis.
Oct  2 09:15:28 np0005465987 ovn_controller[129172]: 2025-10-02T13:15:28Z|00873|binding|INFO|c7bc4d7b-3f13-4f9c-be12-6edcff4d424e: Claiming fa:16:3e:c7:1d:48 10.100.0.7
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.614 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:1d:48 10.100.0.7'], port_security=['fa:16:3e:c7:1d:48 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0a899aee-b0ae-4350-9f56-446d42ef77d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bd81072c-a0ca-4da6-a274-e2604cd954b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8837441d-fb89-46a9-8aa6-bfdcde8d99f9, chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.615 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e in datapath faf67f78-db61-49af-8b59-741a7fc7cb4c bound to our chassis#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.616 138325 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network faf67f78-db61-49af-8b59-741a7fc7cb4c#033[00m
Oct  2 09:15:28 np0005465987 systemd-udevd[318359]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.628 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[2de8e076-7a33-4eff-987a-35bfc25bc2c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.629 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfaf67f78-d1 in ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.631 233943 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfaf67f78-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.631 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[023b6f27-9ce3-43bf-9578-6ce064ea522f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.631 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd4612e-3ff3-4600-b660-78f0b3f316e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 NetworkManager[44910]: <info>  [1759410928.6336] device (tapc7bc4d7b-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:15:28 np0005465987 NetworkManager[44910]: <info>  [1759410928.6346] device (tapc7bc4d7b-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:15:28 np0005465987 systemd-machined[188335]: New machine qemu-98-instance-000000dd.
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.642 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[8902771f-497a-4537-a045-6c07603419cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.667 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7bbf8c-331a-4dc4-a945-706750213bd8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 systemd[1]: Started Virtual Machine qemu-98-instance-000000dd.
Oct  2 09:15:28 np0005465987 ovn_controller[129172]: 2025-10-02T13:15:28Z|00874|binding|INFO|Setting lport c7bc4d7b-3f13-4f9c-be12-6edcff4d424e ovn-installed in OVS
Oct  2 09:15:28 np0005465987 ovn_controller[129172]: 2025-10-02T13:15:28Z|00875|binding|INFO|Setting lport c7bc4d7b-3f13-4f9c-be12-6edcff4d424e up in Southbound
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.696 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[57f849c2-ea57-4d51-bd52-363fc06af871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.703 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb0aa76-b26c-4536-8a0b-4d7ebb1b0533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 NetworkManager[44910]: <info>  [1759410928.7050] manager: (tapfaf67f78-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/413)
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.734 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[361feed8-cfe9-4797-9e62-ced9ba5d9c49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.736 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3a9a8e-24f1-4c25-a514-a0ad5db63411]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:28 np0005465987 NetworkManager[44910]: <info>  [1759410928.7593] device (tapfaf67f78-d0): carrier: link connected
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.765 234209 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa5d752-0eb6-4d36-a837-afe3f0b249fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.781 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[ce059b2e-b860-400c-8326-55a782443dd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfaf67f78-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:4b:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 899733, 'reachable_time': 26151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318397, 'error': None, 'target': 'ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.794 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[bd28fce0-25b4-4a09-8933-c96d451d4171]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:4b6c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 899733, 'tstamp': 899733}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318398, 'error': None, 'target': 'ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.811 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[4e587df7-d643-418c-9dbc-b7e342df4908]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfaf67f78-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:4b:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 899733, 'reachable_time': 26151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318399, 'error': None, 'target': 'ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.837 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[674ca5f4-cd53-462e-9ab4-605e4bd07528]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:28.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.891 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[367bc244-3925-40fc-ba39-8708bf1c6be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.892 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaf67f78-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.892 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.893 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaf67f78-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:28 np0005465987 kernel: tapfaf67f78-d0: entered promiscuous mode
Oct  2 09:15:28 np0005465987 NetworkManager[44910]: <info>  [1759410928.8954] manager: (tapfaf67f78-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.899 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfaf67f78-d0, col_values=(('external_ids', {'iface-id': '76911ddd-98dc-48ab-a192-244a4cada461'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:28 np0005465987 ovn_controller[129172]: 2025-10-02T13:15:28Z|00876|binding|INFO|Releasing lport 76911ddd-98dc-48ab-a192-244a4cada461 from this chassis (sb_readonly=0)
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.917 138325 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faf67f78-db61-49af-8b59-741a7fc7cb4c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faf67f78-db61-49af-8b59-741a7fc7cb4c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.918 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[70d350cc-85c5-4b9b-b05d-7deca0130ba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.919 138325 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: global
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    log         /dev/log local0 debug
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    log-tag     haproxy-metadata-proxy-faf67f78-db61-49af-8b59-741a7fc7cb4c
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    user        root
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    group       root
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    maxconn     1024
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    pidfile     /var/lib/neutron/external/pids/faf67f78-db61-49af-8b59-741a7fc7cb4c.pid.haproxy
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    daemon
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: defaults
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    log global
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    mode http
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    option httplog
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    option dontlognull
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    option http-server-close
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    option forwardfor
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    retries                 3
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    timeout http-request    30s
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    timeout connect         30s
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    timeout client          32s
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    timeout server          32s
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    timeout http-keep-alive 30s
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: listen listener
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    bind 169.254.169.254:80
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]:    http-request add-header X-OVN-Network-ID faf67f78-db61-49af-8b59-741a7fc7cb4c
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:15:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:28.921 138325 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'env', 'PROCESS_TAG=haproxy-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/faf67f78-db61-49af-8b59-741a7fc7cb4c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.973 2 DEBUG nova.compute.manager [req-cab9a54b-b824-44cd-8fc6-87c93eb58554 req-06d5c63f-d8f1-4d87-b545-8972a89812ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.974 2 DEBUG oslo_concurrency.lockutils [req-cab9a54b-b824-44cd-8fc6-87c93eb58554 req-06d5c63f-d8f1-4d87-b545-8972a89812ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.974 2 DEBUG oslo_concurrency.lockutils [req-cab9a54b-b824-44cd-8fc6-87c93eb58554 req-06d5c63f-d8f1-4d87-b545-8972a89812ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.974 2 DEBUG oslo_concurrency.lockutils [req-cab9a54b-b824-44cd-8fc6-87c93eb58554 req-06d5c63f-d8f1-4d87-b545-8972a89812ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:28 np0005465987 nova_compute[230713]: 2025-10-02 13:15:28.975 2 DEBUG nova.compute.manager [req-cab9a54b-b824-44cd-8fc6-87c93eb58554 req-06d5c63f-d8f1-4d87-b545-8972a89812ce d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Processing event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:15:29 np0005465987 podman[318471]: 2025-10-02 13:15:29.292094743 +0000 UTC m=+0.049313409 container create e46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:15:29 np0005465987 systemd[1]: Started libpod-conmon-e46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e.scope.
Oct  2 09:15:29 np0005465987 systemd[1]: Started libcrun container.
Oct  2 09:15:29 np0005465987 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e891896452dde17c3580a3acd9ea22bdb69dd6ab2183db82ac083fbfa8403f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:15:29 np0005465987 podman[318471]: 2025-10-02 13:15:29.3569143 +0000 UTC m=+0.114132946 container init e46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 09:15:29 np0005465987 podman[318471]: 2025-10-02 13:15:29.263335602 +0000 UTC m=+0.020554248 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:15:29 np0005465987 podman[318471]: 2025-10-02 13:15:29.362214434 +0000 UTC m=+0.119433060 container start e46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:15:29 np0005465987 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[318487]: [NOTICE]   (318504) : New worker (318509) forked
Oct  2 09:15:29 np0005465987 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[318487]: [NOTICE]   (318504) : Loading success.
Oct  2 09:15:29 np0005465987 podman[318484]: 2025-10-02 13:15:29.386907413 +0000 UTC m=+0.056114073 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:15:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.638 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410929.6381118, 0a899aee-b0ae-4350-9f56-446d42ef77d2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.639 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] VM Started (Lifecycle Event)#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.641 2 DEBUG nova.compute.manager [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.643 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.647 2 INFO nova.virt.libvirt.driver [-] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Instance spawned successfully.#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.648 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.670 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.672 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.681 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.681 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.682 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.682 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.682 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.683 2 DEBUG nova.virt.libvirt.driver [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.728 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.729 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410929.6391888, 0a899aee-b0ae-4350-9f56-446d42ef77d2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.729 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.772 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.775 2 DEBUG nova.virt.driver [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] Emitting event <LifecycleEvent: 1759410929.643286, 0a899aee-b0ae-4350-9f56-446d42ef77d2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.776 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.787 2 INFO nova.compute.manager [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Took 9.06 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.787 2 DEBUG nova.compute.manager [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.830 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.834 2 DEBUG nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.868 2 INFO nova.compute.manager [None req-8672d99c-87c3-4785-9a28-70dc1b048e29 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.882 2 INFO nova.compute.manager [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Took 12.31 seconds to build instance.#033[00m
Oct  2 09:15:29 np0005465987 nova_compute[230713]: 2025-10-02 13:15:29.929 2 DEBUG oslo_concurrency.lockutils [None req-ca12a339-c8f0-4038-bb56-4ffe7dbd58e3 ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:30.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:31 np0005465987 nova_compute[230713]: 2025-10-02 13:15:31.172 2 DEBUG nova.compute.manager [req-1c4838d5-ae0c-499e-a9a4-3b8c54296f37 req-2270ad89-8fd3-471a-b5c6-41d47ff673ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:31 np0005465987 nova_compute[230713]: 2025-10-02 13:15:31.172 2 DEBUG oslo_concurrency.lockutils [req-1c4838d5-ae0c-499e-a9a4-3b8c54296f37 req-2270ad89-8fd3-471a-b5c6-41d47ff673ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:31 np0005465987 nova_compute[230713]: 2025-10-02 13:15:31.172 2 DEBUG oslo_concurrency.lockutils [req-1c4838d5-ae0c-499e-a9a4-3b8c54296f37 req-2270ad89-8fd3-471a-b5c6-41d47ff673ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:31 np0005465987 nova_compute[230713]: 2025-10-02 13:15:31.173 2 DEBUG oslo_concurrency.lockutils [req-1c4838d5-ae0c-499e-a9a4-3b8c54296f37 req-2270ad89-8fd3-471a-b5c6-41d47ff673ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:31 np0005465987 nova_compute[230713]: 2025-10-02 13:15:31.173 2 DEBUG nova.compute.manager [req-1c4838d5-ae0c-499e-a9a4-3b8c54296f37 req-2270ad89-8fd3-471a-b5c6-41d47ff673ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] No waiting events found dispatching network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:15:31 np0005465987 nova_compute[230713]: 2025-10-02 13:15:31.173 2 WARNING nova.compute.manager [req-1c4838d5-ae0c-499e-a9a4-3b8c54296f37 req-2270ad89-8fd3-471a-b5c6-41d47ff673ee d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received unexpected event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for instance with vm_state active and task_state None.#033[00m
Oct  2 09:15:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:15:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:31.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:15:32 np0005465987 nova_compute[230713]: 2025-10-02 13:15:32.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:32.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:33.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:33 np0005465987 nova_compute[230713]: 2025-10-02 13:15:33.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:33 np0005465987 podman[318519]: 2025-10-02 13:15:33.873506054 +0000 UTC m=+0.054528649 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:15:33 np0005465987 podman[318518]: 2025-10-02 13:15:33.903525888 +0000 UTC m=+0.087702590 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:15:33 np0005465987 podman[318520]: 2025-10-02 13:15:33.904620507 +0000 UTC m=+0.081268634 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:15:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:34.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:15:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:35.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:15:36 np0005465987 NetworkManager[44910]: <info>  [1759410936.1572] manager: (patch-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Oct  2 09:15:36 np0005465987 NetworkManager[44910]: <info>  [1759410936.1578] manager: (patch-br-int-to-provnet-d4c15165-6ca5-4aca-85b0-69b58109c521): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Oct  2 09:15:36 np0005465987 nova_compute[230713]: 2025-10-02 13:15:36.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:36 np0005465987 nova_compute[230713]: 2025-10-02 13:15:36.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:36 np0005465987 ovn_controller[129172]: 2025-10-02T13:15:36Z|00877|binding|INFO|Releasing lport 76911ddd-98dc-48ab-a192-244a4cada461 from this chassis (sb_readonly=0)
Oct  2 09:15:36 np0005465987 nova_compute[230713]: 2025-10-02 13:15:36.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:36 np0005465987 nova_compute[230713]: 2025-10-02 13:15:36.833 2 DEBUG nova.compute.manager [req-dfd933ff-0aec-4c53-89dd-803f6b0666cf req-366ce300-6e50-4fc7-a91c-5ae65a39adca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-changed-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:36 np0005465987 nova_compute[230713]: 2025-10-02 13:15:36.833 2 DEBUG nova.compute.manager [req-dfd933ff-0aec-4c53-89dd-803f6b0666cf req-366ce300-6e50-4fc7-a91c-5ae65a39adca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Refreshing instance network info cache due to event network-changed-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:15:36 np0005465987 nova_compute[230713]: 2025-10-02 13:15:36.834 2 DEBUG oslo_concurrency.lockutils [req-dfd933ff-0aec-4c53-89dd-803f6b0666cf req-366ce300-6e50-4fc7-a91c-5ae65a39adca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:15:36 np0005465987 nova_compute[230713]: 2025-10-02 13:15:36.834 2 DEBUG oslo_concurrency.lockutils [req-dfd933ff-0aec-4c53-89dd-803f6b0666cf req-366ce300-6e50-4fc7-a91c-5ae65a39adca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:15:36 np0005465987 nova_compute[230713]: 2025-10-02 13:15:36.834 2 DEBUG nova.network.neutron [req-dfd933ff-0aec-4c53-89dd-803f6b0666cf req-366ce300-6e50-4fc7-a91c-5ae65a39adca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Refreshing network info cache for port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:15:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:15:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:36.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:15:37 np0005465987 nova_compute[230713]: 2025-10-02 13:15:37.004 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:37 np0005465987 nova_compute[230713]: 2025-10-02 13:15:37.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:15:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:37.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:15:37 np0005465987 nova_compute[230713]: 2025-10-02 13:15:37.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:38 np0005465987 nova_compute[230713]: 2025-10-02 13:15:38.699 2 DEBUG nova.network.neutron [req-dfd933ff-0aec-4c53-89dd-803f6b0666cf req-366ce300-6e50-4fc7-a91c-5ae65a39adca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updated VIF entry in instance network info cache for port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:15:38 np0005465987 nova_compute[230713]: 2025-10-02 13:15:38.700 2 DEBUG nova.network.neutron [req-dfd933ff-0aec-4c53-89dd-803f6b0666cf req-366ce300-6e50-4fc7-a91c-5ae65a39adca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating instance_info_cache with network_info: [{"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:15:38 np0005465987 nova_compute[230713]: 2025-10-02 13:15:38.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:38 np0005465987 nova_compute[230713]: 2025-10-02 13:15:38.800 2 DEBUG oslo_concurrency.lockutils [req-dfd933ff-0aec-4c53-89dd-803f6b0666cf req-366ce300-6e50-4fc7-a91c-5ae65a39adca d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:15:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:15:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:38.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:15:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:39.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:40.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:15:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:41.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:15:42 np0005465987 nova_compute[230713]: 2025-10-02 13:15:42.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:42 np0005465987 ovn_controller[129172]: 2025-10-02T13:15:42Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:1d:48 10.100.0.7
Oct  2 09:15:42 np0005465987 ovn_controller[129172]: 2025-10-02T13:15:42Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:1d:48 10.100.0.7
Oct  2 09:15:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:15:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:15:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:15:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:15:42 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:15:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:15:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:42.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:15:42 np0005465987 nova_compute[230713]: 2025-10-02 13:15:42.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:42 np0005465987 nova_compute[230713]: 2025-10-02 13:15:42.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:15:42 np0005465987 nova_compute[230713]: 2025-10-02 13:15:42.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:15:43 np0005465987 nova_compute[230713]: 2025-10-02 13:15:43.163 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:15:43 np0005465987 nova_compute[230713]: 2025-10-02 13:15:43.164 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquired lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:15:43 np0005465987 nova_compute[230713]: 2025-10-02 13:15:43.164 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:15:43 np0005465987 nova_compute[230713]: 2025-10-02 13:15:43.164 2 DEBUG nova.objects.instance [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a899aee-b0ae-4350-9f56-446d42ef77d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:15:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:43.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:43 np0005465987 nova_compute[230713]: 2025-10-02 13:15:43.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:44.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:15:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:45.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:15:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:46 np0005465987 nova_compute[230713]: 2025-10-02 13:15:46.386 2 DEBUG nova.network.neutron [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating instance_info_cache with network_info: [{"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:15:46 np0005465987 nova_compute[230713]: 2025-10-02 13:15:46.410 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Releasing lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:15:46 np0005465987 nova_compute[230713]: 2025-10-02 13:15:46.411 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:15:46 np0005465987 nova_compute[230713]: 2025-10-02 13:15:46.411 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:46.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:47 np0005465987 nova_compute[230713]: 2025-10-02 13:15:47.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:47.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:47 np0005465987 nova_compute[230713]: 2025-10-02 13:15:47.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:48 np0005465987 nova_compute[230713]: 2025-10-02 13:15:48.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:48.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:49.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:49 np0005465987 nova_compute[230713]: 2025-10-02 13:15:49.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:49 np0005465987 nova_compute[230713]: 2025-10-02 13:15:49.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:15:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:15:50 np0005465987 nova_compute[230713]: 2025-10-02 13:15:50.112 2 INFO nova.compute.manager [None req-ce1aa3a9-efd8-4444-b6e8-e98f6a060c8d ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Get console output#033[00m
Oct  2 09:15:50 np0005465987 nova_compute[230713]: 2025-10-02 13:15:50.118 14392 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:15:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:50.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:50 np0005465987 nova_compute[230713]: 2025-10-02 13:15:50.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.071 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.072 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.072 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.072 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.072 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:51.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:15:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/162551055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.493 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.582 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000dd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.582 2 DEBUG nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] skipping disk for instance-000000dd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.729 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.730 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4064MB free_disk=20.942596435546875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.730 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.731 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.831 2 INFO nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating resource usage from migration f57aae45-e0d4-4519-a990-fd077295e0a0#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.875 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Instance 0a899aee-b0ae-4350-9f56-446d42ef77d2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.876 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.876 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.902 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.918 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.919 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.964 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:15:51 np0005465987 nova_compute[230713]: 2025-10-02 13:15:51.993 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:15:52 np0005465987 nova_compute[230713]: 2025-10-02 13:15:52.084 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:52 np0005465987 nova_compute[230713]: 2025-10-02 13:15:52.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:15:52 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3437230346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:15:52 np0005465987 nova_compute[230713]: 2025-10-02 13:15:52.557 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:52 np0005465987 nova_compute[230713]: 2025-10-02 13:15:52.566 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:15:52 np0005465987 nova_compute[230713]: 2025-10-02 13:15:52.597 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:15:52 np0005465987 nova_compute[230713]: 2025-10-02 13:15:52.642 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:15:52 np0005465987 nova_compute[230713]: 2025-10-02 13:15:52.642 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:52.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:15:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:53.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:15:53 np0005465987 nova_compute[230713]: 2025-10-02 13:15:53.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:54.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:15:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/929483847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:15:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:15:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/929483847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:15:55 np0005465987 nova_compute[230713]: 2025-10-02 13:15:55.384 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:15:55 np0005465987 nova_compute[230713]: 2025-10-02 13:15:55.385 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:15:55 np0005465987 nova_compute[230713]: 2025-10-02 13:15:55.385 2 DEBUG nova.network.neutron [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:15:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:55.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:55 np0005465987 nova_compute[230713]: 2025-10-02 13:15:55.641 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:55 np0005465987 nova_compute[230713]: 2025-10-02 13:15:55.642 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:15:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:56 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:56.551 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:15:56 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:15:56.552 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:15:56 np0005465987 nova_compute[230713]: 2025-10-02 13:15:56.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:56.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:57 np0005465987 nova_compute[230713]: 2025-10-02 13:15:57.305 2 DEBUG nova.network.neutron [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating instance_info_cache with network_info: [{"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:15:57 np0005465987 nova_compute[230713]: 2025-10-02 13:15:57.388 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:15:57 np0005465987 nova_compute[230713]: 2025-10-02 13:15:57.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:57.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:57 np0005465987 nova_compute[230713]: 2025-10-02 13:15:57.758 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Oct  2 09:15:57 np0005465987 nova_compute[230713]: 2025-10-02 13:15:57.758 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Creating file /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/eff5651e31ed46b481c576acc8ace947.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Oct  2 09:15:57 np0005465987 nova_compute[230713]: 2025-10-02 13:15:57.758 2 DEBUG oslo_concurrency.processutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/eff5651e31ed46b481c576acc8ace947.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:58 np0005465987 nova_compute[230713]: 2025-10-02 13:15:58.235 2 DEBUG oslo_concurrency.processutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/eff5651e31ed46b481c576acc8ace947.tmp" returned: 1 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:58 np0005465987 nova_compute[230713]: 2025-10-02 13:15:58.235 2 DEBUG oslo_concurrency.processutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2/eff5651e31ed46b481c576acc8ace947.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Oct  2 09:15:58 np0005465987 nova_compute[230713]: 2025-10-02 13:15:58.236 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Creating directory /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Oct  2 09:15:58 np0005465987 nova_compute[230713]: 2025-10-02 13:15:58.236 2 DEBUG oslo_concurrency.processutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:58 np0005465987 nova_compute[230713]: 2025-10-02 13:15:58.451 2 DEBUG oslo_concurrency.processutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/0a899aee-b0ae-4350-9f56-446d42ef77d2" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:58 np0005465987 nova_compute[230713]: 2025-10-02 13:15:58.457 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 09:15:58 np0005465987 nova_compute[230713]: 2025-10-02 13:15:58.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:15:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:58.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:15:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:15:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:59.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:59 np0005465987 podman[318806]: 2025-10-02 13:15:59.851882111 +0000 UTC m=+0.063138653 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:16:00 np0005465987 kernel: tapc7bc4d7b-3f (unregistering): left promiscuous mode
Oct  2 09:16:00 np0005465987 NetworkManager[44910]: <info>  [1759410960.8678] device (tapc7bc4d7b-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:16:00 np0005465987 ovn_controller[129172]: 2025-10-02T13:16:00Z|00878|binding|INFO|Releasing lport c7bc4d7b-3f13-4f9c-be12-6edcff4d424e from this chassis (sb_readonly=0)
Oct  2 09:16:00 np0005465987 ovn_controller[129172]: 2025-10-02T13:16:00Z|00879|binding|INFO|Setting lport c7bc4d7b-3f13-4f9c-be12-6edcff4d424e down in Southbound
Oct  2 09:16:00 np0005465987 ovn_controller[129172]: 2025-10-02T13:16:00Z|00880|binding|INFO|Removing iface tapc7bc4d7b-3f ovn-installed in OVS
Oct  2 09:16:00 np0005465987 nova_compute[230713]: 2025-10-02 13:16:00.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:00 np0005465987 nova_compute[230713]: 2025-10-02 13:16:00.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:00.888 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:1d:48 10.100.0.7'], port_security=['fa:16:3e:c7:1d:48 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0a899aee-b0ae-4350-9f56-446d42ef77d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08e102ae48244af2ab448a2e1ff757df', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd81072c-a0ca-4da6-a274-e2604cd954b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8837441d-fb89-46a9-8aa6-bfdcde8d99f9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>], logical_port=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5990f7a850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:00.890 138325 INFO neutron.agent.ovn.metadata.agent [-] Port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e in datapath faf67f78-db61-49af-8b59-741a7fc7cb4c unbound from our chassis#033[00m
Oct  2 09:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:00.891 138325 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network faf67f78-db61-49af-8b59-741a7fc7cb4c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:00.893 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b4a29e-e5ca-47cf-bb35-80f2aa43d734]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:00.893 138325 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c namespace which is not needed anymore#033[00m
Oct  2 09:16:00 np0005465987 nova_compute[230713]: 2025-10-02 13:16:00.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:00.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:00 np0005465987 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000dd.scope: Deactivated successfully.
Oct  2 09:16:00 np0005465987 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000dd.scope: Consumed 14.567s CPU time.
Oct  2 09:16:00 np0005465987 systemd-machined[188335]: Machine qemu-98-instance-000000dd terminated.
Oct  2 09:16:01 np0005465987 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[318487]: [NOTICE]   (318504) : haproxy version is 2.8.14-c23fe91
Oct  2 09:16:01 np0005465987 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[318487]: [NOTICE]   (318504) : path to executable is /usr/sbin/haproxy
Oct  2 09:16:01 np0005465987 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[318487]: [WARNING]  (318504) : Exiting Master process...
Oct  2 09:16:01 np0005465987 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[318487]: [ALERT]    (318504) : Current worker (318509) exited with code 143 (Terminated)
Oct  2 09:16:01 np0005465987 neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c[318487]: [WARNING]  (318504) : All workers exited. Exiting... (0)
Oct  2 09:16:01 np0005465987 systemd[1]: libpod-e46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e.scope: Deactivated successfully.
Oct  2 09:16:01 np0005465987 conmon[318487]: conmon e46da58178f7ff936ef9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e.scope/container/memory.events
Oct  2 09:16:01 np0005465987 podman[318851]: 2025-10-02 13:16:01.03178424 +0000 UTC m=+0.051358374 container died e46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:16:01 np0005465987 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e-userdata-shm.mount: Deactivated successfully.
Oct  2 09:16:01 np0005465987 systemd[1]: var-lib-containers-storage-overlay-9e891896452dde17c3580a3acd9ea22bdb69dd6ab2183db82ac083fbfa8403f6-merged.mount: Deactivated successfully.
Oct  2 09:16:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:01 np0005465987 podman[318851]: 2025-10-02 13:16:01.225477732 +0000 UTC m=+0.245051856 container cleanup e46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 09:16:01 np0005465987 systemd[1]: libpod-conmon-e46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e.scope: Deactivated successfully.
Oct  2 09:16:01 np0005465987 podman[318892]: 2025-10-02 13:16:01.29738065 +0000 UTC m=+0.048444514 container remove e46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:16:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:01.304 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[d86d0883-5a83-45b8-9bee-5a4111aa1a01]: (4, ('Thu Oct  2 01:16:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c (e46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e)\ne46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e\nThu Oct  2 01:16:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c (e46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e)\ne46da58178f7ff936ef9dc11661a40417d61e81c424cb1de0ce8193cae5dca9e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:01.307 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[cb2d171a-ed0f-4095-9d5e-e533d6836789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:01.308 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaf67f78-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.323 2 DEBUG nova.compute.manager [req-823f8ec7-2130-4e1c-b37b-e59240d1e3a3 req-5a41fdaf-ee54-4c98-802b-d800d7d2a215 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-unplugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:16:01 np0005465987 kernel: tapfaf67f78-d0: left promiscuous mode
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.324 2 DEBUG oslo_concurrency.lockutils [req-823f8ec7-2130-4e1c-b37b-e59240d1e3a3 req-5a41fdaf-ee54-4c98-802b-d800d7d2a215 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.325 2 DEBUG oslo_concurrency.lockutils [req-823f8ec7-2130-4e1c-b37b-e59240d1e3a3 req-5a41fdaf-ee54-4c98-802b-d800d7d2a215 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.325 2 DEBUG oslo_concurrency.lockutils [req-823f8ec7-2130-4e1c-b37b-e59240d1e3a3 req-5a41fdaf-ee54-4c98-802b-d800d7d2a215 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.325 2 DEBUG nova.compute.manager [req-823f8ec7-2130-4e1c-b37b-e59240d1e3a3 req-5a41fdaf-ee54-4c98-802b-d800d7d2a215 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] No waiting events found dispatching network-vif-unplugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.326 2 WARNING nova.compute.manager [req-823f8ec7-2130-4e1c-b37b-e59240d1e3a3 req-5a41fdaf-ee54-4c98-802b-d800d7d2a215 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received unexpected event network-vif-unplugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for instance with vm_state active and task_state resize_migrating.#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:01.333 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[82413352-9a68-4edb-b427-e0ca4db21805]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:01.371 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b74c64-3805-4c6b-a400-01506bde9570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:01.373 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d4305e-8941-4e39-805c-9b0e3df5b50c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:01.389 233943 DEBUG oslo.privsep.daemon [-] privsep: reply[48167f58-6672-4ee0-8294-c83c1c362430]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 899726, 'reachable_time': 38574, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318910, 'error': None, 'target': 'ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:01.392 138437 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-faf67f78-db61-49af-8b59-741a7fc7cb4c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:16:01 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:01.392 138437 DEBUG oslo.privsep.daemon [-] privsep: reply[dc95559c-e17c-4e4d-97ed-dd256d10c5f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:16:01 np0005465987 systemd[1]: run-netns-ovnmeta\x2dfaf67f78\x2ddb61\x2d49af\x2d8b59\x2d741a7fc7cb4c.mount: Deactivated successfully.
Oct  2 09:16:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:01.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.476 2 INFO nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.482 2 INFO nova.virt.libvirt.driver [-] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Instance destroyed successfully.#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.483 2 DEBUG nova.virt.libvirt.vif [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:15:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1996915229',display_name='tempest-TestNetworkAdvancedServerOps-server-1996915229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1996915229',id=221,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDRuo4KdXDZKQ3oIyyNSXE1rik8EaVzgz9MZA+kI2JiH48qHMFlLWYrWwJzNWAoOqZLFOIMkCOlFgIqDZkrdOzURcb4pwQu8clr+WCDZQoFdB5ndJaWWZy3VHIv+OtGSTw==',key_name='tempest-TestNetworkAdvancedServerOps-1841926110',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:15:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-di1mdv1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:15:53Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=0a899aee-b0ae-4350-9f56-446d42ef77d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1680552744", "vif_mac": "fa:16:3e:c7:1d:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.483 2 DEBUG nova.network.os_vif_util [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1680552744", "vif_mac": "fa:16:3e:c7:1d:48"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.484 2 DEBUG nova.network.os_vif_util [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.484 2 DEBUG os_vif [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7bc4d7b-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.492 2 INFO os_vif [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f')#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.496 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] skipping disk for instance-000000dd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.497 2 DEBUG nova.virt.libvirt.driver [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] skipping disk for instance-000000dd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.661 2 DEBUG neutronclient.v2_0.client [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.806 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.807 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:01 np0005465987 nova_compute[230713]: 2025-10-02 13:16:01.807 2 DEBUG oslo_concurrency.lockutils [None req-5b90017b-577e-4d47-93ed-8b43af2b9d4f ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:02.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:03 np0005465987 nova_compute[230713]: 2025-10-02 13:16:03.255 2 DEBUG nova.compute.manager [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-changed-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:16:03 np0005465987 nova_compute[230713]: 2025-10-02 13:16:03.255 2 DEBUG nova.compute.manager [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Refreshing instance network info cache due to event network-changed-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:16:03 np0005465987 nova_compute[230713]: 2025-10-02 13:16:03.256 2 DEBUG oslo_concurrency.lockutils [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:16:03 np0005465987 nova_compute[230713]: 2025-10-02 13:16:03.256 2 DEBUG oslo_concurrency.lockutils [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquired lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:16:03 np0005465987 nova_compute[230713]: 2025-10-02 13:16:03.256 2 DEBUG nova.network.neutron [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Refreshing network info cache for port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:16:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:03.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:03 np0005465987 nova_compute[230713]: 2025-10-02 13:16:03.476 2 DEBUG nova.compute.manager [req-30be438f-210c-4a5c-b098-fd9f2c2bbd76 req-46238b3a-f983-4003-a30b-4dad309eda45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:16:03 np0005465987 nova_compute[230713]: 2025-10-02 13:16:03.476 2 DEBUG oslo_concurrency.lockutils [req-30be438f-210c-4a5c-b098-fd9f2c2bbd76 req-46238b3a-f983-4003-a30b-4dad309eda45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:03 np0005465987 nova_compute[230713]: 2025-10-02 13:16:03.476 2 DEBUG oslo_concurrency.lockutils [req-30be438f-210c-4a5c-b098-fd9f2c2bbd76 req-46238b3a-f983-4003-a30b-4dad309eda45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:03 np0005465987 nova_compute[230713]: 2025-10-02 13:16:03.477 2 DEBUG oslo_concurrency.lockutils [req-30be438f-210c-4a5c-b098-fd9f2c2bbd76 req-46238b3a-f983-4003-a30b-4dad309eda45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:03 np0005465987 nova_compute[230713]: 2025-10-02 13:16:03.477 2 DEBUG nova.compute.manager [req-30be438f-210c-4a5c-b098-fd9f2c2bbd76 req-46238b3a-f983-4003-a30b-4dad309eda45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] No waiting events found dispatching network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:16:03 np0005465987 nova_compute[230713]: 2025-10-02 13:16:03.477 2 WARNING nova.compute.manager [req-30be438f-210c-4a5c-b098-fd9f2c2bbd76 req-46238b3a-f983-4003-a30b-4dad309eda45 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received unexpected event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 09:16:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:16:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/498093854' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:16:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:16:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/498093854' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:16:03 np0005465987 nova_compute[230713]: 2025-10-02 13:16:03.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:04 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:04.553 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:04 np0005465987 podman[318912]: 2025-10-02 13:16:04.861536152 +0000 UTC m=+0.059140304 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:16:04 np0005465987 podman[318913]: 2025-10-02 13:16:04.884463683 +0000 UTC m=+0.084828490 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct  2 09:16:04 np0005465987 podman[318911]: 2025-10-02 13:16:04.884623078 +0000 UTC m=+0.090678059 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:16:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:04.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e427 e427: 3 total, 3 up, 3 in
Oct  2 09:16:05 np0005465987 nova_compute[230713]: 2025-10-02 13:16:05.155 2 DEBUG nova.network.neutron [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updated VIF entry in instance network info cache for port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:16:05 np0005465987 nova_compute[230713]: 2025-10-02 13:16:05.156 2 DEBUG nova.network.neutron [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating instance_info_cache with network_info: [{"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:16:05 np0005465987 nova_compute[230713]: 2025-10-02 13:16:05.172 2 DEBUG oslo_concurrency.lockutils [req-02370929-5313-4a99-9f23-5ebe9e784610 req-fd118cae-5a6b-43f8-9984-69e2adc2c541 d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Releasing lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:16:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:05.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:06 np0005465987 nova_compute[230713]: 2025-10-02 13:16:06.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:06.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:07.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:08 np0005465987 nova_compute[230713]: 2025-10-02 13:16:08.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:08.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:09.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.568 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.568 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.569 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.569 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.570 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] No waiting events found dispatching network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.570 2 WARNING nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received unexpected event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for instance with vm_state resized and task_state None.#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.570 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.571 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.571 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.572 2 DEBUG oslo_concurrency.lockutils [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.572 2 DEBUG nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] No waiting events found dispatching network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.573 2 WARNING nova.compute.manager [req-bff91fb8-c2c7-4e25-8469-436270f5564b req-eadb6fbd-f9d4-4598-979d-93ac4634b03b d55b3ee250d1468fbcf33643dda37adb aebeb47424cc4f05b6c098503009ac0d - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Received unexpected event network-vif-plugged-c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for instance with vm_state resized and task_state None.#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.614 2 DEBUG oslo_concurrency.lockutils [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "0a899aee-b0ae-4350-9f56-446d42ef77d2" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.614 2 DEBUG oslo_concurrency.lockutils [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:09 np0005465987 nova_compute[230713]: 2025-10-02 13:16:09.614 2 DEBUG nova.compute.manager [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Going to confirm migration 24 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Oct  2 09:16:10 np0005465987 nova_compute[230713]: 2025-10-02 13:16:10.368 2 DEBUG neutronclient.v2_0.client [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port c7bc4d7b-3f13-4f9c-be12-6edcff4d424e for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 09:16:10 np0005465987 nova_compute[230713]: 2025-10-02 13:16:10.369 2 DEBUG oslo_concurrency.lockutils [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:16:10 np0005465987 nova_compute[230713]: 2025-10-02 13:16:10.369 2 DEBUG oslo_concurrency.lockutils [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquired lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:16:10 np0005465987 nova_compute[230713]: 2025-10-02 13:16:10.369 2 DEBUG nova.network.neutron [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:16:10 np0005465987 nova_compute[230713]: 2025-10-02 13:16:10.369 2 DEBUG nova.objects.instance [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'info_cache' on Instance uuid 0a899aee-b0ae-4350-9f56-446d42ef77d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:16:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:10.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e428 e428: 3 total, 3 up, 3 in
Oct  2 09:16:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:11.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:11 np0005465987 nova_compute[230713]: 2025-10-02 13:16:11.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:12 np0005465987 nova_compute[230713]: 2025-10-02 13:16:12.241 2 DEBUG nova.network.neutron [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Updating instance_info_cache with network_info: [{"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:16:12 np0005465987 nova_compute[230713]: 2025-10-02 13:16:12.287 2 DEBUG oslo_concurrency.lockutils [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Releasing lock "refresh_cache-0a899aee-b0ae-4350-9f56-446d42ef77d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:16:12 np0005465987 nova_compute[230713]: 2025-10-02 13:16:12.288 2 DEBUG nova.objects.instance [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lazy-loading 'migration_context' on Instance uuid 0a899aee-b0ae-4350-9f56-446d42ef77d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:16:12 np0005465987 nova_compute[230713]: 2025-10-02 13:16:12.463 2 DEBUG nova.storage.rbd_utils [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] removing snapshot(nova-resize) on rbd image(0a899aee-b0ae-4350-9f56-446d42ef77d2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 09:16:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:12.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e429 e429: 3 total, 3 up, 3 in
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.016 2 DEBUG nova.virt.libvirt.vif [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:15:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1996915229',display_name='tempest-TestNetworkAdvancedServerOps-server-1996915229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1996915229',id=221,image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDRuo4KdXDZKQ3oIyyNSXE1rik8EaVzgz9MZA+kI2JiH48qHMFlLWYrWwJzNWAoOqZLFOIMkCOlFgIqDZkrdOzURcb4pwQu8clr+WCDZQoFdB5ndJaWWZy3VHIv+OtGSTw==',key_name='tempest-TestNetworkAdvancedServerOps-1841926110',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:16:08Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='08e102ae48244af2ab448a2e1ff757df',ramdisk_id='',reservation_id='r-di1mdv1g',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d0c2bc-fe21-4689-86ae-d6728c15874c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1527846432',owner_user_name='tempest-TestNetworkAdvancedServerOps-1527846432-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:16:08Z,user_data=None,user_id='ffe4d737e4414fb3a3e358f8ca3f3e1e',uuid=0a899aee-b0ae-4350-9f56-446d42ef77d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.017 2 DEBUG nova.network.os_vif_util [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converting VIF {"id": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "address": "fa:16:3e:c7:1d:48", "network": {"id": "faf67f78-db61-49af-8b59-741a7fc7cb4c", "bridge": "br-int", "label": "tempest-network-smoke--1680552744", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08e102ae48244af2ab448a2e1ff757df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7bc4d7b-3f", "ovs_interfaceid": "c7bc4d7b-3f13-4f9c-be12-6edcff4d424e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.018 2 DEBUG nova.network.os_vif_util [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.018 2 DEBUG os_vif [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.020 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7bc4d7b-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.020 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.022 2 INFO os_vif [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:1d:48,bridge_name='br-int',has_traffic_filtering=True,id=c7bc4d7b-3f13-4f9c-be12-6edcff4d424e,network=Network(faf67f78-db61-49af-8b59-741a7fc7cb4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7bc4d7b-3f')#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.023 2 DEBUG oslo_concurrency.lockutils [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.023 2 DEBUG oslo_concurrency.lockutils [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.090 2 DEBUG oslo_concurrency.processutils [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:16:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2133771657' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:16:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:16:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2133771657' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:16:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:13.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:16:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1352451436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.546 2 DEBUG oslo_concurrency.processutils [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.553 2 DEBUG nova.compute.provider_tree [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.583 2 DEBUG nova.scheduler.client.report [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.637 2 DEBUG oslo_concurrency.lockutils [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.785 2 INFO nova.scheduler.client.report [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Deleted allocation for migration f57aae45-e0d4-4519-a990-fd077295e0a0#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:13 np0005465987 nova_compute[230713]: 2025-10-02 13:16:13.827 2 DEBUG oslo_concurrency.lockutils [None req-2d01fce8-272a-4cd5-8087-58485a49c1de ffe4d737e4414fb3a3e358f8ca3f3e1e 08e102ae48244af2ab448a2e1ff757df - - default default] Lock "0a899aee-b0ae-4350-9f56-446d42ef77d2" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:14.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:15.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:16 np0005465987 nova_compute[230713]: 2025-10-02 13:16:16.199 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410961.1984842, 0a899aee-b0ae-4350-9f56-446d42ef77d2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:16:16 np0005465987 nova_compute[230713]: 2025-10-02 13:16:16.200 2 INFO nova.compute.manager [-] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:16:16 np0005465987 nova_compute[230713]: 2025-10-02 13:16:16.232 2 DEBUG nova.compute.manager [None req-26b53070-de9c-40f3-9cf9-bf76163bf814 - - - - - -] [instance: 0a899aee-b0ae-4350-9f56-446d42ef77d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:16:16 np0005465987 nova_compute[230713]: 2025-10-02 13:16:16.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:16.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:17.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:18 np0005465987 nova_compute[230713]: 2025-10-02 13:16:18.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:18 np0005465987 nova_compute[230713]: 2025-10-02 13:16:18.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:18 np0005465987 nova_compute[230713]: 2025-10-02 13:16:18.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:18.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:16:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:19.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:16:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:20.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 e430: 3 total, 3 up, 3 in
Oct  2 09:16:21 np0005465987 nova_compute[230713]: 2025-10-02 13:16:21.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:21.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:22.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:23.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:23 np0005465987 nova_compute[230713]: 2025-10-02 13:16:23.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:24.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:25.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:26 np0005465987 nova_compute[230713]: 2025-10-02 13:16:26.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:26.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:27.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:28.002 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:28.002 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:16:28.002 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:28 np0005465987 nova_compute[230713]: 2025-10-02 13:16:28.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:28.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:29.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:30 np0005465987 podman[319031]: 2025-10-02 13:16:30.828777276 +0000 UTC m=+0.050678195 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 09:16:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:30.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:31 np0005465987 nova_compute[230713]: 2025-10-02 13:16:31.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:31.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:32.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:33.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:33 np0005465987 nova_compute[230713]: 2025-10-02 13:16:33.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:34.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:35.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:35 np0005465987 podman[319051]: 2025-10-02 13:16:35.834755398 +0000 UTC m=+0.050265274 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:16:35 np0005465987 podman[319052]: 2025-10-02 13:16:35.846614779 +0000 UTC m=+0.056775370 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:16:35 np0005465987 podman[319050]: 2025-10-02 13:16:35.858519222 +0000 UTC m=+0.077942474 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Oct  2 09:16:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:36 np0005465987 nova_compute[230713]: 2025-10-02 13:16:36.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:36 np0005465987 nova_compute[230713]: 2025-10-02 13:16:36.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:36.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:38 np0005465987 nova_compute[230713]: 2025-10-02 13:16:38.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:38 np0005465987 nova_compute[230713]: 2025-10-02 13:16:38.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:38.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:39.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:40.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:41 np0005465987 nova_compute[230713]: 2025-10-02 13:16:41.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:41.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:42.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:43.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:43 np0005465987 nova_compute[230713]: 2025-10-02 13:16:43.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:43 np0005465987 nova_compute[230713]: 2025-10-02 13:16:43.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:43 np0005465987 nova_compute[230713]: 2025-10-02 13:16:43.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:16:43 np0005465987 nova_compute[230713]: 2025-10-02 13:16:43.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:16:43 np0005465987 nova_compute[230713]: 2025-10-02 13:16:43.983 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:16:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:44.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:45.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:45 np0005465987 nova_compute[230713]: 2025-10-02 13:16:45.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:46 np0005465987 nova_compute[230713]: 2025-10-02 13:16:46.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:46.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:47.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:47 np0005465987 nova_compute[230713]: 2025-10-02 13:16:47.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:48 np0005465987 nova_compute[230713]: 2025-10-02 13:16:48.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:48.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:49.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:49 np0005465987 nova_compute[230713]: 2025-10-02 13:16:49.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:50 np0005465987 nova_compute[230713]: 2025-10-02 13:16:50.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:50.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:16:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:16:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:16:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:51 np0005465987 nova_compute[230713]: 2025-10-02 13:16:51.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:51.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:52 np0005465987 nova_compute[230713]: 2025-10-02 13:16:52.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:52 np0005465987 nova_compute[230713]: 2025-10-02 13:16:52.986 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:52 np0005465987 nova_compute[230713]: 2025-10-02 13:16:52.986 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:52 np0005465987 nova_compute[230713]: 2025-10-02 13:16:52.986 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:52 np0005465987 nova_compute[230713]: 2025-10-02 13:16:52.986 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:16:52 np0005465987 nova_compute[230713]: 2025-10-02 13:16:52.987 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:16:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:52.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:16:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:16:53 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4278459100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:16:53 np0005465987 nova_compute[230713]: 2025-10-02 13:16:53.417 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:53.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:16:53 np0005465987 nova_compute[230713]: 2025-10-02 13:16:53.569 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:16:53 np0005465987 nova_compute[230713]: 2025-10-02 13:16:53.570 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4260MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:16:53 np0005465987 nova_compute[230713]: 2025-10-02 13:16:53.570 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:53 np0005465987 nova_compute[230713]: 2025-10-02 13:16:53.570 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:53 np0005465987 nova_compute[230713]: 2025-10-02 13:16:53.623 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:16:53 np0005465987 nova_compute[230713]: 2025-10-02 13:16:53.623 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:16:53 np0005465987 nova_compute[230713]: 2025-10-02 13:16:53.702 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:53 np0005465987 nova_compute[230713]: 2025-10-02 13:16:53.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:16:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1138178111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:16:54 np0005465987 nova_compute[230713]: 2025-10-02 13:16:54.174 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:54 np0005465987 nova_compute[230713]: 2025-10-02 13:16:54.178 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:16:54 np0005465987 nova_compute[230713]: 2025-10-02 13:16:54.196 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:16:54 np0005465987 nova_compute[230713]: 2025-10-02 13:16:54.221 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:16:54 np0005465987 nova_compute[230713]: 2025-10-02 13:16:54.222 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:55.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:16:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/993108586' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:16:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:16:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/993108586' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:16:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:55.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:56 np0005465987 nova_compute[230713]: 2025-10-02 13:16:56.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:57.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:57 np0005465987 nova_compute[230713]: 2025-10-02 13:16:57.222 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:57 np0005465987 nova_compute[230713]: 2025-10-02 13:16:57.222 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:16:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:16:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:16:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:57.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:58 np0005465987 nova_compute[230713]: 2025-10-02 13:16:58.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:59.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:16:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:16:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:59.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:01.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.391078) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411021391179, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1339, "num_deletes": 257, "total_data_size": 2833841, "memory_usage": 2874040, "flush_reason": "Manual Compaction"}
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411021400672, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 1857610, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84826, "largest_seqno": 86160, "table_properties": {"data_size": 1851818, "index_size": 3122, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12837, "raw_average_key_size": 20, "raw_value_size": 1839913, "raw_average_value_size": 2874, "num_data_blocks": 137, "num_entries": 640, "num_filter_entries": 640, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410922, "oldest_key_time": 1759410922, "file_creation_time": 1759411021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 9654 microseconds, and 4921 cpu microseconds.
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.400739) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 1857610 bytes OK
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.400761) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.401848) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.401859) EVENT_LOG_v1 {"time_micros": 1759411021401855, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.401878) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 2827499, prev total WAL file size 2827499, number of live WAL files 2.
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.402590) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323630' seq:72057594037927935, type:22 .. '6C6F676D0033353131' seq:0, type:0; will stop at (end)
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(1814KB)], [174(10MB)]
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411021402659, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 13159273, "oldest_snapshot_seqno": -1}
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 10502 keys, 13019306 bytes, temperature: kUnknown
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411021484913, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 13019306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12952111, "index_size": 39818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26309, "raw_key_size": 277821, "raw_average_key_size": 26, "raw_value_size": 12768964, "raw_average_value_size": 1215, "num_data_blocks": 1511, "num_entries": 10502, "num_filter_entries": 10502, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759411021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.485200) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 13019306 bytes
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.486510) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.8 rd, 158.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.8 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(14.1) write-amplify(7.0) OK, records in: 11035, records dropped: 533 output_compression: NoCompression
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.486531) EVENT_LOG_v1 {"time_micros": 1759411021486521, "job": 112, "event": "compaction_finished", "compaction_time_micros": 82327, "compaction_time_cpu_micros": 38706, "output_level": 6, "num_output_files": 1, "total_output_size": 13019306, "num_input_records": 11035, "num_output_records": 10502, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411021487027, "job": 112, "event": "table_file_deletion", "file_number": 176}
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411021489665, "job": 112, "event": "table_file_deletion", "file_number": 174}
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.402440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.489703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.489724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.489726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.489727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:01 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:17:01.489729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:01 np0005465987 nova_compute[230713]: 2025-10-02 13:17:01.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:01.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:01 np0005465987 podman[319339]: 2025-10-02 13:17:01.829488048 +0000 UTC m=+0.051112686 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:17:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:03.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:03.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:17:03.701 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:17:03 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:17:03.701 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:17:03 np0005465987 nova_compute[230713]: 2025-10-02 13:17:03.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:03 np0005465987 nova_compute[230713]: 2025-10-02 13:17:03.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:05.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:05.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:06 np0005465987 nova_compute[230713]: 2025-10-02 13:17:06.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:06 np0005465987 podman[319360]: 2025-10-02 13:17:06.5934319 +0000 UTC m=+0.054020365 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 09:17:06 np0005465987 podman[319359]: 2025-10-02 13:17:06.593504272 +0000 UTC m=+0.056264416 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:17:06 np0005465987 podman[319358]: 2025-10-02 13:17:06.619930418 +0000 UTC m=+0.085231331 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 09:17:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:07.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:07.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:07 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:17:07.703 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:17:08 np0005465987 nova_compute[230713]: 2025-10-02 13:17:08.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:09.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:09.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:11 np0005465987 nova_compute[230713]: 2025-10-02 13:17:11.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:11.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:12 np0005465987 nova_compute[230713]: 2025-10-02 13:17:12.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:13.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:13.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:13 np0005465987 nova_compute[230713]: 2025-10-02 13:17:13.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:15.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:15.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:15 np0005465987 ovn_controller[129172]: 2025-10-02T13:17:15Z|00881|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct  2 09:17:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:16 np0005465987 nova_compute[230713]: 2025-10-02 13:17:16.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:17.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:17.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:18 np0005465987 nova_compute[230713]: 2025-10-02 13:17:18.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:19.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:19.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:21.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:21 np0005465987 nova_compute[230713]: 2025-10-02 13:17:21.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:21.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:23.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:23.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:23 np0005465987 nova_compute[230713]: 2025-10-02 13:17:23.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:25.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:25.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:26 np0005465987 nova_compute[230713]: 2025-10-02 13:17:26.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:27.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:27.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:17:28.003 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:17:28.003 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:17:28.003 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:28 np0005465987 nova_compute[230713]: 2025-10-02 13:17:28.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:29.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:29.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:31.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:31 np0005465987 nova_compute[230713]: 2025-10-02 13:17:31.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:31.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:32 np0005465987 podman[319421]: 2025-10-02 13:17:32.830598887 +0000 UTC m=+0.049564426 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:17:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:33.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:33.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:33 np0005465987 nova_compute[230713]: 2025-10-02 13:17:33.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:35.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:35.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:36 np0005465987 nova_compute[230713]: 2025-10-02 13:17:36.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:36 np0005465987 podman[319442]: 2025-10-02 13:17:36.860854525 +0000 UTC m=+0.063939643 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Oct  2 09:17:36 np0005465987 podman[319441]: 2025-10-02 13:17:36.871531275 +0000 UTC m=+0.079325372 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:17:36 np0005465987 podman[319440]: 2025-10-02 13:17:36.878482134 +0000 UTC m=+0.090855845 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 09:17:36 np0005465987 nova_compute[230713]: 2025-10-02 13:17:36.976 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:37.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:37.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:38 np0005465987 nova_compute[230713]: 2025-10-02 13:17:38.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:38 np0005465987 nova_compute[230713]: 2025-10-02 13:17:38.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:39.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:39.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:41.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:41 np0005465987 nova_compute[230713]: 2025-10-02 13:17:41.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:41.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:43.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:43.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:43 np0005465987 nova_compute[230713]: 2025-10-02 13:17:43.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:43 np0005465987 nova_compute[230713]: 2025-10-02 13:17:43.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:43 np0005465987 nova_compute[230713]: 2025-10-02 13:17:43.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:17:43 np0005465987 nova_compute[230713]: 2025-10-02 13:17:43.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:17:43 np0005465987 nova_compute[230713]: 2025-10-02 13:17:43.983 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:17:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:45.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:45.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:46 np0005465987 nova_compute[230713]: 2025-10-02 13:17:46.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:47.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:47.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:47 np0005465987 nova_compute[230713]: 2025-10-02 13:17:47.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:48 np0005465987 nova_compute[230713]: 2025-10-02 13:17:48.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:48 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:17:48 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:17:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:49.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:49.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:49 np0005465987 nova_compute[230713]: 2025-10-02 13:17:49.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:49 np0005465987 nova_compute[230713]: 2025-10-02 13:17:49.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:51.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:51 np0005465987 nova_compute[230713]: 2025-10-02 13:17:51.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:51.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:52 np0005465987 nova_compute[230713]: 2025-10-02 13:17:52.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:17:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:53.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:17:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:17:53.211 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:17:53 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:17:53.213 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:17:53 np0005465987 nova_compute[230713]: 2025-10-02 13:17:53.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:53.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:53 np0005465987 nova_compute[230713]: 2025-10-02 13:17:53.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:53 np0005465987 nova_compute[230713]: 2025-10-02 13:17:53.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:53 np0005465987 nova_compute[230713]: 2025-10-02 13:17:53.995 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:53 np0005465987 nova_compute[230713]: 2025-10-02 13:17:53.996 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:53 np0005465987 nova_compute[230713]: 2025-10-02 13:17:53.996 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:53 np0005465987 nova_compute[230713]: 2025-10-02 13:17:53.997 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:17:53 np0005465987 nova_compute[230713]: 2025-10-02 13:17:53.997 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:17:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:17:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/584603106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:17:54 np0005465987 nova_compute[230713]: 2025-10-02 13:17:54.439 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:17:54 np0005465987 nova_compute[230713]: 2025-10-02 13:17:54.581 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:17:54 np0005465987 nova_compute[230713]: 2025-10-02 13:17:54.582 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4289MB free_disk=20.942703247070312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:17:54 np0005465987 nova_compute[230713]: 2025-10-02 13:17:54.582 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:54 np0005465987 nova_compute[230713]: 2025-10-02 13:17:54.582 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:54 np0005465987 nova_compute[230713]: 2025-10-02 13:17:54.724 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:17:54 np0005465987 nova_compute[230713]: 2025-10-02 13:17:54.724 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:17:54 np0005465987 nova_compute[230713]: 2025-10-02 13:17:54.764 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:17:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:17:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2770769049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:17:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:17:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2770769049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:17:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:55.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:17:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/148150339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:17:55 np0005465987 nova_compute[230713]: 2025-10-02 13:17:55.204 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:17:55 np0005465987 nova_compute[230713]: 2025-10-02 13:17:55.210 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:17:55 np0005465987 nova_compute[230713]: 2025-10-02 13:17:55.232 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:17:55 np0005465987 nova_compute[230713]: 2025-10-02 13:17:55.234 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:17:55 np0005465987 nova_compute[230713]: 2025-10-02 13:17:55.234 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:55.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:56 np0005465987 nova_compute[230713]: 2025-10-02 13:17:56.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:57.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:57.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:58 np0005465987 nova_compute[230713]: 2025-10-02 13:17:58.235 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:58 np0005465987 nova_compute[230713]: 2025-10-02 13:17:58.236 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:17:58 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:17:58 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:17:58 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:17:58 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:17:58 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:17:58 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:17:58 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:17:58 np0005465987 nova_compute[230713]: 2025-10-02 13:17:58.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:59.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:17:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:17:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:17:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:59.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:18:00 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:18:00.215 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:18:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:01.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:01 np0005465987 nova_compute[230713]: 2025-10-02 13:18:01.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:01.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:03.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:03.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:03 np0005465987 podman[319800]: 2025-10-02 13:18:03.839117963 +0000 UTC m=+0.060616755 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:18:03 np0005465987 nova_compute[230713]: 2025-10-02 13:18:03.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:05.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:05 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:18:05 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:18:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:05.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:06 np0005465987 nova_compute[230713]: 2025-10-02 13:18:06.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:07.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:07.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:07 np0005465987 podman[319876]: 2025-10-02 13:18:07.857415887 +0000 UTC m=+0.058483957 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:18:07 np0005465987 podman[319869]: 2025-10-02 13:18:07.875924339 +0000 UTC m=+0.088944082 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:18:07 np0005465987 podman[319870]: 2025-10-02 13:18:07.88146456 +0000 UTC m=+0.088367048 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct  2 09:18:09 np0005465987 nova_compute[230713]: 2025-10-02 13:18:09.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:09.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:18:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:09.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:18:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:11.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:11 np0005465987 nova_compute[230713]: 2025-10-02 13:18:11.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:18:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:11.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:18:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:18:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:13.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:18:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:13.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:14 np0005465987 nova_compute[230713]: 2025-10-02 13:18:14.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:15.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:18:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:15.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:18:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:16 np0005465987 nova_compute[230713]: 2025-10-02 13:18:16.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:17.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:18:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:17.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:18:19 np0005465987 nova_compute[230713]: 2025-10-02 13:18:19.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:19.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:19.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:21.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:21 np0005465987 nova_compute[230713]: 2025-10-02 13:18:21.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:21.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:18:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:23.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:18:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:18:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:23.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:18:24 np0005465987 nova_compute[230713]: 2025-10-02 13:18:24.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:25.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:25.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:26 np0005465987 nova_compute[230713]: 2025-10-02 13:18:26.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:27.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:18:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:27.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:18:28.004 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:18:28.004 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:18:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:18:28.004 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:18:29 np0005465987 nova_compute[230713]: 2025-10-02 13:18:29.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:29.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:18:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:29.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:18:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:31.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:31 np0005465987 nova_compute[230713]: 2025-10-02 13:18:31.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:18:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:31.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:18:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:33.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:33.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:34 np0005465987 nova_compute[230713]: 2025-10-02 13:18:34.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:34 np0005465987 podman[319929]: 2025-10-02 13:18:34.823494193 +0000 UTC m=+0.046031138 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:18:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:35.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:35.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:36 np0005465987 nova_compute[230713]: 2025-10-02 13:18:36.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:36 np0005465987 nova_compute[230713]: 2025-10-02 13:18:36.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:18:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:37.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:18:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:37.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:38 np0005465987 podman[319950]: 2025-10-02 13:18:38.836176615 +0000 UTC m=+0.052299828 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct  2 09:18:38 np0005465987 podman[319951]: 2025-10-02 13:18:38.843951326 +0000 UTC m=+0.054807216 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 09:18:38 np0005465987 podman[319949]: 2025-10-02 13:18:38.861084991 +0000 UTC m=+0.079060694 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:18:39 np0005465987 nova_compute[230713]: 2025-10-02 13:18:39.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:39.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:39.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:40 np0005465987 nova_compute[230713]: 2025-10-02 13:18:40.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:41.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:41 np0005465987 nova_compute[230713]: 2025-10-02 13:18:41.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:41.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:43.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:43.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:43 np0005465987 nova_compute[230713]: 2025-10-02 13:18:43.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:43 np0005465987 nova_compute[230713]: 2025-10-02 13:18:43.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:18:43 np0005465987 nova_compute[230713]: 2025-10-02 13:18:43.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:18:43 np0005465987 nova_compute[230713]: 2025-10-02 13:18:43.996 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:18:44 np0005465987 nova_compute[230713]: 2025-10-02 13:18:44.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:45.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:45.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:46 np0005465987 nova_compute[230713]: 2025-10-02 13:18:46.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:47.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:47.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:49 np0005465987 nova_compute[230713]: 2025-10-02 13:18:49.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:49.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:49.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:49 np0005465987 nova_compute[230713]: 2025-10-02 13:18:49.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:50 np0005465987 nova_compute[230713]: 2025-10-02 13:18:50.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:50 np0005465987 nova_compute[230713]: 2025-10-02 13:18:50.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:18:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:51.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:18:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:51 np0005465987 nova_compute[230713]: 2025-10-02 13:18:51.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:51.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.154123) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411132154171, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1302, "num_deletes": 251, "total_data_size": 2920761, "memory_usage": 2960160, "flush_reason": "Manual Compaction"}
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411132165061, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 1915902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86165, "largest_seqno": 87462, "table_properties": {"data_size": 1910260, "index_size": 3036, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11991, "raw_average_key_size": 19, "raw_value_size": 1899048, "raw_average_value_size": 3154, "num_data_blocks": 135, "num_entries": 602, "num_filter_entries": 602, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411021, "oldest_key_time": 1759411021, "file_creation_time": 1759411132, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 10970 microseconds, and 5102 cpu microseconds.
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.165096) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 1915902 bytes OK
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.165113) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.166690) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.166701) EVENT_LOG_v1 {"time_micros": 1759411132166697, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.166736) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2914684, prev total WAL file size 2914684, number of live WAL files 2.
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.167620) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(1870KB)], [177(12MB)]
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411132167651, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 14935208, "oldest_snapshot_seqno": -1}
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 10587 keys, 13021116 bytes, temperature: kUnknown
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411132220750, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 13021116, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12953413, "index_size": 40134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26501, "raw_key_size": 280242, "raw_average_key_size": 26, "raw_value_size": 12768906, "raw_average_value_size": 1206, "num_data_blocks": 1521, "num_entries": 10587, "num_filter_entries": 10587, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759411132, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.220984) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13021116 bytes
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.222236) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 280.8 rd, 244.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.4 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(14.6) write-amplify(6.8) OK, records in: 11104, records dropped: 517 output_compression: NoCompression
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.222270) EVENT_LOG_v1 {"time_micros": 1759411132222257, "job": 114, "event": "compaction_finished", "compaction_time_micros": 53180, "compaction_time_cpu_micros": 28477, "output_level": 6, "num_output_files": 1, "total_output_size": 13021116, "num_input_records": 11104, "num_output_records": 10587, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411132222757, "job": 114, "event": "table_file_deletion", "file_number": 179}
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411132224941, "job": 114, "event": "table_file_deletion", "file_number": 177}
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.167540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.225039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.225045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.225047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.225049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:18:52 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:18:52.225051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:18:52 np0005465987 nova_compute[230713]: 2025-10-02 13:18:52.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:18:52.503 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:18:52 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:18:52.505 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:18:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:53.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:53.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:53 np0005465987 nova_compute[230713]: 2025-10-02 13:18:53.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:53 np0005465987 nova_compute[230713]: 2025-10-02 13:18:53.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.021 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.022 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.022 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.022 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.022 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:18:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3475925167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.432 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:18:54 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:18:54.507 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.593 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.594 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4303MB free_disk=20.942752838134766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.595 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.595 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.699 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.699 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:18:54 np0005465987 nova_compute[230713]: 2025-10-02 13:18:54.803 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:18:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:18:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2906155138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:18:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:18:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2906155138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:18:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:55.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:18:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2756459664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:18:55 np0005465987 nova_compute[230713]: 2025-10-02 13:18:55.231 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:18:55 np0005465987 nova_compute[230713]: 2025-10-02 13:18:55.240 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:18:55 np0005465987 nova_compute[230713]: 2025-10-02 13:18:55.263 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:18:55 np0005465987 nova_compute[230713]: 2025-10-02 13:18:55.266 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:18:55 np0005465987 nova_compute[230713]: 2025-10-02 13:18:55.266 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:18:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:18:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:55.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:18:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:56 np0005465987 nova_compute[230713]: 2025-10-02 13:18:56.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:57.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:57.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:59 np0005465987 nova_compute[230713]: 2025-10-02 13:18:59.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:59.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:59 np0005465987 nova_compute[230713]: 2025-10-02 13:18:59.267 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:59 np0005465987 nova_compute[230713]: 2025-10-02 13:18:59.267 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:18:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:18:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:59.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:19:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:01.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:19:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:01 np0005465987 nova_compute[230713]: 2025-10-02 13:19:01.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:19:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:01.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:19:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:03.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:03.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:04 np0005465987 nova_compute[230713]: 2025-10-02 13:19:04.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:04 np0005465987 ceph-mgr[81180]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3158772141
Oct  2 09:19:04 np0005465987 podman[320106]: 2025-10-02 13:19:04.935461023 +0000 UTC m=+0.050156091 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:19:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:19:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:05.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:19:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:05.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:19:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:19:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 09:19:06 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 09:19:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:06 np0005465987 nova_compute[230713]: 2025-10-02 13:19:06.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:07 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 09:19:07 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:19:07 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:19:07 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:19:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:07.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:07.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:09 np0005465987 nova_compute[230713]: 2025-10-02 13:19:09.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:09.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:09.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:09 np0005465987 podman[320210]: 2025-10-02 13:19:09.86672546 +0000 UTC m=+0.066205945 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:19:09 np0005465987 podman[320209]: 2025-10-02 13:19:09.877980566 +0000 UTC m=+0.084141453 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:19:09 np0005465987 podman[320208]: 2025-10-02 13:19:09.901785911 +0000 UTC m=+0.106988521 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:19:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:19:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:11.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:19:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:11 np0005465987 nova_compute[230713]: 2025-10-02 13:19:11.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:11.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:19:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:19:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:13.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:19:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:13.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:19:14 np0005465987 nova_compute[230713]: 2025-10-02 13:19:14.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:19:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:15.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:19:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:15.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:16 np0005465987 nova_compute[230713]: 2025-10-02 13:19:16.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:16 np0005465987 nova_compute[230713]: 2025-10-02 13:19:16.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:17.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:17.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:19 np0005465987 nova_compute[230713]: 2025-10-02 13:19:19.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:19.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:19.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:21.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:21 np0005465987 nova_compute[230713]: 2025-10-02 13:19:21.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:21.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:23.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:23.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:24 np0005465987 nova_compute[230713]: 2025-10-02 13:19:24.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:25.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:25.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:26 np0005465987 nova_compute[230713]: 2025-10-02 13:19:26.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:27.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:27.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:19:28.005 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:19:28.005 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:19:28.005 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:29 np0005465987 nova_compute[230713]: 2025-10-02 13:19:29.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:29.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:29.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:31.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:31 np0005465987 nova_compute[230713]: 2025-10-02 13:19:31.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:31.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:33.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:33.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:34 np0005465987 nova_compute[230713]: 2025-10-02 13:19:34.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:35.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:35.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:35 np0005465987 podman[320319]: 2025-10-02 13:19:35.825443847 +0000 UTC m=+0.049391179 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  2 09:19:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:36 np0005465987 nova_compute[230713]: 2025-10-02 13:19:36.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:36 np0005465987 nova_compute[230713]: 2025-10-02 13:19:36.983 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:37.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:19:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:37.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:19:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:19:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:39.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:19:39 np0005465987 nova_compute[230713]: 2025-10-02 13:19:39.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:19:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:39.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:19:40 np0005465987 podman[320339]: 2025-10-02 13:19:40.849595986 +0000 UTC m=+0.070407422 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:19:40 np0005465987 podman[320340]: 2025-10-02 13:19:40.850301615 +0000 UTC m=+0.069817715 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:19:40 np0005465987 podman[320338]: 2025-10-02 13:19:40.871534044 +0000 UTC m=+0.096386340 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 09:19:40 np0005465987 nova_compute[230713]: 2025-10-02 13:19:40.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:41.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:41.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:41 np0005465987 nova_compute[230713]: 2025-10-02 13:19:41.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000054s ======
Oct  2 09:19:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:43.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct  2 09:19:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:43.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:44 np0005465987 nova_compute[230713]: 2025-10-02 13:19:44.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:44 np0005465987 nova_compute[230713]: 2025-10-02 13:19:44.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:44 np0005465987 nova_compute[230713]: 2025-10-02 13:19:44.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:19:44 np0005465987 nova_compute[230713]: 2025-10-02 13:19:44.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:19:45 np0005465987 nova_compute[230713]: 2025-10-02 13:19:45.012 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:19:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:45.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:45.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:46 np0005465987 nova_compute[230713]: 2025-10-02 13:19:46.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:19:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:47.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:19:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:47.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:49 np0005465987 nova_compute[230713]: 2025-10-02 13:19:49.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:49.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:49.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:50 np0005465987 nova_compute[230713]: 2025-10-02 13:19:50.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:19:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:51.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:19:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:51.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:51 np0005465987 nova_compute[230713]: 2025-10-02 13:19:51.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:51 np0005465987 nova_compute[230713]: 2025-10-02 13:19:51.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:52 np0005465987 nova_compute[230713]: 2025-10-02 13:19:52.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:53.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:53.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:54 np0005465987 nova_compute[230713]: 2025-10-02 13:19:54.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:19:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/930833902' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:19:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:19:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/930833902' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:19:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:19:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:55.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:19:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:55.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:55 np0005465987 nova_compute[230713]: 2025-10-02 13:19:55.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:55 np0005465987 nova_compute[230713]: 2025-10-02 13:19:55.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.017 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.017 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.018 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.018 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.019 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:19:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2438942268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.463 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.674 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.675 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4296MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.676 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.676 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.778 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.778 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.825 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:56 np0005465987 nova_compute[230713]: 2025-10-02 13:19:56.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:19:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2355559930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:19:57 np0005465987 nova_compute[230713]: 2025-10-02 13:19:57.242 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:57 np0005465987 nova_compute[230713]: 2025-10-02 13:19:57.247 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:19:57 np0005465987 nova_compute[230713]: 2025-10-02 13:19:57.269 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:19:57 np0005465987 nova_compute[230713]: 2025-10-02 13:19:57.270 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:19:57 np0005465987 nova_compute[230713]: 2025-10-02 13:19:57.270 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:57.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:57.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:59.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:59 np0005465987 nova_compute[230713]: 2025-10-02 13:19:59.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:19:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:59.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:00 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 09:20:00 np0005465987 nova_compute[230713]: 2025-10-02 13:20:00.269 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:00 np0005465987 nova_compute[230713]: 2025-10-02 13:20:00.270 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:20:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:01.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:01.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:02 np0005465987 nova_compute[230713]: 2025-10-02 13:20:02.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:03.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:03.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:04 np0005465987 nova_compute[230713]: 2025-10-02 13:20:04.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:05.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:05.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:06 np0005465987 podman[320447]: 2025-10-02 13:20:06.836564313 +0000 UTC m=+0.055610959 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:20:06 np0005465987 nova_compute[230713]: 2025-10-02 13:20:06.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:07 np0005465987 nova_compute[230713]: 2025-10-02 13:20:07.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:07.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:07.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:09.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:09 np0005465987 nova_compute[230713]: 2025-10-02 13:20:09.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:09.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:10 np0005465987 nova_compute[230713]: 2025-10-02 13:20:10.983 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:10 np0005465987 nova_compute[230713]: 2025-10-02 13:20:10.983 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:20:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:11.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:11 np0005465987 podman[320469]: 2025-10-02 13:20:11.829221431 +0000 UTC m=+0.047096977 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:20:11 np0005465987 podman[320470]: 2025-10-02 13:20:11.830843694 +0000 UTC m=+0.046115419 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  2 09:20:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:20:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:11.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:20:11 np0005465987 podman[320468]: 2025-10-02 13:20:11.894229235 +0000 UTC m=+0.114938699 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:20:12 np0005465987 nova_compute[230713]: 2025-10-02 13:20:12.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:13.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:13.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:14 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:20:14 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:20:14 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:20:14 np0005465987 nova_compute[230713]: 2025-10-02 13:20:14.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:15.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:15.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:16 np0005465987 nova_compute[230713]: 2025-10-02 13:20:16.981 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:16 np0005465987 nova_compute[230713]: 2025-10-02 13:20:16.981 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:20:17 np0005465987 nova_compute[230713]: 2025-10-02 13:20:17.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:17 np0005465987 nova_compute[230713]: 2025-10-02 13:20:17.134 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:20:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:17.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:17.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:19.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:19 np0005465987 nova_compute[230713]: 2025-10-02 13:20:19.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:19.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:20:20 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:20:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:21.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:21.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:22 np0005465987 nova_compute[230713]: 2025-10-02 13:20:22.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:23.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:23.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:24 np0005465987 nova_compute[230713]: 2025-10-02 13:20:24.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:25.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:25.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:27 np0005465987 nova_compute[230713]: 2025-10-02 13:20:27.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:27.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:27.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:20:28.006 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:20:28.007 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:20:28.007 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:20:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:29.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:20:29 np0005465987 nova_compute[230713]: 2025-10-02 13:20:29.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:29.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:31.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:31.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:32 np0005465987 nova_compute[230713]: 2025-10-02 13:20:32.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:33.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:33.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:34 np0005465987 nova_compute[230713]: 2025-10-02 13:20:34.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:35.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:35.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:37 np0005465987 nova_compute[230713]: 2025-10-02 13:20:37.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:37.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:37 np0005465987 podman[320707]: 2025-10-02 13:20:37.832023429 +0000 UTC m=+0.050891440 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 09:20:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:37.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:37 np0005465987 systemd-logind[794]: New session 65 of user zuul.
Oct  2 09:20:37 np0005465987 systemd[1]: Started Session 65 of User zuul.
Oct  2 09:20:39 np0005465987 nova_compute[230713]: 2025-10-02 13:20:39.110 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:39.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:39 np0005465987 nova_compute[230713]: 2025-10-02 13:20:39.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:39.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:41.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  2 09:20:41 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1021490665' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  2 09:20:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:41.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:41 np0005465987 nova_compute[230713]: 2025-10-02 13:20:41.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:42 np0005465987 podman[320983]: 2025-10-02 13:20:42.11149439 +0000 UTC m=+0.061482488 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid)
Oct  2 09:20:42 np0005465987 podman[320984]: 2025-10-02 13:20:42.138656941 +0000 UTC m=+0.084849656 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  2 09:20:42 np0005465987 podman[320982]: 2025-10-02 13:20:42.163084587 +0000 UTC m=+0.113064995 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:20:42 np0005465987 nova_compute[230713]: 2025-10-02 13:20:42.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:43.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:43.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:44 np0005465987 nova_compute[230713]: 2025-10-02 13:20:44.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:44 np0005465987 ovs-vsctl[321075]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  2 09:20:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:45.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:45 np0005465987 virtqemud[230442]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  2 09:20:45 np0005465987 virtqemud[230442]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  2 09:20:45 np0005465987 virtqemud[230442]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  2 09:20:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:45.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:45 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: cache status {prefix=cache status} (starting...)
Oct  2 09:20:45 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:20:45 np0005465987 nova_compute[230713]: 2025-10-02 13:20:45.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:45 np0005465987 nova_compute[230713]: 2025-10-02 13:20:45.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:20:45 np0005465987 nova_compute[230713]: 2025-10-02 13:20:45.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:20:46 np0005465987 nova_compute[230713]: 2025-10-02 13:20:46.035 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:20:46 np0005465987 lvm[321386]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 09:20:46 np0005465987 lvm[321386]: VG ceph_vg0 finished
Oct  2 09:20:46 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: client ls {prefix=client ls} (starting...)
Oct  2 09:20:46 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:20:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:46 np0005465987 kernel: block vda: the capability attribute has been deprecated.
Oct  2 09:20:46 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: damage ls {prefix=damage ls} (starting...)
Oct  2 09:20:46 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:20:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct  2 09:20:46 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3209301266' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  2 09:20:46 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: dump loads {prefix=dump loads} (starting...)
Oct  2 09:20:46 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:20:47 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  2 09:20:47 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:20:47 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  2 09:20:47 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:20:47 np0005465987 nova_compute[230713]: 2025-10-02 13:20:47.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:47 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  2 09:20:47 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:20:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 09:20:47 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2038607376' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 09:20:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:47.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:47 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  2 09:20:47 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:20:47 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  2 09:20:47 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:20:47 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  2 09:20:47 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1701019612' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  2 09:20:47 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  2 09:20:47 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:20:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:47.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:48 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: ops {prefix=ops} (starting...)
Oct  2 09:20:48 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:20:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct  2 09:20:48 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/856340769' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  2 09:20:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct  2 09:20:48 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2183296961' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  2 09:20:48 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 09:20:48 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2416550832' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 09:20:48 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: session ls {prefix=session ls} (starting...)
Oct  2 09:20:48 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:20:48 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: status {prefix=status} (starting...)
Oct  2 09:20:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 09:20:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3863448866' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 09:20:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct  2 09:20:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3805049822' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  2 09:20:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:49.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:49 np0005465987 nova_compute[230713]: 2025-10-02 13:20:49.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 09:20:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1009958000' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 09:20:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct  2 09:20:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3311019110' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  2 09:20:49 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 09:20:49 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/882833153' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 09:20:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:49.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  2 09:20:50 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2544960413' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  2 09:20:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct  2 09:20:50 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1889401041' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  2 09:20:50 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 09:20:50 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3043726876' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 09:20:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct  2 09:20:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2504091879' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  2 09:20:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:51.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 09:20:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/608220265' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 09:20:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 09:20:51 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2447536203' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 09:20:51 np0005465987 nova_compute[230713]: 2025-10-02 13:20:51.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:52 np0005465987 nova_compute[230713]: 2025-10-02 13:20:52.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a4bd9000/0x0/0x1bfc00000, data 0x4e79430/0x5075000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390643712 unmapped: 56426496 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390643712 unmapped: 56426496 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0e902400 session 0x558e0d3caf00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390651904 unmapped: 56418304 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e121bc400 session 0x558e0a0f0780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0cf4d800 session 0x558e0c571860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0d07c400 session 0x558e0d5a3e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4403541 data_alloc: 251658240 data_used: 40370176
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390651904 unmapped: 56418304 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390651904 unmapped: 56418304 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390963200 unmapped: 56107008 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0d07dc00 session 0x558e0d30c5a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a48eb000/0x0/0x1bfc00000, data 0x5167430/0x5363000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390971392 unmapped: 56098816 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390971392 unmapped: 56098816 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0e902400 session 0x558e0bf4ad20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4475164 data_alloc: 251658240 data_used: 40370176
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390987776 unmapped: 56082432 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0ac74000 session 0x558e0d30cd20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390987776 unmapped: 56082432 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0ac74000 session 0x558e0a0f0f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a444a000/0x0/0x1bfc00000, data 0x5607492/0x5804000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0cf4d800 session 0x558e0d5a32c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390987776 unmapped: 56082432 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0d07c400 session 0x558e0a0f10e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0d07dc00 session 0x558e0c6841e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390987776 unmapped: 56082432 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0e902400 session 0x558e0ce90000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0e902400 session 0x558e0c3150e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390987776 unmapped: 56082432 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a444a000/0x0/0x1bfc00000, data 0x5607492/0x5804000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4475180 data_alloc: 251658240 data_used: 40370176
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.684037209s of 13.825182915s, submitted: 45
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390987776 unmapped: 56082432 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0ac74000 session 0x558e0a0f1c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390987776 unmapped: 56082432 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a444a000/0x0/0x1bfc00000, data 0x5607492/0x5804000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [0,0,1])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390987776 unmapped: 56082432 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0cf4d800 session 0x558e0c030960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390987776 unmapped: 56082432 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391299072 unmapped: 55771136 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4508896 data_alloc: 251658240 data_used: 44720128
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391454720 unmapped: 55615488 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391487488 unmapped: 55582720 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391675904 unmapped: 55394304 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a4448000/0x0/0x1bfc00000, data 0x5608492/0x5805000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391675904 unmapped: 55394304 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0c668c00 session 0x558e0c6d54a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0cccec00 session 0x558e0d3ca780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0cccec00 session 0x558e0d5a30e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0ac74000 session 0x558e0b29c000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391675904 unmapped: 55394304 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4602614 data_alloc: 251658240 data_used: 46563328
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0c668c00 session 0x558e0a5494a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391700480 unmapped: 55369728 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391700480 unmapped: 55369728 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391700480 unmapped: 55369728 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a39d8000/0x0/0x1bfc00000, data 0x6079492/0x6276000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391708672 unmapped: 55361536 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391708672 unmapped: 55361536 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4602774 data_alloc: 251658240 data_used: 46567424
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.930329323s of 15.009560585s, submitted: 16
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393568256 unmapped: 53501952 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a33f6000/0x0/0x1bfc00000, data 0x6653492/0x6850000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 394264576 unmapped: 52805632 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a3394000/0x0/0x1bfc00000, data 0x66b5492/0x68b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 52797440 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0cf4d800 session 0x558e0a4c4780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 395182080 unmapped: 51888128 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0e902400 session 0x558e0bf4b2c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a2f9a000/0x0/0x1bfc00000, data 0x6aaf492/0x6cac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396558336 unmapped: 50511872 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0ac74000 session 0x558e0cf0f860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0c668c00 session 0x558e0d30dc20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4692362 data_alloc: 251658240 data_used: 47935488
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 395943936 unmapped: 51126272 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 395943936 unmapped: 51126272 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 395943936 unmapped: 51126272 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a2eb9000/0x0/0x1bfc00000, data 0x6b97492/0x6d94000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 45850624 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 45850624 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4767474 data_alloc: 251658240 data_used: 55685120
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 45850624 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 45850624 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a2eb9000/0x0/0x1bfc00000, data 0x6b97492/0x6d94000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 45850624 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 45850624 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401227776 unmapped: 45842432 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a2eb9000/0x0/0x1bfc00000, data 0x6b97492/0x6d94000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4767474 data_alloc: 251658240 data_used: 55685120
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a2eb9000/0x0/0x1bfc00000, data 0x6b97492/0x6d94000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 45834240 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401244160 unmapped: 45826048 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0d07dc00 session 0x558e0c56f4a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.002025604s of 17.286148071s, submitted: 130
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401244160 unmapped: 45826048 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a2eb8000/0x0/0x1bfc00000, data 0x6b97492/0x6d94000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 45809664 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 45547520 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a2a81000/0x0/0x1bfc00000, data 0x6fce492/0x71cb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4803040 data_alloc: 251658240 data_used: 55734272
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401580032 unmapped: 45490176 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401580032 unmapped: 45490176 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401596416 unmapped: 45473792 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401596416 unmapped: 45473792 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a2a14000/0x0/0x1bfc00000, data 0x703d492/0x723a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402259968 unmapped: 44810240 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4729808 data_alloc: 251658240 data_used: 54616064
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 44793856 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 44793856 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 44793856 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.262098312s of 10.396976471s, submitted: 41
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 44793856 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a31da000/0x0/0x1bfc00000, data 0x6876492/0x6a73000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 44793856 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4729808 data_alloc: 251658240 data_used: 54616064
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 44793856 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 ms_handle_reset con 0x558e0cf4d800 session 0x558e0ae334a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 44793856 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 44793856 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a31db000/0x0/0x1bfc00000, data 0x6876492/0x6a73000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 44785664 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 379 ms_handle_reset con 0x558e0f836800 session 0x558e0c6d43c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410329088 unmapped: 36741120 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4816268 data_alloc: 268435456 data_used: 60215296
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a2b5d000/0x0/0x1bfc00000, data 0x6ef115d/0x70f1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410427392 unmapped: 36642816 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 379 ms_handle_reset con 0x558e0ac74000 session 0x558e0c66d2c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410427392 unmapped: 36642816 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a2b5d000/0x0/0x1bfc00000, data 0x6ef115d/0x70f1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410427392 unmapped: 36642816 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a2b5d000/0x0/0x1bfc00000, data 0x6ef115d/0x70f1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410443776 unmapped: 36626432 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.238699913s of 11.440184593s, submitted: 61
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408166400 unmapped: 38903808 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4820108 data_alloc: 268435456 data_used: 60227584
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 380 ms_handle_reset con 0x558e0c668c00 session 0x558e0bf4b680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408174592 unmapped: 38895616 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 380 heartbeat osd_stat(store_statfs(0x1a2b59000/0x0/0x1bfc00000, data 0x6ef2e0a/0x70f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [0,0,0,0,0,1])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408182784 unmapped: 38887424 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 381 ms_handle_reset con 0x558e0cf4d800 session 0x558e0d30cf00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408199168 unmapped: 38871040 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408199168 unmapped: 38871040 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408199168 unmapped: 38871040 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4823402 data_alloc: 268435456 data_used: 60235776
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 381 heartbeat osd_stat(store_statfs(0x1a2b56000/0x0/0x1bfc00000, data 0x6ef4a7f/0x70f7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408199168 unmapped: 38871040 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408207360 unmapped: 38862848 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408223744 unmapped: 38846464 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408223744 unmapped: 38846464 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408223744 unmapped: 38846464 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a2b53000/0x0/0x1bfc00000, data 0x6ef65be/0x70fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.094248772s of 10.877305984s, submitted: 30
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0d07dc00 session 0x558e0d3ca000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4825320 data_alloc: 268435456 data_used: 60235776
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0f836800 session 0x558e0c02b0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0ac74000 session 0x558e0d5a3680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408264704 unmapped: 38805504 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a2b54000/0x0/0x1bfc00000, data 0x6ef65be/0x70fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408264704 unmapped: 38805504 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408264704 unmapped: 38805504 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408264704 unmapped: 38805504 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 38797312 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4825320 data_alloc: 268435456 data_used: 60235776
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 38797312 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a2b54000/0x0/0x1bfc00000, data 0x6ef65be/0x70fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 38797312 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 38797312 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 38797312 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 38797312 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a2b54000/0x0/0x1bfc00000, data 0x6ef65be/0x70fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4825320 data_alloc: 268435456 data_used: 60235776
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a2b54000/0x0/0x1bfc00000, data 0x6ef65be/0x70fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 38797312 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 38797312 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c56f860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408281088 unmapped: 38789120 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.576865196s of 13.597122192s, submitted: 6
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408289280 unmapped: 38780928 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408289280 unmapped: 38780928 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679390 data_alloc: 251658240 data_used: 54988800
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0c668c00 session 0x558e0c570b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408289280 unmapped: 38780928 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408289280 unmapped: 38780928 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408289280 unmapped: 38780928 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408289280 unmapped: 38780928 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408289280 unmapped: 38780928 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679390 data_alloc: 251658240 data_used: 54988800
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 38772736 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.1 total, 600.0 interval
Cumulative writes: 51K writes, 195K keys, 51K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
Cumulative WAL: 51K writes, 18K syncs, 2.74 writes per sync, written: 0.19 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6403 writes, 23K keys, 6403 commit groups, 1.0 writes per commit group, ingest: 25.11 MB, 0.04 MB/s
Interval WAL: 6403 writes, 2590 syncs, 2.47 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 38772736 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 38772736 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 38772736 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 38772736 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679390 data_alloc: 251658240 data_used: 54988800
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 38772736 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 38772736 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 38772736 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 38764544 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 38764544 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679390 data_alloc: 251658240 data_used: 54988800
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 38764544 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 38764544 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 38764544 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408313856 unmapped: 38756352 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408322048 unmapped: 38748160 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679550 data_alloc: 251658240 data_used: 54992896
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408330240 unmapped: 38739968 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408330240 unmapped: 38739968 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408330240 unmapped: 38739968 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408330240 unmapped: 38739968 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408338432 unmapped: 38731776 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679550 data_alloc: 251658240 data_used: 54992896
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408338432 unmapped: 38731776 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408338432 unmapped: 38731776 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408338432 unmapped: 38731776 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408338432 unmapped: 38731776 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408346624 unmapped: 38723584 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679550 data_alloc: 251658240 data_used: 54992896
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408346624 unmapped: 38723584 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408346624 unmapped: 38723584 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408346624 unmapped: 38723584 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408346624 unmapped: 38723584 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408354816 unmapped: 38715392 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679550 data_alloc: 251658240 data_used: 54992896
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408354816 unmapped: 38715392 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408354816 unmapped: 38715392 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408363008 unmapped: 38707200 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 38690816 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 38690816 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679550 data_alloc: 251658240 data_used: 54992896
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 38690816 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 38690816 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 38690816 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 38690816 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 38690816 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679550 data_alloc: 251658240 data_used: 54992896
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408395776 unmapped: 38674432 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408395776 unmapped: 38674432 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408403968 unmapped: 38666240 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408403968 unmapped: 38666240 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408403968 unmapped: 38666240 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679550 data_alloc: 251658240 data_used: 54992896
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408403968 unmapped: 38666240 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408403968 unmapped: 38666240 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408403968 unmapped: 38666240 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 38658048 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 38658048 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679550 data_alloc: 251658240 data_used: 54992896
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 38658048 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 38658048 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 38658048 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 60.153274536s of 60.179008484s, submitted: 8
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0d07c400 session 0x558e0a4c45a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0cf4d800 session 0x558e0c5123c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408420352 unmapped: 38649856 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408420352 unmapped: 38649856 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4679478 data_alloc: 251658240 data_used: 54992896
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408420352 unmapped: 38649856 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408428544 unmapped: 38641664 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a38cd000/0x0/0x1bfc00000, data 0x617d5be/0x6381000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408428544 unmapped: 38641664 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408428544 unmapped: 38641664 heap: 447070208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0a5a2800 session 0x558e0ae172c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408428544 unmapped: 42319872 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0ac74000 session 0x558e0c6ceb40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0d07c400 session 0x558e0a4c4780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0c668c00 session 0x558e0c4b61e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0d07dc00 session 0x558e0d457860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4738150 data_alloc: 251658240 data_used: 54992896
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 42287104 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 42287104 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 42287104 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0a5a2800 session 0x558e0cc892c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a31eb000/0x0/0x1bfc00000, data 0x685f5be/0x6a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 42287104 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0ac74000 session 0x558e0cc3ed20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a31eb000/0x0/0x1bfc00000, data 0x685f5be/0x6a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0c668c00 session 0x558e0bf4b0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408469504 unmapped: 42278912 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.439356804s of 11.519376755s, submitted: 16
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4737578 data_alloc: 251658240 data_used: 54992896
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 42262528 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 42262528 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 42254336 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a31eb000/0x0/0x1bfc00000, data 0x685f5be/0x6a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 42254336 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 42254336 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4738538 data_alloc: 251658240 data_used: 55107584
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 42254336 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e1cd82800 session 0x558e08c6a960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 42106880 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 42106880 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a31ea000/0x0/0x1bfc00000, data 0x685f5e1/0x6a64000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 42106880 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a31ea000/0x0/0x1bfc00000, data 0x685f5e1/0x6a64000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 42106880 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4741996 data_alloc: 251658240 data_used: 55107584
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 42106880 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409042944 unmapped: 41705472 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0d07c400 session 0x558e0c6d45a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.291287422s of 12.329145432s, submitted: 12
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0cf4c800 session 0x558e0c6cfc20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a31ea000/0x0/0x1bfc00000, data 0x685f5e1/0x6a64000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410320896 unmapped: 40427520 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410320896 unmapped: 40427520 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410320896 unmapped: 40427520 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4794063 data_alloc: 268435456 data_used: 60833792
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a31ea000/0x0/0x1bfc00000, data 0x685f5e1/0x6a64000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [1,0,0,0,0,0,0,2])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410533888 unmapped: 40214528 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0ac74000 session 0x558e0d5a2780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410632192 unmapped: 40116224 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410632192 unmapped: 40116224 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410632192 unmapped: 40116224 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a31ea000/0x0/0x1bfc00000, data 0x685f5e1/0x6a64000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410632192 unmapped: 40116224 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4793247 data_alloc: 268435456 data_used: 60833792
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410648576 unmapped: 40099840 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a31ea000/0x0/0x1bfc00000, data 0x685f5e1/0x6a64000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410648576 unmapped: 40099840 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410648576 unmapped: 40099840 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410648576 unmapped: 40099840 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.666627884s of 11.886530876s, submitted: 18
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413065216 unmapped: 37683200 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4890195 data_alloc: 268435456 data_used: 60829696
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a2502000/0x0/0x1bfc00000, data 0x75475e1/0x774c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 37322752 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413507584 unmapped: 37240832 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a24c9000/0x0/0x1bfc00000, data 0x75805e1/0x7785000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 37109760 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 37109760 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a24b5000/0x0/0x1bfc00000, data 0x758c5e1/0x7791000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 37109760 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4899609 data_alloc: 268435456 data_used: 61747200
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 37109760 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e1cd82800 session 0x558e0b29d860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0cfbcc00 session 0x558e0d30de00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413728768 unmapped: 37019648 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 36954112 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 36954112 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a1be0000/0x0/0x1bfc00000, data 0x7e68643/0x806e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 36954112 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e121bcc00 session 0x558e0c6cf680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.830121994s of 11.166106224s, submitted: 131
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e104ee800 session 0x558e0af02b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4966144 data_alloc: 268435456 data_used: 61640704
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 36954112 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0cf4c800 session 0x558e0c56fa40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409468928 unmapped: 41279488 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0ac74000 session 0x558e0c571e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409477120 unmapped: 41271296 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409477120 unmapped: 41271296 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409477120 unmapped: 41271296 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a2fee000/0x0/0x1bfc00000, data 0x6a5a643/0x6c60000, compress 0x0/0x0/0x0, omap 0x639, meta 0x15faf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4764497 data_alloc: 251658240 data_used: 53760000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409485312 unmapped: 41263104 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409501696 unmapped: 41246720 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409534464 unmapped: 41213952 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409657344 unmapped: 41091072 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409714688 unmapped: 41033728 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c6d5860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a2bde000/0x0/0x1bfc00000, data 0x6a5a643/0x6c60000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4819633 data_alloc: 268435456 data_used: 59039744
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409714688 unmapped: 41033728 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.416577339s of 10.702444077s, submitted: 396
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a2bde000/0x0/0x1bfc00000, data 0x6a5a643/0x6c60000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409714688 unmapped: 41033728 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 ms_handle_reset con 0x558e0cccec00 session 0x558e0cc3f4a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 383 ms_handle_reset con 0x558e1cd82800 session 0x558e0d25d0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409747456 unmapped: 41000960 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 383 ms_handle_reset con 0x558e0a5a2800 session 0x558e0cc89c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409747456 unmapped: 41000960 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 383 ms_handle_reset con 0x558e0c668c00 session 0x558e0c030b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409747456 unmapped: 41000960 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4580386 data_alloc: 251658240 data_used: 47300608
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409747456 unmapped: 41000960 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409747456 unmapped: 41000960 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 383 heartbeat osd_stat(store_statfs(0x1a4176000/0x0/0x1bfc00000, data 0x54c327e/0x56c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 383 heartbeat osd_stat(store_statfs(0x1a4176000/0x0/0x1bfc00000, data 0x54c327e/0x56c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409755648 unmapped: 40992768 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401080320 unmapped: 49668096 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 383 ms_handle_reset con 0x558e0ac74000 session 0x558e0c4b63c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 383 heartbeat osd_stat(store_statfs(0x1a4176000/0x0/0x1bfc00000, data 0x54c327e/0x56c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401080320 unmapped: 49668096 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4516512 data_alloc: 234881024 data_used: 37777408
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 400416768 unmapped: 50331648 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.699891090s of 10.118889809s, submitted: 149
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 50724864 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a43a7000/0x0/0x1bfc00000, data 0x5291d5b/0x5497000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 50724864 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a43a7000/0x0/0x1bfc00000, data 0x5291d5b/0x5497000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 50724864 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 384 ms_handle_reset con 0x558e0cf4c800 session 0x558e0d30c3c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 384 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c513860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a43a7000/0x0/0x1bfc00000, data 0x5291d5b/0x5497000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 50724864 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4527270 data_alloc: 234881024 data_used: 38113280
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 50724864 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 385 ms_handle_reset con 0x558e0ac74000 session 0x558e0c60c780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 56131584 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 385 ms_handle_reset con 0x558e0cccf400 session 0x558e098d9a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 56131584 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a4bc1000/0x0/0x1bfc00000, data 0x4a769f8/0x4c7c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [0,0,1])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 385 ms_handle_reset con 0x558e0c668c00 session 0x558e0c56f4a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 56131584 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a4d96000/0x0/0x1bfc00000, data 0x48a29d5/0x4aa7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 56131584 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4386548 data_alloc: 234881024 data_used: 28184576
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 56131584 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.369226456s of 10.518086433s, submitted: 54
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 56123392 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 386 handle_osd_map epochs [386,386], i have 386, src has [1,386]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 386 ms_handle_reset con 0x558e1cd82800 session 0x558e08c6b680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 386 ms_handle_reset con 0x558e0b387800 session 0x558e0c6cef00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 386 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c6ce000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 394657792 unmapped: 56090624 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 386 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c5130e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391847936 unmapped: 58900480 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391847936 unmapped: 58900480 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a619c000/0x0/0x1bfc00000, data 0x349e610/0x36a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 386 handle_osd_map epochs [387,387], i have 387, src has [1,387]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4208547 data_alloc: 234881024 data_used: 26894336
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391847936 unmapped: 58900480 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391847936 unmapped: 58900480 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391847936 unmapped: 58900480 heap: 450748416 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 388 ms_handle_reset con 0x558e0c668c00 session 0x558e0d1c70e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391888896 unmapped: 66215936 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 388 ms_handle_reset con 0x558e0ac74000 session 0x558e0d3cb0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 388 ms_handle_reset con 0x558e0bfa0800 session 0x558e0c56cb40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391897088 unmapped: 66207744 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 388 ms_handle_reset con 0x558e0c67bc00 session 0x558e0c570d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4249695 data_alloc: 234881024 data_used: 26902528
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 389 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a0e34a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391905280 unmapped: 66199552 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 389 heartbeat osd_stat(store_statfs(0x1a580d000/0x0/0x1bfc00000, data 0x3e29921/0x4030000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391905280 unmapped: 66199552 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391905280 unmapped: 66199552 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391905280 unmapped: 66199552 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 391905280 unmapped: 66199552 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.630528450s of 13.451127052s, submitted: 138
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e0ce914a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4302112 data_alloc: 234881024 data_used: 26787840
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0c668c00 session 0x558e0c4b74a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5b76000/0x0/0x1bfc00000, data 0x3ac142d/0x3cc7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0b387800 session 0x558e0c4b7680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392552448 unmapped: 65552384 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392552448 unmapped: 65552384 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392552448 unmapped: 65552384 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 65544192 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a546b000/0x0/0x1bfc00000, data 0x41ca4b2/0x43d2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393125888 unmapped: 64978944 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4383723 data_alloc: 234881024 data_used: 37908480
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393125888 unmapped: 64978944 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393125888 unmapped: 64978944 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393125888 unmapped: 64978944 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5469000/0x0/0x1bfc00000, data 0x41cb4b2/0x43d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393125888 unmapped: 64978944 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393125888 unmapped: 64978944 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4384671 data_alloc: 234881024 data_used: 37969920
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393125888 unmapped: 64978944 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5469000/0x0/0x1bfc00000, data 0x41cb4b2/0x43d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393125888 unmapped: 64978944 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393125888 unmapped: 64978944 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5469000/0x0/0x1bfc00000, data 0x41cb4b2/0x43d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 62341120 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.525462151s of 14.631435394s, submitted: 46
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398639104 unmapped: 59465728 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4498527 data_alloc: 251658240 data_used: 45780992
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 59392000 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a4dea000/0x0/0x1bfc00000, data 0x484c4b2/0x4a54000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 59392000 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 59392000 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 59392000 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 59392000 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4505249 data_alloc: 251658240 data_used: 46067712
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 59392000 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 59392000 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a4dea000/0x0/0x1bfc00000, data 0x484c4b2/0x4a54000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 59392000 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 59392000 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e08c6a1e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d5a25a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.923766136s of 10.030280113s, submitted: 40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 399351808 unmapped: 58753024 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0c67bc00 session 0x558e0d3ca000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a4a54000/0x0/0x1bfc00000, data 0x4be24b2/0x4dea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4316171 data_alloc: 234881024 data_used: 34947072
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398917632 unmapped: 59187200 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5452000/0x0/0x1bfc00000, data 0x3aa348f/0x3caa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398925824 unmapped: 59179008 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5446000/0x0/0x1bfc00000, data 0x3aaf48f/0x3cb6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398925824 unmapped: 59179008 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398925824 unmapped: 59179008 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398934016 unmapped: 59170816 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4324033 data_alloc: 234881024 data_used: 35110912
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398934016 unmapped: 59170816 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5446000/0x0/0x1bfc00000, data 0x3aaf48f/0x3cb6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398934016 unmapped: 59170816 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398934016 unmapped: 59170816 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398934016 unmapped: 59170816 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398934016 unmapped: 59170816 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4324033 data_alloc: 234881024 data_used: 35110912
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398934016 unmapped: 59170816 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398934016 unmapped: 59170816 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5446000/0x0/0x1bfc00000, data 0x3aaf48f/0x3cb6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.837563515s of 13.038318634s, submitted: 78
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cfbcc00 session 0x558e0c571860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398934016 unmapped: 59170816 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398802944 unmapped: 59301888 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 59293696 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4111051 data_alloc: 234881024 data_used: 25120768
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d1c7e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a69cc000/0x0/0x1bfc00000, data 0x292840a/0x2b2d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a69cc000/0x0/0x1bfc00000, data 0x292840a/0x2b2d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4161739 data_alloc: 234881024 data_used: 25120768
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c60c1e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a6781000/0x0/0x1bfc00000, data 0x2eb840a/0x30bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a6781000/0x0/0x1bfc00000, data 0x2eb840a/0x30bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4152779 data_alloc: 218103808 data_used: 25120768
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a6781000/0x0/0x1bfc00000, data 0x2eb840a/0x30bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4152779 data_alloc: 218103808 data_used: 25120768
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.323705673s of 19.688594818s, submitted: 37
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a6781000/0x0/0x1bfc00000, data 0x2eb840a/0x30bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a6781000/0x0/0x1bfc00000, data 0x2eb840a/0x30bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 65667072 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0b387800 session 0x558e0a0f12c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4157524 data_alloc: 218103808 data_used: 25120768
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392593408 unmapped: 65511424 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a675c000/0x0/0x1bfc00000, data 0x2edc42d/0x30e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392593408 unmapped: 65511424 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392593408 unmapped: 65511424 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a675c000/0x0/0x1bfc00000, data 0x2edc42d/0x30e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392986624 unmapped: 65118208 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392986624 unmapped: 65118208 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4199284 data_alloc: 234881024 data_used: 30961664
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392986624 unmapped: 65118208 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392986624 unmapped: 65118208 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a675c000/0x0/0x1bfc00000, data 0x2edc42d/0x30e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392986624 unmapped: 65118208 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392986624 unmapped: 65118208 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a675c000/0x0/0x1bfc00000, data 0x2edc42d/0x30e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392986624 unmapped: 65118208 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4199284 data_alloc: 234881024 data_used: 30961664
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cccf400 session 0x558e0d4561e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0c67bc00 session 0x558e0c56fc20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392986624 unmapped: 65118208 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a675c000/0x0/0x1bfc00000, data 0x2edc42d/0x30e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.394467354s of 14.422055244s, submitted: 7
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66da40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390619136 unmapped: 67485696 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390619136 unmapped: 67485696 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390619136 unmapped: 67485696 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390619136 unmapped: 67485696 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a6d10000/0x0/0x1bfc00000, data 0x292840a/0x2b2d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4115797 data_alloc: 218103808 data_used: 25124864
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390619136 unmapped: 67485696 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c66c3c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0b387800 session 0x558e0ae321e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cccf400 session 0x558e0c6d5c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e104ee800 session 0x558e0bf4a960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390619136 unmapped: 67485696 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a0f0d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e0d25d860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0b387800 session 0x558e0cc892c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cccf400 session 0x558e0ae172c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e109da000 session 0x558e0a71c780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 67469312 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 67469312 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 67469312 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4223416 data_alloc: 218103808 data_used: 25124864
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5ec5000/0x0/0x1bfc00000, data 0x377247c/0x3979000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 67469312 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 67469312 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 67469312 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5ec5000/0x0/0x1bfc00000, data 0x377247c/0x3979000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 67469312 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5ec5000/0x0/0x1bfc00000, data 0x377247c/0x3979000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 67469312 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4223416 data_alloc: 218103808 data_used: 25124864
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 67469312 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 67469312 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.126358032s of 16.298910141s, submitted: 45
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 67469312 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5ec5000/0x0/0x1bfc00000, data 0x377247c/0x3979000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 67469312 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390643712 unmapped: 67461120 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4223592 data_alloc: 218103808 data_used: 25124864
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390643712 unmapped: 67461120 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5ec5000/0x0/0x1bfc00000, data 0x377247c/0x3979000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390643712 unmapped: 67461120 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390643712 unmapped: 67461120 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390643712 unmapped: 67461120 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390651904 unmapped: 67452928 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4223592 data_alloc: 218103808 data_used: 25124864
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5ec5000/0x0/0x1bfc00000, data 0x377247c/0x3979000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390651904 unmapped: 67452928 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390651904 unmapped: 67452928 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390651904 unmapped: 67452928 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 390651904 unmapped: 67452928 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5ec5000/0x0/0x1bfc00000, data 0x377247c/0x3979000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 65519616 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4329512 data_alloc: 234881024 data_used: 39804928
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 65519616 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 65519616 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5ec5000/0x0/0x1bfc00000, data 0x377247c/0x3979000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 65519616 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 65519616 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 65519616 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4329672 data_alloc: 234881024 data_used: 39809024
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 65519616 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5ec5000/0x0/0x1bfc00000, data 0x377247c/0x3979000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 65519616 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5ec5000/0x0/0x1bfc00000, data 0x377247c/0x3979000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 65519616 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5ec5000/0x0/0x1bfc00000, data 0x377247c/0x3979000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 65519616 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.511255264s of 21.515905380s, submitted: 1
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393781248 unmapped: 64323584 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4372354 data_alloc: 234881024 data_used: 39981056
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393789440 unmapped: 64315392 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5943000/0x0/0x1bfc00000, data 0x3cf447c/0x3efb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393879552 unmapped: 64225280 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a5926000/0x0/0x1bfc00000, data 0x3d1147c/0x3f18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393887744 unmapped: 64217088 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393887744 unmapped: 64217088 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393887744 unmapped: 64217088 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4380802 data_alloc: 234881024 data_used: 40169472
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393887744 unmapped: 64217088 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393887744 unmapped: 64217088 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a591a000/0x0/0x1bfc00000, data 0x3d1d47c/0x3f24000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393887744 unmapped: 64217088 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393854976 unmapped: 64249856 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a591a000/0x0/0x1bfc00000, data 0x3d1d47c/0x3f24000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.725028992s of 10.078880310s, submitted: 59
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c315e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0b387800 session 0x558e0d1c6d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cccf400 session 0x558e0c66da40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cfbdc00 session 0x558e0d4561e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e19cd3800 session 0x558e0c571860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393879552 unmapped: 64225280 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4457207 data_alloc: 234881024 data_used: 40173568
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393879552 unmapped: 64225280 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a3d9b000/0x0/0x1bfc00000, data 0x46fc47c/0x4903000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1755f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393879552 unmapped: 64225280 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393879552 unmapped: 64225280 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393879552 unmapped: 64225280 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e19cd3800 session 0x558e08c6a1e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c4b7680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0b387800 session 0x558e0ce914a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 393879552 unmapped: 64225280 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cccf400 session 0x558e0a0e34a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cfbdc00 session 0x558e0c570d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4485929 data_alloc: 234881024 data_used: 40173568
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 63012864 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cfbdc00 session 0x558e0d1c70e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a3627000/0x0/0x1bfc00000, data 0x4e7047c/0x5077000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1755f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 62693376 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 62693376 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c6ce000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 395436032 unmapped: 62668800 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a3626000/0x0/0x1bfc00000, data 0x4e7049f/0x5078000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1755f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.039798737s of 10.111831665s, submitted: 50
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e19cd3800 session 0x558e0c6d5e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0ea8a400 session 0x558e0c685680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0c5e1000 session 0x558e0a5492c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c60d860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 395436032 unmapped: 62668800 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cfbdc00 session 0x558e0c6cf0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0ea8a400 session 0x558e0d5a23c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e19cd3800 session 0x558e0c0305a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4563203 data_alloc: 234881024 data_used: 40177664
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0e3bc800 session 0x558e0d457e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e0d1c7680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396271616 unmapped: 61833216 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cfbdc00 session 0x558e0d5a2960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398802944 unmapped: 59301888 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0ea8a400 session 0x558e08c6be00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e19cd3800 session 0x558e0c684b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398802944 unmapped: 59301888 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0f836c00 session 0x558e0c3152c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a3209000/0x0/0x1bfc00000, data 0x528c4af/0x5495000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1755f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2400 session 0x558e0c56ed20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0f836c00 session 0x558e0c5123c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398802944 unmapped: 59301888 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cfbdc00 session 0x558e0c6852c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398802944 unmapped: 59301888 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0ea8a400 session 0x558e0c6d45a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4648643 data_alloc: 251658240 data_used: 51032064
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 58777600 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 399491072 unmapped: 58613760 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401129472 unmapped: 56975360 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a3209000/0x0/0x1bfc00000, data 0x528c4af/0x5495000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1755f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401129472 unmapped: 56975360 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0c5e0400 session 0x558e0c6ce5a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401137664 unmapped: 56967168 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.458106995s of 10.734622002s, submitted: 10
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4686240 data_alloc: 251658240 data_used: 53190656
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401145856 unmapped: 56958976 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405323776 unmapped: 52781056 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407199744 unmapped: 50905088 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a29c9000/0x0/0x1bfc00000, data 0x5acb4af/0x5cd4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1755f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407404544 unmapped: 50700288 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a29c9000/0x0/0x1bfc00000, data 0x5acb4af/0x5cd4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1755f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2400 session 0x558e0cc3f2c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cfbdc00 session 0x558e0d30d860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407437312 unmapped: 50667520 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0ea8a400 session 0x558e0c56ef00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4716560 data_alloc: 251658240 data_used: 51933184
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407478272 unmapped: 50626560 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a2dc8000/0x0/0x1bfc00000, data 0x56ce49f/0x58d6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1755f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409444352 unmapped: 48660480 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409600000 unmapped: 48504832 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e0a4c45a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 44728320 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0f836c00 session 0x558e0a0f0d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412721152 unmapped: 45383680 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.564447403s of 10.097050667s, submitted: 198
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4745836 data_alloc: 251658240 data_used: 50589696
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412786688 unmapped: 45318144 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a2a27000/0x0/0x1bfc00000, data 0x5a6349f/0x5c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1755f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412786688 unmapped: 45318144 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412786688 unmapped: 45318144 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0b387800 session 0x558e0c513860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cccf400 session 0x558e0d3cb0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406126592 unmapped: 51978240 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2400 session 0x558e0c60d4a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a2acf000/0x0/0x1bfc00000, data 0x482747c/0x4a2e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407191552 unmapped: 50913280 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524876 data_alloc: 234881024 data_used: 39337984
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407191552 unmapped: 50913280 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407191552 unmapped: 50913280 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c60c1e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a2acf000/0x0/0x1bfc00000, data 0x482748c/0x4a2f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407191552 unmapped: 50913280 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407191552 unmapped: 50913280 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407191552 unmapped: 50913280 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4528134 data_alloc: 234881024 data_used: 39337984
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.227614403s of 10.642716408s, submitted: 66
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 50896896 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 50896896 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a2acd000/0x0/0x1bfc00000, data 0x482848c/0x4a30000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 50896896 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e19cd3800 session 0x558e0af02b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cfbdc00 session 0x558e0d3ca3c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a2acd000/0x0/0x1bfc00000, data 0x482848c/0x4a30000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 50896896 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 50896896 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4533062 data_alloc: 234881024 data_used: 38797312
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 50896896 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 50896896 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 50896896 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 50896896 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a2acd000/0x0/0x1bfc00000, data 0x482848c/0x4a30000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d25cf00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407207936 unmapped: 50896896 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a3ec8000/0x0/0x1bfc00000, data 0x342f47c/0x3636000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2400 session 0x558e0c4b70e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4275740 data_alloc: 218103808 data_used: 26292224
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.155278206s of 10.260153770s, submitted: 51
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402898944 unmapped: 55205888 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402898944 unmapped: 55205888 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 55222272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 55222272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 55222272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4277152 data_alloc: 218103808 data_used: 26419200
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a3ec8000/0x0/0x1bfc00000, data 0x342f41a/0x3635000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 55222272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 55222272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a3ec8000/0x0/0x1bfc00000, data 0x342f41a/0x3635000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 55222272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 55222272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 55222272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4277152 data_alloc: 218103808 data_used: 26419200
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 55222272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a3ec8000/0x0/0x1bfc00000, data 0x342f41a/0x3635000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a3ec8000/0x0/0x1bfc00000, data 0x342f41a/0x3635000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 55222272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.200887680s of 11.203542709s, submitted: 1
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0bfa0800 session 0x558e0c030f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a3ec8000/0x0/0x1bfc00000, data 0x342f41a/0x3635000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401825792 unmapped: 56279040 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401825792 unmapped: 56279040 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0cccf400 session 0x558e0c6ce780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401825792 unmapped: 56279040 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4164254 data_alloc: 218103808 data_used: 23625728
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401825792 unmapped: 56279040 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401825792 unmapped: 56279040 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401825792 unmapped: 56279040 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a494e000/0x0/0x1bfc00000, data 0x29a83b8/0x2bad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401825792 unmapped: 56279040 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a494e000/0x0/0x1bfc00000, data 0x29a83b8/0x2bad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401825792 unmapped: 56279040 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4164574 data_alloc: 218103808 data_used: 23633920
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401825792 unmapped: 56279040 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a494e000/0x0/0x1bfc00000, data 0x29a83b8/0x2bad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401825792 unmapped: 56279040 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401825792 unmapped: 56279040 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401833984 unmapped: 56270848 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401833984 unmapped: 56270848 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a494e000/0x0/0x1bfc00000, data 0x29a83b8/0x2bad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4164574 data_alloc: 218103808 data_used: 23633920
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401833984 unmapped: 56270848 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401833984 unmapped: 56270848 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401842176 unmapped: 56262656 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a494e000/0x0/0x1bfc00000, data 0x29a83b8/0x2bad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401842176 unmapped: 56262656 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.117391586s of 17.176603317s, submitted: 35
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401842176 unmapped: 56262656 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4163048 data_alloc: 218103808 data_used: 23621632
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401842176 unmapped: 56262656 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401842176 unmapped: 56262656 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a4948000/0x0/0x1bfc00000, data 0x29ae3b8/0x2bb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401842176 unmapped: 56262656 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401842176 unmapped: 56262656 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401842176 unmapped: 56262656 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4163048 data_alloc: 218103808 data_used: 23621632
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401858560 unmapped: 56246272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401858560 unmapped: 56246272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a4948000/0x0/0x1bfc00000, data 0x29ae3b8/0x2bb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401858560 unmapped: 56246272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e0a14f860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401858560 unmapped: 56246272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401858560 unmapped: 56246272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4163236 data_alloc: 218103808 data_used: 23629824
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.987949371s of 11.467575073s, submitted: 10
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d5a3a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401858560 unmapped: 56246272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a494b000/0x0/0x1bfc00000, data 0x29ae3b8/0x2bb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401858560 unmapped: 56246272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401858560 unmapped: 56246272 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0b387800 session 0x558e0c56f860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2400 session 0x558e0a4c43c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401866752 unmapped: 56238080 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a494b000/0x0/0x1bfc00000, data 0x29ae3b8/0x2bb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a494b000/0x0/0x1bfc00000, data 0x29ae3b8/0x2bb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401866752 unmapped: 56238080 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4163318 data_alloc: 234881024 data_used: 23629824
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0bfa0800 session 0x558e0c4b6960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401866752 unmapped: 56238080 heap: 458104832 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d5a23c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2400 session 0x558e0c66cd20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 ms_handle_reset con 0x558e0a5a2800 session 0x558e0a4c4780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 59498496 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 59498496 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a413f000/0x0/0x1bfc00000, data 0x31b941a/0x33bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 59498496 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 59498496 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231240 data_alloc: 234881024 data_used: 23629824
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 59498496 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.435894966s of 10.861941338s, submitted: 58
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402112512 unmapped: 59670528 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 391 ms_handle_reset con 0x558e0ea8a400 session 0x558e0a0f10e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 391 ms_handle_reset con 0x558e0b387400 session 0x558e0c4b63c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 392 ms_handle_reset con 0x558e0b387800 session 0x558e0c6d5c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 59654144 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402685952 unmapped: 59097088 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 393 ms_handle_reset con 0x558e0a5a2400 session 0x558e0cc89c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a3680000/0x0/0x1bfc00000, data 0x3c739f7/0x3e7e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 394 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c3150e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 394 ms_handle_reset con 0x558e0b387400 session 0x558e0d5a3c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402595840 unmapped: 59187200 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a367b000/0x0/0x1bfc00000, data 0x3c756ce/0x3e82000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 394 ms_handle_reset con 0x558e0a5a2800 session 0x558e0d3ca3c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335136 data_alloc: 234881024 data_used: 23678976
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402604032 unmapped: 59179008 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402604032 unmapped: 59179008 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a367a000/0x0/0x1bfc00000, data 0x3c75730/0x3e83000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 394 ms_handle_reset con 0x558e0a5a2400 session 0x558e0c6d5c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a367a000/0x0/0x1bfc00000, data 0x3c75730/0x3e83000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [2])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 394 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c6d5680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404078592 unmapped: 57704448 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 394 ms_handle_reset con 0x558e0c67cc00 session 0x558e0ae325a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404078592 unmapped: 57704448 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 395 ms_handle_reset con 0x558e0b387400 session 0x558e0c4b6960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0c669000 session 0x558e0d1c7e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2c2f000/0x0/0x1bfc00000, data 0x42acee4/0x44bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404103168 unmapped: 57679872 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2c2f000/0x0/0x1bfc00000, data 0x42acee4/0x44bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c60c1e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4390622 data_alloc: 234881024 data_used: 23691264
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404103168 unmapped: 57679872 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2c31000/0x0/0x1bfc00000, data 0x42acee4/0x44bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404111360 unmapped: 57671680 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.528700829s of 11.021935463s, submitted: 112
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404111360 unmapped: 57671680 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404111360 unmapped: 57671680 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2c28000/0x0/0x1bfc00000, data 0x42acee4/0x44bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404111360 unmapped: 57671680 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4438014 data_alloc: 234881024 data_used: 29741056
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404111360 unmapped: 57671680 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2c28000/0x0/0x1bfc00000, data 0x42acee4/0x44bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404111360 unmapped: 57671680 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0a5a2400 session 0x558e0c56f0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0bfa1800 session 0x558e0a5490e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404111360 unmapped: 57671680 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404111360 unmapped: 57671680 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c4b6d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404119552 unmapped: 57663488 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4348106 data_alloc: 234881024 data_used: 23445504
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404119552 unmapped: 57663488 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a3256000/0x0/0x1bfc00000, data 0x3c88e82/0x3e98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0cfbdc00 session 0x558e0c314d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404119552 unmapped: 57663488 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404119552 unmapped: 57663488 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.919573784s of 10.978419304s, submitted: 25
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 59383808 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c315860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 59383808 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213608 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a3d63000/0x0/0x1bfc00000, data 0x317be82/0x338b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 59383808 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 59383808 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 59383808 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 59383808 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 59383808 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213608 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 59383808 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a3d63000/0x0/0x1bfc00000, data 0x317be82/0x338b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402161664 unmapped: 59621376 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402169856 unmapped: 59613184 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a3d63000/0x0/0x1bfc00000, data 0x317be82/0x338b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402169856 unmapped: 59613184 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402169856 unmapped: 59613184 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4259020 data_alloc: 234881024 data_used: 23191552
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.733194351s of 12.767077446s, submitted: 10
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 58572800 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0a5a2800 session 0x558e0d3cbe00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402178048 unmapped: 59604992 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a3656000/0x0/0x1bfc00000, data 0x3888e82/0x3a98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402178048 unmapped: 59604992 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a3656000/0x0/0x1bfc00000, data 0x3888e82/0x3a98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402178048 unmapped: 59604992 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402178048 unmapped: 59604992 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4319988 data_alloc: 234881024 data_used: 23191552
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402178048 unmapped: 59604992 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402178048 unmapped: 59604992 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0bfa1800 session 0x558e0d25da40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0b387400 session 0x558e0a4c43c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0c669000 session 0x558e0c66cb40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a0f10e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a3656000/0x0/0x1bfc00000, data 0x3888e82/0x3a98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c314b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0b387400 session 0x558e0cc88d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0bfa1800 session 0x558e0cc88f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403546112 unmapped: 58236928 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0c67cc00 session 0x558e0c66cd20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c6ce780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403152896 unmapped: 58630144 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0a5a2800 session 0x558e0af03a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 58613760 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2695000/0x0/0x1bfc00000, data 0x4a53e82/0x4a59000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4479734 data_alloc: 234881024 data_used: 23838720
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 58605568 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402989056 unmapped: 58793984 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402989056 unmapped: 58793984 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0b387400 session 0x558e0bf4a3c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.780861855s of 12.146421432s, submitted: 150
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0bfa1800 session 0x558e0c60da40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0f836800 session 0x558e0d1c7680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402874368 unmapped: 58908672 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402874368 unmapped: 58908672 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419807 data_alloc: 234881024 data_used: 23838720
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2d8d000/0x0/0x1bfc00000, data 0x435be82/0x4361000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 58900480 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66da40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0a5a2400 session 0x558e0c6d4f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2d8c000/0x0/0x1bfc00000, data 0x435bea5/0x4362000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 ms_handle_reset con 0x558e0a5a2800 session 0x558e0d25d2c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402907136 unmapped: 58875904 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402907136 unmapped: 58875904 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402907136 unmapped: 58875904 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 58736640 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4468140 data_alloc: 234881024 data_used: 30244864
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 397 ms_handle_reset con 0x558e0bfa1800 session 0x558e0bf4a960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 58712064 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 397 ms_handle_reset con 0x558e0c54a400 session 0x558e0c6850e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 397 ms_handle_reset con 0x558e0de69400 session 0x558e0a71c780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 398 ms_handle_reset con 0x558e0a16a000 session 0x558e0c570780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401375232 unmapped: 60407808 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a5235000/0x0/0x1bfc00000, data 0x2ce7c34/0x2ef7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 398 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c6cf2c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 60399616 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 60399616 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 60399616 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4191610 data_alloc: 234881024 data_used: 23240704
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 60399616 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 60399616 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 60399616 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a5928000/0x0/0x1bfc00000, data 0x24de89b/0x26ee000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.492330551s of 15.873806000s, submitted: 158
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 60399616 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 60399616 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213120 data_alloc: 234881024 data_used: 23236608
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402792448 unmapped: 58990592 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 58818560 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a53b3000/0x0/0x1bfc00000, data 0x2b64412/0x2d75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403865600 unmapped: 57917440 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403865600 unmapped: 57917440 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403865600 unmapped: 57917440 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4251920 data_alloc: 234881024 data_used: 23797760
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403865600 unmapped: 57917440 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403865600 unmapped: 57917440 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403865600 unmapped: 57917440 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a537e000/0x0/0x1bfc00000, data 0x2b9f412/0x2db0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403865600 unmapped: 57917440 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403865600 unmapped: 57917440 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4250408 data_alloc: 234881024 data_used: 23797760
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0a14f860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c60dc20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a16a000 session 0x558e0a0e34a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403865600 unmapped: 57917440 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0af03e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.141830444s of 12.399270058s, submitted: 121
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0d5a25a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c314000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0de69400 session 0x558e0d25c960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a16a000 session 0x558e0c6cf860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d5a3860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 57901056 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 57901056 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 57901056 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0a4c52c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4f5d000/0x0/0x1bfc00000, data 0x2fbf422/0x31d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 57901056 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2800 session 0x558e0d456f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4286704 data_alloc: 234881024 data_used: 23797760
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 57901056 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4f5d000/0x0/0x1bfc00000, data 0x2fbf422/0x31d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0bfa1800 session 0x558e0c6845a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a16a000 session 0x558e0d456000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404185088 unmapped: 57597952 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404185088 unmapped: 57597952 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404185088 unmapped: 57597952 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 57581568 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4323759 data_alloc: 234881024 data_used: 28131328
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 57581568 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4f38000/0x0/0x1bfc00000, data 0x2fe3432/0x31f6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4f38000/0x0/0x1bfc00000, data 0x2fe3432/0x31f6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 57581568 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 57581568 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4f38000/0x0/0x1bfc00000, data 0x2fe3432/0x31f6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.051894188s of 12.122841835s, submitted: 14
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2800 session 0x558e0d25cf00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4f38000/0x0/0x1bfc00000, data 0x2fe3432/0x31f6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404209664 unmapped: 57573376 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404217856 unmapped: 57565184 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4328667 data_alloc: 234881024 data_used: 28131328
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404217856 unmapped: 57565184 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404217856 unmapped: 57565184 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e104ef400 session 0x558e0d3ca000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404217856 unmapped: 57565184 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4f36000/0x0/0x1bfc00000, data 0x2fe34a4/0x31f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4f35000/0x0/0x1bfc00000, data 0x2fe3506/0x31f9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404217856 unmapped: 57565184 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404217856 unmapped: 57565184 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4344961 data_alloc: 234881024 data_used: 28147712
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a49fa000/0x0/0x1bfc00000, data 0x351e506/0x3734000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405061632 unmapped: 56721408 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 56098816 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c54a400 session 0x558e0bf4b680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a45a6000/0x0/0x1bfc00000, data 0x3972506/0x3b88000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 56066048 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406773760 unmapped: 55009280 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4588000/0x0/0x1bfc00000, data 0x3988506/0x3b9e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406773760 unmapped: 55009280 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4419281 data_alloc: 234881024 data_used: 28692480
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406773760 unmapped: 55009280 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.890554428s of 13.103409767s, submitted: 76
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0b387400 session 0x558e0d30c5a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404258816 unmapped: 57524224 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a16a000 session 0x558e08c6a960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404258816 unmapped: 57524224 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404258816 unmapped: 57524224 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a527e000/0x0/0x1bfc00000, data 0x2c9a4e3/0x2eaf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404258816 unmapped: 57524224 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a527e000/0x0/0x1bfc00000, data 0x2c9a4e3/0x2eaf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4267127 data_alloc: 218103808 data_used: 21741568
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404267008 unmapped: 57516032 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404267008 unmapped: 57516032 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404267008 unmapped: 57516032 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404267008 unmapped: 57516032 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0ad5b0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0c513a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c685c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396632064 unmapped: 65150976 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0b387400 session 0x558e0a0f12c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4120763 data_alloc: 218103808 data_used: 16838656
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606b000/0x0/0x1bfc00000, data 0x1eb1461/0x20c3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396640256 unmapped: 65142784 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.675402641s of 10.009902954s, submitted: 73
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a16a000 session 0x558e0a0f0d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c56ef00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396664832 unmapped: 65118208 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0a14f0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c56ed20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117328 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117328 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117328 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 09:20:52 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2366614009' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117328 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117328 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117328 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396673024 unmapped: 65110016 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396681216 unmapped: 65101824 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117328 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396681216 unmapped: 65101824 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396681216 unmapped: 65101824 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396681216 unmapped: 65101824 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396681216 unmapped: 65101824 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396681216 unmapped: 65101824 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117328 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396681216 unmapped: 65101824 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396681216 unmapped: 65101824 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396689408 unmapped: 65093632 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396689408 unmapped: 65093632 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396689408 unmapped: 65093632 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117328 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396697600 unmapped: 65085440 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396697600 unmapped: 65085440 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396697600 unmapped: 65085440 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396705792 unmapped: 65077248 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396705792 unmapped: 65077248 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117328 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396713984 unmapped: 65069056 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396713984 unmapped: 65069056 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396713984 unmapped: 65069056 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396713984 unmapped: 65069056 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396713984 unmapped: 65069056 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117328 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396713984 unmapped: 65069056 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 396713984 unmapped: 65069056 heap: 461783040 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c54a400 session 0x558e0c6ce000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a16a000 session 0x558e0d30c3c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d3cb860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e08c6b680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 56.678520203s of 56.812660217s, submitted: 42
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 66600960 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c030f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e104ef400 session 0x558e0c5701e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a16a000 session 0x558e0a14fa40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0ce912c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0c60cb40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c6d4780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397180928 unmapped: 75104256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e12993c00 session 0x558e0d1c7680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397180928 unmapped: 75104256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4261599 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397180928 unmapped: 75104256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397180928 unmapped: 75104256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fc8000/0x0/0x1bfc00000, data 0x2f55451/0x3166000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397180928 unmapped: 75104256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fc8000/0x0/0x1bfc00000, data 0x2f55451/0x3166000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397180928 unmapped: 75104256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397180928 unmapped: 75104256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4261599 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397180928 unmapped: 75104256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397189120 unmapped: 75096064 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a16a000 session 0x558e0bf4a3c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397189120 unmapped: 75096064 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fc8000/0x0/0x1bfc00000, data 0x2f55451/0x3166000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0af03a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397197312 unmapped: 75087872 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0c6ce780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.751578331s of 12.291110992s, submitted: 52
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2800 session 0x558e0c66cd20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397484032 unmapped: 74801152 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e12993c00 session 0x558e0cc89e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4263863 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397484032 unmapped: 74801152 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4f9d000/0x0/0x1bfc00000, data 0x2f7f461/0x3191000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397500416 unmapped: 74784768 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4f9d000/0x0/0x1bfc00000, data 0x2f7f461/0x3191000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397508608 unmapped: 74776576 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397508608 unmapped: 74776576 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397508608 unmapped: 74776576 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4312023 data_alloc: 234881024 data_used: 23662592
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 397508608 unmapped: 74776576 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4f9d000/0x0/0x1bfc00000, data 0x2f7f461/0x3191000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398647296 unmapped: 73637888 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4f9d000/0x0/0x1bfc00000, data 0x2f7f461/0x3191000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 399433728 unmapped: 72851456 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 399433728 unmapped: 72851456 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 399433728 unmapped: 72851456 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4378103 data_alloc: 234881024 data_used: 32776192
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 399433728 unmapped: 72851456 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4f9d000/0x0/0x1bfc00000, data 0x2f7f461/0x3191000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 399433728 unmapped: 72851456 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.006931305s of 13.017009735s, submitted: 2
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 399810560 unmapped: 72474624 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 400154624 unmapped: 72130560 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 400162816 unmapped: 72122368 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4405089 data_alloc: 234881024 data_used: 32952320
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 400162816 unmapped: 72122368 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e05000/0x0/0x1bfc00000, data 0x3116461/0x3328000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 400162816 unmapped: 72122368 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406528000 unmapped: 65757184 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4207000/0x0/0x1bfc00000, data 0x3d07461/0x3f19000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [0,0,0,1])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407379968 unmapped: 64905216 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407379968 unmapped: 64905216 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4508191 data_alloc: 234881024 data_used: 34316288
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407379968 unmapped: 64905216 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407379968 unmapped: 64905216 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407379968 unmapped: 64905216 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407379968 unmapped: 64905216 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4174000/0x0/0x1bfc00000, data 0x3da2461/0x3fb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.102943420s of 11.451324463s, submitted: 180
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407396352 unmapped: 64888832 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4505835 data_alloc: 234881024 data_used: 34316288
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407396352 unmapped: 64888832 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0d3cbc20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407396352 unmapped: 64888832 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407396352 unmapped: 64888832 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a415a000/0x0/0x1bfc00000, data 0x3dc14c3/0x3fd4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407396352 unmapped: 64888832 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a415a000/0x0/0x1bfc00000, data 0x3dc14c3/0x3fd4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407396352 unmapped: 64888832 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a415a000/0x0/0x1bfc00000, data 0x3dc14c3/0x3fd4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4507509 data_alloc: 234881024 data_used: 34316288
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407396352 unmapped: 64888832 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407396352 unmapped: 64888832 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407396352 unmapped: 64888832 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407396352 unmapped: 64888832 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.437378883s of 10.456358910s, submitted: 6
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407470080 unmapped: 64815104 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4522927 data_alloc: 234881024 data_used: 34316288
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0ea8a800 session 0x558e0c685c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0c4b7860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0a71d0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4084000/0x0/0x1bfc00000, data 0x3e964ec/0x40aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [1])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0ea8b800 session 0x558e0bf4b0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0ea8b800 session 0x558e0c570960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407494656 unmapped: 64790528 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4084000/0x0/0x1bfc00000, data 0x3e96525/0x40aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407494656 unmapped: 64790528 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4084000/0x0/0x1bfc00000, data 0x3e96525/0x40aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4521555 data_alloc: 234881024 data_used: 34316288
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4081000/0x0/0x1bfc00000, data 0x3e99525/0x40ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4081000/0x0/0x1bfc00000, data 0x3e99525/0x40ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0c314d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4521555 data_alloc: 234881024 data_used: 34316288
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0ea8a800 session 0x558e0c56f0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4081000/0x0/0x1bfc00000, data 0x3e99525/0x40ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0a5490e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.657582283s of 11.743950844s, submitted: 39
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0c4b6d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4529956 data_alloc: 234881024 data_used: 34934784
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a407e000/0x0/0x1bfc00000, data 0x3e9a534/0x40af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 56K writes, 211K keys, 56K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.04 MB/s#012Cumulative WAL: 56K writes, 20K syncs, 2.71 writes per sync, written: 0.21 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4720 writes, 16K keys, 4720 commit groups, 1.0 writes per commit group, ingest: 17.63 MB, 0.03 MB/s#012Interval WAL: 4720 writes, 1941 syncs, 2.43 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0ea8a800 session 0x558e0c6ce000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a407e000/0x0/0x1bfc00000, data 0x3e9a534/0x40af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x3fb4534/0x41c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4541482 data_alloc: 234881024 data_used: 34934784
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x3fb4534/0x41c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.362330437s of 13.426147461s, submitted: 20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407707648 unmapped: 64577536 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0ea8b800 session 0x558e0c66d2c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4547110 data_alloc: 234881024 data_used: 34947072
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0c030000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3ded000/0x0/0x1bfc00000, data 0x412c534/0x4341000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408068096 unmapped: 64217088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 63987712 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: mgrc ms_handle_reset ms_handle_reset con 0x558e0de69c00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3158772141
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3158772141,v1:192.168.122.100:6801/3158772141]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: mgrc handle_mgr_configure stats_period=5
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3da9000/0x0/0x1bfc00000, data 0x4170534/0x4385000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 63987712 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c67dc00 session 0x558e0d25c1e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 63987712 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c2e8c00 session 0x558e0c6d54a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408313856 unmapped: 63971328 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cf4bc00 session 0x558e0cc3e5a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4562468 data_alloc: 234881024 data_used: 34971648
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408322048 unmapped: 63963136 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3d71000/0x0/0x1bfc00000, data 0x41a8534/0x43bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408444928 unmapped: 63840256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408444928 unmapped: 63840256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3d52000/0x0/0x1bfc00000, data 0x41c7534/0x43dc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408444928 unmapped: 63840256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408444928 unmapped: 63840256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4561344 data_alloc: 234881024 data_used: 34996224
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3d52000/0x0/0x1bfc00000, data 0x41c7534/0x43dc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408444928 unmapped: 63840256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408453120 unmapped: 63832064 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408453120 unmapped: 63832064 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3d52000/0x0/0x1bfc00000, data 0x41c7534/0x43dc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.371699333s of 13.565903664s, submitted: 57
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408567808 unmapped: 63717376 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408567808 unmapped: 63717376 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4561608 data_alloc: 234881024 data_used: 34996224
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408567808 unmapped: 63717376 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408576000 unmapped: 63709184 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3d44000/0x0/0x1bfc00000, data 0x41d5534/0x43ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408584192 unmapped: 63700992 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a39fa000/0x0/0x1bfc00000, data 0x451f534/0x4734000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409632768 unmapped: 62652416 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409862144 unmapped: 62423040 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4590294 data_alloc: 234881024 data_used: 35237888
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409862144 unmapped: 62423040 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a39bd000/0x0/0x1bfc00000, data 0x4556534/0x476b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 62414848 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409878528 unmapped: 62406656 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409878528 unmapped: 62406656 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.749568939s of 10.976778984s, submitted: 48
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0d5a30e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0c4b7680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409886720 unmapped: 62398464 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593286 data_alloc: 234881024 data_used: 35057664
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a39a7000/0x0/0x1bfc00000, data 0x4572534/0x4787000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cf4bc00 session 0x558e0c56f0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3c34000/0x0/0x1bfc00000, data 0x42834c3/0x4496000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4558884 data_alloc: 234881024 data_used: 34394112
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0a0f0d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2800 session 0x558e0cc88d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.785618782s of 10.672557831s, submitted: 50
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3c35000/0x0/0x1bfc00000, data 0x42834b3/0x4495000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4322596 data_alloc: 234881024 data_used: 23928832
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407011328 unmapped: 65273856 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407011328 unmapped: 65273856 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0cc3fa40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407011328 unmapped: 65273856 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5269000/0x0/0x1bfc00000, data 0x2cb44a3/0x2ec5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407011328 unmapped: 65273856 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407011328 unmapped: 65273856 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4321871 data_alloc: 234881024 data_used: 23928832
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407019520 unmapped: 65265664 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407027712 unmapped: 65257472 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407027712 unmapped: 65257472 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407035904 unmapped: 65249280 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c67dc00 session 0x558e0c60cf00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5269000/0x0/0x1bfc00000, data 0x2cb44a3/0x2ec5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.960060120s of 10.001210213s, submitted: 20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407035904 unmapped: 65249280 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0ad5a960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4283431 data_alloc: 234881024 data_used: 23838720
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407035904 unmapped: 65249280 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407035904 unmapped: 65249280 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407035904 unmapped: 65249280 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407035904 unmapped: 65249280 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5714000/0x0/0x1bfc00000, data 0x28094a3/0x2a1a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4283431 data_alloc: 234881024 data_used: 23838720
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5714000/0x0/0x1bfc00000, data 0x28094a3/0x2a1a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f837000 session 0x558e0a0e3a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5714000/0x0/0x1bfc00000, data 0x28094a3/0x2a1a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.565967560s of 10.593852997s, submitted: 5
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4283359 data_alloc: 234881024 data_used: 23838720
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a16a000 session 0x558e0cc88f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a4c4780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5714000/0x0/0x1bfc00000, data 0x28094a3/0x2a1a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407060480 unmapped: 65224704 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398753792 unmapped: 73531392 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0b29cd20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4155389 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4155389 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4155389 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 73515008 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 73515008 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 73515008 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4155389 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 73515008 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 73515008 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 73506816 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.332891464s of 23.415493011s, submitted: 20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c67dc00 session 0x558e0c030f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f837000 session 0x558e08c6b680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cf4bc00 session 0x558e0d3cb860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0a0f10e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c56fe00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0d1c74a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c67dc00 session 0x558e0c030000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 400883712 unmapped: 71401472 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f837000 session 0x558e0c6ce000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66c3c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0d30c3c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0d457e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0cc894a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c67dc00 session 0x558e0b29de00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d5a21e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57e3000/0x0/0x1bfc00000, data 0x273b441/0x294b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4281374 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fef000/0x0/0x1bfc00000, data 0x2f2e451/0x313f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0c6d43c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4281374 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0d30dc20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fef000/0x0/0x1bfc00000, data 0x2f2e451/0x313f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0c684780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0ea8b800 session 0x558e0c6d5680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0c6d54a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401244160 unmapped: 75243520 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 75235328 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 73572352 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4412060 data_alloc: 234881024 data_used: 33955840
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fc3000/0x0/0x1bfc00000, data 0x2f58484/0x316b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4412060 data_alloc: 234881024 data_used: 33955840
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fc3000/0x0/0x1bfc00000, data 0x2f58484/0x316b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fc3000/0x0/0x1bfc00000, data 0x2f58484/0x316b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fc3000/0x0/0x1bfc00000, data 0x2f58484/0x316b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.165388107s of 19.341264725s, submitted: 39
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403546112 unmapped: 72941568 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409624576 unmapped: 66863104 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4510974 data_alloc: 234881024 data_used: 35069952
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4089000/0x0/0x1bfc00000, data 0x3a82484/0x3c95000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4522164 data_alloc: 234881024 data_used: 35672064
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4087000/0x0/0x1bfc00000, data 0x3a84484/0x3c97000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409886720 unmapped: 66600960 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409886720 unmapped: 66600960 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 66592768 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4520192 data_alloc: 234881024 data_used: 35680256
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 66592768 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.241706848s of 13.720140457s, submitted: 470
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 66592768 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409911296 unmapped: 66576384 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a71c5a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0a0e3e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4086000/0x0/0x1bfc00000, data 0x3a85484/0x3c98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a3400 session 0x558e0d1c6780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0d07c800 session 0x558e0c60c1e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408666112 unmapped: 67821568 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 67813376 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4331577 data_alloc: 234881024 data_used: 26632192
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0af021e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0d457c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 67813376 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a0f12c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4172895 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4172895 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0d07a800 session 0x558e0d1c6960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 72425472 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 72425472 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4172895 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 72425472 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 72425472 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.704429626s of 20.876094818s, submitted: 60
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a3400 session 0x558e0c684b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404267008 unmapped: 80101376 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404267008 unmapped: 80101376 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404267008 unmapped: 80101376 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4281101 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404275200 unmapped: 80093184 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404283392 unmapped: 80084992 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404283392 unmapped: 80084992 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4db1000/0x0/0x1bfc00000, data 0x2d5d441/0x2f6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404283392 unmapped: 80084992 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404299776 unmapped: 80068608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4281101 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404299776 unmapped: 80068608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404299776 unmapped: 80068608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404299776 unmapped: 80068608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0d07c800 session 0x558e0c314960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.310799599s of 11.459929466s, submitted: 13
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404242432 unmapped: 80125952 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a3400 session 0x558e0c56e1e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d29000/0x0/0x1bfc00000, data 0x2de5441/0x2ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c315680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0bf4b860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404242432 unmapped: 80125952 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0c5712c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4291355 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404242432 unmapped: 80125952 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404242432 unmapped: 80125952 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d28000/0x0/0x1bfc00000, data 0x2de5451/0x2ff6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405372928 unmapped: 78995456 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d28000/0x0/0x1bfc00000, data 0x2de5451/0x2ff6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405372928 unmapped: 78995456 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405372928 unmapped: 78995456 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4403355 data_alloc: 234881024 data_used: 30060544
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405372928 unmapped: 78995456 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405381120 unmapped: 78987264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405381120 unmapped: 78987264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0d1c7a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405381120 unmapped: 78987264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d28000/0x0/0x1bfc00000, data 0x2de5451/0x2ff6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405381120 unmapped: 78987264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4403355 data_alloc: 234881024 data_used: 30060544
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405381120 unmapped: 78987264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405381120 unmapped: 78987264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d28000/0x0/0x1bfc00000, data 0x2de5451/0x2ff6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.166716576s of 14.232734680s, submitted: 5
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 78741504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d28000/0x0/0x1bfc00000, data 0x2de5451/0x2ff6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [0,0,1,4])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d28000/0x0/0x1bfc00000, data 0x2de5451/0x2ff6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407838720 unmapped: 76529664 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a44f9000/0x0/0x1bfc00000, data 0x3614451/0x3825000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 76095488 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4487753 data_alloc: 234881024 data_used: 30928896
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 76095488 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 76095488 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 76095488 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 76095488 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 76095488 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4483793 data_alloc: 234881024 data_used: 30941184
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4473000/0x0/0x1bfc00000, data 0x369a451/0x38ab000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 75956224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 75956224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0a0e2960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 75956224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 75956224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a44d9000/0x0/0x1bfc00000, data 0x3634451/0x3845000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 75956224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4473185 data_alloc: 234881024 data_used: 30420992
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 75956224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.639778137s of 13.917524338s, submitted: 75
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409460736 unmapped: 74907648 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a44d4000/0x0/0x1bfc00000, data 0x3639451/0x384a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0ce90780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409460736 unmapped: 74907648 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f836c00 session 0x558e0d25d860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0c571680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409460736 unmapped: 74907648 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406626304 unmapped: 77742080 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4187461 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a3400 session 0x558e0a4c45a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186736 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e098d9a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.108777046s of 10.413861275s, submitted: 17
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0ae33c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406642688 unmapped: 77725696 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f836c00 session 0x558e0c56f4a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186039 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186039 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406659072 unmapped: 77709312 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406659072 unmapped: 77709312 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 77701120 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 77701120 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186039 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 77701120 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 77701120 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 77701120 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 77701120 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406675456 unmapped: 77692928 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186039 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0ae325a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0cf0ed20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c4b6f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406675456 unmapped: 77692928 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0a5490e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.797447205s of 19.174055099s, submitted: 9
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406683648 unmapped: 77684736 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0a548b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f836c00 session 0x558e0ae16b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0ae321e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0cc3f4a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0d25de00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57ef000/0x0/0x1bfc00000, data 0x231f3ef/0x252f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406683648 unmapped: 77684736 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406683648 unmapped: 77684736 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406691840 unmapped: 77676544 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4226107 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57ef000/0x0/0x1bfc00000, data 0x231f3ef/0x252f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406691840 unmapped: 77676544 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406691840 unmapped: 77676544 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406700032 unmapped: 77668352 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406700032 unmapped: 77668352 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406700032 unmapped: 77668352 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0c6d4780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4228757 data_alloc: 218103808 data_used: 16834560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407052288 unmapped: 77316096 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57c5000/0x0/0x1bfc00000, data 0x23493ef/0x2559000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407052288 unmapped: 77316096 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.692783356s of 10.731881142s, submitted: 9
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0a0f0d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0b3d2800 session 0x558e0d30dc20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d1c63c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0c314000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0ce912c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4267989 data_alloc: 218103808 data_used: 21397504
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0d1c6f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cf4d400 session 0x558e0c6d41e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0ce90000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57be000/0x0/0x1bfc00000, data 0x234f451/0x2560000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0a14f0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4269287 data_alloc: 218103808 data_used: 21422080
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57bd000/0x0/0x1bfc00000, data 0x234f461/0x2561000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406708224 unmapped: 77660160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57bd000/0x0/0x1bfc00000, data 0x234f461/0x2561000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406708224 unmapped: 77660160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57bd000/0x0/0x1bfc00000, data 0x234f461/0x2561000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406708224 unmapped: 77660160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.362232208s of 11.464373589s, submitted: 34
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406937600 unmapped: 77430784 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5589000/0x0/0x1bfc00000, data 0x2583461/0x2795000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407609344 unmapped: 76759040 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4317003 data_alloc: 218103808 data_used: 21434368
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407912448 unmapped: 76455936 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407240704 unmapped: 77127680 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a514b000/0x0/0x1bfc00000, data 0x29c1461/0x2bd3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407707648 unmapped: 76660736 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407707648 unmapped: 76660736 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407715840 unmapped: 76652544 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4339349 data_alloc: 234881024 data_used: 22343680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408764416 unmapped: 75603968 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408920064 unmapped: 75448320 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e75000/0x0/0x1bfc00000, data 0x2c89461/0x2e9b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408928256 unmapped: 75440128 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.758886337s of 10.001246452s, submitted: 100
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e69000/0x0/0x1bfc00000, data 0x2c9b461/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4364119 data_alloc: 234881024 data_used: 22368256
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e69000/0x0/0x1bfc00000, data 0x2c9b461/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e69000/0x0/0x1bfc00000, data 0x2c9b461/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4364119 data_alloc: 234881024 data_used: 22368256
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e69000/0x0/0x1bfc00000, data 0x2c9b461/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e69000/0x0/0x1bfc00000, data 0x2c9b461/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4364119 data_alloc: 234881024 data_used: 22368256
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e69000/0x0/0x1bfc00000, data 0x2c9b461/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.248374939s of 15.257328033s, submitted: 11
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409034752 unmapped: 75333632 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409034752 unmapped: 75333632 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4363399 data_alloc: 234881024 data_used: 22360064
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0c684d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0a4c45a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409034752 unmapped: 75333632 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f836c00 session 0x558e0c66cd20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0d5a34a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409042944 unmapped: 75325440 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e71000/0x0/0x1bfc00000, data 0x2c9b451/0x2eac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [0,0,0,0,1])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0bf4b0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409042944 unmapped: 75325440 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409042944 unmapped: 75325440 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0c56fe00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409042944 unmapped: 75325440 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 400 ms_handle_reset con 0x558e0cccf800 session 0x558e0a71cf00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 400 ms_handle_reset con 0x558e0f836c00 session 0x558e0b29d860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 400 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a71da40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231339 data_alloc: 218103808 data_used: 16879616
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 400 ms_handle_reset con 0x558e0c667c00 session 0x558e0c6cf0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409067520 unmapped: 75300864 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 400 ms_handle_reset con 0x558e0cccf800 session 0x558e0c4b7a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a59bd000/0x0/0x1bfc00000, data 0x214e0aa/0x2360000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409075712 unmapped: 75292672 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409075712 unmapped: 75292672 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a59bd000/0x0/0x1bfc00000, data 0x214e0aa/0x2360000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 75538432 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 75530240 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4232271 data_alloc: 218103808 data_used: 16965632
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 75530240 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 75530240 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 75530240 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a59bd000/0x0/0x1bfc00000, data 0x214e0aa/0x2360000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 75530240 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408846336 unmapped: 75522048 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 400 ms_handle_reset con 0x558e11154c00 session 0x558e0d5a23c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.663591385s of 16.776542664s, submitted: 48
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231687 data_alloc: 218103808 data_used: 16965632
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 401 ms_handle_reset con 0x558e0b340000 session 0x558e0d1c6b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 401 heartbeat osd_stat(store_statfs(0x1a59be000/0x0/0x1bfc00000, data 0x214e0aa/0x2360000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408862720 unmapped: 75505664 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 401 heartbeat osd_stat(store_statfs(0x1a59bb000/0x0/0x1bfc00000, data 0x214fd57/0x2363000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4239553 data_alloc: 218103808 data_used: 17051648
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408862720 unmapped: 75505664 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408862720 unmapped: 75505664 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408862720 unmapped: 75505664 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408846336 unmapped: 75522048 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a59b7000/0x0/0x1bfc00000, data 0x2151896/0x2366000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408846336 unmapped: 75522048 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4241247 data_alloc: 218103808 data_used: 17059840
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a59b7000/0x0/0x1bfc00000, data 0x2151896/0x2366000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a59b7000/0x0/0x1bfc00000, data 0x2151896/0x2366000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a59b7000/0x0/0x1bfc00000, data 0x2151896/0x2366000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.227296829s of 12.311328888s, submitted: 44
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a59b8000/0x0/0x1bfc00000, data 0x2151896/0x2366000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0f836c00 session 0x558e0c6cfc20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e109db400 session 0x558e0b29da40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240071 data_alloc: 218103808 data_used: 17059840
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0b340000 session 0x558e0d5a2960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213106 data_alloc: 218103808 data_used: 16859136
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 75481088 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213106 data_alloc: 218103808 data_used: 16859136
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 75481088 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 75481088 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 75481088 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 75481088 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 75481088 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213106 data_alloc: 218103808 data_used: 16859136
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408895488 unmapped: 75472896 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408895488 unmapped: 75472896 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408895488 unmapped: 75472896 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408895488 unmapped: 75472896 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408895488 unmapped: 75472896 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213106 data_alloc: 218103808 data_used: 16859136
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408895488 unmapped: 75472896 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.005113602s of 24.130119324s, submitted: 35
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c6cfe00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 75538432 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0c667c00 session 0x558e0d457680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a71d0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408625152 unmapped: 75743232 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408633344 unmapped: 75735040 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408633344 unmapped: 75735040 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5167000/0x0/0x1bfc00000, data 0x29a3886/0x2bb7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0b340000 session 0x558e0a0f1860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301691 data_alloc: 218103808 data_used: 16859136
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0c667c00 session 0x558e0d25de00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408633344 unmapped: 75735040 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0f836c00 session 0x558e0d1c74a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e109db400 session 0x558e0cc88d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408633344 unmapped: 75735040 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408444928 unmapped: 75923456 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5167000/0x0/0x1bfc00000, data 0x29a3886/0x2bb7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4380091 data_alloc: 234881024 data_used: 27590656
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5167000/0x0/0x1bfc00000, data 0x29a3886/0x2bb7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5167000/0x0/0x1bfc00000, data 0x29a3886/0x2bb7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4380091 data_alloc: 234881024 data_used: 27590656
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5167000/0x0/0x1bfc00000, data 0x29a3886/0x2bb7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.854196548s of 17.023889542s, submitted: 38
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 75177984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410050560 unmapped: 74317824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a4ce6000/0x0/0x1bfc00000, data 0x2e1e886/0x3032000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4431225 data_alloc: 234881024 data_used: 28008448
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4431865 data_alloc: 234881024 data_used: 28090368
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a3a98000/0x0/0x1bfc00000, data 0x2ed2886/0x30e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.631714821s of 11.070816994s, submitted: 104
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a3a97000/0x0/0x1bfc00000, data 0x2ed3886/0x30e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4422481 data_alloc: 234881024 data_used: 28094464
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a3a95000/0x0/0x1bfc00000, data 0x2ed4886/0x30e8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 403 ms_handle_reset con 0x558e0c667c00 session 0x558e0a71c5a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 417325056 unmapped: 67043328 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 403 ms_handle_reset con 0x558e0f836c00 session 0x558e08c6b680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 404 ms_handle_reset con 0x558e0cccf800 session 0x558e0c314b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413450240 unmapped: 70918144 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e11154c00 session 0x558e0c685a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 70893568 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4513340 data_alloc: 234881024 data_used: 28905472
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 405 heartbeat osd_stat(store_statfs(0x1a32c0000/0x0/0x1bfc00000, data 0x36a3e11/0x38bc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 70885376 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e0ccce000 session 0x558e0c6ce3c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e0c667c00 session 0x558e0c4b6000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e0cccf800 session 0x558e0c6d5c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410288128 unmapped: 74080256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410296320 unmapped: 74072064 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 74063872 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 74063872 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4509472 data_alloc: 234881024 data_used: 28901376
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.822659492s of 11.073749542s, submitted: 77
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e0a2bf000 session 0x558e0bf4a5a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 74063872 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 405 heartbeat osd_stat(store_statfs(0x1a32b5000/0x0/0x1bfc00000, data 0x36b0e11/0x38c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 74063872 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e0f836c00 session 0x558e0a0f1c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e11154c00 session 0x558e0d30d680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d1c6960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 406 ms_handle_reset con 0x558e0c667c00 session 0x558e0c4b6b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410329088 unmapped: 74039296 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 406 ms_handle_reset con 0x558e0f836c00 session 0x558e0d5a2000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 406 ms_handle_reset con 0x558e0cccf800 session 0x558e0ce90780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 406 ms_handle_reset con 0x558e11154c00 session 0x558e0a5492c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 406 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c56f0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 406 ms_handle_reset con 0x558e0c667c00 session 0x558e0c6d4d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4509750 data_alloc: 234881024 data_used: 28909568
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 406 heartbeat osd_stat(store_statfs(0x1a32b0000/0x0/0x1bfc00000, data 0x36b2960/0x38cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 406 heartbeat osd_stat(store_statfs(0x1a32b0000/0x0/0x1bfc00000, data 0x36b2960/0x38cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4509822 data_alloc: 234881024 data_used: 28909568
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.139338493s of 10.163581848s, submitted: 14
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e0cccf800 session 0x558e0c314780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410353664 unmapped: 74014720 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a32ad000/0x0/0x1bfc00000, data 0x36b45b9/0x38d0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e0f836c00 session 0x558e0d3cba40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410353664 unmapped: 74014720 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e0cf4c400 session 0x558e0d1c61e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 74006528 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410370048 unmapped: 73998336 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e0c667c00 session 0x558e0c030000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4514644 data_alloc: 234881024 data_used: 29429760
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e0cccf800 session 0x558e0bf4bc20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410370048 unmapped: 73998336 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a32ad000/0x0/0x1bfc00000, data 0x36b45b9/0x38d0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410370048 unmapped: 73998336 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410378240 unmapped: 73990144 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d5a3860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410443776 unmapped: 73924608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410443776 unmapped: 73924608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4543604 data_alloc: 234881024 data_used: 32706560
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410443776 unmapped: 73924608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a32ad000/0x0/0x1bfc00000, data 0x36b45b9/0x38d0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410443776 unmapped: 73924608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.790160179s of 11.805708885s, submitted: 4
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e11155400 session 0x558e0cc89a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410460160 unmapped: 73908224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 408 ms_handle_reset con 0x558e0ccce400 session 0x558e0c66d680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410468352 unmapped: 73900032 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410468352 unmapped: 73900032 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4572762 data_alloc: 234881024 data_used: 32714752
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 408 heartbeat osd_stat(store_statfs(0x1a32aa000/0x0/0x1bfc00000, data 0x36b6266/0x38d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410468352 unmapped: 73900032 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410501120 unmapped: 73867264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410501120 unmapped: 73867264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 68444160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 68444160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4614288 data_alloc: 234881024 data_used: 33460224
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a2f9c000/0x0/0x1bfc00000, data 0x39c2da5/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 68444160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a2f9c000/0x0/0x1bfc00000, data 0x39c2da5/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 68444160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 68444160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 68444160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416186368 unmapped: 68182016 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4616848 data_alloc: 234881024 data_used: 33906688
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416186368 unmapped: 68182016 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416186368 unmapped: 68182016 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.853892326s of 14.974067688s, submitted: 36
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a2f9c000/0x0/0x1bfc00000, data 0x39c2da5/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 70172672 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4632678 data_alloc: 234881024 data_used: 34930688
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x39c2da5/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4633158 data_alloc: 234881024 data_used: 34942976
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 409 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c60c1e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x39c2da5/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [0,1])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 409 ms_handle_reset con 0x558e0c667c00 session 0x558e0a71d0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a346e000/0x0/0x1bfc00000, data 0x3113d43/0x3331000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a346e000/0x0/0x1bfc00000, data 0x3113d43/0x3331000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524596 data_alloc: 234881024 data_used: 32493568
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a346e000/0x0/0x1bfc00000, data 0x3113d43/0x3331000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.275312424s of 16.359561920s, submitted: 35
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525396 data_alloc: 234881024 data_used: 32514048
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 409 ms_handle_reset con 0x558e0cccf800 session 0x558e0ad5a780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a346e000/0x0/0x1bfc00000, data 0x3113d43/0x3331000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 410 ms_handle_reset con 0x558e11155400 session 0x558e0d1c65a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 410 ms_handle_reset con 0x558e0cd2d000 session 0x558e0a0f03c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 410 ms_handle_reset con 0x558e0b3d3400 session 0x558e0a4c4b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423624704 unmapped: 60743680 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 410 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c60d2c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a177a000/0x0/0x1bfc00000, data 0x51e49fe/0x5404000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 410 ms_handle_reset con 0x558e0c667c00 session 0x558e0c5123c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 419741696 unmapped: 77234176 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 412 ms_handle_reset con 0x558e0cccf800 session 0x558e0c6cf680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 419758080 unmapped: 77217792 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 412 ms_handle_reset con 0x558e11155400 session 0x558e0d30d4a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 419758080 unmapped: 77217792 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 412 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a548960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 412 ms_handle_reset con 0x558e0b3d3400 session 0x558e0a4c4780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 78233600 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4764253 data_alloc: 234881024 data_used: 32800768
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 78233600 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 412 heartbeat osd_stat(store_statfs(0x1a1549000/0x0/0x1bfc00000, data 0x54142be/0x5635000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 78217216 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 413 ms_handle_reset con 0x558e0c667c00 session 0x558e098d9a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416489472 unmapped: 80486400 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 413 ms_handle_reset con 0x558e0f836c00 session 0x558e0d5a3c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.862730026s of 10.283346176s, submitted: 116
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416530432 unmapped: 80445440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 413 ms_handle_reset con 0x558e0cccf800 session 0x558e0d1c65a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416530432 unmapped: 80445440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4489765 data_alloc: 234881024 data_used: 29618176
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416530432 unmapped: 80445440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416530432 unmapped: 80445440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3b49000/0x0/0x1bfc00000, data 0x2e11c40/0x3034000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66d680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416555008 unmapped: 80420864 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0b340000 session 0x558e0a549c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416555008 unmapped: 80420864 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0b3d3400 session 0x558e0c685a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 88064000 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4291724 data_alloc: 218103808 data_used: 16338944
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 88064000 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0c667c00 session 0x558e0cc88d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a4a8e000/0x0/0x1bfc00000, data 0x1ecd7c3/0x20f0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0f836c00 session 0x558e0cc3ed20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c684780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0b340000 session 0x558e0c6ce5a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409288704 unmapped: 87687168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0b3d3400 session 0x558e0bf4b860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409288704 unmapped: 87687168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409288704 unmapped: 87687168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409288704 unmapped: 87687168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4312252 data_alloc: 218103808 data_used: 16338944
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a48b2000/0x0/0x1bfc00000, data 0x20a97c3/0x22cc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409288704 unmapped: 87687168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409288704 unmapped: 87687168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.552254677s of 13.795706749s, submitted: 89
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4320330 data_alloc: 218103808 data_used: 16969728
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a48ae000/0x0/0x1bfc00000, data 0x20ab302/0x22cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a48ae000/0x0/0x1bfc00000, data 0x20ab302/0x22cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a48ae000/0x0/0x1bfc00000, data 0x20ab302/0x22cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a48ae000/0x0/0x1bfc00000, data 0x20ab302/0x22cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4320330 data_alloc: 218103808 data_used: 16969728
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a48ae000/0x0/0x1bfc00000, data 0x20ab302/0x22cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a48ae000/0x0/0x1bfc00000, data 0x20ab302/0x22cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.908812523s of 11.926033020s, submitted: 12
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409862144 unmapped: 87113728 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4366102 data_alloc: 218103808 data_used: 17080320
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 87105536 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 84574208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 84574208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 84574208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a303e000/0x0/0x1bfc00000, data 0x277b302/0x299f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 84574208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4378202 data_alloc: 218103808 data_used: 17108992
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 84574208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3020000/0x0/0x1bfc00000, data 0x279a302/0x29be000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4374966 data_alloc: 218103808 data_used: 17108992
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.472076416s of 13.670102119s, submitted: 55
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3013000/0x0/0x1bfc00000, data 0x27a7302/0x29cb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 84762624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4375226 data_alloc: 218103808 data_used: 17108992
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3013000/0x0/0x1bfc00000, data 0x27a7302/0x29cb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 84762624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 84762624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 84762624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3013000/0x0/0x1bfc00000, data 0x27a7302/0x29cb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 84762624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 84754432 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4377610 data_alloc: 218103808 data_used: 17108992
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 84754432 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e12992800 session 0x558e0cc3fa40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 84754432 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 84754432 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 84754432 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0c667c00 session 0x558e0c56eb40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a300f000/0x0/0x1bfc00000, data 0x27aa312/0x29cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 84746240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4377770 data_alloc: 218103808 data_used: 17113088
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 84746240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 84746240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 84746240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a300f000/0x0/0x1bfc00000, data 0x27aa312/0x29cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 84746240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 84746240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4377770 data_alloc: 218103808 data_used: 17113088
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 84738048 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c4b7a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.976377487s of 19.017902374s, submitted: 5
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 84738048 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412246016 unmapped: 84729856 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0b340000 session 0x558e0a0e2960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0b3d3400 session 0x558e0d30d0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e12992800 session 0x558e0ce90b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a300c000/0x0/0x1bfc00000, data 0x27ab322/0x29d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412246016 unmapped: 84729856 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0de69000 session 0x558e0c6d41e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412246016 unmapped: 84729856 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305796 data_alloc: 218103808 data_used: 16351232
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c6854a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0f496400 session 0x558e0c60d680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0cd2c800 session 0x558e0c6d4780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38e9000/0x0/0x1bfc00000, data 0x1ecf322/0x20f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0b340000 session 0x558e0c4b70e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304511 data_alloc: 218103808 data_used: 16351232
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0b3d3400 session 0x558e0d3cad20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.843249321s of 11.028274536s, submitted: 15
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c56e960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x1ecf302/0x20f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303631 data_alloc: 218103808 data_used: 16347136
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x1ecf302/0x20f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0b340000 session 0x558e0d3cbe00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x1ecf302/0x20f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0cd2c800 session 0x558e08c6be00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38ea000/0x0/0x1bfc00000, data 0x1ecf364/0x20f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305413 data_alloc: 218103808 data_used: 16347136
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0f496400 session 0x558e0c5714a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412286976 unmapped: 84688896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38ea000/0x0/0x1bfc00000, data 0x1ecf364/0x20f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [2,0,0,0,0,1])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0de69000 session 0x558e0cc3f2c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66de00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412688384 unmapped: 84287488 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.943180084s of 10.085391045s, submitted: 40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412696576 unmapped: 84279296 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e0b340000 session 0x558e0bf4bc20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e0cd2c800 session 0x558e0af02960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 84303872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2cc4000/0x0/0x1bfc00000, data 0x26e3fbd/0x290a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 84303872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4407523 data_alloc: 218103808 data_used: 16355328
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a287a000/0x0/0x1bfc00000, data 0x2b2dfbd/0x2d54000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a287a000/0x0/0x1bfc00000, data 0x2b2dfbd/0x2d54000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412680192 unmapped: 84295680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a287a000/0x0/0x1bfc00000, data 0x2b2dfbd/0x2d54000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412680192 unmapped: 84295680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e0f496400 session 0x558e0d3cb860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409100288 unmapped: 87875584 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e12992800 session 0x558e0d1c7e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2856000/0x0/0x1bfc00000, data 0x2b51fbd/0x2d78000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409100288 unmapped: 87875584 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409116672 unmapped: 87859200 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446431 data_alloc: 218103808 data_used: 20959232
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 87752704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 87752704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 87752704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2855000/0x0/0x1bfc00000, data 0x2b5201f/0x2d79000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 87752704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 87752704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4468351 data_alloc: 234881024 data_used: 24096768
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.258309364s of 12.342646599s, submitted: 19
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 87752704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2855000/0x0/0x1bfc00000, data 0x2b5201f/0x2d79000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 85942272 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411328512 unmapped: 85647360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411328512 unmapped: 85647360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412581888 unmapped: 84393984 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525595 data_alloc: 234881024 data_used: 28930048
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e0cd2c800 session 0x558e0c571a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 84852736 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e0f496400 session 0x558e0d30d4a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a245e000/0x0/0x1bfc00000, data 0x2f4901f/0x3170000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412426240 unmapped: 84549632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412426240 unmapped: 84549632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e12992800 session 0x558e0d1c65a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2426000/0x0/0x1bfc00000, data 0x2f8101f/0x31a8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e0ea8a000 session 0x558e0c56e960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412434432 unmapped: 84541440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412434432 unmapped: 84541440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4542345 data_alloc: 234881024 data_used: 29458432
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e15232c00 session 0x558e0d1c6960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.196988106s of 10.386419296s, submitted: 61
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412434432 unmapped: 84541440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 418 ms_handle_reset con 0x558e0cd2c800 session 0x558e0ce91860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 418 ms_handle_reset con 0x558e0ea8a000 session 0x558e0a0e3680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a28ce000/0x0/0x1bfc00000, data 0x2ad8c08/0x2cff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4474491 data_alloc: 234881024 data_used: 24653824
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 418 ms_handle_reset con 0x558e0f496400 session 0x558e0d3ca960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a28ce000/0x0/0x1bfc00000, data 0x2ad8c08/0x2cff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4474491 data_alloc: 234881024 data_used: 24653824
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a28ce000/0x0/0x1bfc00000, data 0x2ad8c08/0x2cff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.505303383s of 11.614216805s, submitted: 31
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e12992800 session 0x558e0cc88f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0b340000 session 0x558e0bf4a780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a4c45a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0cd2c800 session 0x558e0b29c000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a28cb000/0x0/0x1bfc00000, data 0x2ada747/0x2d02000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0ea8a000 session 0x558e0d3cbe00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405253 data_alloc: 218103808 data_used: 16371712
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405253 data_alloc: 218103808 data_used: 16371712
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405253 data_alloc: 218103808 data_used: 16371712
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0f496400 session 0x558e0c4b7680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e12992800 session 0x558e0cc89e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0a2bf000 session 0x558e0cc89c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0cd2c800 session 0x558e0c314b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405253 data_alloc: 218103808 data_used: 16371712
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4477893 data_alloc: 234881024 data_used: 26533888
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4477893 data_alloc: 234881024 data_used: 26533888
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.765428543s of 28.864784241s, submitted: 33
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [0,0,3,1,1])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 420429824 unmapped: 76546048 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 77209600 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0f496400 session 0x558e0c60cf00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0b341800 session 0x558e0c6d5a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a14e5000/0x0/0x1bfc00000, data 0x3ec07a9/0x40e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a14e5000/0x0/0x1bfc00000, data 0x3ec07a9/0x40e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4660750 data_alloc: 234881024 data_used: 26886144
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e104ef000 session 0x558e0af02f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a14c3000/0x0/0x1bfc00000, data 0x3ee27a9/0x410b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c684f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0b341800 session 0x558e0cc3e5a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4657126 data_alloc: 234881024 data_used: 26886144
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0cd2c800 session 0x558e0a0f14a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a14c3000/0x0/0x1bfc00000, data 0x3ee27a9/0x410b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.701116562s of 11.043125153s, submitted: 137
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421076992 unmapped: 75898880 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 420 ms_handle_reset con 0x558e0f496400 session 0x558e0c60d680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422682624 unmapped: 74293248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a14bb000/0x0/0x1bfc00000, data 0x3ee605b/0x4111000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422682624 unmapped: 74293248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422690816 unmapped: 74285056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4740938 data_alloc: 234881024 data_used: 35729408
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422690816 unmapped: 74285056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a14b8000/0x0/0x1bfc00000, data 0x40a105b/0x4116000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422690816 unmapped: 74285056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422690816 unmapped: 74285056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422690816 unmapped: 74285056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422690816 unmapped: 74285056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4740938 data_alloc: 234881024 data_used: 35729408
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a14b8000/0x0/0x1bfc00000, data 0x40a105b/0x4116000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0c618c00 session 0x558e0cf0f860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422699008 unmapped: 74276864 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422699008 unmapped: 74276864 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.234736443s of 11.276307106s, submitted: 10
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422699008 unmapped: 74276864 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427294720 unmapped: 69681152 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427311104 unmapped: 69664768 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4863206 data_alloc: 234881024 data_used: 36593664
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0615000/0x0/0x1bfc00000, data 0x4f4306b/0x4fb9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0615000/0x0/0x1bfc00000, data 0x4f4306b/0x4fb9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427327488 unmapped: 69648384 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c6d4b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0b341800 session 0x558e0ad5b0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427327488 unmapped: 69648384 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0c618c00 session 0x558e0c685a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427335680 unmapped: 69640192 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0cd2c800 session 0x558e0ae32d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427343872 unmapped: 69632000 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427343872 unmapped: 69632000 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4871830 data_alloc: 234881024 data_used: 36737024
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427532288 unmapped: 69443584 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a060b000/0x0/0x1bfc00000, data 0x4f4d06b/0x4fc3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427532288 unmapped: 69443584 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.665211678s of 10.000015259s, submitted: 103
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0605000/0x0/0x1bfc00000, data 0x4f5306b/0x4fc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e104ef000 session 0x558e0c56ed20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4873286 data_alloc: 234881024 data_used: 36773888
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0605000/0x0/0x1bfc00000, data 0x4f5306b/0x4fc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4873446 data_alloc: 234881024 data_used: 36798464
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427687936 unmapped: 69287936 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0605000/0x0/0x1bfc00000, data 0x4f5306b/0x4fc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4890086 data_alloc: 251658240 data_used: 39841792
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 59K writes, 225K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.04 MB/s#012Cumulative WAL: 59K writes, 22K syncs, 2.68 writes per sync, written: 0.22 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3818 writes, 14K keys, 3818 commit groups, 1.0 writes per commit group, ingest: 14.28 MB, 0.02 MB/s#012Interval WAL: 3818 writes, 1605 syncs, 2.38 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e08aff610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e08aff610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction,
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.779505730s of 14.846899033s, submitted: 3
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428474368 unmapped: 68501504 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428728320 unmapped: 68247552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4908390 data_alloc: 251658240 data_used: 41259008
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0599000/0x0/0x1bfc00000, data 0x4fc006b/0x5035000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0599000/0x0/0x1bfc00000, data 0x4fc006b/0x5035000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4908770 data_alloc: 251658240 data_used: 41263104
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0598000/0x0/0x1bfc00000, data 0x4fc106b/0x5036000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0598000/0x0/0x1bfc00000, data 0x4fc106b/0x5036000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428425216 unmapped: 68550656 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.186706543s of 11.215800285s, submitted: 11
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428425216 unmapped: 68550656 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a0e2000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428457984 unmapped: 68517888 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649418 data_alloc: 234881024 data_used: 30011392
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0b341800 session 0x558e0c315680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0596000/0x0/0x1bfc00000, data 0x4fc206b/0x5037000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428457984 unmapped: 68517888 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428457984 unmapped: 68517888 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428457984 unmapped: 68517888 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a12b1000/0x0/0x1bfc00000, data 0x3879009/0x38ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428457984 unmapped: 68517888 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649418 data_alloc: 234881024 data_used: 30011392
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0f496400 session 0x558e0d30d680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0c618c00 session 0x558e0c56e780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1ce1000/0x0/0x1bfc00000, data 0x3879009/0x38ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1ce1000/0x0/0x1bfc00000, data 0x3879009/0x38ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649066 data_alloc: 234881024 data_used: 30011392
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0cd2c800 session 0x558e0c571860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.735739708s of 13.791164398s, submitted: 25
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c6d4780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1ce1000/0x0/0x1bfc00000, data 0x3878ff9/0x38ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1d51000/0x0/0x1bfc00000, data 0x3809ff9/0x387d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4641654 data_alloc: 234881024 data_used: 30007296
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1d51000/0x0/0x1bfc00000, data 0x3809ff9/0x387d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0b341800 session 0x558e08c6a1e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1d51000/0x0/0x1bfc00000, data 0x3809ff9/0x387d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1d51000/0x0/0x1bfc00000, data 0x3809ff9/0x387d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 429522944 unmapped: 67452928 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a1d4e000/0x0/0x1bfc00000, data 0x3807ca6/0x387f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 422 ms_handle_reset con 0x558e0ea8a000 session 0x558e0c6d5680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 429522944 unmapped: 67452928 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 429522944 unmapped: 67452928 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 422 ms_handle_reset con 0x558e0c618c00 session 0x558e0d457680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 429522944 unmapped: 67452928 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4637380 data_alloc: 234881024 data_used: 30015488
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 429522944 unmapped: 67452928 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a1d50000/0x0/0x1bfc00000, data 0x3651ca6/0x387d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 429531136 unmapped: 67444736 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.924095154s of 10.024603844s, submitted: 47
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 423 ms_handle_reset con 0x558e0f496400 session 0x558e0d1c6780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423264256 unmapped: 73711616 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423264256 unmapped: 73711616 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a34c5000/0x0/0x1bfc00000, data 0x1edb7e5/0x2108000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423264256 unmapped: 73711616 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4371570 data_alloc: 218103808 data_used: 16392192
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423264256 unmapped: 73711616 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423264256 unmapped: 73711616 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 424 ms_handle_reset con 0x558e0a2bf000 session 0x558e0bf4b0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423280640 unmapped: 73695232 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 424 ms_handle_reset con 0x558e0b341800 session 0x558e0d25cf00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 424 ms_handle_reset con 0x558e0c618c00 session 0x558e0d457e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 424 ms_handle_reset con 0x558e0ea8a000 session 0x558e0d3ca000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 424 ms_handle_reset con 0x558e1cd82800 session 0x558e0d456960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a34c2000/0x0/0x1bfc00000, data 0x1edd492/0x210b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426958848 unmapped: 70017024 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 424 ms_handle_reset con 0x558e0a2bf000 session 0x558e0ae33860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a2f98000/0x0/0x1bfc00000, data 0x2407492/0x2635000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424329216 unmapped: 72646656 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417202 data_alloc: 218103808 data_used: 16392192
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424329216 unmapped: 72646656 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424337408 unmapped: 72638464 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424345600 unmapped: 72630272 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424345600 unmapped: 72630272 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424345600 unmapped: 72630272 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4420000 data_alloc: 218103808 data_used: 16392192
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2f95000/0x0/0x1bfc00000, data 0x2408fd1/0x2638000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424345600 unmapped: 72630272 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424345600 unmapped: 72630272 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0b341800 session 0x558e0a71cf00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424353792 unmapped: 72622080 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 72589312 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.533414841s of 16.642957687s, submitted: 39
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0ea8a000 session 0x558e0c570b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2f95000/0x0/0x1bfc00000, data 0x2408fd1/0x2638000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 72589312 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4459394 data_alloc: 234881024 data_used: 21716992
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 72589312 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 72589312 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c67dc00 session 0x558e0c684b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0ea8b800 session 0x558e0cc3f860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66cf00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 72269824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 72269824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 72269824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541033 data_alloc: 234881024 data_used: 21716992
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2528000/0x0/0x1bfc00000, data 0x2e76fd1/0x30a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 72269824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 72269824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 72269824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 71360512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.110908508s of 10.324708939s, submitted: 77
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427384832 unmapped: 69591040 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4607333 data_alloc: 234881024 data_used: 21712896
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427393024 unmapped: 69582848 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a1c7e000/0x0/0x1bfc00000, data 0x3718fd1/0x3948000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426532864 unmapped: 70443008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a1c7d000/0x0/0x1bfc00000, data 0x3721fd1/0x3951000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426532864 unmapped: 70443008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426532864 unmapped: 70443008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0b341800 session 0x558e0d3ca5a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c67dc00 session 0x558e098d8960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426532864 unmapped: 70443008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4619143 data_alloc: 234881024 data_used: 22691840
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0ea8a000 session 0x558e0c513a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0b206400 session 0x558e0c571680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426532864 unmapped: 70443008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a1c7c000/0x0/0x1bfc00000, data 0x3721fe1/0x3952000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426541056 unmapped: 70434816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a1c7c000/0x0/0x1bfc00000, data 0x3721fe1/0x3952000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426639360 unmapped: 70336512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426639360 unmapped: 70336512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426639360 unmapped: 70336512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4699282 data_alloc: 234881024 data_used: 33542144
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426647552 unmapped: 70328320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c618c00 session 0x558e08c6a960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426647552 unmapped: 70328320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.568646431s of 12.707002640s, submitted: 28
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c67dc00 session 0x558e0d25d860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0ea8a000 session 0x558e0cf0f860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c67c800 session 0x558e0a0e2b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c619800 session 0x558e0c6d43c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c618c00 session 0x558e0c4b6000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 70762496 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c67c800 session 0x558e0d1c7860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c67dc00 session 0x558e0d30c5a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2a51000/0x0/0x1bfc00000, data 0x294cfe1/0x2b7d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423993344 unmapped: 72982528 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423993344 unmapped: 72982528 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4610407 data_alloc: 234881024 data_used: 27230208
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0ea8a000 session 0x558e0c56e1e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423993344 unmapped: 72982528 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 72974336 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424173568 unmapped: 72802304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2268000/0x0/0x1bfc00000, data 0x3135043/0x3366000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [0,0,0,0,7])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427737088 unmapped: 69238784 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427884544 unmapped: 69091328 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4751766 data_alloc: 234881024 data_used: 35971072
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a1831000/0x0/0x1bfc00000, data 0x3b64043/0x3d95000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759170 data_alloc: 234881024 data_used: 36057088
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.375086784s of 13.775465012s, submitted: 154
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427909120 unmapped: 69066752 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a1835000/0x0/0x1bfc00000, data 0x3b68043/0x3d99000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d30de00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0b341800 session 0x558e0c314000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428294144 unmapped: 68681728 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66d2c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a0df9000/0x0/0x1bfc00000, data 0x459e033/0x47ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432095232 unmapped: 64880640 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4849623 data_alloc: 234881024 data_used: 36495360
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a0d47000/0x0/0x1bfc00000, data 0x4648033/0x4878000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4849783 data_alloc: 234881024 data_used: 36499456
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a0d36000/0x0/0x1bfc00000, data 0x4668033/0x4898000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a0d36000/0x0/0x1bfc00000, data 0x4668033/0x4898000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4841275 data_alloc: 234881024 data_used: 36503552
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.999837875s of 14.550086975s, submitted: 164
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0c4b6960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c71b000 session 0x558e0c6cf4a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432365568 unmapped: 64610304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c618c00 session 0x558e0a4c45a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426991616 unmapped: 69984256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffd1/0x35af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [0,0,0,0,0,0,1,1])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427048960 unmapped: 69926912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201f000/0x0/0x1bfc00000, data 0x337ffd1/0x35af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427065344 unmapped: 69910528 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427081728 unmapped: 69894144 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4641177 data_alloc: 234881024 data_used: 27955200
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427122688 unmapped: 69853184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d456000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427139072 unmapped: 69836800 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0b341800 session 0x558e0d1c7c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427155456 unmapped: 69820416 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0c56f4a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c618c00 session 0x558e0a0f0780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427155456 unmapped: 69820416 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427163648 unmapped: 69812224 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4643015 data_alloc: 234881024 data_used: 27955200
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4644455 data_alloc: 234881024 data_used: 28127232
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4644455 data_alloc: 234881024 data_used: 28127232
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.808780670s of 20.876974106s, submitted: 392
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428220416 unmapped: 68755456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4665527 data_alloc: 234881024 data_used: 30130176
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4665527 data_alloc: 234881024 data_used: 30130176
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4669367 data_alloc: 234881024 data_used: 30838784
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.971826553s of 13.989375114s, submitted: 17
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a201a000/0x0/0x1bfc00000, data 0x3381c3a/0x35b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4673541 data_alloc: 234881024 data_used: 30846976
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0c67dc00 session 0x558e0a0e3a40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0c67dc00 session 0x558e0c4b65a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1a8b000/0x0/0x1bfc00000, data 0x3910c9c/0x3b43000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4723414 data_alloc: 234881024 data_used: 30846976
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1a8b000/0x0/0x1bfc00000, data 0x3910c9c/0x3b43000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4723414 data_alloc: 234881024 data_used: 30846976
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1a8b000/0x0/0x1bfc00000, data 0x3910c9c/0x3b43000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.506410599s of 16.591121674s, submitted: 32
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1a8b000/0x0/0x1bfc00000, data 0x3910c9c/0x3b43000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0b341800 session 0x558e0d30cd20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428679168 unmapped: 68296704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428679168 unmapped: 68296704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4784980 data_alloc: 234881024 data_used: 36495360
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428695552 unmapped: 68280320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428695552 unmapped: 68280320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428695552 unmapped: 68280320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1a66000/0x0/0x1bfc00000, data 0x3b60c9c/0x3b68000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428695552 unmapped: 68280320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428695552 unmapped: 68280320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4785940 data_alloc: 234881024 data_used: 36536320
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428695552 unmapped: 68280320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436420608 unmapped: 60555264 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.235580444s of 10.482039452s, submitted: 97
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 61194240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a0e02000/0x0/0x1bfc00000, data 0x47c4c9c/0x47cc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 61194240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a09ec000/0x0/0x1bfc00000, data 0x47cac9c/0x47d2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 61194240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4880954 data_alloc: 234881024 data_used: 37404672
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 61194240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 61194240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a09ec000/0x0/0x1bfc00000, data 0x47cac9c/0x47d2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439377920 unmapped: 57597952 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439386112 unmapped: 57589760 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439386112 unmapped: 57589760 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4946272 data_alloc: 234881024 data_used: 38883328
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04cd000/0x0/0x1bfc00000, data 0x4ce9c9c/0x4cf1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04cd000/0x0/0x1bfc00000, data 0x4ce9c9c/0x4cf1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04cd000/0x0/0x1bfc00000, data 0x4ce9c9c/0x4cf1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4946752 data_alloc: 234881024 data_used: 39026688
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04cd000/0x0/0x1bfc00000, data 0x4ce9c9c/0x4cf1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0a71cf00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0c618c00 session 0x558e0c5705a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.299722672s of 15.453255653s, submitted: 26
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0ea8a000 session 0x558e0af02f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436207616 unmapped: 60768256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436207616 unmapped: 60768256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04f1000/0x0/0x1bfc00000, data 0x4cc5c9c/0x4ccd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04f1000/0x0/0x1bfc00000, data 0x4cc5c9c/0x4ccd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436207616 unmapped: 60768256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4933476 data_alloc: 234881024 data_used: 38961152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66de00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436207616 unmapped: 60768256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436207616 unmapped: 60768256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04f1000/0x0/0x1bfc00000, data 0x4cc5c9c/0x4ccd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436207616 unmapped: 60768256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0b341800 session 0x558e0d3cb0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436232192 unmapped: 60743680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0d1c72c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 427 ms_handle_reset con 0x558e0c618c00 session 0x558e0d25c1e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436248576 unmapped: 60727296 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a09ff000/0x0/0x1bfc00000, data 0x4af7c9c/0x47bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4914724 data_alloc: 234881024 data_used: 38768640
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436248576 unmapped: 60727296 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436256768 unmapped: 60719104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a0a0c000/0x0/0x1bfc00000, data 0x457cafd/0x47b1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 427 ms_handle_reset con 0x558e0c71b000 session 0x558e0c5712c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 427 ms_handle_reset con 0x558e0c67c800 session 0x558e0c4b6b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a0a0c000/0x0/0x1bfc00000, data 0x457cafd/0x47b1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.103678703s of 10.367775917s, submitted: 42
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 427 ms_handle_reset con 0x558e0a2bf000 session 0x558e0cc885a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436256768 unmapped: 60719104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436256768 unmapped: 60719104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436256768 unmapped: 60719104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4903765 data_alloc: 234881024 data_used: 38871040
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436256768 unmapped: 60719104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436264960 unmapped: 60710912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 429 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0ce91860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436281344 unmapped: 60694528 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a0a0a000/0x0/0x1bfc00000, data 0x457e62c/0x47b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 429 ms_handle_reset con 0x558e0c618c00 session 0x558e0cc88f00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434724864 unmapped: 62251008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434724864 unmapped: 62251008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4623165 data_alloc: 234881024 data_used: 23072768
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434724864 unmapped: 62251008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434724864 unmapped: 62251008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ea9000/0x0/0x1bfc00000, data 0x30df277/0x3314000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434724864 unmapped: 62251008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ea9000/0x0/0x1bfc00000, data 0x30df277/0x3314000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434724864 unmapped: 62251008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.837158203s of 11.938447952s, submitted: 44
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ea9000/0x0/0x1bfc00000, data 0x30df277/0x3314000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434733056 unmapped: 62242816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4633765 data_alloc: 234881024 data_used: 24612864
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434733056 unmapped: 62242816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434733056 unmapped: 62242816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434741248 unmapped: 62234624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434741248 unmapped: 62234624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1ea6000/0x0/0x1bfc00000, data 0x30e0db6/0x3317000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434749440 unmapped: 62226432 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4643795 data_alloc: 234881024 data_used: 25104384
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435798016 unmapped: 61177856 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435798016 unmapped: 61177856 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435798016 unmapped: 61177856 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b341800 session 0x558e08c6a960
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1ea7000/0x0/0x1bfc00000, data 0x30e0db6/0x3317000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c513e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435806208 unmapped: 61169664 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435806208 unmapped: 61169664 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444513 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435806208 unmapped: 61169664 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435806208 unmapped: 61169664 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444513 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444513 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435822592 unmapped: 61153280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435822592 unmapped: 61153280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444513 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435822592 unmapped: 61153280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435830784 unmapped: 61145088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435830784 unmapped: 61145088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435830784 unmapped: 61145088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435830784 unmapped: 61145088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444513 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435830784 unmapped: 61145088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.666606903s of 31.999231339s, submitted: 37
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b341800 session 0x558e0d25da40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0c6d4d20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c618c00 session 0x558e0bf4b680
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c67c800 session 0x558e0d5a32c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d3ca3c0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4488421 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b341800 session 0x558e0d3ca000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0d5a25a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c618c00 session 0x558e0d457c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c71b000 session 0x558e0c315e00
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436002816 unmapped: 60973056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4497833 data_alloc: 218103808 data_used: 17612800
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436011008 unmapped: 60964864 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521193 data_alloc: 218103808 data_used: 20926464
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521193 data_alloc: 218103808 data_used: 20926464
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.873628616s of 18.985849380s, submitted: 42
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 438075392 unmapped: 58900480 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439517184 unmapped: 57458688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 438853632 unmapped: 58122240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2321000/0x0/0x1bfc00000, data 0x2c66db6/0x2e9d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [1])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4601825 data_alloc: 234881024 data_used: 22523904
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600273 data_alloc: 234881024 data_used: 22536192
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a71c780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4599449 data_alloc: 234881024 data_used: 22536192
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.488086700s of 19.132707596s, submitted: 110
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4599581 data_alloc: 234881024 data_used: 22536192
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600541 data_alloc: 234881024 data_used: 22593536
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600541 data_alloc: 234881024 data_used: 22593536
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.338050842s of 12.457947731s, submitted: 1
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4606689 data_alloc: 234881024 data_used: 23166976
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4609505 data_alloc: 234881024 data_used: 23175168
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439934976 unmapped: 57040896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2314000/0x0/0x1bfc00000, data 0x2c73db6/0x2eaa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b341800 session 0x558e0bf4ba40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0cc88000
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4454343 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4454343 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4454343 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e15232400 session 0x558e0c6850e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4454343 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.194274902s of 32.377357483s, submitted: 50
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e15232400 session 0x558e0d1c6780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c67dc00 session 0x558e0d5a30e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c56e780
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b341800 session 0x558e0c570b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0ce91860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4480452 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2d7d000/0x0/0x1bfc00000, data 0x220bd54/0x2441000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4480452 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e15232400 session 0x558e0c4b6b40
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2d7d000/0x0/0x1bfc00000, data 0x220bd54/0x2441000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4498652 data_alloc: 218103808 data_used: 18604032
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2d59000/0x0/0x1bfc00000, data 0x222fd54/0x2465000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4498652 data_alloc: 218103808 data_used: 18604032
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432177152 unmapped: 64798720 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432177152 unmapped: 64798720 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2d59000/0x0/0x1bfc00000, data 0x222fd54/0x2465000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432177152 unmapped: 64798720 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.752426147s of 18.832401276s, submitted: 18
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432177152 unmapped: 64798720 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4545632 data_alloc: 218103808 data_used: 18874368
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2840000/0x0/0x1bfc00000, data 0x2747d54/0x297d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4545632 data_alloc: 218103808 data_used: 18874368
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b387800 session 0x558e0d25c1e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c54b000 session 0x558e0a549c20
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2840000/0x0/0x1bfc00000, data 0x2747d54/0x297d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2841000/0x0/0x1bfc00000, data 0x2747d54/0x297d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4543984 data_alloc: 218103808 data_used: 18874368
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.182659149s of 14.319707870s, submitted: 28
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2841000/0x0/0x1bfc00000, data 0x2747d54/0x297d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4544276 data_alloc: 218103808 data_used: 18878464
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2841000/0x0/0x1bfc00000, data 0x2747d54/0x297d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433143808 unmapped: 63832064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a5494a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433143808 unmapped: 63832064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b341800 session 0x558e0c6cf0e0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0c6ce5a0
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a307d000/0x0/0x1bfc00000, data 0x1f0bd54/0x2141000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433160192 unmapped: 63815680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433160192 unmapped: 63815680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433160192 unmapped: 63815680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433160192 unmapped: 63815680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433160192 unmapped: 63815680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 63807488 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 63807488 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 65208320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 65208320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 65208320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 65208320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 65208320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 65208320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431775744 unmapped: 65200128 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431775744 unmapped: 65200128 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431783936 unmapped: 65191936 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431783936 unmapped: 65191936 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431783936 unmapped: 65191936 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431783936 unmapped: 65191936 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 65183744 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 65183744 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 65183744 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 65183744 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 65150976 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 65150976 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 65150976 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 65150976 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 65150976 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 65150976 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431833088 unmapped: 65142784 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431833088 unmapped: 65142784 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 65134592 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 65134592 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 65134592 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 65134592 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0a5a2400 session 0x558e08c6b860
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 65134592 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 65126400 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 65126400 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 65126400 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 65126400 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 65126400 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431865856 unmapped: 65110016 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431865856 unmapped: 65110016 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431865856 unmapped: 65110016 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431865856 unmapped: 65110016 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431874048 unmapped: 65101824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431890432 unmapped: 65085440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431890432 unmapped: 65085440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431890432 unmapped: 65085440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431890432 unmapped: 65085440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431890432 unmapped: 65085440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431898624 unmapped: 65077248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431898624 unmapped: 65077248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431898624 unmapped: 65077248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431906816 unmapped: 65069056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: do_command 'config diff' '{prefix=config diff}'
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: do_command 'config show' '{prefix=config show}'
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: do_command 'counter dump' '{prefix=counter dump}'
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: do_command 'counter schema' '{prefix=counter schema}'
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431308800 unmapped: 65667072 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431071232 unmapped: 65904640 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:20:52 np0005465987 ceph-osd[78159]: do_command 'log dump' '{prefix=log dump}'
Oct  2 09:20:52 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 09:20:52 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2931863105' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 09:20:52 np0005465987 nova_compute[230713]: 2025-10-02 13:20:52.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:52 np0005465987 nova_compute[230713]: 2025-10-02 13:20:52.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 09:20:53 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/319070693' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 09:20:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:53.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:53 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct  2 09:20:53 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2268979104' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  2 09:20:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:54 np0005465987 nova_compute[230713]: 2025-10-02 13:20:54.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct  2 09:20:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/47274639' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  2 09:20:54 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct  2 09:20:54 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/768032342' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  2 09:20:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct  2 09:20:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/682358552' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  2 09:20:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct  2 09:20:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3803297933' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  2 09:20:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:55.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct  2 09:20:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/234934425' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  2 09:20:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct  2 09:20:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3104217782' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  2 09:20:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:55.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:55 np0005465987 nova_compute[230713]: 2025-10-02 13:20:55.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:55 np0005465987 nova_compute[230713]: 2025-10-02 13:20:55.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.003 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.003 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.003 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:20:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct  2 09:20:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3545378520' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  2 09:20:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct  2 09:20:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1195385828' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  2 09:20:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:20:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2299104660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.418 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:20:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct  2 09:20:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3830493280' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.579 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.580 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3992MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.581 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.581 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct  2 09:20:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3720196201' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.656 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.656 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.675 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.703 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.704 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.727 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.755 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:20:56 np0005465987 nova_compute[230713]: 2025-10-02 13:20:56.796 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:20:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct  2 09:20:56 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/394210393' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct  2 09:20:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct  2 09:20:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/693468167' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct  2 09:20:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:20:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/647639629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:20:57 np0005465987 nova_compute[230713]: 2025-10-02 13:20:57.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:57 np0005465987 nova_compute[230713]: 2025-10-02 13:20:57.240 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:20:57 np0005465987 nova_compute[230713]: 2025-10-02 13:20:57.246 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:20:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  2 09:20:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1062998975' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  2 09:20:57 np0005465987 nova_compute[230713]: 2025-10-02 13:20:57.269 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:20:57 np0005465987 nova_compute[230713]: 2025-10-02 13:20:57.272 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:20:57 np0005465987 nova_compute[230713]: 2025-10-02 13:20:57.273 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:57 np0005465987 systemd[1]: Starting Hostname Service...
Oct  2 09:20:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:57.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:57 np0005465987 systemd[1]: Started Hostname Service.
Oct  2 09:20:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct  2 09:20:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/740542697' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct  2 09:20:57 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct  2 09:20:57 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2482861771' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct  2 09:20:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:57.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct  2 09:20:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3150948890' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct  2 09:20:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:59.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:59 np0005465987 nova_compute[230713]: 2025-10-02 13:20:59.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct  2 09:20:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1993493330' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct  2 09:20:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:20:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:20:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:20:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct  2 09:20:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2958749579' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct  2 09:21:00 np0005465987 nova_compute[230713]: 2025-10-02 13:21:00.273 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:00 np0005465987 nova_compute[230713]: 2025-10-02 13:21:00.273 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:21:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct  2 09:21:00 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1016760165' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  2 09:21:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct  2 09:21:00 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1234391447' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct  2 09:21:01 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:21:01 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:21:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:01.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:01 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:21:01 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:21:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct  2 09:21:01 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/707602679' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct  2 09:21:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:01.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:02 np0005465987 nova_compute[230713]: 2025-10-02 13:21:02.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct  2 09:21:02 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1368806525' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct  2 09:21:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct  2 09:21:02 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3241487011' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  2 09:21:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Oct  2 09:21:02 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4212749076' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct  2 09:21:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct  2 09:21:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2755522120' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct  2 09:21:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:21:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:03.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:21:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct  2 09:21:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1480918496' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct  2 09:21:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:03.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:04 np0005465987 nova_compute[230713]: 2025-10-02 13:21:04.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct  2 09:21:04 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2024468315' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct  2 09:21:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct  2 09:21:04 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3781967353' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct  2 09:21:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:05.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct  2 09:21:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3719616468' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct  2 09:21:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:05.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct  2 09:21:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3208837824' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct  2 09:21:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct  2 09:21:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3067747308' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct  2 09:21:07 np0005465987 nova_compute[230713]: 2025-10-02 13:21:07.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:07.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:07.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct  2 09:21:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2494512526' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct  2 09:21:08 np0005465987 podman[324954]: 2025-10-02 13:21:08.178093029 +0000 UTC m=+0.077389733 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 09:21:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct  2 09:21:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/590510402' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct  2 09:21:08 np0005465987 ovs-appctl[325340]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  2 09:21:08 np0005465987 ovs-appctl[325348]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  2 09:21:08 np0005465987 ovs-appctl[325356]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  2 09:21:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:21:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:21:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:09.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:09 np0005465987 nova_compute[230713]: 2025-10-02 13:21:09.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  2 09:21:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4269806587' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  2 09:21:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:09.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct  2 09:21:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/226297357' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct  2 09:21:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct  2 09:21:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2907860677' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct  2 09:21:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:11.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:11.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct  2 09:21:12 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4245540599' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  2 09:21:12 np0005465987 nova_compute[230713]: 2025-10-02 13:21:12.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:12 np0005465987 podman[326322]: 2025-10-02 13:21:12.272620743 +0000 UTC m=+0.109128098 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:21:12 np0005465987 podman[326323]: 2025-10-02 13:21:12.279194792 +0000 UTC m=+0.108027638 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:21:12 np0005465987 podman[326334]: 2025-10-02 13:21:12.306149388 +0000 UTC m=+0.111999487 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct  2 09:21:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:21:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:13.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:21:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct  2 09:21:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4021928338' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct  2 09:21:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:13.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct  2 09:21:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2362213958' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct  2 09:21:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct  2 09:21:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2556280337' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct  2 09:21:14 np0005465987 nova_compute[230713]: 2025-10-02 13:21:14.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct  2 09:21:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3734773083' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct  2 09:21:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:15.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct  2 09:21:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/728846324' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct  2 09:21:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:15.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct  2 09:21:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/590357678' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct  2 09:21:17 np0005465987 nova_compute[230713]: 2025-10-02 13:21:17.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:17.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Oct  2 09:21:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2608124106' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct  2 09:21:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:17.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Oct  2 09:21:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3138812586' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct  2 09:21:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct  2 09:21:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1128627087' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  2 09:21:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:19.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:19 np0005465987 nova_compute[230713]: 2025-10-02 13:21:19.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Oct  2 09:21:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3768860084' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct  2 09:21:19 np0005465987 nova_compute[230713]: 2025-10-02 13:21:19.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:19.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  2 09:21:20 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2296609078' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  2 09:21:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Oct  2 09:21:21 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3111701462' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct  2 09:21:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:21.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:21 np0005465987 virtqemud[230442]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  2 09:21:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:21.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:22 np0005465987 nova_compute[230713]: 2025-10-02 13:21:22.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:21:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:21:22 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:21:23 np0005465987 systemd[1]: Starting Time & Date Service...
Oct  2 09:21:23 np0005465987 systemd[1]: Started Time & Date Service.
Oct  2 09:21:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:23.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:21:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:23.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:21:24 np0005465987 nova_compute[230713]: 2025-10-02 13:21:24.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:25.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:25.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:27 np0005465987 nova_compute[230713]: 2025-10-02 13:21:27.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:27.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:27.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:21:28.007 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:21:28.007 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:21:28.008 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:21:28 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:21:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:29.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:29 np0005465987 nova_compute[230713]: 2025-10-02 13:21:29.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:21:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:29.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:21:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:31.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:31.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:32 np0005465987 nova_compute[230713]: 2025-10-02 13:21:32.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:33.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:33.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:34 np0005465987 nova_compute[230713]: 2025-10-02 13:21:34.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:35.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:35.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:37 np0005465987 nova_compute[230713]: 2025-10-02 13:21:37.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:37.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:37.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:38 np0005465987 podman[327592]: 2025-10-02 13:21:38.782396843 +0000 UTC m=+0.055469744 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:21:39 np0005465987 nova_compute[230713]: 2025-10-02 13:21:39.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:21:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:39.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:21:39 np0005465987 nova_compute[230713]: 2025-10-02 13:21:39.994 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:39.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:41.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:41.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:42 np0005465987 nova_compute[230713]: 2025-10-02 13:21:42.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:42 np0005465987 podman[327610]: 2025-10-02 13:21:42.671559546 +0000 UTC m=+0.059371842 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 09:21:42 np0005465987 podman[327609]: 2025-10-02 13:21:42.672357367 +0000 UTC m=+0.060224334 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:21:42 np0005465987 podman[327608]: 2025-10-02 13:21:42.690342348 +0000 UTC m=+0.083740206 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 09:21:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:21:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:43.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:21:43 np0005465987 nova_compute[230713]: 2025-10-02 13:21:43.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000028s ======
Oct  2 09:21:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:44.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct  2 09:21:44 np0005465987 nova_compute[230713]: 2025-10-02 13:21:44.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:21:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:45.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:21:45 np0005465987 nova_compute[230713]: 2025-10-02 13:21:45.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:45 np0005465987 nova_compute[230713]: 2025-10-02 13:21:45.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:21:45 np0005465987 nova_compute[230713]: 2025-10-02 13:21:45.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:21:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:21:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:46.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:21:46 np0005465987 nova_compute[230713]: 2025-10-02 13:21:46.037 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:21:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:47 np0005465987 nova_compute[230713]: 2025-10-02 13:21:47.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:47.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:48.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:21:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:49.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:21:49 np0005465987 nova_compute[230713]: 2025-10-02 13:21:49.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:50.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:51.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:52.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:52 np0005465987 nova_compute[230713]: 2025-10-02 13:21:52.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:52 np0005465987 nova_compute[230713]: 2025-10-02 13:21:52.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:53 np0005465987 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 09:21:53 np0005465987 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 09:21:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:21:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:53.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:21:53 np0005465987 nova_compute[230713]: 2025-10-02 13:21:53.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:53 np0005465987 nova_compute[230713]: 2025-10-02 13:21:53.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:54.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:54 np0005465987 nova_compute[230713]: 2025-10-02 13:21:54.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:55.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:56.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:56 np0005465987 nova_compute[230713]: 2025-10-02 13:21:56.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:57 np0005465987 nova_compute[230713]: 2025-10-02 13:21:57.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:57.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:57 np0005465987 nova_compute[230713]: 2025-10-02 13:21:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:21:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:58.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:21:58 np0005465987 nova_compute[230713]: 2025-10-02 13:21:58.038 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:58 np0005465987 nova_compute[230713]: 2025-10-02 13:21:58.038 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:58 np0005465987 nova_compute[230713]: 2025-10-02 13:21:58.039 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:58 np0005465987 nova_compute[230713]: 2025-10-02 13:21:58.039 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:21:58 np0005465987 nova_compute[230713]: 2025-10-02 13:21:58.039 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:21:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:21:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/509542363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:21:58 np0005465987 nova_compute[230713]: 2025-10-02 13:21:58.454 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:21:58 np0005465987 nova_compute[230713]: 2025-10-02 13:21:58.593 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:21:58 np0005465987 nova_compute[230713]: 2025-10-02 13:21:58.594 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4144MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:21:58 np0005465987 nova_compute[230713]: 2025-10-02 13:21:58.594 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:58 np0005465987 nova_compute[230713]: 2025-10-02 13:21:58.594 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:58 np0005465987 nova_compute[230713]: 2025-10-02 13:21:58.902 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:21:58 np0005465987 nova_compute[230713]: 2025-10-02 13:21:58.902 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:21:58 np0005465987 nova_compute[230713]: 2025-10-02 13:21:58.919 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:21:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:21:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/114957763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:21:59 np0005465987 nova_compute[230713]: 2025-10-02 13:21:59.355 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:21:59 np0005465987 nova_compute[230713]: 2025-10-02 13:21:59.361 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:21:59 np0005465987 nova_compute[230713]: 2025-10-02 13:21:59.383 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:21:59 np0005465987 nova_compute[230713]: 2025-10-02 13:21:59.385 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:21:59 np0005465987 nova_compute[230713]: 2025-10-02 13:21:59.386 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:21:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:59.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:59 np0005465987 nova_compute[230713]: 2025-10-02 13:21:59.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:00.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:01.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:02.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:02 np0005465987 nova_compute[230713]: 2025-10-02 13:22:02.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:02 np0005465987 nova_compute[230713]: 2025-10-02 13:22:02.386 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:02 np0005465987 nova_compute[230713]: 2025-10-02 13:22:02.386 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:22:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:03.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:03 np0005465987 systemd[1]: session-65.scope: Deactivated successfully.
Oct  2 09:22:03 np0005465987 systemd[1]: session-65.scope: Consumed 2min 37.251s CPU time, 941.6M memory peak, read 379.0M from disk, written 329.3M to disk.
Oct  2 09:22:03 np0005465987 systemd-logind[794]: Session 65 logged out. Waiting for processes to exit.
Oct  2 09:22:03 np0005465987 systemd-logind[794]: Removed session 65.
Oct  2 09:22:03 np0005465987 systemd-logind[794]: New session 66 of user zuul.
Oct  2 09:22:03 np0005465987 systemd[1]: Started Session 66 of User zuul.
Oct  2 09:22:03 np0005465987 systemd[1]: session-66.scope: Deactivated successfully.
Oct  2 09:22:04 np0005465987 systemd-logind[794]: Session 66 logged out. Waiting for processes to exit.
Oct  2 09:22:04 np0005465987 systemd-logind[794]: Removed session 66.
Oct  2 09:22:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:04.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:04 np0005465987 systemd-logind[794]: New session 67 of user zuul.
Oct  2 09:22:04 np0005465987 systemd[1]: Started Session 67 of User zuul.
Oct  2 09:22:04 np0005465987 systemd[1]: session-67.scope: Deactivated successfully.
Oct  2 09:22:04 np0005465987 systemd-logind[794]: Session 67 logged out. Waiting for processes to exit.
Oct  2 09:22:04 np0005465987 systemd-logind[794]: Removed session 67.
Oct  2 09:22:04 np0005465987 nova_compute[230713]: 2025-10-02 13:22:04.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:05.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:06.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:07 np0005465987 nova_compute[230713]: 2025-10-02 13:22:07.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:07.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:08.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:09.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:09 np0005465987 nova_compute[230713]: 2025-10-02 13:22:09.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:09 np0005465987 podman[327778]: 2025-10-02 13:22:09.863627351 +0000 UTC m=+0.079280864 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:22:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:10.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:11.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:12.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:12 np0005465987 nova_compute[230713]: 2025-10-02 13:22:12.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:12 np0005465987 podman[327799]: 2025-10-02 13:22:12.83748566 +0000 UTC m=+0.056659497 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:22:12 np0005465987 podman[327800]: 2025-10-02 13:22:12.840596565 +0000 UTC m=+0.056411810 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:22:12 np0005465987 podman[327798]: 2025-10-02 13:22:12.894172356 +0000 UTC m=+0.116077537 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:22:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:13.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:14.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:14 np0005465987 nova_compute[230713]: 2025-10-02 13:22:14.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:15.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:16.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.478390) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411336478428, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 2637, "num_deletes": 250, "total_data_size": 5920265, "memory_usage": 5993112, "flush_reason": "Manual Compaction"}
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411336493560, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 2412864, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87468, "largest_seqno": 90099, "table_properties": {"data_size": 2403874, "index_size": 4907, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 26958, "raw_average_key_size": 22, "raw_value_size": 2382946, "raw_average_value_size": 2007, "num_data_blocks": 215, "num_entries": 1187, "num_filter_entries": 1187, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411132, "oldest_key_time": 1759411132, "file_creation_time": 1759411336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 15294 microseconds, and 6118 cpu microseconds.
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.493681) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 2412864 bytes OK
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.493772) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.495184) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.495201) EVENT_LOG_v1 {"time_micros": 1759411336495196, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.495218) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 5907856, prev total WAL file size 5907856, number of live WAL files 2.
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.496784) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303132' seq:72057594037927935, type:22 .. '6D6772737461740033323633' seq:0, type:0; will stop at (end)
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(2356KB)], [180(12MB)]
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411336496849, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 15433980, "oldest_snapshot_seqno": -1}
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 11355 keys, 13053860 bytes, temperature: kUnknown
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411336560644, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 13053860, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12982931, "index_size": 41440, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28421, "raw_key_size": 298584, "raw_average_key_size": 26, "raw_value_size": 12787020, "raw_average_value_size": 1126, "num_data_blocks": 1578, "num_entries": 11355, "num_filter_entries": 11355, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759411336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.561091) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 13053860 bytes
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.562701) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.3 rd, 204.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 12.4 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 11774, records dropped: 419 output_compression: NoCompression
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.562746) EVENT_LOG_v1 {"time_micros": 1759411336562737, "job": 116, "event": "compaction_finished", "compaction_time_micros": 63969, "compaction_time_cpu_micros": 29077, "output_level": 6, "num_output_files": 1, "total_output_size": 13053860, "num_input_records": 11774, "num_output_records": 11355, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411336563362, "job": 116, "event": "table_file_deletion", "file_number": 182}
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411336566460, "job": 116, "event": "table_file_deletion", "file_number": 180}
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.496634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.566558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.566564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.566566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.566567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:22:16 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:16.566569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:22:17 np0005465987 nova_compute[230713]: 2025-10-02 13:22:17.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:17.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:18.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:19.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:19 np0005465987 nova_compute[230713]: 2025-10-02 13:22:19.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:20.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:21.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:22.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:22 np0005465987 nova_compute[230713]: 2025-10-02 13:22:22.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:23.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.549288) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411343549318, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 323, "num_deletes": 251, "total_data_size": 213828, "memory_usage": 220904, "flush_reason": "Manual Compaction"}
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411343552362, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 140566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90105, "largest_seqno": 90422, "table_properties": {"data_size": 138568, "index_size": 225, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5162, "raw_average_key_size": 18, "raw_value_size": 134605, "raw_average_value_size": 480, "num_data_blocks": 10, "num_entries": 280, "num_filter_entries": 280, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411337, "oldest_key_time": 1759411337, "file_creation_time": 1759411343, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 3110 microseconds, and 1078 cpu microseconds.
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.552397) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 140566 bytes OK
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.552412) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.553553) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.553573) EVENT_LOG_v1 {"time_micros": 1759411343553566, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.553591) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 211556, prev total WAL file size 211556, number of live WAL files 2.
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.554031) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(137KB)], [183(12MB)]
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411343554102, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 13194426, "oldest_snapshot_seqno": -1}
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 11125 keys, 11281275 bytes, temperature: kUnknown
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411343604338, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 11281275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11213423, "index_size": 38939, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27845, "raw_key_size": 294492, "raw_average_key_size": 26, "raw_value_size": 11022969, "raw_average_value_size": 990, "num_data_blocks": 1465, "num_entries": 11125, "num_filter_entries": 11125, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759411343, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.604648) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11281275 bytes
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.605898) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 262.9 rd, 224.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(174.1) write-amplify(80.3) OK, records in: 11635, records dropped: 510 output_compression: NoCompression
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.605920) EVENT_LOG_v1 {"time_micros": 1759411343605911, "job": 118, "event": "compaction_finished", "compaction_time_micros": 50180, "compaction_time_cpu_micros": 27351, "output_level": 6, "num_output_files": 1, "total_output_size": 11281275, "num_input_records": 11635, "num_output_records": 11125, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411343606088, "job": 118, "event": "table_file_deletion", "file_number": 185}
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411343609028, "job": 118, "event": "table_file_deletion", "file_number": 183}
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.553939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.609162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.609168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.609169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.609171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:22:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:22:23.609172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:22:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:24.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:24 np0005465987 nova_compute[230713]: 2025-10-02 13:22:24.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:25.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:26.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:27 np0005465987 nova_compute[230713]: 2025-10-02 13:22:27.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:27.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:22:28.007 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:22:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:22:28.008 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:22:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:22:28.008 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:22:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:28.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:28 np0005465987 podman[328033]: 2025-10-02 13:22:28.64570082 +0000 UTC m=+0.060325927 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 09:22:28 np0005465987 podman[328033]: 2025-10-02 13:22:28.748037782 +0000 UTC m=+0.162662869 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 09:22:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:29 np0005465987 nova_compute[230713]: 2025-10-02 13:22:29.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:30.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:22:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:22:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:22:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:22:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:22:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:22:30 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:22:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:31.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:32.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:32 np0005465987 nova_compute[230713]: 2025-10-02 13:22:32.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:33.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:34.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:34 np0005465987 nova_compute[230713]: 2025-10-02 13:22:34.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:35.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:36.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:37 np0005465987 nova_compute[230713]: 2025-10-02 13:22:37.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:37.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:37 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:22:37 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:22:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:38.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:39.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:39 np0005465987 nova_compute[230713]: 2025-10-02 13:22:39.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:39 np0005465987 nova_compute[230713]: 2025-10-02 13:22:39.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:40.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:40 np0005465987 podman[328334]: 2025-10-02 13:22:40.862659736 +0000 UTC m=+0.074030131 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 09:22:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:41.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:42.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:42 np0005465987 nova_compute[230713]: 2025-10-02 13:22:42.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:43.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:43 np0005465987 podman[328355]: 2025-10-02 13:22:43.836133463 +0000 UTC m=+0.052870414 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:22:43 np0005465987 podman[328356]: 2025-10-02 13:22:43.836284087 +0000 UTC m=+0.049397309 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  2 09:22:43 np0005465987 podman[328354]: 2025-10-02 13:22:43.858800541 +0000 UTC m=+0.078514753 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:22:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:44.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:44 np0005465987 nova_compute[230713]: 2025-10-02 13:22:44.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:45.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:45 np0005465987 nova_compute[230713]: 2025-10-02 13:22:45.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:46.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:47 np0005465987 nova_compute[230713]: 2025-10-02 13:22:47.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:47.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:47 np0005465987 nova_compute[230713]: 2025-10-02 13:22:47.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:47 np0005465987 nova_compute[230713]: 2025-10-02 13:22:47.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:22:47 np0005465987 nova_compute[230713]: 2025-10-02 13:22:47.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:22:48 np0005465987 nova_compute[230713]: 2025-10-02 13:22:48.008 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:22:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:48.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:49.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:49 np0005465987 nova_compute[230713]: 2025-10-02 13:22:49.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:50.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:51.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:52.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:52 np0005465987 nova_compute[230713]: 2025-10-02 13:22:52.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:53.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:54.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:54 np0005465987 nova_compute[230713]: 2025-10-02 13:22:54.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:22:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 62K writes, 235K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.23 GB, 0.04 MB/s#012Cumulative WAL: 62K writes, 23K syncs, 2.67 writes per sync, written: 0.23 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2539 writes, 9349 keys, 2539 commit groups, 1.0 writes per commit group, ingest: 9.91 MB, 0.02 MB/s#012Interval WAL: 2539 writes, 1034 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:22:54 np0005465987 nova_compute[230713]: 2025-10-02 13:22:54.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:55.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:55 np0005465987 nova_compute[230713]: 2025-10-02 13:22:55.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:55 np0005465987 nova_compute[230713]: 2025-10-02 13:22:55.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:56.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:57 np0005465987 nova_compute[230713]: 2025-10-02 13:22:57.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:57.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:57 np0005465987 nova_compute[230713]: 2025-10-02 13:22:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:57 np0005465987 nova_compute[230713]: 2025-10-02 13:22:57.993 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:22:57 np0005465987 nova_compute[230713]: 2025-10-02 13:22:57.993 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:22:57 np0005465987 nova_compute[230713]: 2025-10-02 13:22:57.993 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:22:57 np0005465987 nova_compute[230713]: 2025-10-02 13:22:57.993 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:22:57 np0005465987 nova_compute[230713]: 2025-10-02 13:22:57.994 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:22:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:58.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:22:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1297171564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:22:58 np0005465987 nova_compute[230713]: 2025-10-02 13:22:58.426 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:22:58 np0005465987 nova_compute[230713]: 2025-10-02 13:22:58.581 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:22:58 np0005465987 nova_compute[230713]: 2025-10-02 13:22:58.582 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4254MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:22:58 np0005465987 nova_compute[230713]: 2025-10-02 13:22:58.582 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:22:58 np0005465987 nova_compute[230713]: 2025-10-02 13:22:58.582 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:22:58 np0005465987 nova_compute[230713]: 2025-10-02 13:22:58.702 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:22:58 np0005465987 nova_compute[230713]: 2025-10-02 13:22:58.702 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:22:58 np0005465987 nova_compute[230713]: 2025-10-02 13:22:58.791 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:22:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:22:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/876074866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:22:59 np0005465987 nova_compute[230713]: 2025-10-02 13:22:59.226 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:22:59 np0005465987 nova_compute[230713]: 2025-10-02 13:22:59.231 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:22:59 np0005465987 nova_compute[230713]: 2025-10-02 13:22:59.340 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:22:59 np0005465987 nova_compute[230713]: 2025-10-02 13:22:59.341 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:22:59 np0005465987 nova_compute[230713]: 2025-10-02 13:22:59.341 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:22:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:22:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:22:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:59.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:22:59 np0005465987 nova_compute[230713]: 2025-10-02 13:22:59.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:00.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:00 np0005465987 nova_compute[230713]: 2025-10-02 13:23:00.342 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:00 np0005465987 nova_compute[230713]: 2025-10-02 13:23:00.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:00 np0005465987 nova_compute[230713]: 2025-10-02 13:23:00.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:23:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:01.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:02.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:02 np0005465987 nova_compute[230713]: 2025-10-02 13:23:02.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:03.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:04.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:04 np0005465987 nova_compute[230713]: 2025-10-02 13:23:04.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:23:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:05.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:23:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:06.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:07 np0005465987 nova_compute[230713]: 2025-10-02 13:23:07.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:23:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:07.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:23:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:08.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:09.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:09 np0005465987 nova_compute[230713]: 2025-10-02 13:23:09.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:23:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:10.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:23:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:11.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:11 np0005465987 podman[328463]: 2025-10-02 13:23:11.854640248 +0000 UTC m=+0.076992162 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:23:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:23:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:12.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:23:12 np0005465987 nova_compute[230713]: 2025-10-02 13:23:12.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:23:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:13.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:23:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:14.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:14 np0005465987 nova_compute[230713]: 2025-10-02 13:23:14.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:14 np0005465987 podman[328485]: 2025-10-02 13:23:14.854048995 +0000 UTC m=+0.057613704 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:23:14 np0005465987 podman[328484]: 2025-10-02 13:23:14.87440798 +0000 UTC m=+0.085949646 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:23:14 np0005465987 podman[328486]: 2025-10-02 13:23:14.894502438 +0000 UTC m=+0.088349842 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:23:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:15.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:23:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:16.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:23:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:17 np0005465987 nova_compute[230713]: 2025-10-02 13:23:17.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:17.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:18.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:19.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:19 np0005465987 nova_compute[230713]: 2025-10-02 13:23:19.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:19 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:23:19 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:23:19 np0005465987 nova_compute[230713]: 2025-10-02 13:23:19.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:23:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:20.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:23:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:23:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:21.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:23:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:22.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:22 np0005465987 nova_compute[230713]: 2025-10-02 13:23:22.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:23:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:23.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:23:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:24.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:24 np0005465987 nova_compute[230713]: 2025-10-02 13:23:24.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:25.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:26.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:27 np0005465987 nova_compute[230713]: 2025-10-02 13:23:27.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:23:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:27.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:23:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:23:28.009 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:23:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:23:28.009 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:23:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:23:28.009 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:23:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:23:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:28.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:23:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:29.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:29 np0005465987 nova_compute[230713]: 2025-10-02 13:23:29.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:30.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:23:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 18K writes, 91K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1551 writes, 7888 keys, 1551 commit groups, 1.0 writes per commit group, ingest: 15.94 MB, 0.03 MB/s#012Interval WAL: 1551 writes, 1551 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     60.0      1.85              0.34        59    0.031       0      0       0.0       0.0#012  L6      1/0   10.76 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.5    124.3    106.7      5.67              1.75        58    0.098    458K    31K       0.0       0.0#012 Sum      1/0   10.76 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.5     93.8     95.2      7.52              2.10       117    0.064    458K    31K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.0    190.4    185.8      0.42              0.21        12    0.035     67K   2966       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    124.3    106.7      5.67              1.75        58    0.098    458K    31K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     60.1      1.84              0.34        58    0.032       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.108, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.70 GB write, 0.11 MB/s write, 0.69 GB read, 0.11 MB/s read, 7.5 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f9644871f0#2 capacity: 304.00 MB usage: 78.13 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000584 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(4839,74.86 MB,24.6266%) FilterBlock(117,1.23 MB,0.40359%) IndexBlock(117,2.03 MB,0.669148%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 09:23:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:31.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:23:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:32.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:23:32 np0005465987 nova_compute[230713]: 2025-10-02 13:23:32.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:33.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:23:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:34.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:23:34 np0005465987 nova_compute[230713]: 2025-10-02 13:23:34.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:35.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:36.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:37 np0005465987 nova_compute[230713]: 2025-10-02 13:23:37.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:23:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:37.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:23:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:38.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:39 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:23:39 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:23:39 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:23:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:39.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:39 np0005465987 nova_compute[230713]: 2025-10-02 13:23:39.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:40.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:40 np0005465987 nova_compute[230713]: 2025-10-02 13:23:40.980 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:23:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:41.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:23:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:42.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:42 np0005465987 nova_compute[230713]: 2025-10-02 13:23:42.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:42 np0005465987 podman[328680]: 2025-10-02 13:23:42.829548786 +0000 UTC m=+0.051258879 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 09:23:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:43.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:23:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:23:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:44.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:44 np0005465987 nova_compute[230713]: 2025-10-02 13:23:44.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:45.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:45 np0005465987 podman[328751]: 2025-10-02 13:23:45.870494335 +0000 UTC m=+0.078621287 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:23:45 np0005465987 podman[328750]: 2025-10-02 13:23:45.891496538 +0000 UTC m=+0.104146753 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:23:45 np0005465987 podman[328757]: 2025-10-02 13:23:45.89266348 +0000 UTC m=+0.090280975 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:23:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:46.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:46 np0005465987 nova_compute[230713]: 2025-10-02 13:23:46.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:47 np0005465987 nova_compute[230713]: 2025-10-02 13:23:47.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:47.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:48.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:48 np0005465987 nova_compute[230713]: 2025-10-02 13:23:48.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:48 np0005465987 nova_compute[230713]: 2025-10-02 13:23:48.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:23:48 np0005465987 nova_compute[230713]: 2025-10-02 13:23:48.967 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:23:49 np0005465987 nova_compute[230713]: 2025-10-02 13:23:49.019 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:23:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:49.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:49 np0005465987 nova_compute[230713]: 2025-10-02 13:23:49.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:50.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:51.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:52.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:52 np0005465987 nova_compute[230713]: 2025-10-02 13:23:52.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:23:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:53.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:23:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:54.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:54 np0005465987 nova_compute[230713]: 2025-10-02 13:23:54.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:23:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/877821466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:23:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:23:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/877821466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:23:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:23:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:55.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:23:55 np0005465987 nova_compute[230713]: 2025-10-02 13:23:55.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:55 np0005465987 nova_compute[230713]: 2025-10-02 13:23:55.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:56.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.583099) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411436583142, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1173, "num_deletes": 255, "total_data_size": 2548917, "memory_usage": 2588288, "flush_reason": "Manual Compaction"}
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411436594046, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 1659869, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90427, "largest_seqno": 91595, "table_properties": {"data_size": 1654697, "index_size": 2631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11036, "raw_average_key_size": 19, "raw_value_size": 1644332, "raw_average_value_size": 2900, "num_data_blocks": 116, "num_entries": 567, "num_filter_entries": 567, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411345, "oldest_key_time": 1759411345, "file_creation_time": 1759411436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 11012 microseconds, and 5767 cpu microseconds.
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.594107) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 1659869 bytes OK
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.594177) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.595396) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.595412) EVENT_LOG_v1 {"time_micros": 1759411436595406, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.595429) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 2543283, prev total WAL file size 2543283, number of live WAL files 2.
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.596329) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353130' seq:72057594037927935, type:22 .. '6C6F676D0033373631' seq:0, type:0; will stop at (end)
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(1620KB)], [186(10MB)]
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411436596402, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 12941144, "oldest_snapshot_seqno": -1}
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 11167 keys, 12802756 bytes, temperature: kUnknown
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411436665105, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 12802756, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12732733, "index_size": 40982, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 296275, "raw_average_key_size": 26, "raw_value_size": 12539611, "raw_average_value_size": 1122, "num_data_blocks": 1551, "num_entries": 11167, "num_filter_entries": 11167, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759411436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.665755) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 12802756 bytes
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.667014) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.3 rd, 185.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.8 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(15.5) write-amplify(7.7) OK, records in: 11692, records dropped: 525 output_compression: NoCompression
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.667036) EVENT_LOG_v1 {"time_micros": 1759411436667026, "job": 120, "event": "compaction_finished", "compaction_time_micros": 69096, "compaction_time_cpu_micros": 30327, "output_level": 6, "num_output_files": 1, "total_output_size": 12802756, "num_input_records": 11692, "num_output_records": 11167, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411436667620, "job": 120, "event": "table_file_deletion", "file_number": 188}
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411436670575, "job": 120, "event": "table_file_deletion", "file_number": 186}
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.596171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.670724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.670731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.670733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.670734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:56 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:23:56.670735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:57 np0005465987 nova_compute[230713]: 2025-10-02 13:23:57.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:57.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:57 np0005465987 nova_compute[230713]: 2025-10-02 13:23:57.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:57 np0005465987 nova_compute[230713]: 2025-10-02 13:23:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:58 np0005465987 nova_compute[230713]: 2025-10-02 13:23:58.093 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:23:58 np0005465987 nova_compute[230713]: 2025-10-02 13:23:58.093 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:23:58 np0005465987 nova_compute[230713]: 2025-10-02 13:23:58.093 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:23:58 np0005465987 nova_compute[230713]: 2025-10-02 13:23:58.093 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:23:58 np0005465987 nova_compute[230713]: 2025-10-02 13:23:58.094 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:23:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:58.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:23:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1639181627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:23:58 np0005465987 nova_compute[230713]: 2025-10-02 13:23:58.511 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:23:58 np0005465987 nova_compute[230713]: 2025-10-02 13:23:58.660 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:23:58 np0005465987 nova_compute[230713]: 2025-10-02 13:23:58.661 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4251MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:23:58 np0005465987 nova_compute[230713]: 2025-10-02 13:23:58.662 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:23:58 np0005465987 nova_compute[230713]: 2025-10-02 13:23:58.662 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:23:58 np0005465987 nova_compute[230713]: 2025-10-02 13:23:58.849 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:23:58 np0005465987 nova_compute[230713]: 2025-10-02 13:23:58.849 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:23:58 np0005465987 nova_compute[230713]: 2025-10-02 13:23:58.956 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:23:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:23:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2579191126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:23:59 np0005465987 nova_compute[230713]: 2025-10-02 13:23:59.368 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:23:59 np0005465987 nova_compute[230713]: 2025-10-02 13:23:59.374 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:23:59 np0005465987 nova_compute[230713]: 2025-10-02 13:23:59.412 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:23:59 np0005465987 nova_compute[230713]: 2025-10-02 13:23:59.415 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:23:59 np0005465987 nova_compute[230713]: 2025-10-02 13:23:59.416 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:23:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:23:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:59.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:59 np0005465987 nova_compute[230713]: 2025-10-02 13:23:59.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:24:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:00.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:24:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:01.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:02.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:02 np0005465987 nova_compute[230713]: 2025-10-02 13:24:02.417 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:02 np0005465987 nova_compute[230713]: 2025-10-02 13:24:02.417 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:02 np0005465987 nova_compute[230713]: 2025-10-02 13:24:02.418 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:24:02 np0005465987 nova_compute[230713]: 2025-10-02 13:24:02.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:03.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:04.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:04 np0005465987 nova_compute[230713]: 2025-10-02 13:24:04.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:05.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:06.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:07 np0005465987 nova_compute[230713]: 2025-10-02 13:24:07.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:07.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:08.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:09.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:09 np0005465987 nova_compute[230713]: 2025-10-02 13:24:09.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:10.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:11.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:12.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:12 np0005465987 nova_compute[230713]: 2025-10-02 13:24:12.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:13.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:13 np0005465987 podman[328859]: 2025-10-02 13:24:13.833549736 +0000 UTC m=+0.055771992 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 09:24:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:14.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:14 np0005465987 nova_compute[230713]: 2025-10-02 13:24:14.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:15.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:16.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:16 np0005465987 podman[328879]: 2025-10-02 13:24:16.860685779 +0000 UTC m=+0.078881153 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  2 09:24:16 np0005465987 podman[328880]: 2025-10-02 13:24:16.9043504 +0000 UTC m=+0.108821860 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd)
Oct  2 09:24:16 np0005465987 podman[328878]: 2025-10-02 13:24:16.905000788 +0000 UTC m=+0.126098651 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 09:24:17 np0005465987 nova_compute[230713]: 2025-10-02 13:24:17.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:17.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:18.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:19.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:19 np0005465987 nova_compute[230713]: 2025-10-02 13:24:19.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:20.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:22.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:22 np0005465987 nova_compute[230713]: 2025-10-02 13:24:22.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:23.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:24.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:24 np0005465987 nova_compute[230713]: 2025-10-02 13:24:24.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:25.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:26.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:27 np0005465987 nova_compute[230713]: 2025-10-02 13:24:27.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:27.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:24:28.010 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:24:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:24:28.010 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:24:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:24:28.010 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:24:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:28.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:29.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:29 np0005465987 nova_compute[230713]: 2025-10-02 13:24:29.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:30.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:31.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:32.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:32 np0005465987 nova_compute[230713]: 2025-10-02 13:24:32.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:33.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:34.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:34 np0005465987 nova_compute[230713]: 2025-10-02 13:24:34.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:35.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:36.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:37 np0005465987 nova_compute[230713]: 2025-10-02 13:24:37.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:37.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:38 np0005465987 nova_compute[230713]: 2025-10-02 13:24:38.053 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:38.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:39.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:39 np0005465987 nova_compute[230713]: 2025-10-02 13:24:39.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:40.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:40 np0005465987 nova_compute[230713]: 2025-10-02 13:24:40.960 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:40 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct  2 09:24:41 np0005465987 radosgw[82824]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct  2 09:24:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:41.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:42.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:42 np0005465987 nova_compute[230713]: 2025-10-02 13:24:42.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:43.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:44 np0005465987 podman[328966]: 2025-10-02 13:24:44.249934564 +0000 UTC m=+0.055108975 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:24:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:44.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:44 np0005465987 nova_compute[230713]: 2025-10-02 13:24:44.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:24:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:24:45 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:24:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:46.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:47 np0005465987 nova_compute[230713]: 2025-10-02 13:24:47.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:47.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:47 np0005465987 podman[329094]: 2025-10-02 13:24:47.840986161 +0000 UTC m=+0.053610934 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:24:47 np0005465987 podman[329093]: 2025-10-02 13:24:47.841653449 +0000 UTC m=+0.058807766 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:24:47 np0005465987 podman[329092]: 2025-10-02 13:24:47.864256806 +0000 UTC m=+0.081895206 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:24:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:48.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:48 np0005465987 nova_compute[230713]: 2025-10-02 13:24:48.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:48 np0005465987 nova_compute[230713]: 2025-10-02 13:24:48.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:24:48 np0005465987 nova_compute[230713]: 2025-10-02 13:24:48.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:24:48 np0005465987 nova_compute[230713]: 2025-10-02 13:24:48.987 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:24:48 np0005465987 nova_compute[230713]: 2025-10-02 13:24:48.987 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:49.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:49 np0005465987 nova_compute[230713]: 2025-10-02 13:24:49.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:50.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:51.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:24:52 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:24:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:52.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:52 np0005465987 nova_compute[230713]: 2025-10-02 13:24:52.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:53.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:54.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:54 np0005465987 nova_compute[230713]: 2025-10-02 13:24:54.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:24:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:55.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:24:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:56.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:57 np0005465987 nova_compute[230713]: 2025-10-02 13:24:57.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:57.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:57 np0005465987 nova_compute[230713]: 2025-10-02 13:24:57.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:57 np0005465987 nova_compute[230713]: 2025-10-02 13:24:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:57 np0005465987 nova_compute[230713]: 2025-10-02 13:24:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:57 np0005465987 nova_compute[230713]: 2025-10-02 13:24:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:58 np0005465987 nova_compute[230713]: 2025-10-02 13:24:58.019 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:24:58 np0005465987 nova_compute[230713]: 2025-10-02 13:24:58.019 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:24:58 np0005465987 nova_compute[230713]: 2025-10-02 13:24:58.019 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:24:58 np0005465987 nova_compute[230713]: 2025-10-02 13:24:58.020 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:24:58 np0005465987 nova_compute[230713]: 2025-10-02 13:24:58.020 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:24:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:58.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:24:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1705947348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:24:58 np0005465987 nova_compute[230713]: 2025-10-02 13:24:58.456 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:24:58 np0005465987 nova_compute[230713]: 2025-10-02 13:24:58.599 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:24:58 np0005465987 nova_compute[230713]: 2025-10-02 13:24:58.601 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4265MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:24:58 np0005465987 nova_compute[230713]: 2025-10-02 13:24:58.601 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:24:58 np0005465987 nova_compute[230713]: 2025-10-02 13:24:58.601 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:24:58 np0005465987 nova_compute[230713]: 2025-10-02 13:24:58.705 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:24:58 np0005465987 nova_compute[230713]: 2025-10-02 13:24:58.706 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:24:58 np0005465987 nova_compute[230713]: 2025-10-02 13:24:58.728 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:24:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:24:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2004890090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:24:59 np0005465987 nova_compute[230713]: 2025-10-02 13:24:59.149 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:24:59 np0005465987 nova_compute[230713]: 2025-10-02 13:24:59.155 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:24:59 np0005465987 nova_compute[230713]: 2025-10-02 13:24:59.348 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:24:59 np0005465987 nova_compute[230713]: 2025-10-02 13:24:59.349 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:24:59 np0005465987 nova_compute[230713]: 2025-10-02 13:24:59.350 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:24:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:24:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:59.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:59 np0005465987 nova_compute[230713]: 2025-10-02 13:24:59.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:00.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:01.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:02.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:02 np0005465987 nova_compute[230713]: 2025-10-02 13:25:02.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:03 np0005465987 nova_compute[230713]: 2025-10-02 13:25:03.350 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:03 np0005465987 nova_compute[230713]: 2025-10-02 13:25:03.351 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:03 np0005465987 nova_compute[230713]: 2025-10-02 13:25:03.351 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:25:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:03.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:04.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:04 np0005465987 nova_compute[230713]: 2025-10-02 13:25:04.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:05.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:06.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:07 np0005465987 nova_compute[230713]: 2025-10-02 13:25:07.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:07.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:08.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:09.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:09 np0005465987 nova_compute[230713]: 2025-10-02 13:25:09.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:09 np0005465987 nova_compute[230713]: 2025-10-02 13:25:09.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:25:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:10.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:25:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:11.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:12.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:12 np0005465987 nova_compute[230713]: 2025-10-02 13:25:12.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:13.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:14 np0005465987 podman[329249]: 2025-10-02 13:25:14.822256804 +0000 UTC m=+0.043953601 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:25:14 np0005465987 nova_compute[230713]: 2025-10-02 13:25:14.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:15.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:16.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:17 np0005465987 nova_compute[230713]: 2025-10-02 13:25:17.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:17.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:18.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:18 np0005465987 podman[329271]: 2025-10-02 13:25:18.839589682 +0000 UTC m=+0.051104185 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:25:18 np0005465987 podman[329270]: 2025-10-02 13:25:18.860576345 +0000 UTC m=+0.074943616 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 09:25:18 np0005465987 podman[329272]: 2025-10-02 13:25:18.867047851 +0000 UTC m=+0.074904714 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:25:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:19.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:19 np0005465987 nova_compute[230713]: 2025-10-02 13:25:19.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:20.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:20 np0005465987 nova_compute[230713]: 2025-10-02 13:25:20.796 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:20 np0005465987 nova_compute[230713]: 2025-10-02 13:25:20.796 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:25:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:21.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:22 np0005465987 nova_compute[230713]: 2025-10-02 13:25:22.116 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:22.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:22 np0005465987 nova_compute[230713]: 2025-10-02 13:25:22.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:23.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:24 np0005465987 nova_compute[230713]: 2025-10-02 13:25:24.029 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:24.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:24 np0005465987 nova_compute[230713]: 2025-10-02 13:25:24.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:25.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:26.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:27 np0005465987 nova_compute[230713]: 2025-10-02 13:25:27.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:27.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:25:28.010 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:25:28.010 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:25:28.010 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:28.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:28 np0005465987 nova_compute[230713]: 2025-10-02 13:25:28.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:28 np0005465987 nova_compute[230713]: 2025-10-02 13:25:28.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:25:29 np0005465987 nova_compute[230713]: 2025-10-02 13:25:29.039 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:25:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:29.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:29 np0005465987 nova_compute[230713]: 2025-10-02 13:25:29.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:30.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:31.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:32.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:32 np0005465987 nova_compute[230713]: 2025-10-02 13:25:32.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:33.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:34.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:34 np0005465987 nova_compute[230713]: 2025-10-02 13:25:34.577 2 DEBUG oslo_concurrency.processutils [None req-5849a452-6b6a-4a68-83cf-321285824690 57823c2cf8a04b2abc574ed057efc3db 1533ac528d35434c826050eed402afba - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:25:34 np0005465987 nova_compute[230713]: 2025-10-02 13:25:34.618 2 DEBUG oslo_concurrency.processutils [None req-5849a452-6b6a-4a68-83cf-321285824690 57823c2cf8a04b2abc574ed057efc3db 1533ac528d35434c826050eed402afba - - default default] CMD "env LANG=C uptime" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:25:35 np0005465987 nova_compute[230713]: 2025-10-02 13:25:35.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:35.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:36.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:37 np0005465987 nova_compute[230713]: 2025-10-02 13:25:37.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:37.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:38.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:39.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:40 np0005465987 nova_compute[230713]: 2025-10-02 13:25:40.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:40.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:41.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:42.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:42 np0005465987 nova_compute[230713]: 2025-10-02 13:25:42.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:42 np0005465987 nova_compute[230713]: 2025-10-02 13:25:42.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:25:42.776 138325 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:77:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f6:43:27:7a:f9:f1'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:25:42 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:25:42.777 138325 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:25:43 np0005465987 nova_compute[230713]: 2025-10-02 13:25:43.032 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:43.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:44.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:45 np0005465987 nova_compute[230713]: 2025-10-02 13:25:45.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:45 np0005465987 podman[329334]: 2025-10-02 13:25:45.825469665 +0000 UTC m=+0.045205925 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 09:25:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:45.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:46.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:47 np0005465987 nova_compute[230713]: 2025-10-02 13:25:47.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:47 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:25:47.779 138325 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e4c8874a-a81b-4869-98e4-aca2e3f3bf40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:25:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:47.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:48.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:49.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:49 np0005465987 podman[329354]: 2025-10-02 13:25:49.8834337 +0000 UTC m=+0.094928739 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:25:49 np0005465987 podman[329353]: 2025-10-02 13:25:49.883411 +0000 UTC m=+0.095864205 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid)
Oct  2 09:25:49 np0005465987 podman[329352]: 2025-10-02 13:25:49.905640936 +0000 UTC m=+0.117675629 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct  2 09:25:49 np0005465987 nova_compute[230713]: 2025-10-02 13:25:49.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:49 np0005465987 nova_compute[230713]: 2025-10-02 13:25:49.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:25:49 np0005465987 nova_compute[230713]: 2025-10-02 13:25:49.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:25:49 np0005465987 nova_compute[230713]: 2025-10-02 13:25:49.979 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:25:50 np0005465987 nova_compute[230713]: 2025-10-02 13:25:50.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:25:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:50.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:25:50 np0005465987 nova_compute[230713]: 2025-10-02 13:25:50.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:51.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:52.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:52 np0005465987 nova_compute[230713]: 2025-10-02 13:25:52.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:53.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:54.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:55 np0005465987 nova_compute[230713]: 2025-10-02 13:25:55.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:25:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1754120143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:25:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:25:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1754120143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:25:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:25:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:25:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:25:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:25:55 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:25:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:55.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:56.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:57 np0005465987 nova_compute[230713]: 2025-10-02 13:25:57.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:57.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:57 np0005465987 nova_compute[230713]: 2025-10-02 13:25:57.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:57 np0005465987 nova_compute[230713]: 2025-10-02 13:25:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:57 np0005465987 nova_compute[230713]: 2025-10-02 13:25:57.993 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:57 np0005465987 nova_compute[230713]: 2025-10-02 13:25:57.994 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:57 np0005465987 nova_compute[230713]: 2025-10-02 13:25:57.994 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:57 np0005465987 nova_compute[230713]: 2025-10-02 13:25:57.994 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:25:57 np0005465987 nova_compute[230713]: 2025-10-02 13:25:57.994 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:25:58 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:25:58 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2468123150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:25:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:58.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:25:58 np0005465987 nova_compute[230713]: 2025-10-02 13:25:58.431 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:25:58 np0005465987 nova_compute[230713]: 2025-10-02 13:25:58.561 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:25:58 np0005465987 nova_compute[230713]: 2025-10-02 13:25:58.562 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4274MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:25:58 np0005465987 nova_compute[230713]: 2025-10-02 13:25:58.563 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:58 np0005465987 nova_compute[230713]: 2025-10-02 13:25:58.563 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:58 np0005465987 nova_compute[230713]: 2025-10-02 13:25:58.640 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:25:58 np0005465987 nova_compute[230713]: 2025-10-02 13:25:58.641 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:25:58 np0005465987 nova_compute[230713]: 2025-10-02 13:25:58.654 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:25:58 np0005465987 nova_compute[230713]: 2025-10-02 13:25:58.670 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:25:58 np0005465987 nova_compute[230713]: 2025-10-02 13:25:58.670 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:25:58 np0005465987 nova_compute[230713]: 2025-10-02 13:25:58.682 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:25:58 np0005465987 nova_compute[230713]: 2025-10-02 13:25:58.700 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:25:58 np0005465987 nova_compute[230713]: 2025-10-02 13:25:58.715 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:25:59 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:25:59 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1508713232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:25:59 np0005465987 nova_compute[230713]: 2025-10-02 13:25:59.144 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:25:59 np0005465987 nova_compute[230713]: 2025-10-02 13:25:59.151 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:25:59 np0005465987 nova_compute[230713]: 2025-10-02 13:25:59.170 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:25:59 np0005465987 nova_compute[230713]: 2025-10-02 13:25:59.173 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:25:59 np0005465987 nova_compute[230713]: 2025-10-02 13:25:59.173 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:25:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:25:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:59.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:00 np0005465987 nova_compute[230713]: 2025-10-02 13:26:00.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:00 np0005465987 nova_compute[230713]: 2025-10-02 13:26:00.174 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:00 np0005465987 nova_compute[230713]: 2025-10-02 13:26:00.175 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:26:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:00.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:01.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:01 np0005465987 nova_compute[230713]: 2025-10-02 13:26:01.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.094919) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411562094954, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1439, "num_deletes": 251, "total_data_size": 3319642, "memory_usage": 3358976, "flush_reason": "Manual Compaction"}
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411562106986, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 2179196, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91601, "largest_seqno": 93034, "table_properties": {"data_size": 2173059, "index_size": 3399, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12937, "raw_average_key_size": 19, "raw_value_size": 2160817, "raw_average_value_size": 3339, "num_data_blocks": 150, "num_entries": 647, "num_filter_entries": 647, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411437, "oldest_key_time": 1759411437, "file_creation_time": 1759411562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 12115 microseconds, and 5542 cpu microseconds.
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.107031) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 2179196 bytes OK
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.107050) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.108656) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.108670) EVENT_LOG_v1 {"time_micros": 1759411562108666, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.108687) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 3313026, prev total WAL file size 3313026, number of live WAL files 2.
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.109577) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(2128KB)], [189(12MB)]
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411562109617, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 14981952, "oldest_snapshot_seqno": -1}
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 11297 keys, 13035281 bytes, temperature: kUnknown
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411562166819, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 13035281, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12964239, "index_size": 41681, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28293, "raw_key_size": 299642, "raw_average_key_size": 26, "raw_value_size": 12768632, "raw_average_value_size": 1130, "num_data_blocks": 1576, "num_entries": 11297, "num_filter_entries": 11297, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759411562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.167043) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 13035281 bytes
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.168574) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 261.6 rd, 227.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 12.2 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(12.9) write-amplify(6.0) OK, records in: 11814, records dropped: 517 output_compression: NoCompression
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.168592) EVENT_LOG_v1 {"time_micros": 1759411562168584, "job": 122, "event": "compaction_finished", "compaction_time_micros": 57260, "compaction_time_cpu_micros": 31960, "output_level": 6, "num_output_files": 1, "total_output_size": 13035281, "num_input_records": 11814, "num_output_records": 11297, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411562169069, "job": 122, "event": "table_file_deletion", "file_number": 191}
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411562170869, "job": 122, "event": "table_file_deletion", "file_number": 189}
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.109476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.170928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.170932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.170933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.170935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:26:02.170936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:26:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:02.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:02 np0005465987 nova_compute[230713]: 2025-10-02 13:26:02.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:26:02 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:26:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:03.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:03 np0005465987 nova_compute[230713]: 2025-10-02 13:26:03.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:03 np0005465987 nova_compute[230713]: 2025-10-02 13:26:03.964 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:26:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:04.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:05 np0005465987 nova_compute[230713]: 2025-10-02 13:26:05.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:05.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:07 np0005465987 nova_compute[230713]: 2025-10-02 13:26:07.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:07.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:26:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:09.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:26:10 np0005465987 nova_compute[230713]: 2025-10-02 13:26:10.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:10.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:11.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:12.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:12 np0005465987 nova_compute[230713]: 2025-10-02 13:26:12.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:13.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:14.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:15 np0005465987 nova_compute[230713]: 2025-10-02 13:26:15.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:26:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:15.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:16.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:16 np0005465987 podman[329639]: 2025-10-02 13:26:16.852189956 +0000 UTC m=+0.074607117 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:26:17 np0005465987 nova_compute[230713]: 2025-10-02 13:26:17.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:26:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:17.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:18.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:19.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:20 np0005465987 nova_compute[230713]: 2025-10-02 13:26:20.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:20.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:20 np0005465987 podman[329661]: 2025-10-02 13:26:20.856513658 +0000 UTC m=+0.068277163 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:26:20 np0005465987 podman[329659]: 2025-10-02 13:26:20.868220897 +0000 UTC m=+0.091545128 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:26:20 np0005465987 podman[329660]: 2025-10-02 13:26:20.874127909 +0000 UTC m=+0.089168144 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:26:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:21.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:26:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:22.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:22 np0005465987 nova_compute[230713]: 2025-10-02 13:26:22.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:23.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:24.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:25 np0005465987 nova_compute[230713]: 2025-10-02 13:26:25.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:25.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:26.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:27 np0005465987 nova_compute[230713]: 2025-10-02 13:26:27.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:26:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:27.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:26:28.011 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:26:28.011 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:26:28.011 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:26:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:28.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:29.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:30 np0005465987 nova_compute[230713]: 2025-10-02 13:26:30.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:30.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:26:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:31.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:32.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:32 np0005465987 nova_compute[230713]: 2025-10-02 13:26:32.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:26:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:33.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:34.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:35 np0005465987 nova_compute[230713]: 2025-10-02 13:26:35.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:26:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:35.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:36.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:37 np0005465987 nova_compute[230713]: 2025-10-02 13:26:37.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:37.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:38.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:39.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:40 np0005465987 nova_compute[230713]: 2025-10-02 13:26:40.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:26:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:40.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:26:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:41.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:42.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:42 np0005465987 nova_compute[230713]: 2025-10-02 13:26:42.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:26:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:43.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:43 np0005465987 nova_compute[230713]: 2025-10-02 13:26:43.957 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:44.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:45 np0005465987 nova_compute[230713]: 2025-10-02 13:26:45.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:45.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:26:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:46.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:26:47 np0005465987 nova_compute[230713]: 2025-10-02 13:26:47.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:47 np0005465987 podman[329725]: 2025-10-02 13:26:47.824468251 +0000 UTC m=+0.048271398 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 09:26:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:47.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:26:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:48.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:49.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:50 np0005465987 nova_compute[230713]: 2025-10-02 13:26:50.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:26:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:50.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:50 np0005465987 nova_compute[230713]: 2025-10-02 13:26:50.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:50 np0005465987 nova_compute[230713]: 2025-10-02 13:26:50.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:26:50 np0005465987 nova_compute[230713]: 2025-10-02 13:26:50.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:26:51 np0005465987 nova_compute[230713]: 2025-10-02 13:26:51.083 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:26:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:51 np0005465987 podman[329746]: 2025-10-02 13:26:51.838779478 +0000 UTC m=+0.051885798 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:26:51 np0005465987 podman[329744]: 2025-10-02 13:26:51.86343706 +0000 UTC m=+0.084286561 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:26:51 np0005465987 podman[329745]: 2025-10-02 13:26:51.872504148 +0000 UTC m=+0.083855400 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:26:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:51.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:51 np0005465987 nova_compute[230713]: 2025-10-02 13:26:51.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:52.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:52 np0005465987 nova_compute[230713]: 2025-10-02 13:26:52.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:53.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:54.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:55 np0005465987 nova_compute[230713]: 2025-10-02 13:26:55.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:26:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:55.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:26:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:56.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:57 np0005465987 nova_compute[230713]: 2025-10-02 13:26:57.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:57.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:58.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:58 np0005465987 nova_compute[230713]: 2025-10-02 13:26:58.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:26:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:59.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:59 np0005465987 nova_compute[230713]: 2025-10-02 13:26:59.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:59 np0005465987 nova_compute[230713]: 2025-10-02 13:26:59.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:00 np0005465987 nova_compute[230713]: 2025-10-02 13:27:00.032 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:00 np0005465987 nova_compute[230713]: 2025-10-02 13:27:00.032 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:00 np0005465987 nova_compute[230713]: 2025-10-02 13:27:00.032 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:00 np0005465987 nova_compute[230713]: 2025-10-02 13:27:00.032 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:27:00 np0005465987 nova_compute[230713]: 2025-10-02 13:27:00.033 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:00 np0005465987 nova_compute[230713]: 2025-10-02 13:27:00.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:27:00 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3144941086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:27:00 np0005465987 nova_compute[230713]: 2025-10-02 13:27:00.457 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:00.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:00 np0005465987 nova_compute[230713]: 2025-10-02 13:27:00.623 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:27:00 np0005465987 nova_compute[230713]: 2025-10-02 13:27:00.624 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4281MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:27:00 np0005465987 nova_compute[230713]: 2025-10-02 13:27:00.624 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:00 np0005465987 nova_compute[230713]: 2025-10-02 13:27:00.624 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:01 np0005465987 nova_compute[230713]: 2025-10-02 13:27:01.293 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:27:01 np0005465987 nova_compute[230713]: 2025-10-02 13:27:01.294 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:27:01 np0005465987 nova_compute[230713]: 2025-10-02 13:27:01.310 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:27:01 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1429653974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:27:01 np0005465987 nova_compute[230713]: 2025-10-02 13:27:01.784 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:01 np0005465987 nova_compute[230713]: 2025-10-02 13:27:01.791 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:27:01 np0005465987 nova_compute[230713]: 2025-10-02 13:27:01.825 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:27:01 np0005465987 nova_compute[230713]: 2025-10-02 13:27:01.827 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:27:01 np0005465987 nova_compute[230713]: 2025-10-02 13:27:01.827 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:01.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:02.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:02 np0005465987 nova_compute[230713]: 2025-10-02 13:27:02.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:02 np0005465987 nova_compute[230713]: 2025-10-02 13:27:02.827 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:27:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:27:03 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:27:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:03.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:03 np0005465987 nova_compute[230713]: 2025-10-02 13:27:03.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:03 np0005465987 nova_compute[230713]: 2025-10-02 13:27:03.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:03 np0005465987 nova_compute[230713]: 2025-10-02 13:27:03.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:27:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:04.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:05 np0005465987 nova_compute[230713]: 2025-10-02 13:27:05.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:05.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:06.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:07 np0005465987 nova_compute[230713]: 2025-10-02 13:27:07.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:07.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:08.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:09.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:10 np0005465987 nova_compute[230713]: 2025-10-02 13:27:10.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:10.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:27:10 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:27:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:11.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:12.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:12 np0005465987 nova_compute[230713]: 2025-10-02 13:27:12.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:13.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:14.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:15 np0005465987 nova_compute[230713]: 2025-10-02 13:27:15.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:15.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:16.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:17 np0005465987 nova_compute[230713]: 2025-10-02 13:27:17.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:17.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:18.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:18 np0005465987 podman[330034]: 2025-10-02 13:27:18.862746778 +0000 UTC m=+0.077496145 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:27:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:19.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:20 np0005465987 nova_compute[230713]: 2025-10-02 13:27:20.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:20.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:21.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:22.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:22 np0005465987 podman[330056]: 2025-10-02 13:27:22.839628864 +0000 UTC m=+0.053841220 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:27:22 np0005465987 podman[330054]: 2025-10-02 13:27:22.854421008 +0000 UTC m=+0.074940416 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:27:22 np0005465987 podman[330055]: 2025-10-02 13:27:22.864414061 +0000 UTC m=+0.081802673 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:27:22 np0005465987 nova_compute[230713]: 2025-10-02 13:27:22.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:23.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:24.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:25 np0005465987 nova_compute[230713]: 2025-10-02 13:27:25.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:25.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:26.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:26 np0005465987 nova_compute[230713]: 2025-10-02 13:27:26.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:27 np0005465987 nova_compute[230713]: 2025-10-02 13:27:27.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:27.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:27:28.012 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:27:28.012 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:27:28.013 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:28.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:29.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:30 np0005465987 nova_compute[230713]: 2025-10-02 13:27:30.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:30.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:31.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:32.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:32 np0005465987 nova_compute[230713]: 2025-10-02 13:27:32.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:33.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:34.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:35 np0005465987 nova_compute[230713]: 2025-10-02 13:27:35.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:36.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:36.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:38.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:38 np0005465987 nova_compute[230713]: 2025-10-02 13:27:38.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:38.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:40.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:40 np0005465987 nova_compute[230713]: 2025-10-02 13:27:40.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:40.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:42.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:42.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:43 np0005465987 nova_compute[230713]: 2025-10-02 13:27:43.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:43 np0005465987 nova_compute[230713]: 2025-10-02 13:27:43.992 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:44.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:27:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:44.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:27:45 np0005465987 nova_compute[230713]: 2025-10-02 13:27:45.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:46.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:46.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:48.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:48 np0005465987 nova_compute[230713]: 2025-10-02 13:27:48.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:48.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:49 np0005465987 podman[330119]: 2025-10-02 13:27:49.829760959 +0000 UTC m=+0.046622824 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:27:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:50.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:50 np0005465987 nova_compute[230713]: 2025-10-02 13:27:50.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:50.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:52.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:52.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:52 np0005465987 nova_compute[230713]: 2025-10-02 13:27:52.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:52 np0005465987 nova_compute[230713]: 2025-10-02 13:27:52.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:27:52 np0005465987 nova_compute[230713]: 2025-10-02 13:27:52.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:27:53 np0005465987 nova_compute[230713]: 2025-10-02 13:27:53.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:53 np0005465987 podman[330140]: 2025-10-02 13:27:53.864462931 +0000 UTC m=+0.072024446 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:27:53 np0005465987 podman[330141]: 2025-10-02 13:27:53.877784145 +0000 UTC m=+0.092146635 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 09:27:53 np0005465987 podman[330139]: 2025-10-02 13:27:53.886276727 +0000 UTC m=+0.108723348 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Oct  2 09:27:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:54.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:54.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:55 np0005465987 nova_compute[230713]: 2025-10-02 13:27:55.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:56.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:56 np0005465987 nova_compute[230713]: 2025-10-02 13:27:56.195 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:27:56 np0005465987 nova_compute[230713]: 2025-10-02 13:27:56.196 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:56.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:58.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:58 np0005465987 nova_compute[230713]: 2025-10-02 13:27:58.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:27:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:27:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:58.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:27:58 np0005465987 nova_compute[230713]: 2025-10-02 13:27:58.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:59 np0005465987 nova_compute[230713]: 2025-10-02 13:27:59.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:00.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:00 np0005465987 nova_compute[230713]: 2025-10-02 13:28:00.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:00.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:00 np0005465987 nova_compute[230713]: 2025-10-02 13:28:00.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:01 np0005465987 nova_compute[230713]: 2025-10-02 13:28:01.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:02 np0005465987 nova_compute[230713]: 2025-10-02 13:28:02.004 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:02 np0005465987 nova_compute[230713]: 2025-10-02 13:28:02.004 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:02 np0005465987 nova_compute[230713]: 2025-10-02 13:28:02.004 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:02 np0005465987 nova_compute[230713]: 2025-10-02 13:28:02.004 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:28:02 np0005465987 nova_compute[230713]: 2025-10-02 13:28:02.005 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:02.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:28:02 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1637292816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:28:02 np0005465987 nova_compute[230713]: 2025-10-02 13:28:02.427 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:02 np0005465987 nova_compute[230713]: 2025-10-02 13:28:02.559 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:28:02 np0005465987 nova_compute[230713]: 2025-10-02 13:28:02.560 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4257MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:28:02 np0005465987 nova_compute[230713]: 2025-10-02 13:28:02.561 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:02 np0005465987 nova_compute[230713]: 2025-10-02 13:28:02.561 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:02.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:02 np0005465987 nova_compute[230713]: 2025-10-02 13:28:02.892 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:28:02 np0005465987 nova_compute[230713]: 2025-10-02 13:28:02.892 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:28:02 np0005465987 nova_compute[230713]: 2025-10-02 13:28:02.944 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:03 np0005465987 nova_compute[230713]: 2025-10-02 13:28:03.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:28:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/990183815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:28:03 np0005465987 nova_compute[230713]: 2025-10-02 13:28:03.370 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:03 np0005465987 nova_compute[230713]: 2025-10-02 13:28:03.376 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:28:03 np0005465987 nova_compute[230713]: 2025-10-02 13:28:03.409 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:28:03 np0005465987 nova_compute[230713]: 2025-10-02 13:28:03.410 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:28:03 np0005465987 nova_compute[230713]: 2025-10-02 13:28:03.410 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:04.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:04.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:05 np0005465987 nova_compute[230713]: 2025-10-02 13:28:05.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:05 np0005465987 nova_compute[230713]: 2025-10-02 13:28:05.411 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:05 np0005465987 nova_compute[230713]: 2025-10-02 13:28:05.411 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:05 np0005465987 nova_compute[230713]: 2025-10-02 13:28:05.412 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:28:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:06.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:06.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:08.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:08 np0005465987 nova_compute[230713]: 2025-10-02 13:28:08.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:08.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:10.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:10 np0005465987 nova_compute[230713]: 2025-10-02 13:28:10.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:10.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:28:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:28:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:28:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:28:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:28:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:28:11 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:28:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:12.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:12.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:13 np0005465987 nova_compute[230713]: 2025-10-02 13:28:13.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:14.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:14.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:15 np0005465987 nova_compute[230713]: 2025-10-02 13:28:15.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:16.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:16.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:28:17 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:28:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:18.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:18 np0005465987 nova_compute[230713]: 2025-10-02 13:28:18.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:18.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:20.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:20 np0005465987 nova_compute[230713]: 2025-10-02 13:28:20.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:20.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:20 np0005465987 podman[330551]: 2025-10-02 13:28:20.839846754 +0000 UTC m=+0.061207932 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 09:28:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:22.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:22.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:23 np0005465987 nova_compute[230713]: 2025-10-02 13:28:23.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:24.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:24.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:24 np0005465987 podman[330572]: 2025-10-02 13:28:24.838545404 +0000 UTC m=+0.054739425 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct  2 09:28:24 np0005465987 podman[330570]: 2025-10-02 13:28:24.863550726 +0000 UTC m=+0.086471950 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:28:24 np0005465987 podman[330571]: 2025-10-02 13:28:24.875458551 +0000 UTC m=+0.090260954 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct  2 09:28:25 np0005465987 nova_compute[230713]: 2025-10-02 13:28:25.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:26.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:26.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:28:28.013 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:28:28.013 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:28:28.014 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:28.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:28 np0005465987 nova_compute[230713]: 2025-10-02 13:28:28.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:28.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:30.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:30 np0005465987 nova_compute[230713]: 2025-10-02 13:28:30.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:30.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:32.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:32.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:33 np0005465987 nova_compute[230713]: 2025-10-02 13:28:33.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:34.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:34.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:35 np0005465987 nova_compute[230713]: 2025-10-02 13:28:35.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:36.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:36.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:38.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:38 np0005465987 nova_compute[230713]: 2025-10-02 13:28:38.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:38.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:40.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:40 np0005465987 nova_compute[230713]: 2025-10-02 13:28:40.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:40.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:42.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:42.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:43 np0005465987 nova_compute[230713]: 2025-10-02 13:28:43.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:44.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:44.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:44 np0005465987 nova_compute[230713]: 2025-10-02 13:28:44.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:45 np0005465987 nova_compute[230713]: 2025-10-02 13:28:45.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:46.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:46.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:48.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:48 np0005465987 nova_compute[230713]: 2025-10-02 13:28:48.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:48.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:50.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:50 np0005465987 nova_compute[230713]: 2025-10-02 13:28:50.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:50.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:51 np0005465987 podman[330631]: 2025-10-02 13:28:51.831698913 +0000 UTC m=+0.054993671 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:28:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:52.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:52.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:53 np0005465987 nova_compute[230713]: 2025-10-02 13:28:53.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:53 np0005465987 nova_compute[230713]: 2025-10-02 13:28:53.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:53 np0005465987 nova_compute[230713]: 2025-10-02 13:28:53.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:28:53 np0005465987 nova_compute[230713]: 2025-10-02 13:28:53.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:28:53 np0005465987 nova_compute[230713]: 2025-10-02 13:28:53.986 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:28:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:54.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:54.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:54 np0005465987 nova_compute[230713]: 2025-10-02 13:28:54.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:55 np0005465987 nova_compute[230713]: 2025-10-02 13:28:55.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:55 np0005465987 podman[330652]: 2025-10-02 13:28:55.836593592 +0000 UTC m=+0.049730438 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:28:55 np0005465987 podman[330651]: 2025-10-02 13:28:55.839466361 +0000 UTC m=+0.053880541 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 09:28:55 np0005465987 podman[330650]: 2025-10-02 13:28:55.852960869 +0000 UTC m=+0.071181194 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 09:28:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:56.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:28:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:56.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:28:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:58.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:58 np0005465987 nova_compute[230713]: 2025-10-02 13:28:58.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:28:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:58.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:29:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:00.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:29:00 np0005465987 nova_compute[230713]: 2025-10-02 13:29:00.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:00.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:00 np0005465987 nova_compute[230713]: 2025-10-02 13:29:00.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:01 np0005465987 nova_compute[230713]: 2025-10-02 13:29:01.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:01 np0005465987 nova_compute[230713]: 2025-10-02 13:29:01.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:01 np0005465987 nova_compute[230713]: 2025-10-02 13:29:01.992 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:01 np0005465987 nova_compute[230713]: 2025-10-02 13:29:01.992 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:01 np0005465987 nova_compute[230713]: 2025-10-02 13:29:01.992 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:01 np0005465987 nova_compute[230713]: 2025-10-02 13:29:01.992 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:29:01 np0005465987 nova_compute[230713]: 2025-10-02 13:29:01.992 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:29:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:02.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:29:02 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2494224005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:29:02 np0005465987 nova_compute[230713]: 2025-10-02 13:29:02.421 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:29:02 np0005465987 nova_compute[230713]: 2025-10-02 13:29:02.595 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:29:02 np0005465987 nova_compute[230713]: 2025-10-02 13:29:02.597 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4281MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:29:02 np0005465987 nova_compute[230713]: 2025-10-02 13:29:02.597 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:02 np0005465987 nova_compute[230713]: 2025-10-02 13:29:02.597 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:29:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:02.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:29:02 np0005465987 nova_compute[230713]: 2025-10-02 13:29:02.745 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:29:02 np0005465987 nova_compute[230713]: 2025-10-02 13:29:02.745 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:29:02 np0005465987 nova_compute[230713]: 2025-10-02 13:29:02.763 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:29:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:29:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/979004558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:29:03 np0005465987 nova_compute[230713]: 2025-10-02 13:29:03.232 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:29:03 np0005465987 nova_compute[230713]: 2025-10-02 13:29:03.239 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:29:03 np0005465987 nova_compute[230713]: 2025-10-02 13:29:03.267 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:29:03 np0005465987 nova_compute[230713]: 2025-10-02 13:29:03.269 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:29:03 np0005465987 nova_compute[230713]: 2025-10-02 13:29:03.270 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:03 np0005465987 nova_compute[230713]: 2025-10-02 13:29:03.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:04.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:04 np0005465987 nova_compute[230713]: 2025-10-02 13:29:04.270 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:04.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:04 np0005465987 nova_compute[230713]: 2025-10-02 13:29:04.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:04 np0005465987 nova_compute[230713]: 2025-10-02 13:29:04.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:29:05 np0005465987 nova_compute[230713]: 2025-10-02 13:29:05.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:05 np0005465987 nova_compute[230713]: 2025-10-02 13:29:05.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:06.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:06.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:08.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:08 np0005465987 nova_compute[230713]: 2025-10-02 13:29:08.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:29:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:08.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:29:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:10.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:10 np0005465987 nova_compute[230713]: 2025-10-02 13:29:10.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:10.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:12.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:12.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:13 np0005465987 nova_compute[230713]: 2025-10-02 13:29:13.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:14.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:29:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:14.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:29:15 np0005465987 nova_compute[230713]: 2025-10-02 13:29:15.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:16.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:16.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:18.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 09:29:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 09:29:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:29:18 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:29:18 np0005465987 nova_compute[230713]: 2025-10-02 13:29:18.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:18.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 09:29:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:29:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:29:19 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:29:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:20.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:20 np0005465987 nova_compute[230713]: 2025-10-02 13:29:20.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:20.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:29:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:22.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:29:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:22.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:22 np0005465987 podman[330889]: 2025-10-02 13:29:22.836081003 +0000 UTC m=+0.059038142 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:29:23 np0005465987 nova_compute[230713]: 2025-10-02 13:29:23.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:24.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:24.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:25 np0005465987 nova_compute[230713]: 2025-10-02 13:29:25.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:25 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:29:25 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:29:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:26.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:26.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:26 np0005465987 podman[330961]: 2025-10-02 13:29:26.845469864 +0000 UTC m=+0.057412718 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:29:26 np0005465987 podman[330960]: 2025-10-02 13:29:26.897454492 +0000 UTC m=+0.117244449 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:29:26 np0005465987 podman[330959]: 2025-10-02 13:29:26.911065623 +0000 UTC m=+0.132019042 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:29:28.015 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:29:28.015 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:29:28.015 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:28.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:28 np0005465987 nova_compute[230713]: 2025-10-02 13:29:28.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:28.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:28 np0005465987 nova_compute[230713]: 2025-10-02 13:29:28.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:30.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:30 np0005465987 nova_compute[230713]: 2025-10-02 13:29:30.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:30.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:32.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:32.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:33 np0005465987 nova_compute[230713]: 2025-10-02 13:29:33.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:34.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:34.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:35 np0005465987 nova_compute[230713]: 2025-10-02 13:29:35.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:36.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:36.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:29:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:38.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:29:38 np0005465987 nova_compute[230713]: 2025-10-02 13:29:38.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:29:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:38.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:29:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:29:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:40.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:29:40 np0005465987 nova_compute[230713]: 2025-10-02 13:29:40.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:40.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.426828) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411781426873, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2353, "num_deletes": 251, "total_data_size": 5821213, "memory_usage": 5888128, "flush_reason": "Manual Compaction"}
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411781443511, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 3807499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93040, "largest_seqno": 95387, "table_properties": {"data_size": 3798053, "index_size": 6003, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19116, "raw_average_key_size": 20, "raw_value_size": 3779237, "raw_average_value_size": 4007, "num_data_blocks": 262, "num_entries": 943, "num_filter_entries": 943, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411562, "oldest_key_time": 1759411562, "file_creation_time": 1759411781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 16733 microseconds, and 9235 cpu microseconds.
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.443560) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 3807499 bytes OK
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.443578) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.444773) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.444791) EVENT_LOG_v1 {"time_micros": 1759411781444786, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.444808) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 5811043, prev total WAL file size 5811043, number of live WAL files 2.
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.446013) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(3718KB)], [192(12MB)]
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411781446046, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 16842780, "oldest_snapshot_seqno": -1}
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 11721 keys, 14758429 bytes, temperature: kUnknown
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411781538845, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 14758429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14683630, "index_size": 44414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29317, "raw_key_size": 309080, "raw_average_key_size": 26, "raw_value_size": 14479548, "raw_average_value_size": 1235, "num_data_blocks": 1687, "num_entries": 11721, "num_filter_entries": 11721, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759411781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.539187) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 14758429 bytes
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.540762) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.2 rd, 158.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 12.4 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(8.3) write-amplify(3.9) OK, records in: 12240, records dropped: 519 output_compression: NoCompression
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.540779) EVENT_LOG_v1 {"time_micros": 1759411781540771, "job": 124, "event": "compaction_finished", "compaction_time_micros": 92936, "compaction_time_cpu_micros": 38936, "output_level": 6, "num_output_files": 1, "total_output_size": 14758429, "num_input_records": 12240, "num_output_records": 11721, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411781541498, "job": 124, "event": "table_file_deletion", "file_number": 194}
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411781543830, "job": 124, "event": "table_file_deletion", "file_number": 192}
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.445924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.543961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.543968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.543973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.543976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:29:41 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:29:41.543979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:29:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:42.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:42.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:43 np0005465987 nova_compute[230713]: 2025-10-02 13:29:43.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:44.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:44.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:45 np0005465987 nova_compute[230713]: 2025-10-02 13:29:45.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:46.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:46.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:46 np0005465987 nova_compute[230713]: 2025-10-02 13:29:46.984 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:48.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:48 np0005465987 nova_compute[230713]: 2025-10-02 13:29:48.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:48.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:50.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:50 np0005465987 nova_compute[230713]: 2025-10-02 13:29:50.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:50.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:52.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:52.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:53 np0005465987 nova_compute[230713]: 2025-10-02 13:29:53.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:53 np0005465987 podman[331021]: 2025-10-02 13:29:53.829870903 +0000 UTC m=+0.050661883 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:29:53 np0005465987 nova_compute[230713]: 2025-10-02 13:29:53.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:53 np0005465987 nova_compute[230713]: 2025-10-02 13:29:53.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:29:53 np0005465987 nova_compute[230713]: 2025-10-02 13:29:53.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:29:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:29:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:54.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:29:54 np0005465987 nova_compute[230713]: 2025-10-02 13:29:54.293 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:29:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:29:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:54.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:29:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:29:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/532813708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:29:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:29:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/532813708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:29:55 np0005465987 nova_compute[230713]: 2025-10-02 13:29:55.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:55 np0005465987 nova_compute[230713]: 2025-10-02 13:29:55.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:56.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:56.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:57 np0005465987 podman[331043]: 2025-10-02 13:29:57.839507861 +0000 UTC m=+0.051147247 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 09:29:57 np0005465987 podman[331042]: 2025-10-02 13:29:57.83982752 +0000 UTC m=+0.054296553 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:29:57 np0005465987 podman[331041]: 2025-10-02 13:29:57.864403741 +0000 UTC m=+0.082880393 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:29:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:58.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:58 np0005465987 nova_compute[230713]: 2025-10-02 13:29:58.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:29:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:58.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:00.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:00 np0005465987 nova_compute[230713]: 2025-10-02 13:30:00.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:00 np0005465987 ceph-mon[80822]: overall HEALTH_OK
Oct  2 09:30:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:30:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:00.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:30:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:01 np0005465987 nova_compute[230713]: 2025-10-02 13:30:01.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:01 np0005465987 nova_compute[230713]: 2025-10-02 13:30:01.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:01 np0005465987 nova_compute[230713]: 2025-10-02 13:30:01.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:01 np0005465987 nova_compute[230713]: 2025-10-02 13:30:01.990 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:01 np0005465987 nova_compute[230713]: 2025-10-02 13:30:01.990 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:01 np0005465987 nova_compute[230713]: 2025-10-02 13:30:01.991 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:01 np0005465987 nova_compute[230713]: 2025-10-02 13:30:01.991 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:30:01 np0005465987 nova_compute[230713]: 2025-10-02 13:30:01.991 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:30:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:02.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:30:02 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:30:02 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/804517818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:30:02 np0005465987 nova_compute[230713]: 2025-10-02 13:30:02.439 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:02 np0005465987 nova_compute[230713]: 2025-10-02 13:30:02.628 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:30:02 np0005465987 nova_compute[230713]: 2025-10-02 13:30:02.630 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4279MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:30:02 np0005465987 nova_compute[230713]: 2025-10-02 13:30:02.630 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:02 np0005465987 nova_compute[230713]: 2025-10-02 13:30:02.630 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:02.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:02 np0005465987 nova_compute[230713]: 2025-10-02 13:30:02.878 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:30:02 np0005465987 nova_compute[230713]: 2025-10-02 13:30:02.879 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:30:02 np0005465987 nova_compute[230713]: 2025-10-02 13:30:02.905 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:30:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1008214551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:30:03 np0005465987 nova_compute[230713]: 2025-10-02 13:30:03.418 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:03 np0005465987 nova_compute[230713]: 2025-10-02 13:30:03.424 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:30:03 np0005465987 nova_compute[230713]: 2025-10-02 13:30:03.445 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:30:03 np0005465987 nova_compute[230713]: 2025-10-02 13:30:03.447 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:30:03 np0005465987 nova_compute[230713]: 2025-10-02 13:30:03.447 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:03 np0005465987 nova_compute[230713]: 2025-10-02 13:30:03.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:04.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:30:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:04.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:30:05 np0005465987 nova_compute[230713]: 2025-10-02 13:30:05.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:05 np0005465987 nova_compute[230713]: 2025-10-02 13:30:05.448 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:06.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:30:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:06.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:30:06 np0005465987 nova_compute[230713]: 2025-10-02 13:30:06.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:06 np0005465987 nova_compute[230713]: 2025-10-02 13:30:06.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:30:07 np0005465987 nova_compute[230713]: 2025-10-02 13:30:07.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:08.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:08 np0005465987 nova_compute[230713]: 2025-10-02 13:30:08.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:30:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:08.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:30:10 np0005465987 nova_compute[230713]: 2025-10-02 13:30:10.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:10.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:10.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:12.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:30:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:12.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:30:13 np0005465987 nova_compute[230713]: 2025-10-02 13:30:13.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:14.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:14.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:15 np0005465987 nova_compute[230713]: 2025-10-02 13:30:15.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:16.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:16.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:17 np0005465987 nova_compute[230713]: 2025-10-02 13:30:17.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:18 np0005465987 ceph-mds[84089]: mds.beacon.cephfs.compute-1.zfhmgy missed beacon ack from the monitors
Oct  2 09:30:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:30:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:18.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:30:18 np0005465987 nova_compute[230713]: 2025-10-02 13:30:18.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:18.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:20 np0005465987 nova_compute[230713]: 2025-10-02 13:30:20.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:20.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:20.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:21 np0005465987 nova_compute[230713]: 2025-10-02 13:30:21.985 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:21 np0005465987 nova_compute[230713]: 2025-10-02 13:30:21.986 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:30:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:22.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:30:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:22.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:30:23 np0005465987 nova_compute[230713]: 2025-10-02 13:30:23.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:24.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:24 np0005465987 podman[331147]: 2025-10-02 13:30:24.83884149 +0000 UTC m=+0.059276849 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, 
org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:30:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:30:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:24.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:30:25 np0005465987 nova_compute[230713]: 2025-10-02 13:30:25.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:26.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:26 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:30:26 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:30:26 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:30:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:30:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:26.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:30:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:30:28.016 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:30:28.016 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:30:28.016 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:28.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:28 np0005465987 nova_compute[230713]: 2025-10-02 13:30:28.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:28 np0005465987 podman[331299]: 2025-10-02 13:30:28.843603885 +0000 UTC m=+0.059387011 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 09:30:28 np0005465987 podman[331300]: 2025-10-02 13:30:28.849192357 +0000 UTC m=+0.060556073 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 09:30:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:28.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:28 np0005465987 podman[331298]: 2025-10-02 13:30:28.870567391 +0000 UTC m=+0.087359955 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:30:30 np0005465987 nova_compute[230713]: 2025-10-02 13:30:30.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:30.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:30.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:32.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:30:32 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:30:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:32.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:33 np0005465987 nova_compute[230713]: 2025-10-02 13:30:33.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:34.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:34.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:35 np0005465987 nova_compute[230713]: 2025-10-02 13:30:35.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:35 np0005465987 nova_compute[230713]: 2025-10-02 13:30:35.984 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:35 np0005465987 nova_compute[230713]: 2025-10-02 13:30:35.985 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:30:36 np0005465987 nova_compute[230713]: 2025-10-02 13:30:36.011 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:30:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:30:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:36.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:30:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:36.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:30:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:38.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:30:38 np0005465987 nova_compute[230713]: 2025-10-02 13:30:38.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:38.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:40 np0005465987 nova_compute[230713]: 2025-10-02 13:30:40.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:40.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:40.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:42.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:42.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.192362) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411843192402, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 812, "num_deletes": 251, "total_data_size": 1474799, "memory_usage": 1491112, "flush_reason": "Manual Compaction"}
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411843199949, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 635511, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95393, "largest_seqno": 96199, "table_properties": {"data_size": 632308, "index_size": 1046, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8764, "raw_average_key_size": 20, "raw_value_size": 625466, "raw_average_value_size": 1478, "num_data_blocks": 46, "num_entries": 423, "num_filter_entries": 423, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411782, "oldest_key_time": 1759411782, "file_creation_time": 1759411843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 7638 microseconds, and 2820 cpu microseconds.
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.199996) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 635511 bytes OK
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.200016) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.201781) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.201801) EVENT_LOG_v1 {"time_micros": 1759411843201790, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.201820) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 1470628, prev total WAL file size 1470628, number of live WAL files 2.
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.202365) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323632' seq:72057594037927935, type:22 .. '6D6772737461740033353134' seq:0, type:0; will stop at (end)
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(620KB)], [195(14MB)]
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411843202411, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 15393940, "oldest_snapshot_seqno": -1}
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 11651 keys, 11880347 bytes, temperature: kUnknown
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411843267803, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 11880347, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11810091, "index_size": 39999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29189, "raw_key_size": 307816, "raw_average_key_size": 26, "raw_value_size": 11611401, "raw_average_value_size": 996, "num_data_blocks": 1503, "num_entries": 11651, "num_filter_entries": 11651, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759411843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.268066) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 11880347 bytes
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.269362) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.2 rd, 181.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(42.9) write-amplify(18.7) OK, records in: 12144, records dropped: 493 output_compression: NoCompression
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.269379) EVENT_LOG_v1 {"time_micros": 1759411843269371, "job": 126, "event": "compaction_finished", "compaction_time_micros": 65461, "compaction_time_cpu_micros": 30684, "output_level": 6, "num_output_files": 1, "total_output_size": 11880347, "num_input_records": 12144, "num_output_records": 11651, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411843269631, "job": 126, "event": "table_file_deletion", "file_number": 197}
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411843272523, "job": 126, "event": "table_file_deletion", "file_number": 195}
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.202280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.272678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.272683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.272684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.272686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:43 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:30:43.272687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:43 np0005465987 nova_compute[230713]: 2025-10-02 13:30:43.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:44.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:30:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:44.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:30:45 np0005465987 nova_compute[230713]: 2025-10-02 13:30:45.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:46.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:46.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:46 np0005465987 nova_compute[230713]: 2025-10-02 13:30:46.984 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:48.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:48 np0005465987 nova_compute[230713]: 2025-10-02 13:30:48.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:48.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:50 np0005465987 nova_compute[230713]: 2025-10-02 13:30:50.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:50.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:50.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:52.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:52.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:53 np0005465987 nova_compute[230713]: 2025-10-02 13:30:53.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:54.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:54.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:30:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1122462114' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:30:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:30:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1122462114' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:30:55 np0005465987 nova_compute[230713]: 2025-10-02 13:30:55.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:55 np0005465987 podman[331408]: 2025-10-02 13:30:55.841831751 +0000 UTC m=+0.058582459 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:30:55 np0005465987 nova_compute[230713]: 2025-10-02 13:30:55.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:55 np0005465987 nova_compute[230713]: 2025-10-02 13:30:55.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:30:55 np0005465987 nova_compute[230713]: 2025-10-02 13:30:55.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:30:56 np0005465987 nova_compute[230713]: 2025-10-02 13:30:56.054 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:30:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:56.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:56.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:56 np0005465987 nova_compute[230713]: 2025-10-02 13:30:56.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:58.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:58 np0005465987 nova_compute[230713]: 2025-10-02 13:30:58.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:30:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:58.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:59 np0005465987 podman[331430]: 2025-10-02 13:30:59.836721298 +0000 UTC m=+0.050623422 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct  2 09:30:59 np0005465987 podman[331429]: 2025-10-02 13:30:59.837266092 +0000 UTC m=+0.055467194 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:30:59 np0005465987 podman[331428]: 2025-10-02 13:30:59.862011148 +0000 UTC m=+0.081354051 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:31:00 np0005465987 nova_compute[230713]: 2025-10-02 13:31:00.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:31:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:00.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:31:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:31:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:00.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:31:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:02.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:02.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:02 np0005465987 nova_compute[230713]: 2025-10-02 13:31:02.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:02 np0005465987 nova_compute[230713]: 2025-10-02 13:31:02.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.008 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.009 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.009 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.009 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.009 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:03 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:31:03 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1898096587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.475 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.648 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.649 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4289MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.650 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.650 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.827 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.828 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.841 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing inventories for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.858 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating ProviderTree inventory for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.859 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Updating inventory in ProviderTree for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.876 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing aggregate associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.917 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Refreshing trait associations for resource provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:31:03 np0005465987 nova_compute[230713]: 2025-10-02 13:31:03.941 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:04.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:31:04 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1579476215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:31:04 np0005465987 nova_compute[230713]: 2025-10-02 13:31:04.412 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:04 np0005465987 nova_compute[230713]: 2025-10-02 13:31:04.418 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:31:04 np0005465987 nova_compute[230713]: 2025-10-02 13:31:04.487 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:31:04 np0005465987 nova_compute[230713]: 2025-10-02 13:31:04.489 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:31:04 np0005465987 nova_compute[230713]: 2025-10-02 13:31:04.489 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:04.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:05 np0005465987 nova_compute[230713]: 2025-10-02 13:31:05.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:05 np0005465987 nova_compute[230713]: 2025-10-02 13:31:05.490 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:05 np0005465987 nova_compute[230713]: 2025-10-02 13:31:05.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:06.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:06.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:07 np0005465987 nova_compute[230713]: 2025-10-02 13:31:07.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:08.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:08 np0005465987 nova_compute[230713]: 2025-10-02 13:31:08.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:08.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:08 np0005465987 nova_compute[230713]: 2025-10-02 13:31:08.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:08 np0005465987 nova_compute[230713]: 2025-10-02 13:31:08.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:31:10 np0005465987 nova_compute[230713]: 2025-10-02 13:31:10.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:10.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:10.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:12.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:31:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:12.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:31:13 np0005465987 nova_compute[230713]: 2025-10-02 13:31:13.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:14.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:14.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:15 np0005465987 nova_compute[230713]: 2025-10-02 13:31:15.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:16.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:16.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:18.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.416034) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411878416289, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 597, "num_deletes": 256, "total_data_size": 917098, "memory_usage": 928384, "flush_reason": "Manual Compaction"}
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411878422924, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 604704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 96204, "largest_seqno": 96796, "table_properties": {"data_size": 601723, "index_size": 952, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6919, "raw_average_key_size": 18, "raw_value_size": 595657, "raw_average_value_size": 1584, "num_data_blocks": 44, "num_entries": 376, "num_filter_entries": 376, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411844, "oldest_key_time": 1759411844, "file_creation_time": 1759411878, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 6720 microseconds, and 2635 cpu microseconds.
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.422957) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 604704 bytes OK
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.422972) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.424752) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.424762) EVENT_LOG_v1 {"time_micros": 1759411878424759, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.424778) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 913731, prev total WAL file size 913731, number of live WAL files 2.
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.425247) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373630' seq:72057594037927935, type:22 .. '6C6F676D0034303132' seq:0, type:0; will stop at (end)
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(590KB)], [198(11MB)]
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411878425290, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 12485051, "oldest_snapshot_seqno": -1}
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 11503 keys, 12368566 bytes, temperature: kUnknown
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411878494691, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 12368566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12298263, "index_size": 40430, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28805, "raw_key_size": 305667, "raw_average_key_size": 26, "raw_value_size": 12100965, "raw_average_value_size": 1051, "num_data_blocks": 1519, "num_entries": 11503, "num_filter_entries": 11503, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759411878, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.495101) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 12368566 bytes
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.496458) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.6 rd, 177.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 11.3 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(41.1) write-amplify(20.5) OK, records in: 12027, records dropped: 524 output_compression: NoCompression
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.496485) EVENT_LOG_v1 {"time_micros": 1759411878496473, "job": 128, "event": "compaction_finished", "compaction_time_micros": 69517, "compaction_time_cpu_micros": 29761, "output_level": 6, "num_output_files": 1, "total_output_size": 12368566, "num_input_records": 12027, "num_output_records": 11503, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411878496796, "job": 128, "event": "table_file_deletion", "file_number": 200}
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411878500308, "job": 128, "event": "table_file_deletion", "file_number": 198}
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.425152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.500387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.500392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.500394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.500395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:18 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:31:18.500397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:18 np0005465987 nova_compute[230713]: 2025-10-02 13:31:18.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:18.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:20 np0005465987 nova_compute[230713]: 2025-10-02 13:31:20.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:20.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:20.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:22.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:22.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:23 np0005465987 nova_compute[230713]: 2025-10-02 13:31:23.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:24.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:31:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:24.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:31:25 np0005465987 nova_compute[230713]: 2025-10-02 13:31:25.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:26.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:26 np0005465987 podman[331537]: 2025-10-02 13:31:26.841465111 +0000 UTC m=+0.054282021 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct  2 09:31:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:26.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:31:28.017 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:31:28.017 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:31:28.017 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:28.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:28 np0005465987 nova_compute[230713]: 2025-10-02 13:31:28.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:28 np0005465987 nova_compute[230713]: 2025-10-02 13:31:28.958 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:28.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:30 np0005465987 nova_compute[230713]: 2025-10-02 13:31:30.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:30.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:30 np0005465987 podman[331559]: 2025-10-02 13:31:30.832955655 +0000 UTC m=+0.045953454 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:31:30 np0005465987 podman[331558]: 2025-10-02 13:31:30.864096064 +0000 UTC m=+0.075958713 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct  2 09:31:30 np0005465987 podman[331557]: 2025-10-02 13:31:30.886471645 +0000 UTC m=+0.106550008 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:31:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:30.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:32.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:31:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:32.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:31:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:31:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:31:33 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:31:33 np0005465987 nova_compute[230713]: 2025-10-02 13:31:33.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:34.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:34.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:35 np0005465987 nova_compute[230713]: 2025-10-02 13:31:35.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:36.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:31:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:36.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:31:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:31:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:38.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:31:38 np0005465987 nova_compute[230713]: 2025-10-02 13:31:38.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:38.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:40 np0005465987 nova_compute[230713]: 2025-10-02 13:31:40.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:40.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:40.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:41 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:31:41 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:31:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:42.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:31:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:42.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:31:43 np0005465987 nova_compute[230713]: 2025-10-02 13:31:43.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:44.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:44.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:45 np0005465987 nova_compute[230713]: 2025-10-02 13:31:45.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:46.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:46.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:48 np0005465987 nova_compute[230713]: 2025-10-02 13:31:48.014 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:48.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:48 np0005465987 nova_compute[230713]: 2025-10-02 13:31:48.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:48.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:50 np0005465987 nova_compute[230713]: 2025-10-02 13:31:50.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:50.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:50.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:52.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:52.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:53 np0005465987 nova_compute[230713]: 2025-10-02 13:31:53.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:54.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:31:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:55.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:31:55 np0005465987 nova_compute[230713]: 2025-10-02 13:31:55.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:31:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:56.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:31:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:56 np0005465987 nova_compute[230713]: 2025-10-02 13:31:56.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:56 np0005465987 nova_compute[230713]: 2025-10-02 13:31:56.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:31:56 np0005465987 nova_compute[230713]: 2025-10-02 13:31:56.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:31:56 np0005465987 nova_compute[230713]: 2025-10-02 13:31:56.993 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:31:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:57.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:57 np0005465987 podman[331804]: 2025-10-02 13:31:57.837507713 +0000 UTC m=+0.053191183 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:31:57 np0005465987 nova_compute[230713]: 2025-10-02 13:31:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:58.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:58 np0005465987 nova_compute[230713]: 2025-10-02 13:31:58.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:31:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:59.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:00 np0005465987 nova_compute[230713]: 2025-10-02 13:32:00.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:00.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:01.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:01 np0005465987 podman[331825]: 2025-10-02 13:32:01.84748212 +0000 UTC m=+0.053688386 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:32:01 np0005465987 podman[331824]: 2025-10-02 13:32:01.870434127 +0000 UTC m=+0.081338290 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:32:01 np0005465987 podman[331826]: 2025-10-02 13:32:01.877026446 +0000 UTC m=+0.080191199 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:32:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:02.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:02 np0005465987 nova_compute[230713]: 2025-10-02 13:32:02.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:03.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:03 np0005465987 nova_compute[230713]: 2025-10-02 13:32:03.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:03 np0005465987 nova_compute[230713]: 2025-10-02 13:32:03.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:03 np0005465987 nova_compute[230713]: 2025-10-02 13:32:03.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:03 np0005465987 nova_compute[230713]: 2025-10-02 13:32:03.987 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:03 np0005465987 nova_compute[230713]: 2025-10-02 13:32:03.988 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:03 np0005465987 nova_compute[230713]: 2025-10-02 13:32:03.988 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:03 np0005465987 nova_compute[230713]: 2025-10-02 13:32:03.988 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:32:03 np0005465987 nova_compute[230713]: 2025-10-02 13:32:03.989 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:32:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:04.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:04 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:32:04 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2420125702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:32:04 np0005465987 nova_compute[230713]: 2025-10-02 13:32:04.567 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:32:04 np0005465987 nova_compute[230713]: 2025-10-02 13:32:04.719 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:32:04 np0005465987 nova_compute[230713]: 2025-10-02 13:32:04.720 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4270MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:32:04 np0005465987 nova_compute[230713]: 2025-10-02 13:32:04.720 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:04 np0005465987 nova_compute[230713]: 2025-10-02 13:32:04.721 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:05 np0005465987 nova_compute[230713]: 2025-10-02 13:32:05.009 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:32:05 np0005465987 nova_compute[230713]: 2025-10-02 13:32:05.010 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:32:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:05.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:05 np0005465987 nova_compute[230713]: 2025-10-02 13:32:05.029 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:32:05 np0005465987 nova_compute[230713]: 2025-10-02 13:32:05.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:32:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3790900183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:32:05 np0005465987 nova_compute[230713]: 2025-10-02 13:32:05.447 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:32:05 np0005465987 nova_compute[230713]: 2025-10-02 13:32:05.452 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:32:05 np0005465987 nova_compute[230713]: 2025-10-02 13:32:05.471 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:32:05 np0005465987 nova_compute[230713]: 2025-10-02 13:32:05.473 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:32:05 np0005465987 nova_compute[230713]: 2025-10-02 13:32:05.473 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:06.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:07.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:08.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:08 np0005465987 nova_compute[230713]: 2025-10-02 13:32:08.473 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:08 np0005465987 nova_compute[230713]: 2025-10-02 13:32:08.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:08 np0005465987 nova_compute[230713]: 2025-10-02 13:32:08.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:08 np0005465987 nova_compute[230713]: 2025-10-02 13:32:08.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:08 np0005465987 nova_compute[230713]: 2025-10-02 13:32:08.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:32:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:09.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:10 np0005465987 nova_compute[230713]: 2025-10-02 13:32:10.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:10.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:11.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:12.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:13.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:13 np0005465987 nova_compute[230713]: 2025-10-02 13:32:13.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:14.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:15.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:15 np0005465987 nova_compute[230713]: 2025-10-02 13:32:15.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:16.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:17.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:18.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:18 np0005465987 nova_compute[230713]: 2025-10-02 13:32:18.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:19.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:20 np0005465987 nova_compute[230713]: 2025-10-02 13:32:20.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:20.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:21.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:22.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:23.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:23 np0005465987 nova_compute[230713]: 2025-10-02 13:32:23.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:24.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:25.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:25 np0005465987 nova_compute[230713]: 2025-10-02 13:32:25.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:26.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:27.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:32:28.018 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:32:28.018 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:32:28.018 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:28.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:28 np0005465987 podman[331933]: 2025-10-02 13:32:28.860948063 +0000 UTC m=+0.079567152 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct  2 09:32:28 np0005465987 nova_compute[230713]: 2025-10-02 13:32:28.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:29.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:30 np0005465987 nova_compute[230713]: 2025-10-02 13:32:30.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:30.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:31.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:32.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:32 np0005465987 podman[331952]: 2025-10-02 13:32:32.841507688 +0000 UTC m=+0.053283625 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 09:32:32 np0005465987 podman[331953]: 2025-10-02 13:32:32.849547207 +0000 UTC m=+0.057912511 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:32:32 np0005465987 podman[331951]: 2025-10-02 13:32:32.862693996 +0000 UTC m=+0.079118319 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:32:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:33.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:33 np0005465987 nova_compute[230713]: 2025-10-02 13:32:33.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:34.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:35.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:35 np0005465987 nova_compute[230713]: 2025-10-02 13:32:35.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:36.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:37.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:38.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:38 np0005465987 nova_compute[230713]: 2025-10-02 13:32:38.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:39.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:40 np0005465987 nova_compute[230713]: 2025-10-02 13:32:40.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:40.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:41 np0005465987 podman[332186]: 2025-10-02 13:32:41.461395071 +0000 UTC m=+0.057261693 container exec 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:32:41 np0005465987 podman[332186]: 2025-10-02 13:32:41.54860672 +0000 UTC m=+0.144473322 container exec_died 87c34bf8dcfef71bf8b09a54de0f47ec63ce6fe0b571c436e12b261ca9f1f257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-fd4c5763-22d1-50ea-ad0b-96a3dc3040b2-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 09:32:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:42.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:43 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:32:43 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:32:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:43.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:43 np0005465987 nova_compute[230713]: 2025-10-02 13:32:43.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:32:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:32:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:32:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:32:44 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:32:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:44.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:45.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:45 np0005465987 nova_compute[230713]: 2025-10-02 13:32:45.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:46.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:47.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:48.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:48 np0005465987 nova_compute[230713]: 2025-10-02 13:32:48.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:49.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:49 np0005465987 nova_compute[230713]: 2025-10-02 13:32:49.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:50 np0005465987 nova_compute[230713]: 2025-10-02 13:32:50.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:32:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:50.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:32:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:32:50 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:32:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:51.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:52.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:53.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:53 np0005465987 nova_compute[230713]: 2025-10-02 13:32:53.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:54.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:32:54 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 63K writes, 236K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.23 GB, 0.03 MB/s#012Cumulative WAL: 63K writes, 23K syncs, 2.66 writes per sync, written: 0.23 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 705 writes, 1074 keys, 705 commit groups, 1.0 writes per commit group, ingest: 0.35 MB, 0.00 MB/s#012Interval WAL: 705 writes, 351 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:32:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:55.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:32:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3300819681' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:32:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:32:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3300819681' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:32:55 np0005465987 nova_compute[230713]: 2025-10-02 13:32:55.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:56.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:57.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:57 np0005465987 nova_compute[230713]: 2025-10-02 13:32:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:57 np0005465987 nova_compute[230713]: 2025-10-02 13:32:57.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:32:57 np0005465987 nova_compute[230713]: 2025-10-02 13:32:57.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:32:57 np0005465987 nova_compute[230713]: 2025-10-02 13:32:57.986 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:32:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:58.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:58 np0005465987 nova_compute[230713]: 2025-10-02 13:32:58.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:32:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:59.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:59 np0005465987 podman[332491]: 2025-10-02 13:32:59.838759044 +0000 UTC m=+0.053388308 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:32:59 np0005465987 nova_compute[230713]: 2025-10-02 13:32:59.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:00 np0005465987 nova_compute[230713]: 2025-10-02 13:33:00.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:00.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:01.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:02.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:02 np0005465987 nova_compute[230713]: 2025-10-02 13:33:02.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:03.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:03 np0005465987 podman[332512]: 2025-10-02 13:33:03.845501114 +0000 UTC m=+0.060891132 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:33:03 np0005465987 podman[332511]: 2025-10-02 13:33:03.86881959 +0000 UTC m=+0.089315288 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:33:03 np0005465987 podman[332513]: 2025-10-02 13:33:03.880516029 +0000 UTC m=+0.094105028 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:33:03 np0005465987 nova_compute[230713]: 2025-10-02 13:33:03.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:04.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:04 np0005465987 nova_compute[230713]: 2025-10-02 13:33:04.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:04 np0005465987 nova_compute[230713]: 2025-10-02 13:33:04.988 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:33:04 np0005465987 nova_compute[230713]: 2025-10-02 13:33:04.988 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:33:04 np0005465987 nova_compute[230713]: 2025-10-02 13:33:04.988 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:33:04 np0005465987 nova_compute[230713]: 2025-10-02 13:33:04.988 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:33:04 np0005465987 nova_compute[230713]: 2025-10-02 13:33:04.989 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:33:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:05.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:05 np0005465987 nova_compute[230713]: 2025-10-02 13:33:05.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:33:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2099059103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:33:05 np0005465987 nova_compute[230713]: 2025-10-02 13:33:05.417 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:33:05 np0005465987 nova_compute[230713]: 2025-10-02 13:33:05.573 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:33:05 np0005465987 nova_compute[230713]: 2025-10-02 13:33:05.574 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4245MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:33:05 np0005465987 nova_compute[230713]: 2025-10-02 13:33:05.575 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:33:05 np0005465987 nova_compute[230713]: 2025-10-02 13:33:05.575 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:33:05 np0005465987 nova_compute[230713]: 2025-10-02 13:33:05.754 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:33:05 np0005465987 nova_compute[230713]: 2025-10-02 13:33:05.755 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:33:05 np0005465987 nova_compute[230713]: 2025-10-02 13:33:05.768 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:33:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:33:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4110316203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:33:06 np0005465987 nova_compute[230713]: 2025-10-02 13:33:06.197 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:33:06 np0005465987 nova_compute[230713]: 2025-10-02 13:33:06.203 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:33:06 np0005465987 nova_compute[230713]: 2025-10-02 13:33:06.218 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:33:06 np0005465987 nova_compute[230713]: 2025-10-02 13:33:06.219 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 09:33:06 np0005465987 nova_compute[230713]: 2025-10-02 13:33:06.219 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:33:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:06.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:07.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:07 np0005465987 nova_compute[230713]: 2025-10-02 13:33:07.221 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:33:07 np0005465987 nova_compute[230713]: 2025-10-02 13:33:07.971 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:33:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:08.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:08 np0005465987 nova_compute[230713]: 2025-10-02 13:33:08.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:33:08 np0005465987 nova_compute[230713]: 2025-10-02 13:33:08.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:33:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:09.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:09 np0005465987 nova_compute[230713]: 2025-10-02 13:33:09.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:33:09 np0005465987 nova_compute[230713]: 2025-10-02 13:33:09.966 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 09:33:10 np0005465987 nova_compute[230713]: 2025-10-02 13:33:10.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:33:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:10.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:11.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:12.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:13.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:13 np0005465987 nova_compute[230713]: 2025-10-02 13:33:13.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:33:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:14.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:15.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:15 np0005465987 nova_compute[230713]: 2025-10-02 13:33:15.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:33:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:16.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:17.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:18.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:18 np0005465987 nova_compute[230713]: 2025-10-02 13:33:18.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:33:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:19.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:20 np0005465987 nova_compute[230713]: 2025-10-02 13:33:20.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:33:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:20.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:21.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:22 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:22 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:22 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:22.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.032682) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412003032742, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 1447, "num_deletes": 251, "total_data_size": 3325260, "memory_usage": 3370640, "flush_reason": "Manual Compaction"}
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412003044418, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 2171920, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 96801, "largest_seqno": 98243, "table_properties": {"data_size": 2165786, "index_size": 3396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12988, "raw_average_key_size": 19, "raw_value_size": 2153502, "raw_average_value_size": 3313, "num_data_blocks": 151, "num_entries": 650, "num_filter_entries": 650, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411879, "oldest_key_time": 1759411879, "file_creation_time": 1759412003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 11770 microseconds, and 5454 cpu microseconds.
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.044455) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 2171920 bytes OK
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.044475) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.046019) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.046043) EVENT_LOG_v1 {"time_micros": 1759412003046036, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.046065) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 3318600, prev total WAL file size 3318600, number of live WAL files 2.
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.047410) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(2121KB)], [201(11MB)]
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412003047440, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 14540486, "oldest_snapshot_seqno": -1}
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 11636 keys, 12558059 bytes, temperature: kUnknown
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412003101824, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 12558059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12486760, "index_size": 41106, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29125, "raw_key_size": 309085, "raw_average_key_size": 26, "raw_value_size": 12287035, "raw_average_value_size": 1055, "num_data_blocks": 1544, "num_entries": 11636, "num_filter_entries": 11636, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759404811, "oldest_key_time": 0, "file_creation_time": 1759412003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1a73ed96-995a-4c76-99da-82d621f02865", "db_session_id": "N1IM92GH3TJZCQ2U874J", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.102023) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 12558059 bytes
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.105002) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 267.0 rd, 230.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 11.8 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(12.5) write-amplify(5.8) OK, records in: 12153, records dropped: 517 output_compression: NoCompression
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.105021) EVENT_LOG_v1 {"time_micros": 1759412003105012, "job": 130, "event": "compaction_finished", "compaction_time_micros": 54450, "compaction_time_cpu_micros": 28034, "output_level": 6, "num_output_files": 1, "total_output_size": 12558059, "num_input_records": 12153, "num_output_records": 11636, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412003105525, "job": 130, "event": "table_file_deletion", "file_number": 203}
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412003107526, "job": 130, "event": "table_file_deletion", "file_number": 201}
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.047337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.107549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.107554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.107555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.107557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:23 np0005465987 ceph-mon[80822]: rocksdb: (Original Log Time 2025/10/02-13:33:23.107558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:23 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:23 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:23 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:23.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:23 np0005465987 nova_compute[230713]: 2025-10-02 13:33:23.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:33:24 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:24 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:24 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:24.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:25 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:25 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:25 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:25.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:25 np0005465987 nova_compute[230713]: 2025-10-02 13:33:25.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:33:26 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:26 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:26 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:26 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:26.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:27 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:27 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:27 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:27.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:33:28.019 138325 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:33:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:33:28.019 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:33:28 np0005465987 ovn_metadata_agent[138320]: 2025-10-02 13:33:28.019 138325 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:33:28 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:28 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:28 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:28.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:28 np0005465987 nova_compute[230713]: 2025-10-02 13:33:28.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:33:29 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:29 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:29 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:29.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:30 np0005465987 nova_compute[230713]: 2025-10-02 13:33:30.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:33:30 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:30 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:30 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:30.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:30 np0005465987 podman[332622]: 2025-10-02 13:33:30.847033211 +0000 UTC m=+0.056135203 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 09:33:31 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:31 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:31 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:31.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:33:31 np0005465987 ceph-mon[80822]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.0 total, 600.0 interval
Cumulative writes: 19K writes, 98K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.19 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1488 writes, 7237 keys, 1488 commit groups, 1.0 writes per commit group, ingest: 14.98 MB, 0.02 MB/s
Interval WAL: 1488 writes, 1488 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     63.5      1.91              0.37        65    0.029       0      0       0.0       0.0
  L6      1/0   11.98 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.6    129.6    111.6      6.08              1.94        64    0.095    530K    34K       0.0       0.0
 Sum      1/0   11.98 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.6     98.6    100.1      7.99              2.32       129    0.062    530K    34K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.0    175.2    177.7      0.47              0.22        12    0.040     72K   3095       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    129.6    111.6      6.08              1.94        64    0.095    530K    34K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     63.5      1.91              0.37        64    0.030       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 7200.0 total, 600.0 interval
Flush(GB): cumulative 0.119, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.78 GB write, 0.11 MB/s write, 0.77 GB read, 0.11 MB/s read, 8.0 seconds
Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f9644871f0#2 capacity: 304.00 MB usage: 86.48 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000628 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(5320,82.74 MB,27.2174%) FilterBlock(129,1.42 MB,0.46848%) IndexBlock(129,2.32 MB,0.762889%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  2 09:33:31 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:32 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:32 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:32 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:32.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:32 np0005465987 nova_compute[230713]: 2025-10-02 13:33:32.959 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:33 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:33 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:33 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:33.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:33 np0005465987 nova_compute[230713]: 2025-10-02 13:33:33.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:34 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:34 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:34 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:34.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:34 np0005465987 podman[332643]: 2025-10-02 13:33:34.83869578 +0000 UTC m=+0.052796092 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, managed_by=edpm_ansible)
Oct  2 09:33:34 np0005465987 podman[332644]: 2025-10-02 13:33:34.842449942 +0000 UTC m=+0.053374157 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 09:33:34 np0005465987 podman[332642]: 2025-10-02 13:33:34.909731317 +0000 UTC m=+0.126632595 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:33:35 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:35 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:35 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:35.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:35 np0005465987 nova_compute[230713]: 2025-10-02 13:33:35.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:36 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:36 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:36 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:36 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:36.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:37 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:37 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:37 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:37.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:38 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:38 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:38 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:38.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:39 np0005465987 nova_compute[230713]: 2025-10-02 13:33:39.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:39 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:39 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:39 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:39.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:40 np0005465987 nova_compute[230713]: 2025-10-02 13:33:40.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:40 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:40 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:40 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:40.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:41 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:41 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:33:41 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:41.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:33:41 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:42 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:42 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:42 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:42.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:43 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:43 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:43 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:43.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:44 np0005465987 nova_compute[230713]: 2025-10-02 13:33:44.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:44 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:44 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:44 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:44.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:45 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:45 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:45 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:45.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:45 np0005465987 nova_compute[230713]: 2025-10-02 13:33:45.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:46 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:46 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:46 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:46 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:46.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:47 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:47 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:47 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:47.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:48 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:48 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:48 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:48.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:49 np0005465987 nova_compute[230713]: 2025-10-02 13:33:49.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:49 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:49 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:49 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:49.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:50 np0005465987 nova_compute[230713]: 2025-10-02 13:33:50.003 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:50 np0005465987 nova_compute[230713]: 2025-10-02 13:33:50.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:50 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:50 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:50 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:50.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:51 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:51 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:51 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:33:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:33:51 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:33:51 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:52 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:52 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:52 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:52.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:53 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:53 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:33:53 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:33:54 np0005465987 nova_compute[230713]: 2025-10-02 13:33:54.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:54 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:54 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:54 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:54.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:55 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:55 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:55 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:55.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:33:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3140606409' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:33:55 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:33:55 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3140606409' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:33:55 np0005465987 nova_compute[230713]: 2025-10-02 13:33:55.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:56 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:56 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:56 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:56 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:56.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:56 np0005465987 systemd-logind[794]: New session 68 of user zuul.
Oct  2 09:33:56 np0005465987 systemd[1]: Started Session 68 of User zuul.
Oct  2 09:33:57 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:57 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:33:57 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:57.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:33:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:33:57 np0005465987 ceph-mon[80822]: from='mgr.14132 192.168.122.100:0/4553957' entity='mgr.compute-0.fmcstn' 
Oct  2 09:33:57 np0005465987 nova_compute[230713]: 2025-10-02 13:33:57.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:57 np0005465987 nova_compute[230713]: 2025-10-02 13:33:57.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:33:57 np0005465987 nova_compute[230713]: 2025-10-02 13:33:57.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:33:58 np0005465987 nova_compute[230713]: 2025-10-02 13:33:58.127 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:33:58 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:58 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:58 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:58.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:59 np0005465987 nova_compute[230713]: 2025-10-02 13:33:59.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:59 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:33:59 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:59 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:59.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:00 np0005465987 nova_compute[230713]: 2025-10-02 13:34:00.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:00 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:00 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:00 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:00.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:00 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  2 09:34:00 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1382525877' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  2 09:34:00 np0005465987 nova_compute[230713]: 2025-10-02 13:34:00.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:01 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:01 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:01 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:01.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:01 np0005465987 podman[333141]: 2025-10-02 13:34:01.40935649 +0000 UTC m=+0.057488150 container health_status 04ad1d8b45870a5bf548d46e8eec0e640e457728437e2ce9d3c62625c1bbc916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:34:01 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:02 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:02 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:02 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:02.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:02 np0005465987 nova_compute[230713]: 2025-10-02 13:34:02.966 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:03 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:03 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:03 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:03.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:03 np0005465987 ovs-vsctl[333193]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  2 09:34:04 np0005465987 nova_compute[230713]: 2025-10-02 13:34:04.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:04 np0005465987 virtqemud[230442]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  2 09:34:04 np0005465987 virtqemud[230442]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  2 09:34:04 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:04 np0005465987 virtqemud[230442]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  2 09:34:04 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:04 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:04.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:04 np0005465987 ceph-mgr[81180]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3158772141
Oct  2 09:34:04 np0005465987 podman[333420]: 2025-10-02 13:34:04.977506593 +0000 UTC m=+0.068581373 container health_status 2cf44ec397265566e5c100cbdfb45cd84f213b82dd19b66d02421ddd945c8610 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:34:04 np0005465987 podman[333426]: 2025-10-02 13:34:04.983694092 +0000 UTC m=+0.073155327 container health_status 40e9789bb7f5c40d057382dfb2a535b911a3e1e856069c96c1c6fae43fb033ff (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct  2 09:34:05 np0005465987 podman[333443]: 2025-10-02 13:34:05.032301018 +0000 UTC m=+0.088480085 container health_status 110c3bfdd4b4b20d56e138132fbfecae75c73aa62aa4b6e40115e5f08203b517 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct  2 09:34:05 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: cache status {prefix=cache status} (starting...)
Oct  2 09:34:05 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:34:05 np0005465987 lvm[333570]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 09:34:05 np0005465987 lvm[333570]: VG ceph_vg0 finished
Oct  2 09:34:05 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:05 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:05 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:05.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:05 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: client ls {prefix=client ls} (starting...)
Oct  2 09:34:05 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:34:05 np0005465987 nova_compute[230713]: 2025-10-02 13:34:05.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:05 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: damage ls {prefix=damage ls} (starting...)
Oct  2 09:34:05 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:34:05 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct  2 09:34:05 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2173317173' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  2 09:34:05 np0005465987 nova_compute[230713]: 2025-10-02 13:34:05.964 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: dump loads {prefix=dump loads} (starting...)
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:34:06 np0005465987 nova_compute[230713]: 2025-10-02 13:34:06.147 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:34:06 np0005465987 nova_compute[230713]: 2025-10-02 13:34:06.147 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:34:06 np0005465987 nova_compute[230713]: 2025-10-02 13:34:06.147 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:34:06 np0005465987 nova_compute[230713]: 2025-10-02 13:34:06.147 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:34:06 np0005465987 nova_compute[230713]: 2025-10-02 13:34:06.148 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:34:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 09:34:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3186655812' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:34:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:34:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:34:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4026358957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:34:06 np0005465987 nova_compute[230713]: 2025-10-02 13:34:06.574 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:34:06 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:06 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:06 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:06.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:34:06 np0005465987 nova_compute[230713]: 2025-10-02 13:34:06.715 2 WARNING nova.virt.libvirt.driver [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:34:06 np0005465987 nova_compute[230713]: 2025-10-02 13:34:06.716 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4018MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:34:06 np0005465987 nova_compute[230713]: 2025-10-02 13:34:06.716 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:34:06 np0005465987 nova_compute[230713]: 2025-10-02 13:34:06.716 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:34:06 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  2 09:34:06 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3726434202' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  2 09:34:06 np0005465987 nova_compute[230713]: 2025-10-02 13:34:06.823 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:34:06 np0005465987 nova_compute[230713]: 2025-10-02 13:34:06.823 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  2 09:34:06 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:34:06 np0005465987 nova_compute[230713]: 2025-10-02 13:34:06.917 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:34:07 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: ops {prefix=ops} (starting...)
Oct  2 09:34:07 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:34:07 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:07 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:07 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:07.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct  2 09:34:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/460104981' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  2 09:34:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct  2 09:34:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1006836127' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  2 09:34:07 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:34:07 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3744183020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:34:07 np0005465987 nova_compute[230713]: 2025-10-02 13:34:07.337 2 DEBUG oslo_concurrency.processutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:34:07 np0005465987 nova_compute[230713]: 2025-10-02 13:34:07.342 2 DEBUG nova.compute.provider_tree [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed in ProviderTree for provider: abc4c1e4-e97d-4065-b850-e0065c8f9ab7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:34:07 np0005465987 nova_compute[230713]: 2025-10-02 13:34:07.397 2 DEBUG nova.scheduler.client.report [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Inventory has not changed for provider abc4c1e4-e97d-4065-b850-e0065c8f9ab7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:34:07 np0005465987 nova_compute[230713]: 2025-10-02 13:34:07.398 2 DEBUG nova.compute.resource_tracker [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:34:07 np0005465987 nova_compute[230713]: 2025-10-02 13:34:07.398 2 DEBUG oslo_concurrency.lockutils [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:34:07 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: session ls {prefix=session ls} (starting...)
Oct  2 09:34:07 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy Can't run that command on an inactive MDS!
Oct  2 09:34:07 np0005465987 ceph-mds[84089]: mds.cephfs.compute-1.zfhmgy asok_command: status {prefix=status} (starting...)
Oct  2 09:34:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 09:34:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1188429990' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 09:34:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct  2 09:34:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1775481542' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  2 09:34:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 09:34:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2508060939' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 09:34:08 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:08 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:34:08 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:08.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:34:08 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct  2 09:34:08 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2180784943' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  2 09:34:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 09:34:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3463687866' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 09:34:09 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:09 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:34:09 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:09.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:34:09 np0005465987 nova_compute[230713]: 2025-10-02 13:34:09.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:09 np0005465987 nova_compute[230713]: 2025-10-02 13:34:09.399 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:09 np0005465987 nova_compute[230713]: 2025-10-02 13:34:09.400 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:09 np0005465987 nova_compute[230713]: 2025-10-02 13:34:09.400 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  2 09:34:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2695947281' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  2 09:34:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct  2 09:34:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3106330298' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  2 09:34:09 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 09:34:09 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/821893910' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 09:34:09 np0005465987 nova_compute[230713]: 2025-10-02 13:34:09.965 2 DEBUG oslo_service.periodic_task [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:09 np0005465987 nova_compute[230713]: 2025-10-02 13:34:09.965 2 DEBUG nova.compute.manager [None req-5ed4826e-a42f-4a86-b4c0-bcc0c4354761 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:34:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct  2 09:34:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/190382831' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  2 09:34:10 np0005465987 nova_compute[230713]: 2025-10-02 13:34:10.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:10 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 09:34:10 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2304674771' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 09:34:10 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:10 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.002000054s ======
Oct  2 09:34:10 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:10.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct  2 09:34:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 09:34:11 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3423444673' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 09:34:11 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:11 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:11 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:11.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4081000/0x0/0x1bfc00000, data 0x3e99525/0x40ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0c314d20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4521555 data_alloc: 234881024 data_used: 34316288
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0ea8a800 session 0x558e0c56f0e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4081000/0x0/0x1bfc00000, data 0x3e99525/0x40ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0a5490e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.657582283s of 11.743950844s, submitted: 39
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0c4b6d20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 64921600 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4529956 data_alloc: 234881024 data_used: 34934784
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a407e000/0x0/0x1bfc00000, data 0x3e9a534/0x40af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 56K writes, 211K keys, 56K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.04 MB/s#012Cumulative WAL: 56K writes, 20K syncs, 2.71 writes per sync, written: 0.21 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4720 writes, 16K keys, 4720 commit groups, 1.0 writes per commit group, ingest: 17.63 MB, 0.03 MB/s#012Interval WAL: 4720 writes, 1941 syncs, 2.43 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0ea8a800 session 0x558e0c6ce000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a407e000/0x0/0x1bfc00000, data 0x3e9a534/0x40af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x3fb4534/0x41c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4541482 data_alloc: 234881024 data_used: 34934784
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3f65000/0x0/0x1bfc00000, data 0x3fb4534/0x41c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407371776 unmapped: 64913408 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.362330437s of 13.426147461s, submitted: 20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407707648 unmapped: 64577536 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0ea8b800 session 0x558e0c66d2c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4547110 data_alloc: 234881024 data_used: 34947072
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0c030000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3ded000/0x0/0x1bfc00000, data 0x412c534/0x4341000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408068096 unmapped: 64217088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 63987712 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: mgrc ms_handle_reset ms_handle_reset con 0x558e0de69c00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3158772141
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3158772141,v1:192.168.122.100:6801/3158772141]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: mgrc handle_mgr_configure stats_period=5
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3da9000/0x0/0x1bfc00000, data 0x4170534/0x4385000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 63987712 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c67dc00 session 0x558e0d25c1e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408297472 unmapped: 63987712 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c2e8c00 session 0x558e0c6d54a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408313856 unmapped: 63971328 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cf4bc00 session 0x558e0cc3e5a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4562468 data_alloc: 234881024 data_used: 34971648
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408322048 unmapped: 63963136 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3d71000/0x0/0x1bfc00000, data 0x41a8534/0x43bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408444928 unmapped: 63840256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408444928 unmapped: 63840256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3d52000/0x0/0x1bfc00000, data 0x41c7534/0x43dc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408444928 unmapped: 63840256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408444928 unmapped: 63840256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4561344 data_alloc: 234881024 data_used: 34996224
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3d52000/0x0/0x1bfc00000, data 0x41c7534/0x43dc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408444928 unmapped: 63840256 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408453120 unmapped: 63832064 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408453120 unmapped: 63832064 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3d52000/0x0/0x1bfc00000, data 0x41c7534/0x43dc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.371699333s of 13.565903664s, submitted: 57
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408567808 unmapped: 63717376 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408567808 unmapped: 63717376 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4561608 data_alloc: 234881024 data_used: 34996224
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408567808 unmapped: 63717376 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408576000 unmapped: 63709184 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3d44000/0x0/0x1bfc00000, data 0x41d5534/0x43ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408584192 unmapped: 63700992 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a39fa000/0x0/0x1bfc00000, data 0x451f534/0x4734000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409632768 unmapped: 62652416 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409862144 unmapped: 62423040 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4590294 data_alloc: 234881024 data_used: 35237888
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409862144 unmapped: 62423040 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a39bd000/0x0/0x1bfc00000, data 0x4556534/0x476b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 62414848 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409878528 unmapped: 62406656 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409878528 unmapped: 62406656 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.749568939s of 10.976778984s, submitted: 48
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0d5a30e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0c4b7680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409886720 unmapped: 62398464 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593286 data_alloc: 234881024 data_used: 35057664
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a39a7000/0x0/0x1bfc00000, data 0x4572534/0x4787000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cf4bc00 session 0x558e0c56f0e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3c34000/0x0/0x1bfc00000, data 0x42834c3/0x4496000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4558884 data_alloc: 234881024 data_used: 34394112
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0a0f0d20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2800 session 0x558e0cc88d20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.785618782s of 10.672557831s, submitted: 50
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 62390272 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3c35000/0x0/0x1bfc00000, data 0x42834b3/0x4495000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4322596 data_alloc: 234881024 data_used: 23928832
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407011328 unmapped: 65273856 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407011328 unmapped: 65273856 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0cc3fa40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407011328 unmapped: 65273856 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5269000/0x0/0x1bfc00000, data 0x2cb44a3/0x2ec5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407011328 unmapped: 65273856 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407011328 unmapped: 65273856 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4321871 data_alloc: 234881024 data_used: 23928832
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407019520 unmapped: 65265664 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407027712 unmapped: 65257472 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407027712 unmapped: 65257472 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407035904 unmapped: 65249280 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c67dc00 session 0x558e0c60cf00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5269000/0x0/0x1bfc00000, data 0x2cb44a3/0x2ec5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.960060120s of 10.001210213s, submitted: 20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407035904 unmapped: 65249280 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0ad5a960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4283431 data_alloc: 234881024 data_used: 23838720
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407035904 unmapped: 65249280 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407035904 unmapped: 65249280 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407035904 unmapped: 65249280 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407035904 unmapped: 65249280 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5714000/0x0/0x1bfc00000, data 0x28094a3/0x2a1a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4283431 data_alloc: 234881024 data_used: 23838720
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5714000/0x0/0x1bfc00000, data 0x28094a3/0x2a1a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f837000 session 0x558e0a0e3a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5714000/0x0/0x1bfc00000, data 0x28094a3/0x2a1a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.565967560s of 10.593852997s, submitted: 5
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4283359 data_alloc: 234881024 data_used: 23838720
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a16a000 session 0x558e0cc88f00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a4c4780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407044096 unmapped: 65241088 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5714000/0x0/0x1bfc00000, data 0x28094a3/0x2a1a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407060480 unmapped: 65224704 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398753792 unmapped: 73531392 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0b29cd20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4155389 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4155389 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4155389 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398761984 unmapped: 73523200 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 73515008 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 73515008 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a606d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 73515008 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4155389 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 73515008 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 73515008 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 73506816 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.332891464s of 23.415493011s, submitted: 20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c67dc00 session 0x558e0c030f00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f837000 session 0x558e08c6b680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cf4bc00 session 0x558e0d3cb860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0a0f10e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c56fe00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0d1c74a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c67dc00 session 0x558e0c030000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 400883712 unmapped: 71401472 heap: 472285184 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f837000 session 0x558e0c6ce000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66c3c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0d30c3c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0d457e00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0cc894a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c67dc00 session 0x558e0b29de00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d5a21e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57e3000/0x0/0x1bfc00000, data 0x273b441/0x294b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4281374 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fef000/0x0/0x1bfc00000, data 0x2f2e451/0x313f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0c6d43c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4281374 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0d30dc20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 75276288 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fef000/0x0/0x1bfc00000, data 0x2f2e451/0x313f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0c684780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0ea8b800 session 0x558e0c6d5680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0c6d54a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401244160 unmapped: 75243520 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 75235328 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 73572352 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4412060 data_alloc: 234881024 data_used: 33955840
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fc3000/0x0/0x1bfc00000, data 0x2f58484/0x316b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4412060 data_alloc: 234881024 data_used: 33955840
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fc3000/0x0/0x1bfc00000, data 0x2f58484/0x316b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fc3000/0x0/0x1bfc00000, data 0x2f58484/0x316b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4fc3000/0x0/0x1bfc00000, data 0x2f58484/0x316b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.165388107s of 19.341264725s, submitted: 39
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403505152 unmapped: 72982528 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 403546112 unmapped: 72941568 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409624576 unmapped: 66863104 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4510974 data_alloc: 234881024 data_used: 35069952
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4089000/0x0/0x1bfc00000, data 0x3a82484/0x3c95000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4522164 data_alloc: 234881024 data_used: 35672064
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 66617344 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4087000/0x0/0x1bfc00000, data 0x3a84484/0x3c97000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409886720 unmapped: 66600960 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409886720 unmapped: 66600960 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 66592768 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4520192 data_alloc: 234881024 data_used: 35680256
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 66592768 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.241706848s of 13.720140457s, submitted: 470
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409894912 unmapped: 66592768 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409911296 unmapped: 66576384 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a71c5a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a2400 session 0x558e0a0e3e00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4086000/0x0/0x1bfc00000, data 0x3a85484/0x3c98000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a3400 session 0x558e0d1c6780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0d07c800 session 0x558e0c60c1e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408666112 unmapped: 67821568 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 67813376 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4331577 data_alloc: 234881024 data_used: 26632192
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0af021e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0d457c20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408674304 unmapped: 67813376 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a0f12c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4172895 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4172895 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 72433664 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0d07a800 session 0x558e0d1c6960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 72425472 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 72425472 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4172895 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 72425472 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 72425472 heap: 476487680 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.704429626s of 20.876094818s, submitted: 60
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a3400 session 0x558e0c684b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404267008 unmapped: 80101376 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404267008 unmapped: 80101376 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404267008 unmapped: 80101376 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4281101 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404275200 unmapped: 80093184 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404283392 unmapped: 80084992 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404283392 unmapped: 80084992 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4db1000/0x0/0x1bfc00000, data 0x2d5d441/0x2f6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404283392 unmapped: 80084992 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404299776 unmapped: 80068608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4281101 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404299776 unmapped: 80068608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404299776 unmapped: 80068608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404299776 unmapped: 80068608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0d07c800 session 0x558e0c314960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.310799599s of 11.459929466s, submitted: 13
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404242432 unmapped: 80125952 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a3400 session 0x558e0c56e1e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d29000/0x0/0x1bfc00000, data 0x2de5441/0x2ff5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c315680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0bf4b860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404242432 unmapped: 80125952 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0c5712c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4291355 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404242432 unmapped: 80125952 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 404242432 unmapped: 80125952 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d28000/0x0/0x1bfc00000, data 0x2de5451/0x2ff6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405372928 unmapped: 78995456 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d28000/0x0/0x1bfc00000, data 0x2de5451/0x2ff6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405372928 unmapped: 78995456 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405372928 unmapped: 78995456 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4403355 data_alloc: 234881024 data_used: 30060544
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405372928 unmapped: 78995456 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405381120 unmapped: 78987264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405381120 unmapped: 78987264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0d1c7a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405381120 unmapped: 78987264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d28000/0x0/0x1bfc00000, data 0x2de5451/0x2ff6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405381120 unmapped: 78987264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4403355 data_alloc: 234881024 data_used: 30060544
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405381120 unmapped: 78987264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405381120 unmapped: 78987264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d28000/0x0/0x1bfc00000, data 0x2de5451/0x2ff6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.166716576s of 14.232734680s, submitted: 5
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 78741504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d28000/0x0/0x1bfc00000, data 0x2de5451/0x2ff6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [0,0,1,4])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4d28000/0x0/0x1bfc00000, data 0x2de5451/0x2ff6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407838720 unmapped: 76529664 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a44f9000/0x0/0x1bfc00000, data 0x3614451/0x3825000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 76095488 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4487753 data_alloc: 234881024 data_used: 30928896
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 76095488 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 76095488 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 76095488 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 76095488 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 76095488 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4483793 data_alloc: 234881024 data_used: 30941184
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4473000/0x0/0x1bfc00000, data 0x369a451/0x38ab000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 75956224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 75956224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0a0e2960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 75956224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 75956224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a44d9000/0x0/0x1bfc00000, data 0x3634451/0x3845000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 75956224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4473185 data_alloc: 234881024 data_used: 30420992
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408412160 unmapped: 75956224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.639778137s of 13.917524338s, submitted: 75
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409460736 unmapped: 74907648 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a44d4000/0x0/0x1bfc00000, data 0x3639451/0x384a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0ce90780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409460736 unmapped: 74907648 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f836c00 session 0x558e0d25d860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0c571680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409460736 unmapped: 74907648 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406626304 unmapped: 77742080 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4187461 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a5a3400 session 0x558e0a4c45a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5d000/0x0/0x1bfc00000, data 0x1eb1441/0x20c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186736 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e098d9a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.108777046s of 10.413861275s, submitted: 17
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0ae33c20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406634496 unmapped: 77733888 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406642688 unmapped: 77725696 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f836c00 session 0x558e0c56f4a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186039 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186039 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406650880 unmapped: 77717504 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406659072 unmapped: 77709312 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406659072 unmapped: 77709312 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 77701120 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 77701120 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186039 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 77701120 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 77701120 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 77701120 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406667264 unmapped: 77701120 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c5e000/0x0/0x1bfc00000, data 0x1eb13df/0x20c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406675456 unmapped: 77692928 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186039 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0ae325a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0cf0ed20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c4b6f00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406675456 unmapped: 77692928 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0a5490e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.797447205s of 19.174055099s, submitted: 9
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406683648 unmapped: 77684736 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0a548b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f836c00 session 0x558e0ae16b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0ae321e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0cc3f4a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0d25de00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57ef000/0x0/0x1bfc00000, data 0x231f3ef/0x252f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406683648 unmapped: 77684736 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406683648 unmapped: 77684736 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406691840 unmapped: 77676544 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4226107 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57ef000/0x0/0x1bfc00000, data 0x231f3ef/0x252f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406691840 unmapped: 77676544 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406691840 unmapped: 77676544 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406700032 unmapped: 77668352 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406700032 unmapped: 77668352 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406700032 unmapped: 77668352 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0c6d4780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4228757 data_alloc: 218103808 data_used: 16834560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407052288 unmapped: 77316096 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57c5000/0x0/0x1bfc00000, data 0x23493ef/0x2559000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407052288 unmapped: 77316096 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.692783356s of 10.731881142s, submitted: 9
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0a0f0d20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0b3d2800 session 0x558e0d30dc20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d1c63c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0c314000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0ce912c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4267989 data_alloc: 218103808 data_used: 21397504
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0d1c6f00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cf4d400 session 0x558e0c6d41e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0ce90000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57be000/0x0/0x1bfc00000, data 0x234f451/0x2560000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0a14f0e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 77209600 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4269287 data_alloc: 218103808 data_used: 21422080
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57bd000/0x0/0x1bfc00000, data 0x234f461/0x2561000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406708224 unmapped: 77660160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57bd000/0x0/0x1bfc00000, data 0x234f461/0x2561000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406708224 unmapped: 77660160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57bd000/0x0/0x1bfc00000, data 0x234f461/0x2561000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406708224 unmapped: 77660160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.362232208s of 11.464373589s, submitted: 34
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 406937600 unmapped: 77430784 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5589000/0x0/0x1bfc00000, data 0x2583461/0x2795000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407609344 unmapped: 76759040 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4317003 data_alloc: 218103808 data_used: 21434368
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407912448 unmapped: 76455936 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407240704 unmapped: 77127680 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a514b000/0x0/0x1bfc00000, data 0x29c1461/0x2bd3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407707648 unmapped: 76660736 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407707648 unmapped: 76660736 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 407715840 unmapped: 76652544 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4339349 data_alloc: 234881024 data_used: 22343680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408764416 unmapped: 75603968 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408920064 unmapped: 75448320 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e75000/0x0/0x1bfc00000, data 0x2c89461/0x2e9b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408928256 unmapped: 75440128 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.758886337s of 10.001246452s, submitted: 100
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e69000/0x0/0x1bfc00000, data 0x2c9b461/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4364119 data_alloc: 234881024 data_used: 22368256
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e69000/0x0/0x1bfc00000, data 0x2c9b461/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e69000/0x0/0x1bfc00000, data 0x2c9b461/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4364119 data_alloc: 234881024 data_used: 22368256
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e69000/0x0/0x1bfc00000, data 0x2c9b461/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e69000/0x0/0x1bfc00000, data 0x2c9b461/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4364119 data_alloc: 234881024 data_used: 22368256
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e69000/0x0/0x1bfc00000, data 0x2c9b461/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409026560 unmapped: 75341824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.248374939s of 15.257328033s, submitted: 11
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409034752 unmapped: 75333632 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409034752 unmapped: 75333632 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4363399 data_alloc: 234881024 data_used: 22360064
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0cccf800 session 0x558e0c684d20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e11154c00 session 0x558e0a4c45a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409034752 unmapped: 75333632 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0f836c00 session 0x558e0c66cd20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e109db400 session 0x558e0d5a34a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409042944 unmapped: 75325440 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e71000/0x0/0x1bfc00000, data 0x2c9b451/0x2eac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [0,0,0,0,1])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0a2bf000 session 0x558e0bf4b0e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409042944 unmapped: 75325440 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409042944 unmapped: 75325440 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 ms_handle_reset con 0x558e0c667c00 session 0x558e0c56fe00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409042944 unmapped: 75325440 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 400 ms_handle_reset con 0x558e0cccf800 session 0x558e0a71cf00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 400 ms_handle_reset con 0x558e0f836c00 session 0x558e0b29d860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 400 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a71da40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231339 data_alloc: 218103808 data_used: 16879616
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 400 ms_handle_reset con 0x558e0c667c00 session 0x558e0c6cf0e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409067520 unmapped: 75300864 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 400 ms_handle_reset con 0x558e0cccf800 session 0x558e0c4b7a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a59bd000/0x0/0x1bfc00000, data 0x214e0aa/0x2360000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409075712 unmapped: 75292672 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409075712 unmapped: 75292672 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a59bd000/0x0/0x1bfc00000, data 0x214e0aa/0x2360000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 75538432 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 75530240 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4232271 data_alloc: 218103808 data_used: 16965632
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 75530240 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 75530240 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 75530240 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a59bd000/0x0/0x1bfc00000, data 0x214e0aa/0x2360000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 75530240 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408846336 unmapped: 75522048 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 400 ms_handle_reset con 0x558e11154c00 session 0x558e0d5a23c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.663591385s of 16.776542664s, submitted: 48
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231687 data_alloc: 218103808 data_used: 16965632
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 401 ms_handle_reset con 0x558e0b340000 session 0x558e0d1c6b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 401 heartbeat osd_stat(store_statfs(0x1a59be000/0x0/0x1bfc00000, data 0x214e0aa/0x2360000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408862720 unmapped: 75505664 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 401 heartbeat osd_stat(store_statfs(0x1a59bb000/0x0/0x1bfc00000, data 0x214fd57/0x2363000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4239553 data_alloc: 218103808 data_used: 17051648
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408862720 unmapped: 75505664 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408862720 unmapped: 75505664 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408862720 unmapped: 75505664 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408846336 unmapped: 75522048 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a59b7000/0x0/0x1bfc00000, data 0x2151896/0x2366000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408846336 unmapped: 75522048 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4241247 data_alloc: 218103808 data_used: 17059840
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a59b7000/0x0/0x1bfc00000, data 0x2151896/0x2366000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a59b7000/0x0/0x1bfc00000, data 0x2151896/0x2366000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a59b7000/0x0/0x1bfc00000, data 0x2151896/0x2366000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.227296829s of 12.311328888s, submitted: 44
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a59b8000/0x0/0x1bfc00000, data 0x2151896/0x2366000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408854528 unmapped: 75513856 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0f836c00 session 0x558e0c6cfc20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e109db400 session 0x558e0b29da40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240071 data_alloc: 218103808 data_used: 17059840
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0b340000 session 0x558e0d5a2960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213106 data_alloc: 218103808 data_used: 16859136
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408879104 unmapped: 75489280 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 75481088 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213106 data_alloc: 218103808 data_used: 16859136
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 75481088 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 75481088 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 75481088 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 75481088 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 75481088 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213106 data_alloc: 218103808 data_used: 16859136
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408895488 unmapped: 75472896 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408895488 unmapped: 75472896 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408895488 unmapped: 75472896 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408895488 unmapped: 75472896 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5c54000/0x0/0x1bfc00000, data 0x1eb6824/0x20c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408895488 unmapped: 75472896 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213106 data_alloc: 218103808 data_used: 16859136
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408895488 unmapped: 75472896 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.005113602s of 24.130119324s, submitted: 35
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c6cfe00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 75538432 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0c667c00 session 0x558e0d457680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a71d0e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408625152 unmapped: 75743232 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408633344 unmapped: 75735040 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408633344 unmapped: 75735040 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5167000/0x0/0x1bfc00000, data 0x29a3886/0x2bb7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0b340000 session 0x558e0a0f1860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301691 data_alloc: 218103808 data_used: 16859136
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0c667c00 session 0x558e0d25de00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408633344 unmapped: 75735040 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e0f836c00 session 0x558e0d1c74a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 ms_handle_reset con 0x558e109db400 session 0x558e0cc88d20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408633344 unmapped: 75735040 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408444928 unmapped: 75923456 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5167000/0x0/0x1bfc00000, data 0x29a3886/0x2bb7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4380091 data_alloc: 234881024 data_used: 27590656
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5167000/0x0/0x1bfc00000, data 0x29a3886/0x2bb7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5167000/0x0/0x1bfc00000, data 0x29a3886/0x2bb7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4380091 data_alloc: 234881024 data_used: 27590656
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a5167000/0x0/0x1bfc00000, data 0x29a3886/0x2bb7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408461312 unmapped: 75907072 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.854196548s of 17.023889542s, submitted: 38
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 75177984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410050560 unmapped: 74317824 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a4ce6000/0x0/0x1bfc00000, data 0x2e1e886/0x3032000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4431225 data_alloc: 234881024 data_used: 28008448
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4431865 data_alloc: 234881024 data_used: 28090368
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a3a98000/0x0/0x1bfc00000, data 0x2ed2886/0x30e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.631714821s of 11.070816994s, submitted: 104
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a3a97000/0x0/0x1bfc00000, data 0x2ed3886/0x30e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4422481 data_alloc: 234881024 data_used: 28094464
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a3a95000/0x0/0x1bfc00000, data 0x2ed4886/0x30e8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411238400 unmapped: 73129984 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 403 ms_handle_reset con 0x558e0c667c00 session 0x558e0a71c5a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 417325056 unmapped: 67043328 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 403 ms_handle_reset con 0x558e0f836c00 session 0x558e08c6b680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 404 ms_handle_reset con 0x558e0cccf800 session 0x558e0c314b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413450240 unmapped: 70918144 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e11154c00 session 0x558e0c685a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 70893568 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4513340 data_alloc: 234881024 data_used: 28905472
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 405 heartbeat osd_stat(store_statfs(0x1a32c0000/0x0/0x1bfc00000, data 0x36a3e11/0x38bc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 70885376 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e0ccce000 session 0x558e0c6ce3c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e0c667c00 session 0x558e0c4b6000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e0cccf800 session 0x558e0c6d5c20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410288128 unmapped: 74080256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410296320 unmapped: 74072064 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 74063872 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 74063872 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4509472 data_alloc: 234881024 data_used: 28901376
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.822659492s of 11.073749542s, submitted: 77
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e0a2bf000 session 0x558e0bf4a5a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 74063872 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 405 heartbeat osd_stat(store_statfs(0x1a32b5000/0x0/0x1bfc00000, data 0x36b0e11/0x38c9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 74063872 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e0f836c00 session 0x558e0a0f1c20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e11154c00 session 0x558e0d30d680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 405 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d1c6960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 406 ms_handle_reset con 0x558e0c667c00 session 0x558e0c4b6b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410329088 unmapped: 74039296 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 406 ms_handle_reset con 0x558e0f836c00 session 0x558e0d5a2000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 406 ms_handle_reset con 0x558e0cccf800 session 0x558e0ce90780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 406 ms_handle_reset con 0x558e11154c00 session 0x558e0a5492c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 406 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c56f0e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 406 ms_handle_reset con 0x558e0c667c00 session 0x558e0c6d4d20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4509750 data_alloc: 234881024 data_used: 28909568
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 406 heartbeat osd_stat(store_statfs(0x1a32b0000/0x0/0x1bfc00000, data 0x36b2960/0x38cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 406 heartbeat osd_stat(store_statfs(0x1a32b0000/0x0/0x1bfc00000, data 0x36b2960/0x38cd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4509822 data_alloc: 234881024 data_used: 28909568
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.139338493s of 10.163581848s, submitted: 14
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e0cccf800 session 0x558e0c314780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410345472 unmapped: 74022912 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410353664 unmapped: 74014720 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a32ad000/0x0/0x1bfc00000, data 0x36b45b9/0x38d0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e0f836c00 session 0x558e0d3cba40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410353664 unmapped: 74014720 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e0cf4c400 session 0x558e0d1c61e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 74006528 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410370048 unmapped: 73998336 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e0c667c00 session 0x558e0c030000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4514644 data_alloc: 234881024 data_used: 29429760
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e0cccf800 session 0x558e0bf4bc20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410370048 unmapped: 73998336 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a32ad000/0x0/0x1bfc00000, data 0x36b45b9/0x38d0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410370048 unmapped: 73998336 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410378240 unmapped: 73990144 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d5a3860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410443776 unmapped: 73924608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410443776 unmapped: 73924608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4543604 data_alloc: 234881024 data_used: 32706560
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410443776 unmapped: 73924608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a32ad000/0x0/0x1bfc00000, data 0x36b45b9/0x38d0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410443776 unmapped: 73924608 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.790160179s of 11.805708885s, submitted: 4
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 407 ms_handle_reset con 0x558e11155400 session 0x558e0cc89a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410460160 unmapped: 73908224 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 408 ms_handle_reset con 0x558e0ccce400 session 0x558e0c66d680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410468352 unmapped: 73900032 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410468352 unmapped: 73900032 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4572762 data_alloc: 234881024 data_used: 32714752
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 408 heartbeat osd_stat(store_statfs(0x1a32aa000/0x0/0x1bfc00000, data 0x36b6266/0x38d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410468352 unmapped: 73900032 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410501120 unmapped: 73867264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 410501120 unmapped: 73867264 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 68444160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 68444160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4614288 data_alloc: 234881024 data_used: 33460224
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a2f9c000/0x0/0x1bfc00000, data 0x39c2da5/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 68444160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a2f9c000/0x0/0x1bfc00000, data 0x39c2da5/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 68444160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 68444160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 68444160 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416186368 unmapped: 68182016 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4616848 data_alloc: 234881024 data_used: 33906688
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416186368 unmapped: 68182016 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416186368 unmapped: 68182016 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.853892326s of 14.974067688s, submitted: 36
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a2f9c000/0x0/0x1bfc00000, data 0x39c2da5/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 70172672 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4632678 data_alloc: 234881024 data_used: 34930688
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x39c2da5/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4633158 data_alloc: 234881024 data_used: 34942976
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 409 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c60c1e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414359552 unmapped: 70008832 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x39c2da5/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [0,1])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 409 ms_handle_reset con 0x558e0c667c00 session 0x558e0a71d0e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a346e000/0x0/0x1bfc00000, data 0x3113d43/0x3331000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a346e000/0x0/0x1bfc00000, data 0x3113d43/0x3331000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524596 data_alloc: 234881024 data_used: 32493568
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a346e000/0x0/0x1bfc00000, data 0x3113d43/0x3331000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.275312424s of 16.359561920s, submitted: 35
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 414384128 unmapped: 69984256 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525396 data_alloc: 234881024 data_used: 32514048
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 409 ms_handle_reset con 0x558e0cccf800 session 0x558e0ad5a780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a346e000/0x0/0x1bfc00000, data 0x3113d43/0x3331000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 410 ms_handle_reset con 0x558e11155400 session 0x558e0d1c65a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 410 ms_handle_reset con 0x558e0cd2d000 session 0x558e0a0f03c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 410 ms_handle_reset con 0x558e0b3d3400 session 0x558e0a4c4b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423624704 unmapped: 60743680 heap: 484368384 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 410 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c60d2c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a177a000/0x0/0x1bfc00000, data 0x51e49fe/0x5404000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 410 ms_handle_reset con 0x558e0c667c00 session 0x558e0c5123c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 419741696 unmapped: 77234176 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 412 ms_handle_reset con 0x558e0cccf800 session 0x558e0c6cf680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 419758080 unmapped: 77217792 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 412 ms_handle_reset con 0x558e11155400 session 0x558e0d30d4a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 419758080 unmapped: 77217792 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 412 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a548960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 412 ms_handle_reset con 0x558e0b3d3400 session 0x558e0a4c4780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 78233600 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4764253 data_alloc: 234881024 data_used: 32800768
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 78233600 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 412 heartbeat osd_stat(store_statfs(0x1a1549000/0x0/0x1bfc00000, data 0x54142be/0x5635000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 78217216 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 413 ms_handle_reset con 0x558e0c667c00 session 0x558e098d9a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416489472 unmapped: 80486400 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 413 ms_handle_reset con 0x558e0f836c00 session 0x558e0d5a3c20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.862730026s of 10.283346176s, submitted: 116
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416530432 unmapped: 80445440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 413 ms_handle_reset con 0x558e0cccf800 session 0x558e0d1c65a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416530432 unmapped: 80445440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4489765 data_alloc: 234881024 data_used: 29618176
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416530432 unmapped: 80445440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416530432 unmapped: 80445440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3b49000/0x0/0x1bfc00000, data 0x2e11c40/0x3034000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66d680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416555008 unmapped: 80420864 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0b340000 session 0x558e0a549c20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 416555008 unmapped: 80420864 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0b3d3400 session 0x558e0c685a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 88064000 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4291724 data_alloc: 218103808 data_used: 16338944
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 88064000 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0c667c00 session 0x558e0cc88d20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a4a8e000/0x0/0x1bfc00000, data 0x1ecd7c3/0x20f0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0f836c00 session 0x558e0cc3ed20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c684780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0b340000 session 0x558e0c6ce5a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409288704 unmapped: 87687168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 415 ms_handle_reset con 0x558e0b3d3400 session 0x558e0bf4b860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409288704 unmapped: 87687168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409288704 unmapped: 87687168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409288704 unmapped: 87687168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4312252 data_alloc: 218103808 data_used: 16338944
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a48b2000/0x0/0x1bfc00000, data 0x20a97c3/0x22cc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409288704 unmapped: 87687168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409288704 unmapped: 87687168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.552254677s of 13.795706749s, submitted: 89
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4320330 data_alloc: 218103808 data_used: 16969728
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a48ae000/0x0/0x1bfc00000, data 0x20ab302/0x22cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a48ae000/0x0/0x1bfc00000, data 0x20ab302/0x22cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a48ae000/0x0/0x1bfc00000, data 0x20ab302/0x22cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a48ae000/0x0/0x1bfc00000, data 0x20ab302/0x22cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4320330 data_alloc: 218103808 data_used: 16969728
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a48ae000/0x0/0x1bfc00000, data 0x20ab302/0x22cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a48ae000/0x0/0x1bfc00000, data 0x20ab302/0x22cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1907f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409239552 unmapped: 87736320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.908812523s of 11.926033020s, submitted: 12
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409862144 unmapped: 87113728 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4366102 data_alloc: 218103808 data_used: 17080320
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409870336 unmapped: 87105536 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 84574208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 84574208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 84574208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a303e000/0x0/0x1bfc00000, data 0x277b302/0x299f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 84574208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4378202 data_alloc: 218103808 data_used: 17108992
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 84574208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3020000/0x0/0x1bfc00000, data 0x279a302/0x29be000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4374966 data_alloc: 218103808 data_used: 17108992
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.472076416s of 13.670102119s, submitted: 55
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 84770816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3013000/0x0/0x1bfc00000, data 0x27a7302/0x29cb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 84762624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4375226 data_alloc: 218103808 data_used: 17108992
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3013000/0x0/0x1bfc00000, data 0x27a7302/0x29cb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 84762624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 84762624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 84762624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3013000/0x0/0x1bfc00000, data 0x27a7302/0x29cb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 84762624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 84754432 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4377610 data_alloc: 218103808 data_used: 17108992
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 84754432 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e12992800 session 0x558e0cc3fa40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 84754432 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 84754432 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 84754432 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0c667c00 session 0x558e0c56eb40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a300f000/0x0/0x1bfc00000, data 0x27aa312/0x29cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 84746240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4377770 data_alloc: 218103808 data_used: 17113088
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 84746240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 84746240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 84746240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a300f000/0x0/0x1bfc00000, data 0x27aa312/0x29cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 84746240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 84746240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4377770 data_alloc: 218103808 data_used: 17113088
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 84738048 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c4b7a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.976377487s of 19.017902374s, submitted: 5
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412237824 unmapped: 84738048 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412246016 unmapped: 84729856 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0b340000 session 0x558e0a0e2960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0b3d3400 session 0x558e0d30d0e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e12992800 session 0x558e0ce90b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a300c000/0x0/0x1bfc00000, data 0x27ab322/0x29d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412246016 unmapped: 84729856 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0de69000 session 0x558e0c6d41e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412246016 unmapped: 84729856 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305796 data_alloc: 218103808 data_used: 16351232
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c6854a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0f496400 session 0x558e0c60d680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0cd2c800 session 0x558e0c6d4780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38e9000/0x0/0x1bfc00000, data 0x1ecf322/0x20f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0b340000 session 0x558e0c4b70e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304511 data_alloc: 218103808 data_used: 16351232
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 84713472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0b3d3400 session 0x558e0d3cad20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.843249321s of 11.028274536s, submitted: 15
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c56e960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x1ecf302/0x20f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303631 data_alloc: 218103808 data_used: 16347136
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x1ecf302/0x20f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0b340000 session 0x558e0d3cbe00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38eb000/0x0/0x1bfc00000, data 0x1ecf302/0x20f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0cd2c800 session 0x558e08c6be00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38ea000/0x0/0x1bfc00000, data 0x1ecf364/0x20f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 84705280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305413 data_alloc: 218103808 data_used: 16347136
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0f496400 session 0x558e0c5714a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412286976 unmapped: 84688896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38ea000/0x0/0x1bfc00000, data 0x1ecf364/0x20f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a21f9c7), peers [1,2] op hist [2,0,0,0,0,1])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0de69000 session 0x558e0cc3f2c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66de00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412688384 unmapped: 84287488 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.943180084s of 10.085391045s, submitted: 40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412696576 unmapped: 84279296 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e0b340000 session 0x558e0bf4bc20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e0cd2c800 session 0x558e0af02960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 84303872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2cc4000/0x0/0x1bfc00000, data 0x26e3fbd/0x290a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 84303872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4407523 data_alloc: 218103808 data_used: 16355328
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a287a000/0x0/0x1bfc00000, data 0x2b2dfbd/0x2d54000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a287a000/0x0/0x1bfc00000, data 0x2b2dfbd/0x2d54000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412680192 unmapped: 84295680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a287a000/0x0/0x1bfc00000, data 0x2b2dfbd/0x2d54000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412680192 unmapped: 84295680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e0f496400 session 0x558e0d3cb860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409100288 unmapped: 87875584 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e12992800 session 0x558e0d1c7e00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2856000/0x0/0x1bfc00000, data 0x2b51fbd/0x2d78000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409100288 unmapped: 87875584 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409116672 unmapped: 87859200 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446431 data_alloc: 218103808 data_used: 20959232
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 87752704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 87752704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 87752704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2855000/0x0/0x1bfc00000, data 0x2b5201f/0x2d79000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 87752704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 87752704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4468351 data_alloc: 234881024 data_used: 24096768
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.258309364s of 12.342646599s, submitted: 19
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 87752704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2855000/0x0/0x1bfc00000, data 0x2b5201f/0x2d79000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 85942272 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411328512 unmapped: 85647360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 411328512 unmapped: 85647360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412581888 unmapped: 84393984 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525595 data_alloc: 234881024 data_used: 28930048
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e0cd2c800 session 0x558e0c571a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 84852736 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e0f496400 session 0x558e0d30d4a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a245e000/0x0/0x1bfc00000, data 0x2f4901f/0x3170000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412426240 unmapped: 84549632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412426240 unmapped: 84549632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e12992800 session 0x558e0d1c65a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2426000/0x0/0x1bfc00000, data 0x2f8101f/0x31a8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e0ea8a000 session 0x558e0c56e960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412434432 unmapped: 84541440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412434432 unmapped: 84541440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4542345 data_alloc: 234881024 data_used: 29458432
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 ms_handle_reset con 0x558e15232c00 session 0x558e0d1c6960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.196988106s of 10.386419296s, submitted: 61
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 412434432 unmapped: 84541440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 418 ms_handle_reset con 0x558e0cd2c800 session 0x558e0ce91860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 418 ms_handle_reset con 0x558e0ea8a000 session 0x558e0a0e3680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a28ce000/0x0/0x1bfc00000, data 0x2ad8c08/0x2cff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4474491 data_alloc: 234881024 data_used: 24653824
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 418 ms_handle_reset con 0x558e0f496400 session 0x558e0d3ca960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a28ce000/0x0/0x1bfc00000, data 0x2ad8c08/0x2cff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4474491 data_alloc: 234881024 data_used: 24653824
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a28ce000/0x0/0x1bfc00000, data 0x2ad8c08/0x2cff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.505303383s of 11.614216805s, submitted: 31
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e12992800 session 0x558e0cc88f00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0b340000 session 0x558e0bf4a780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a4c45a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0cd2c800 session 0x558e0b29c000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a28cb000/0x0/0x1bfc00000, data 0x2ada747/0x2d02000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0ea8a000 session 0x558e0d3cbe00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405253 data_alloc: 218103808 data_used: 16371712
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405253 data_alloc: 218103808 data_used: 16371712
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405253 data_alloc: 218103808 data_used: 16371712
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0f496400 session 0x558e0c4b7680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e12992800 session 0x558e0cc89e00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0a2bf000 session 0x558e0cc89c20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0cd2c800 session 0x558e0c314b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 413491200 unmapped: 83484672 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405253 data_alloc: 218103808 data_used: 16371712
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4477893 data_alloc: 234881024 data_used: 26533888
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 81117184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4477893 data_alloc: 234881024 data_used: 26533888
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.765428543s of 28.864784241s, submitted: 33
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a2ada000/0x0/0x1bfc00000, data 0x28cc747/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [0,0,3,1,1])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 420429824 unmapped: 76546048 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 77209600 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0f496400 session 0x558e0c60cf00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0b341800 session 0x558e0c6d5a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a14e5000/0x0/0x1bfc00000, data 0x3ec07a9/0x40e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a14e5000/0x0/0x1bfc00000, data 0x3ec07a9/0x40e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4660750 data_alloc: 234881024 data_used: 26886144
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e104ef000 session 0x558e0af02f00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a14c3000/0x0/0x1bfc00000, data 0x3ee27a9/0x410b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c684f00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0b341800 session 0x558e0cc3e5a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4657126 data_alloc: 234881024 data_used: 26886144
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 ms_handle_reset con 0x558e0cd2c800 session 0x558e0a0f14a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a14c3000/0x0/0x1bfc00000, data 0x3ee27a9/0x410b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 75923456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.701116562s of 11.043125153s, submitted: 137
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 421076992 unmapped: 75898880 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 420 ms_handle_reset con 0x558e0f496400 session 0x558e0c60d680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422682624 unmapped: 74293248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a14bb000/0x0/0x1bfc00000, data 0x3ee605b/0x4111000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422682624 unmapped: 74293248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422690816 unmapped: 74285056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4740938 data_alloc: 234881024 data_used: 35729408
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422690816 unmapped: 74285056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a14b8000/0x0/0x1bfc00000, data 0x40a105b/0x4116000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422690816 unmapped: 74285056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422690816 unmapped: 74285056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422690816 unmapped: 74285056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422690816 unmapped: 74285056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4740938 data_alloc: 234881024 data_used: 35729408
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a14b8000/0x0/0x1bfc00000, data 0x40a105b/0x4116000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0c618c00 session 0x558e0cf0f860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422699008 unmapped: 74276864 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422699008 unmapped: 74276864 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.234736443s of 11.276307106s, submitted: 10
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 422699008 unmapped: 74276864 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427294720 unmapped: 69681152 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427311104 unmapped: 69664768 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4863206 data_alloc: 234881024 data_used: 36593664
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0615000/0x0/0x1bfc00000, data 0x4f4306b/0x4fb9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0615000/0x0/0x1bfc00000, data 0x4f4306b/0x4fb9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427327488 unmapped: 69648384 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c6d4b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0b341800 session 0x558e0ad5b0e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427327488 unmapped: 69648384 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0c618c00 session 0x558e0c685a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427335680 unmapped: 69640192 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0cd2c800 session 0x558e0ae32d20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427343872 unmapped: 69632000 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427343872 unmapped: 69632000 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4871830 data_alloc: 234881024 data_used: 36737024
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427532288 unmapped: 69443584 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a060b000/0x0/0x1bfc00000, data 0x4f4d06b/0x4fc3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427532288 unmapped: 69443584 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.665211678s of 10.000015259s, submitted: 103
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0605000/0x0/0x1bfc00000, data 0x4f5306b/0x4fc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e104ef000 session 0x558e0c56ed20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4873286 data_alloc: 234881024 data_used: 36773888
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0605000/0x0/0x1bfc00000, data 0x4f5306b/0x4fc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4873446 data_alloc: 234881024 data_used: 36798464
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427556864 unmapped: 69419008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427687936 unmapped: 69287936 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0605000/0x0/0x1bfc00000, data 0x4f5306b/0x4fc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4890086 data_alloc: 251658240 data_used: 39841792
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 59K writes, 225K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.04 MB/s
Cumulative WAL: 59K writes, 22K syncs, 2.68 writes per sync, written: 0.22 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3818 writes, 14K keys, 3818 commit groups, 1.0 writes per commit group, ingest: 14.28 MB, 0.02 MB/s
Interval WAL: 3818 writes, 1605 syncs, 2.38 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558e08aff610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558e08aff610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction,
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.779505730s of 14.846899033s, submitted: 3
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428474368 unmapped: 68501504 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428728320 unmapped: 68247552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4908390 data_alloc: 251658240 data_used: 41259008
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0599000/0x0/0x1bfc00000, data 0x4fc006b/0x5035000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0599000/0x0/0x1bfc00000, data 0x4fc006b/0x5035000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4908770 data_alloc: 251658240 data_used: 41263104
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0598000/0x0/0x1bfc00000, data 0x4fc106b/0x5036000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0598000/0x0/0x1bfc00000, data 0x4fc106b/0x5036000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 68149248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428425216 unmapped: 68550656 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.186706543s of 11.215800285s, submitted: 11
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428425216 unmapped: 68550656 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a0e2000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428457984 unmapped: 68517888 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649418 data_alloc: 234881024 data_used: 30011392
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0b341800 session 0x558e0c315680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a0596000/0x0/0x1bfc00000, data 0x4fc206b/0x5037000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428457984 unmapped: 68517888 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428457984 unmapped: 68517888 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428457984 unmapped: 68517888 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a12b1000/0x0/0x1bfc00000, data 0x3879009/0x38ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428457984 unmapped: 68517888 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649418 data_alloc: 234881024 data_used: 30011392
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0f496400 session 0x558e0d30d680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0c618c00 session 0x558e0c56e780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1ce1000/0x0/0x1bfc00000, data 0x3879009/0x38ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1ce1000/0x0/0x1bfc00000, data 0x3879009/0x38ed000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649066 data_alloc: 234881024 data_used: 30011392
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0cd2c800 session 0x558e0c571860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.735739708s of 13.791164398s, submitted: 25
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c6d4780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1ce1000/0x0/0x1bfc00000, data 0x3878ff9/0x38ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1d51000/0x0/0x1bfc00000, data 0x3809ff9/0x387d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4641654 data_alloc: 234881024 data_used: 30007296
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1d51000/0x0/0x1bfc00000, data 0x3809ff9/0x387d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 ms_handle_reset con 0x558e0b341800 session 0x558e08c6a1e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1d51000/0x0/0x1bfc00000, data 0x3809ff9/0x387d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428466176 unmapped: 68509696 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a1d51000/0x0/0x1bfc00000, data 0x3809ff9/0x387d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 429522944 unmapped: 67452928 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a1d4e000/0x0/0x1bfc00000, data 0x3807ca6/0x387f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 422 ms_handle_reset con 0x558e0ea8a000 session 0x558e0c6d5680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 429522944 unmapped: 67452928 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 429522944 unmapped: 67452928 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 422 ms_handle_reset con 0x558e0c618c00 session 0x558e0d457680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 429522944 unmapped: 67452928 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4637380 data_alloc: 234881024 data_used: 30015488
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 429522944 unmapped: 67452928 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a1d50000/0x0/0x1bfc00000, data 0x3651ca6/0x387d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 429531136 unmapped: 67444736 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.924095154s of 10.024603844s, submitted: 47
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 423 ms_handle_reset con 0x558e0f496400 session 0x558e0d1c6780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423264256 unmapped: 73711616 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423264256 unmapped: 73711616 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a34c5000/0x0/0x1bfc00000, data 0x1edb7e5/0x2108000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423264256 unmapped: 73711616 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4371570 data_alloc: 218103808 data_used: 16392192
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423264256 unmapped: 73711616 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423264256 unmapped: 73711616 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 424 ms_handle_reset con 0x558e0a2bf000 session 0x558e0bf4b0e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423280640 unmapped: 73695232 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 424 ms_handle_reset con 0x558e0b341800 session 0x558e0d25cf00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 424 ms_handle_reset con 0x558e0c618c00 session 0x558e0d457e00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 424 ms_handle_reset con 0x558e0ea8a000 session 0x558e0d3ca000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 424 ms_handle_reset con 0x558e1cd82800 session 0x558e0d456960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a34c2000/0x0/0x1bfc00000, data 0x1edd492/0x210b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426958848 unmapped: 70017024 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 424 ms_handle_reset con 0x558e0a2bf000 session 0x558e0ae33860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a2f98000/0x0/0x1bfc00000, data 0x2407492/0x2635000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424329216 unmapped: 72646656 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417202 data_alloc: 218103808 data_used: 16392192
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424329216 unmapped: 72646656 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424337408 unmapped: 72638464 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424345600 unmapped: 72630272 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424345600 unmapped: 72630272 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424345600 unmapped: 72630272 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4420000 data_alloc: 218103808 data_used: 16392192
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2f95000/0x0/0x1bfc00000, data 0x2408fd1/0x2638000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424345600 unmapped: 72630272 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424345600 unmapped: 72630272 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0b341800 session 0x558e0a71cf00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424353792 unmapped: 72622080 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 72589312 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.533414841s of 16.642957687s, submitted: 39
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0ea8a000 session 0x558e0c570b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2f95000/0x0/0x1bfc00000, data 0x2408fd1/0x2638000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 72589312 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4459394 data_alloc: 234881024 data_used: 21716992
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 72589312 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 72589312 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c67dc00 session 0x558e0c684b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0ea8b800 session 0x558e0cc3f860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66cf00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 72269824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 72269824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 72269824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541033 data_alloc: 234881024 data_used: 21716992
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2528000/0x0/0x1bfc00000, data 0x2e76fd1/0x30a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 72269824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 72269824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 72269824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 71360512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.110908508s of 10.324708939s, submitted: 77
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427384832 unmapped: 69591040 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4607333 data_alloc: 234881024 data_used: 21712896
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427393024 unmapped: 69582848 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a1c7e000/0x0/0x1bfc00000, data 0x3718fd1/0x3948000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426532864 unmapped: 70443008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a1c7d000/0x0/0x1bfc00000, data 0x3721fd1/0x3951000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426532864 unmapped: 70443008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426532864 unmapped: 70443008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0b341800 session 0x558e0d3ca5a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c67dc00 session 0x558e098d8960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426532864 unmapped: 70443008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4619143 data_alloc: 234881024 data_used: 22691840
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0ea8a000 session 0x558e0c513a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0b206400 session 0x558e0c571680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426532864 unmapped: 70443008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a1c7c000/0x0/0x1bfc00000, data 0x3721fe1/0x3952000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426541056 unmapped: 70434816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a1c7c000/0x0/0x1bfc00000, data 0x3721fe1/0x3952000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426639360 unmapped: 70336512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426639360 unmapped: 70336512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426639360 unmapped: 70336512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4699282 data_alloc: 234881024 data_used: 33542144
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426647552 unmapped: 70328320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c618c00 session 0x558e08c6a960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426647552 unmapped: 70328320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.568646431s of 12.707002640s, submitted: 28
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c67dc00 session 0x558e0d25d860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0ea8a000 session 0x558e0cf0f860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c67c800 session 0x558e0a0e2b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c619800 session 0x558e0c6d43c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c618c00 session 0x558e0c4b6000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 70762496 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c67c800 session 0x558e0d1c7860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c67dc00 session 0x558e0d30c5a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2a51000/0x0/0x1bfc00000, data 0x294cfe1/0x2b7d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423993344 unmapped: 72982528 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423993344 unmapped: 72982528 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4610407 data_alloc: 234881024 data_used: 27230208
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0ea8a000 session 0x558e0c56e1e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 423993344 unmapped: 72982528 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 72974336 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 424173568 unmapped: 72802304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2268000/0x0/0x1bfc00000, data 0x3135043/0x3366000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [0,0,0,0,7])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427737088 unmapped: 69238784 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427884544 unmapped: 69091328 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4751766 data_alloc: 234881024 data_used: 35971072
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a1831000/0x0/0x1bfc00000, data 0x3b64043/0x3d95000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759170 data_alloc: 234881024 data_used: 36057088
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.375086784s of 13.775465012s, submitted: 154
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427892736 unmapped: 69083136 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427909120 unmapped: 69066752 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a1835000/0x0/0x1bfc00000, data 0x3b68043/0x3d99000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d30de00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0b341800 session 0x558e0c314000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428294144 unmapped: 68681728 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66d2c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a0df9000/0x0/0x1bfc00000, data 0x459e033/0x47ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432095232 unmapped: 64880640 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4849623 data_alloc: 234881024 data_used: 36495360
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a0d47000/0x0/0x1bfc00000, data 0x4648033/0x4878000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4849783 data_alloc: 234881024 data_used: 36499456
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a0d36000/0x0/0x1bfc00000, data 0x4668033/0x4898000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a0d36000/0x0/0x1bfc00000, data 0x4668033/0x4898000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 64626688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4841275 data_alloc: 234881024 data_used: 36503552
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.999837875s of 14.550086975s, submitted: 164
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0c4b6960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c71b000 session 0x558e0c6cf4a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432365568 unmapped: 64610304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c618c00 session 0x558e0a4c45a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 426991616 unmapped: 69984256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffd1/0x35af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [0,0,0,0,0,0,1,1])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427048960 unmapped: 69926912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201f000/0x0/0x1bfc00000, data 0x337ffd1/0x35af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427065344 unmapped: 69910528 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427081728 unmapped: 69894144 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4641177 data_alloc: 234881024 data_used: 27955200
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427122688 unmapped: 69853184 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d456000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427139072 unmapped: 69836800 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0b341800 session 0x558e0d1c7c20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427155456 unmapped: 69820416 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0c56f4a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 ms_handle_reset con 0x558e0c618c00 session 0x558e0a0f0780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427155456 unmapped: 69820416 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427163648 unmapped: 69812224 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4643015 data_alloc: 234881024 data_used: 27955200
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4644455 data_alloc: 234881024 data_used: 28127232
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4644455 data_alloc: 234881024 data_used: 28127232
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 427171840 unmapped: 69804032 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.808780670s of 20.876974106s, submitted: 392
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428220416 unmapped: 68755456 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4665527 data_alloc: 234881024 data_used: 30130176
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4665527 data_alloc: 234881024 data_used: 30130176
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a201e000/0x0/0x1bfc00000, data 0x337ffe1/0x35b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4669367 data_alloc: 234881024 data_used: 30838784
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.971826553s of 13.989375114s, submitted: 17
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a201a000/0x0/0x1bfc00000, data 0x3381c3a/0x35b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4673541 data_alloc: 234881024 data_used: 30846976
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 68706304 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0c67dc00 session 0x558e0a0e3a40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0c67dc00 session 0x558e0c4b65a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1a8b000/0x0/0x1bfc00000, data 0x3910c9c/0x3b43000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4723414 data_alloc: 234881024 data_used: 30846976
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1a8b000/0x0/0x1bfc00000, data 0x3910c9c/0x3b43000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4723414 data_alloc: 234881024 data_used: 30846976
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1a8b000/0x0/0x1bfc00000, data 0x3910c9c/0x3b43000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.506410599s of 16.591121674s, submitted: 32
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1a8b000/0x0/0x1bfc00000, data 0x3910c9c/0x3b43000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428670976 unmapped: 68304896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0b341800 session 0x558e0d30cd20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428679168 unmapped: 68296704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428679168 unmapped: 68296704 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4784980 data_alloc: 234881024 data_used: 36495360
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428695552 unmapped: 68280320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428695552 unmapped: 68280320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428695552 unmapped: 68280320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1a66000/0x0/0x1bfc00000, data 0x3b60c9c/0x3b68000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428695552 unmapped: 68280320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428695552 unmapped: 68280320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4785940 data_alloc: 234881024 data_used: 36536320
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 428695552 unmapped: 68280320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436420608 unmapped: 60555264 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.235580444s of 10.482039452s, submitted: 97
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 61194240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a0e02000/0x0/0x1bfc00000, data 0x47c4c9c/0x47cc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a62f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 61194240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a09ec000/0x0/0x1bfc00000, data 0x47cac9c/0x47d2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 61194240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4880954 data_alloc: 234881024 data_used: 37404672
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 61194240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435781632 unmapped: 61194240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a09ec000/0x0/0x1bfc00000, data 0x47cac9c/0x47d2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439377920 unmapped: 57597952 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439386112 unmapped: 57589760 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439386112 unmapped: 57589760 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4946272 data_alloc: 234881024 data_used: 38883328
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04cd000/0x0/0x1bfc00000, data 0x4ce9c9c/0x4cf1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04cd000/0x0/0x1bfc00000, data 0x4ce9c9c/0x4cf1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04cd000/0x0/0x1bfc00000, data 0x4ce9c9c/0x4cf1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4946752 data_alloc: 234881024 data_used: 39026688
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04cd000/0x0/0x1bfc00000, data 0x4ce9c9c/0x4cf1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439394304 unmapped: 57581568 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0a71cf00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0c618c00 session 0x558e0c5705a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.299722672s of 15.453255653s, submitted: 26
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0ea8a000 session 0x558e0af02f00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436207616 unmapped: 60768256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436207616 unmapped: 60768256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04f1000/0x0/0x1bfc00000, data 0x4cc5c9c/0x4ccd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04f1000/0x0/0x1bfc00000, data 0x4cc5c9c/0x4ccd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436207616 unmapped: 60768256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4933476 data_alloc: 234881024 data_used: 38961152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c66de00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436207616 unmapped: 60768256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436207616 unmapped: 60768256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a04f1000/0x0/0x1bfc00000, data 0x4cc5c9c/0x4ccd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436207616 unmapped: 60768256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0b341800 session 0x558e0d3cb0e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436232192 unmapped: 60743680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0d1c72c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 427 ms_handle_reset con 0x558e0c618c00 session 0x558e0d25c1e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436248576 unmapped: 60727296 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a09ff000/0x0/0x1bfc00000, data 0x4af7c9c/0x47bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4914724 data_alloc: 234881024 data_used: 38768640
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436248576 unmapped: 60727296 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436256768 unmapped: 60719104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a0a0c000/0x0/0x1bfc00000, data 0x457cafd/0x47b1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 427 ms_handle_reset con 0x558e0c71b000 session 0x558e0c5712c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 427 ms_handle_reset con 0x558e0c67c800 session 0x558e0c4b6b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a0a0c000/0x0/0x1bfc00000, data 0x457cafd/0x47b1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.103678703s of 10.367775917s, submitted: 42
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 427 ms_handle_reset con 0x558e0a2bf000 session 0x558e0cc885a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436256768 unmapped: 60719104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436256768 unmapped: 60719104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436256768 unmapped: 60719104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4903765 data_alloc: 234881024 data_used: 38871040
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436256768 unmapped: 60719104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436264960 unmapped: 60710912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 429 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0ce91860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436281344 unmapped: 60694528 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a0a0a000/0x0/0x1bfc00000, data 0x457e62c/0x47b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 429 ms_handle_reset con 0x558e0c618c00 session 0x558e0cc88f00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434724864 unmapped: 62251008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434724864 unmapped: 62251008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4623165 data_alloc: 234881024 data_used: 23072768
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434724864 unmapped: 62251008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434724864 unmapped: 62251008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ea9000/0x0/0x1bfc00000, data 0x30df277/0x3314000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434724864 unmapped: 62251008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ea9000/0x0/0x1bfc00000, data 0x30df277/0x3314000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434724864 unmapped: 62251008 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.837158203s of 11.938447952s, submitted: 44
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ea9000/0x0/0x1bfc00000, data 0x30df277/0x3314000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434733056 unmapped: 62242816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4633765 data_alloc: 234881024 data_used: 24612864
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434733056 unmapped: 62242816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434733056 unmapped: 62242816 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434741248 unmapped: 62234624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434741248 unmapped: 62234624 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1ea6000/0x0/0x1bfc00000, data 0x30e0db6/0x3317000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 434749440 unmapped: 62226432 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4643795 data_alloc: 234881024 data_used: 25104384
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435798016 unmapped: 61177856 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435798016 unmapped: 61177856 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435798016 unmapped: 61177856 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b341800 session 0x558e08c6a960
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1ea7000/0x0/0x1bfc00000, data 0x30e0db6/0x3317000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c513e00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435806208 unmapped: 61169664 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435806208 unmapped: 61169664 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444513 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435806208 unmapped: 61169664 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435806208 unmapped: 61169664 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444513 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444513 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 61161472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435822592 unmapped: 61153280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435822592 unmapped: 61153280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444513 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435822592 unmapped: 61153280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435830784 unmapped: 61145088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435830784 unmapped: 61145088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435830784 unmapped: 61145088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435830784 unmapped: 61145088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444513 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435830784 unmapped: 61145088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.666606903s of 31.999231339s, submitted: 37
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b341800 session 0x558e0d25da40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0c6d4d20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c618c00 session 0x558e0bf4b680
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c74000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c67c800 session 0x558e0d5a32c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0a2bf000 session 0x558e0d3ca3c0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4488421 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b341800 session 0x558e0d3ca000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0d5a25a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c618c00 session 0x558e0d457c20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c71b000 session 0x558e0c315e00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 435994624 unmapped: 60981248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436002816 unmapped: 60973056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4497833 data_alloc: 218103808 data_used: 17612800
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436011008 unmapped: 60964864 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521193 data_alloc: 218103808 data_used: 20926464
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521193 data_alloc: 218103808 data_used: 20926464
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 436215808 unmapped: 60760064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2c2b000/0x0/0x1bfc00000, data 0x235cdb6/0x2593000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.873628616s of 18.985849380s, submitted: 42
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 438075392 unmapped: 58900480 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439517184 unmapped: 57458688 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 438853632 unmapped: 58122240 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2321000/0x0/0x1bfc00000, data 0x2c66db6/0x2e9d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [1])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4601825 data_alloc: 234881024 data_used: 22523904
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600273 data_alloc: 234881024 data_used: 22536192
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439910400 unmapped: 57065472 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a71c780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4599449 data_alloc: 234881024 data_used: 22536192
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439918592 unmapped: 57057280 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.488086700s of 19.132707596s, submitted: 110
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4599581 data_alloc: 234881024 data_used: 22536192
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600541 data_alloc: 234881024 data_used: 22593536
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4600541 data_alloc: 234881024 data_used: 22593536
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.338050842s of 12.457947731s, submitted: 1
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2319000/0x0/0x1bfc00000, data 0x2c6edb6/0x2ea5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4606689 data_alloc: 234881024 data_used: 23166976
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 09:34:11 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/303777078' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439926784 unmapped: 57049088 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4609505 data_alloc: 234881024 data_used: 23175168
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 439934976 unmapped: 57040896 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2314000/0x0/0x1bfc00000, data 0x2c73db6/0x2eaa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b341800 session 0x558e0bf4ba40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0cc88000
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4454343 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4454343 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4454343 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e15232400 session 0x558e0c6850e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4454343 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f7f000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.194274902s of 32.377357483s, submitted: 50
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e15232400 session 0x558e0d1c6780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c67dc00 session 0x558e0d5a30e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0a2bf000 session 0x558e0c56e780
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b341800 session 0x558e0c570b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0ce91860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4480452 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2d7d000/0x0/0x1bfc00000, data 0x220bd54/0x2441000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4480452 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 64815104 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e15232400 session 0x558e0c4b6b40
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2d7d000/0x0/0x1bfc00000, data 0x220bd54/0x2441000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4498652 data_alloc: 218103808 data_used: 18604032
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2d59000/0x0/0x1bfc00000, data 0x222fd54/0x2465000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 64806912 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4498652 data_alloc: 218103808 data_used: 18604032
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432177152 unmapped: 64798720 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432177152 unmapped: 64798720 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2d59000/0x0/0x1bfc00000, data 0x222fd54/0x2465000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432177152 unmapped: 64798720 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.752426147s of 18.832401276s, submitted: 18
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432177152 unmapped: 64798720 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4545632 data_alloc: 218103808 data_used: 18874368
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2840000/0x0/0x1bfc00000, data 0x2747d54/0x297d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4545632 data_alloc: 218103808 data_used: 18874368
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b387800 session 0x558e0d25c1e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c54b000 session 0x558e0a549c20
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2840000/0x0/0x1bfc00000, data 0x2747d54/0x297d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2841000/0x0/0x1bfc00000, data 0x2747d54/0x297d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4543984 data_alloc: 218103808 data_used: 18874368
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.182659149s of 14.319707870s, submitted: 28
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2841000/0x0/0x1bfc00000, data 0x2747d54/0x297d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4544276 data_alloc: 218103808 data_used: 18878464
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433135616 unmapped: 63840256 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2841000/0x0/0x1bfc00000, data 0x2747d54/0x297d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433143808 unmapped: 63832064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0a2bf000 session 0x558e0a5494a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433143808 unmapped: 63832064 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0b341800 session 0x558e0c6cf0e0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0c2e9c00 session 0x558e0c6ce5a0
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a307d000/0x0/0x1bfc00000, data 0x1f0bd54/0x2141000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433152000 unmapped: 63823872 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433160192 unmapped: 63815680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433160192 unmapped: 63815680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433160192 unmapped: 63815680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433160192 unmapped: 63815680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433160192 unmapped: 63815680 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 63807488 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 433168384 unmapped: 63807488 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 65216512 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 65208320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 65208320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 65208320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 65208320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 65208320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 65208320 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431775744 unmapped: 65200128 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431775744 unmapped: 65200128 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431783936 unmapped: 65191936 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431783936 unmapped: 65191936 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431783936 unmapped: 65191936 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431783936 unmapped: 65191936 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 65183744 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 65183744 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 65183744 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 65183744 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 65175552 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 65167360 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 65159168 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 65150976 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 65150976 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 65150976 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 65150976 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 65150976 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 65150976 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431833088 unmapped: 65142784 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431833088 unmapped: 65142784 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 65134592 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 65134592 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 65134592 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 65134592 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0a5a2400 session 0x558e08c6b860
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 65134592 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 65126400 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 65126400 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 65126400 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 65126400 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 65126400 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 65118208 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431865856 unmapped: 65110016 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431865856 unmapped: 65110016 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431865856 unmapped: 65110016 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431865856 unmapped: 65110016 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431874048 unmapped: 65101824 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 65093632 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431890432 unmapped: 65085440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431890432 unmapped: 65085440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431890432 unmapped: 65085440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431890432 unmapped: 65085440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431890432 unmapped: 65085440 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431898624 unmapped: 65077248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431898624 unmapped: 65077248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431898624 unmapped: 65077248 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431906816 unmapped: 65069056 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'config diff' '{prefix=config diff}'
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'config show' '{prefix=config show}'
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'counter dump' '{prefix=counter dump}'
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'counter schema' '{prefix=counter schema}'
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431308800 unmapped: 65667072 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431071232 unmapped: 65904640 heap: 496975872 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'log dump' '{prefix=log dump}'
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 441999360 unmapped: 66019328 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'perf dump' '{prefix=perf dump}'
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'perf schema' '{prefix=perf schema}'
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430456832 unmapped: 77561856 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430473216 unmapped: 77545472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430473216 unmapped: 77545472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430473216 unmapped: 77545472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430473216 unmapped: 77545472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430473216 unmapped: 77545472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430473216 unmapped: 77545472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430473216 unmapped: 77545472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430473216 unmapped: 77545472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430481408 unmapped: 77537280 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430481408 unmapped: 77537280 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430481408 unmapped: 77537280 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430489600 unmapped: 77529088 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430489600 unmapped: 77529088 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430489600 unmapped: 77529088 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430489600 unmapped: 77529088 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430489600 unmapped: 77529088 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430497792 unmapped: 77520896 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430497792 unmapped: 77520896 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430497792 unmapped: 77520896 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430497792 unmapped: 77520896 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430497792 unmapped: 77520896 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430497792 unmapped: 77520896 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430505984 unmapped: 77512704 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430505984 unmapped: 77512704 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430514176 unmapped: 77504512 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430514176 unmapped: 77504512 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430522368 unmapped: 77496320 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430522368 unmapped: 77496320 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430522368 unmapped: 77496320 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430522368 unmapped: 77496320 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430522368 unmapped: 77496320 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430522368 unmapped: 77496320 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430522368 unmapped: 77496320 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430522368 unmapped: 77496320 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430522368 unmapped: 77496320 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430522368 unmapped: 77496320 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430522368 unmapped: 77496320 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430530560 unmapped: 77488128 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430530560 unmapped: 77488128 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430530560 unmapped: 77488128 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430546944 unmapped: 77471744 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430546944 unmapped: 77471744 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430546944 unmapped: 77471744 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430546944 unmapped: 77471744 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430546944 unmapped: 77471744 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430546944 unmapped: 77471744 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430546944 unmapped: 77471744 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430546944 unmapped: 77471744 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430555136 unmapped: 77463552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430555136 unmapped: 77463552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430555136 unmapped: 77463552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430555136 unmapped: 77463552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430555136 unmapped: 77463552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430555136 unmapped: 77463552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430555136 unmapped: 77463552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430555136 unmapped: 77463552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430571520 unmapped: 77447168 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430571520 unmapped: 77447168 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430571520 unmapped: 77447168 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430571520 unmapped: 77447168 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430571520 unmapped: 77447168 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430571520 unmapped: 77447168 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430579712 unmapped: 77438976 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430579712 unmapped: 77438976 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430579712 unmapped: 77438976 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430579712 unmapped: 77438976 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430579712 unmapped: 77438976 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430587904 unmapped: 77430784 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 77422592 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 77422592 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 77422592 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 77422592 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430604288 unmapped: 77414400 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430604288 unmapped: 77414400 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430604288 unmapped: 77414400 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430604288 unmapped: 77414400 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430604288 unmapped: 77414400 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430604288 unmapped: 77414400 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430604288 unmapped: 77414400 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430604288 unmapped: 77414400 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430612480 unmapped: 77406208 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430612480 unmapped: 77406208 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430620672 unmapped: 77398016 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430620672 unmapped: 77398016 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430620672 unmapped: 77398016 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430620672 unmapped: 77398016 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430620672 unmapped: 77398016 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430620672 unmapped: 77398016 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430628864 unmapped: 77389824 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430628864 unmapped: 77389824 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430628864 unmapped: 77389824 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430628864 unmapped: 77389824 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430628864 unmapped: 77389824 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430628864 unmapped: 77389824 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430628864 unmapped: 77389824 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430637056 unmapped: 77381632 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430645248 unmapped: 77373440 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430645248 unmapped: 77373440 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430645248 unmapped: 77373440 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430645248 unmapped: 77373440 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430653440 unmapped: 77365248 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430653440 unmapped: 77365248 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430653440 unmapped: 77365248 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430653440 unmapped: 77365248 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430653440 unmapped: 77365248 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430661632 unmapped: 77357056 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430669824 unmapped: 77348864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 77332480 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111] 
** DB Stats **
Uptime(secs): 6600.1 total, 600.0 interval
Cumulative writes: 62K writes, 235K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.23 GB, 0.04 MB/s
Cumulative WAL: 62K writes, 23K syncs, 2.67 writes per sync, written: 0.23 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2539 writes, 9349 keys, 2539 commit groups, 1.0 writes per commit group, ingest: 9.91 MB, 0.02 MB/s
Interval WAL: 2539 writes, 1034 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 77332480 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 77332480 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 77332480 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 77332480 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 77332480 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 77332480 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430694400 unmapped: 77324288 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430694400 unmapped: 77324288 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430694400 unmapped: 77324288 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430702592 unmapped: 77316096 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430702592 unmapped: 77316096 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430702592 unmapped: 77316096 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430702592 unmapped: 77316096 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430702592 unmapped: 77316096 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430702592 unmapped: 77316096 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430718976 unmapped: 77299712 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430718976 unmapped: 77299712 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430718976 unmapped: 77299712 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430718976 unmapped: 77299712 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430727168 unmapped: 77291520 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430727168 unmapped: 77291520 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430727168 unmapped: 77291520 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430727168 unmapped: 77291520 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430735360 unmapped: 77283328 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430735360 unmapped: 77283328 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430735360 unmapped: 77283328 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430743552 unmapped: 77275136 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430743552 unmapped: 77275136 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430743552 unmapped: 77275136 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430743552 unmapped: 77275136 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430743552 unmapped: 77275136 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430751744 unmapped: 77266944 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430751744 unmapped: 77266944 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430751744 unmapped: 77266944 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430751744 unmapped: 77266944 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430751744 unmapped: 77266944 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430751744 unmapped: 77266944 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430751744 unmapped: 77266944 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430751744 unmapped: 77266944 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430751744 unmapped: 77266944 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430751744 unmapped: 77266944 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430768128 unmapped: 77250560 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430768128 unmapped: 77250560 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430768128 unmapped: 77250560 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430768128 unmapped: 77250560 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430768128 unmapped: 77250560 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430768128 unmapped: 77250560 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430776320 unmapped: 77242368 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430776320 unmapped: 77242368 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430784512 unmapped: 77234176 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430784512 unmapped: 77234176 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430784512 unmapped: 77234176 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430784512 unmapped: 77234176 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430784512 unmapped: 77234176 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430792704 unmapped: 77225984 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430800896 unmapped: 77217792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430800896 unmapped: 77217792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430800896 unmapped: 77217792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430800896 unmapped: 77217792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430800896 unmapped: 77217792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430800896 unmapped: 77217792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430800896 unmapped: 77217792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430800896 unmapped: 77217792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430817280 unmapped: 77201408 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430817280 unmapped: 77201408 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430817280 unmapped: 77201408 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430817280 unmapped: 77201408 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430817280 unmapped: 77201408 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430817280 unmapped: 77201408 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430817280 unmapped: 77201408 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430825472 unmapped: 77193216 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430825472 unmapped: 77193216 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430825472 unmapped: 77193216 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430841856 unmapped: 77176832 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430841856 unmapped: 77176832 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430841856 unmapped: 77176832 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430841856 unmapped: 77176832 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430841856 unmapped: 77176832 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430850048 unmapped: 77168640 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430858240 unmapped: 77160448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430858240 unmapped: 77160448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430858240 unmapped: 77160448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430858240 unmapped: 77160448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430858240 unmapped: 77160448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430858240 unmapped: 77160448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430858240 unmapped: 77160448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430858240 unmapped: 77160448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430858240 unmapped: 77160448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430866432 unmapped: 77152256 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430866432 unmapped: 77152256 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430866432 unmapped: 77152256 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430866432 unmapped: 77152256 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430874624 unmapped: 77144064 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430874624 unmapped: 77144064 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430882816 unmapped: 77135872 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430882816 unmapped: 77135872 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430882816 unmapped: 77135872 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430882816 unmapped: 77135872 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430882816 unmapped: 77135872 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430882816 unmapped: 77135872 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430882816 unmapped: 77135872 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430882816 unmapped: 77135872 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430907392 unmapped: 77111296 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430907392 unmapped: 77111296 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430907392 unmapped: 77111296 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 352.878387451s of 352.958587646s, submitted: 24
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430907392 unmapped: 77111296 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 430964736 unmapped: 77053952 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 77004800 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [0,0,1])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431104000 unmapped: 76914688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431104000 unmapped: 76914688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431104000 unmapped: 76914688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431104000 unmapped: 76914688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431104000 unmapped: 76914688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431104000 unmapped: 76914688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431104000 unmapped: 76914688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431104000 unmapped: 76914688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460624 data_alloc: 218103808 data_used: 16433152
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431104000 unmapped: 76914688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431104000 unmapped: 76914688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431104000 unmapped: 76914688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431104000 unmapped: 76914688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 76906496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431120384 unmapped: 76898304 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431120384 unmapped: 76898304 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431120384 unmapped: 76898304 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431120384 unmapped: 76898304 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431120384 unmapped: 76898304 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431120384 unmapped: 76898304 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431120384 unmapped: 76898304 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431120384 unmapped: 76898304 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431128576 unmapped: 76890112 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431128576 unmapped: 76890112 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431128576 unmapped: 76890112 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431128576 unmapped: 76890112 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 76881920 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 76881920 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 76881920 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431136768 unmapped: 76881920 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431144960 unmapped: 76873728 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431144960 unmapped: 76873728 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431153152 unmapped: 76865536 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431153152 unmapped: 76865536 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431153152 unmapped: 76865536 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431153152 unmapped: 76865536 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431153152 unmapped: 76865536 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431153152 unmapped: 76865536 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431161344 unmapped: 76857344 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431161344 unmapped: 76857344 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431161344 unmapped: 76857344 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431161344 unmapped: 76857344 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431161344 unmapped: 76857344 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431161344 unmapped: 76857344 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431161344 unmapped: 76857344 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431161344 unmapped: 76857344 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 76849152 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 76849152 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 76849152 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 76849152 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 76849152 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431177728 unmapped: 76840960 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431177728 unmapped: 76840960 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431177728 unmapped: 76840960 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431177728 unmapped: 76840960 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431177728 unmapped: 76840960 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431185920 unmapped: 76832768 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431185920 unmapped: 76832768 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431185920 unmapped: 76832768 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431185920 unmapped: 76832768 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431185920 unmapped: 76832768 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431185920 unmapped: 76832768 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431202304 unmapped: 76816384 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431202304 unmapped: 76816384 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431202304 unmapped: 76816384 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431202304 unmapped: 76816384 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431202304 unmapped: 76816384 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431202304 unmapped: 76816384 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431202304 unmapped: 76816384 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431202304 unmapped: 76816384 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431210496 unmapped: 76808192 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431210496 unmapped: 76808192 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431210496 unmapped: 76808192 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431210496 unmapped: 76808192 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431210496 unmapped: 76808192 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431210496 unmapped: 76808192 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431210496 unmapped: 76808192 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431210496 unmapped: 76808192 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431218688 unmapped: 76800000 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431218688 unmapped: 76800000 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431218688 unmapped: 76800000 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431226880 unmapped: 76791808 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431226880 unmapped: 76791808 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431226880 unmapped: 76791808 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431226880 unmapped: 76791808 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431226880 unmapped: 76791808 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431235072 unmapped: 76783616 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431235072 unmapped: 76783616 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431235072 unmapped: 76783616 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431235072 unmapped: 76783616 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431235072 unmapped: 76783616 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431235072 unmapped: 76783616 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431235072 unmapped: 76783616 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431243264 unmapped: 76775424 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431243264 unmapped: 76775424 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431243264 unmapped: 76775424 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 76767232 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 76767232 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 76767232 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 76767232 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 76767232 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431251456 unmapped: 76767232 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431259648 unmapped: 76759040 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431259648 unmapped: 76759040 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431259648 unmapped: 76759040 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431259648 unmapped: 76759040 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431259648 unmapped: 76759040 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431259648 unmapped: 76759040 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431259648 unmapped: 76759040 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431259648 unmapped: 76759040 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431276032 unmapped: 76742656 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431276032 unmapped: 76742656 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431276032 unmapped: 76742656 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431276032 unmapped: 76742656 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431276032 unmapped: 76742656 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431276032 unmapped: 76742656 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431276032 unmapped: 76742656 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431284224 unmapped: 76734464 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431284224 unmapped: 76734464 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431284224 unmapped: 76734464 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431292416 unmapped: 76726272 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431292416 unmapped: 76726272 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431292416 unmapped: 76726272 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431292416 unmapped: 76726272 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431292416 unmapped: 76726272 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431292416 unmapped: 76726272 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431300608 unmapped: 76718080 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431300608 unmapped: 76718080 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431300608 unmapped: 76718080 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431300608 unmapped: 76718080 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431300608 unmapped: 76718080 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431300608 unmapped: 76718080 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431300608 unmapped: 76718080 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431300608 unmapped: 76718080 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431308800 unmapped: 76709888 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431308800 unmapped: 76709888 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431308800 unmapped: 76709888 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431308800 unmapped: 76709888 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431308800 unmapped: 76709888 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431308800 unmapped: 76709888 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431308800 unmapped: 76709888 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431308800 unmapped: 76709888 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431316992 unmapped: 76701696 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431316992 unmapped: 76701696 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431316992 unmapped: 76701696 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431325184 unmapped: 76693504 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431325184 unmapped: 76693504 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431325184 unmapped: 76693504 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431325184 unmapped: 76693504 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431325184 unmapped: 76693504 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431333376 unmapped: 76685312 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431333376 unmapped: 76685312 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431341568 unmapped: 76677120 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431341568 unmapped: 76677120 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431341568 unmapped: 76677120 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431341568 unmapped: 76677120 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431341568 unmapped: 76677120 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431341568 unmapped: 76677120 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431349760 unmapped: 76668928 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431349760 unmapped: 76668928 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431349760 unmapped: 76668928 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431349760 unmapped: 76668928 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431349760 unmapped: 76668928 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431357952 unmapped: 76660736 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431357952 unmapped: 76660736 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431357952 unmapped: 76660736 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431374336 unmapped: 76644352 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431374336 unmapped: 76644352 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431374336 unmapped: 76644352 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431374336 unmapped: 76644352 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431382528 unmapped: 76636160 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431382528 unmapped: 76636160 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431382528 unmapped: 76636160 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431382528 unmapped: 76636160 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431382528 unmapped: 76636160 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431382528 unmapped: 76636160 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 76619776 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 76619776 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 76619776 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 76619776 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 76619776 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 76619776 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 76619776 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 76619776 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 76619776 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 76619776 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431398912 unmapped: 76619776 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431407104 unmapped: 76611584 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431407104 unmapped: 76611584 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431407104 unmapped: 76611584 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431431680 unmapped: 76587008 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431431680 unmapped: 76587008 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431431680 unmapped: 76587008 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431431680 unmapped: 76587008 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431431680 unmapped: 76587008 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431431680 unmapped: 76587008 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431431680 unmapped: 76587008 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431431680 unmapped: 76587008 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431439872 unmapped: 76578816 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431439872 unmapped: 76578816 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431439872 unmapped: 76578816 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431439872 unmapped: 76578816 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431439872 unmapped: 76578816 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431439872 unmapped: 76578816 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431439872 unmapped: 76578816 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431439872 unmapped: 76578816 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431456256 unmapped: 76562432 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431456256 unmapped: 76562432 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431456256 unmapped: 76562432 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431456256 unmapped: 76562432 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431456256 unmapped: 76562432 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431456256 unmapped: 76562432 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431456256 unmapped: 76562432 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431472640 unmapped: 76546048 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431472640 unmapped: 76546048 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431480832 unmapped: 76537856 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431480832 unmapped: 76537856 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431480832 unmapped: 76537856 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431480832 unmapped: 76537856 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431480832 unmapped: 76537856 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431480832 unmapped: 76537856 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431497216 unmapped: 76521472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431497216 unmapped: 76521472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431497216 unmapped: 76521472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431497216 unmapped: 76521472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431497216 unmapped: 76521472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431497216 unmapped: 76521472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431497216 unmapped: 76521472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431497216 unmapped: 76521472 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431505408 unmapped: 76513280 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431505408 unmapped: 76513280 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431505408 unmapped: 76513280 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431505408 unmapped: 76513280 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431505408 unmapped: 76513280 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431505408 unmapped: 76513280 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431505408 unmapped: 76513280 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431505408 unmapped: 76513280 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431521792 unmapped: 76496896 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431521792 unmapped: 76496896 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431521792 unmapped: 76496896 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431529984 unmapped: 76488704 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431529984 unmapped: 76488704 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431529984 unmapped: 76488704 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431529984 unmapped: 76488704 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431529984 unmapped: 76488704 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431529984 unmapped: 76488704 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431529984 unmapped: 76488704 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431538176 unmapped: 76480512 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431538176 unmapped: 76480512 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431538176 unmapped: 76480512 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431538176 unmapped: 76480512 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431538176 unmapped: 76480512 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431538176 unmapped: 76480512 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431554560 unmapped: 76464128 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431554560 unmapped: 76464128 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431554560 unmapped: 76464128 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431562752 unmapped: 76455936 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431562752 unmapped: 76455936 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431562752 unmapped: 76455936 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431562752 unmapped: 76455936 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431562752 unmapped: 76455936 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431570944 unmapped: 76447744 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431570944 unmapped: 76447744 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431579136 unmapped: 76439552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431579136 unmapped: 76439552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431579136 unmapped: 76439552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431579136 unmapped: 76439552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431579136 unmapped: 76439552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431579136 unmapped: 76439552 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431595520 unmapped: 76423168 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431595520 unmapped: 76423168 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431595520 unmapped: 76423168 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431595520 unmapped: 76423168 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431595520 unmapped: 76423168 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431595520 unmapped: 76423168 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431595520 unmapped: 76423168 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431603712 unmapped: 76414976 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431603712 unmapped: 76414976 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431603712 unmapped: 76414976 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431603712 unmapped: 76414976 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431603712 unmapped: 76414976 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431603712 unmapped: 76414976 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431611904 unmapped: 76406784 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431611904 unmapped: 76406784 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431611904 unmapped: 76406784 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431620096 unmapped: 76398592 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431620096 unmapped: 76398592 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431620096 unmapped: 76398592 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431628288 unmapped: 76390400 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431628288 unmapped: 76390400 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431628288 unmapped: 76390400 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431628288 unmapped: 76390400 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431628288 unmapped: 76390400 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431636480 unmapped: 76382208 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431636480 unmapped: 76382208 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431636480 unmapped: 76382208 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431636480 unmapped: 76382208 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431636480 unmapped: 76382208 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431636480 unmapped: 76382208 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431644672 unmapped: 76374016 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431644672 unmapped: 76374016 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431669248 unmapped: 76349440 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431669248 unmapped: 76349440 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431669248 unmapped: 76349440 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431669248 unmapped: 76349440 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431669248 unmapped: 76349440 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431669248 unmapped: 76349440 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431669248 unmapped: 76349440 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431669248 unmapped: 76349440 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 76341248 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 76341248 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 76341248 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 76341248 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 76341248 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 76341248 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 76341248 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431685632 unmapped: 76333056 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431693824 unmapped: 76324864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431693824 unmapped: 76324864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431693824 unmapped: 76324864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431693824 unmapped: 76324864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431693824 unmapped: 76324864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431693824 unmapped: 76324864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431693824 unmapped: 76324864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431693824 unmapped: 76324864 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431710208 unmapped: 76308480 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431710208 unmapped: 76308480 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431726592 unmapped: 76292096 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431726592 unmapped: 76292096 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431726592 unmapped: 76292096 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431726592 unmapped: 76292096 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431726592 unmapped: 76292096 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431726592 unmapped: 76292096 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431734784 unmapped: 76283904 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431734784 unmapped: 76283904 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431734784 unmapped: 76283904 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431742976 unmapped: 76275712 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431742976 unmapped: 76275712 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431742976 unmapped: 76275712 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431742976 unmapped: 76275712 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431742976 unmapped: 76275712 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431751168 unmapped: 76267520 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431751168 unmapped: 76267520 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431751168 unmapped: 76267520 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431751168 unmapped: 76267520 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431751168 unmapped: 76267520 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431751168 unmapped: 76267520 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431751168 unmapped: 76267520 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431751168 unmapped: 76267520 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 76259328 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 76251136 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 76251136 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 76251136 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431767552 unmapped: 76251136 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431775744 unmapped: 76242944 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431775744 unmapped: 76242944 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431783936 unmapped: 76234752 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431783936 unmapped: 76234752 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 76226560 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 76226560 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 76226560 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 76226560 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 76226560 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431792128 unmapped: 76226560 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 76218368 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 76218368 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431800320 unmapped: 76218368 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 76210176 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 76210176 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431808512 unmapped: 76210176 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 76201984 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 76201984 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431816704 unmapped: 76201984 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 76193792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 76193792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 76193792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 76193792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 76193792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 76193792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 76193792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431824896 unmapped: 76193792 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 76177408 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 76177408 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 76177408 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 76177408 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431841280 unmapped: 76177408 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 76169216 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 76169216 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 76161024 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431857664 unmapped: 76161024 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431865856 unmapped: 76152832 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431874048 unmapped: 76144640 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431874048 unmapped: 76144640 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431874048 unmapped: 76144640 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431874048 unmapped: 76144640 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431874048 unmapped: 76144640 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431874048 unmapped: 76144640 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 76136448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 76136448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 76136448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 76136448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 76136448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 76136448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431882240 unmapped: 76136448 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431890432 unmapped: 76128256 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431898624 unmapped: 76120064 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431898624 unmapped: 76120064 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431898624 unmapped: 76120064 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431898624 unmapped: 76120064 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431898624 unmapped: 76120064 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431898624 unmapped: 76120064 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431898624 unmapped: 76120064 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431906816 unmapped: 76111872 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431906816 unmapped: 76111872 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431906816 unmapped: 76111872 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431915008 unmapped: 76103680 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431915008 unmapped: 76103680 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431915008 unmapped: 76103680 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431915008 unmapped: 76103680 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431915008 unmapped: 76103680 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431915008 unmapped: 76103680 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431923200 unmapped: 76095488 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431923200 unmapped: 76095488 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431931392 unmapped: 76087296 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431931392 unmapped: 76087296 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431931392 unmapped: 76087296 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431931392 unmapped: 76087296 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431931392 unmapped: 76087296 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431931392 unmapped: 76087296 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431947776 unmapped: 76070912 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431947776 unmapped: 76070912 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431947776 unmapped: 76070912 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431947776 unmapped: 76070912 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431947776 unmapped: 76070912 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431947776 unmapped: 76070912 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431947776 unmapped: 76070912 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431964160 unmapped: 76054528 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431972352 unmapped: 76046336 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431972352 unmapped: 76046336 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 76038144 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 76038144 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 76038144 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 76038144 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 76038144 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 76038144 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431988736 unmapped: 76029952 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431988736 unmapped: 76029952 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431988736 unmapped: 76029952 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431988736 unmapped: 76029952 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431988736 unmapped: 76029952 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431988736 unmapped: 76029952 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 431988736 unmapped: 76029952 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432013312 unmapped: 76005376 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432013312 unmapped: 76005376 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432013312 unmapped: 76005376 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432013312 unmapped: 76005376 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432013312 unmapped: 76005376 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432013312 unmapped: 76005376 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432013312 unmapped: 76005376 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432013312 unmapped: 76005376 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432029696 unmapped: 75988992 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432029696 unmapped: 75988992 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432029696 unmapped: 75988992 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432029696 unmapped: 75988992 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432029696 unmapped: 75988992 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432029696 unmapped: 75988992 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432029696 unmapped: 75988992 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432029696 unmapped: 75988992 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432046080 unmapped: 75972608 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432046080 unmapped: 75972608 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432046080 unmapped: 75972608 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432054272 unmapped: 75964416 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432054272 unmapped: 75964416 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432054272 unmapped: 75964416 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432054272 unmapped: 75964416 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432054272 unmapped: 75964416 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432062464 unmapped: 75956224 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.1 total, 600.0 interval
Cumulative writes: 63K writes, 236K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.23 GB, 0.03 MB/s
Cumulative WAL: 63K writes, 23K syncs, 2.66 writes per sync, written: 0.23 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 705 writes, 1074 keys, 705 commit groups, 1.0 writes per commit group, ingest: 0.35 MB, 0.00 MB/s
Interval WAL: 705 writes, 351 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432062464 unmapped: 75956224 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432062464 unmapped: 75956224 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432070656 unmapped: 75948032 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432070656 unmapped: 75948032 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432070656 unmapped: 75948032 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432070656 unmapped: 75948032 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432070656 unmapped: 75948032 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432078848 unmapped: 75939840 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432078848 unmapped: 75939840 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: mgrc ms_handle_reset ms_handle_reset con 0x558e0b387400
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3158772141
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3158772141,v1:192.168.122.100:6801/3158772141]
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: mgrc handle_mgr_configure stats_period=5
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432087040 unmapped: 75931648 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432087040 unmapped: 75931648 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432087040 unmapped: 75931648 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 ms_handle_reset con 0x558e0ea8a800 session 0x558e0cc3fe00
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432087040 unmapped: 75931648 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432087040 unmapped: 75931648 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432095232 unmapped: 75923456 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432103424 unmapped: 75915264 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432103424 unmapped: 75915264 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432103424 unmapped: 75915264 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432103424 unmapped: 75915264 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432103424 unmapped: 75915264 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432103424 unmapped: 75915264 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432103424 unmapped: 75915264 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432103424 unmapped: 75915264 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432103424 unmapped: 75915264 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432111616 unmapped: 75907072 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432111616 unmapped: 75907072 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432111616 unmapped: 75907072 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432111616 unmapped: 75907072 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432111616 unmapped: 75907072 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432111616 unmapped: 75907072 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432119808 unmapped: 75898880 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432128000 unmapped: 75890688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432128000 unmapped: 75890688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432128000 unmapped: 75890688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432128000 unmapped: 75890688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432128000 unmapped: 75890688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432128000 unmapped: 75890688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432128000 unmapped: 75890688 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432136192 unmapped: 75882496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432136192 unmapped: 75882496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432136192 unmapped: 75882496 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432144384 unmapped: 75874304 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432144384 unmapped: 75874304 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432144384 unmapped: 75874304 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432144384 unmapped: 75874304 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432144384 unmapped: 75874304 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432152576 unmapped: 75866112 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 75857920 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 75857920 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 75857920 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 75857920 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 75857920 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432160768 unmapped: 75857920 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432168960 unmapped: 75849728 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432177152 unmapped: 75841536 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432185344 unmapped: 75833344 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432185344 unmapped: 75833344 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432185344 unmapped: 75833344 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432193536 unmapped: 75825152 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432193536 unmapped: 75825152 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432193536 unmapped: 75825152 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432193536 unmapped: 75825152 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432201728 unmapped: 75816960 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432209920 unmapped: 75808768 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432209920 unmapped: 75808768 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432209920 unmapped: 75808768 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432209920 unmapped: 75808768 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432209920 unmapped: 75808768 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432218112 unmapped: 75800576 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432218112 unmapped: 75800576 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432218112 unmapped: 75800576 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432226304 unmapped: 75792384 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432226304 unmapped: 75792384 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a30a1000/0x0/0x1bfc00000, data 0x1ee7d54/0x211d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432373760 unmapped: 75644928 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'config diff' '{prefix=config diff}'
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'config show' '{prefix=config show}'
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'counter dump' '{prefix=counter dump}'
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'counter schema' '{prefix=counter schema}'
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: bluestore.MempoolThread(0x558e08bddb60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460784 data_alloc: 218103808 data_used: 16437248
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432242688 unmapped: 75776000 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: prioritycache tune_memory target: 4294967296 mapped: 432381952 unmapped: 75636736 heap: 508018688 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:11 np0005465987 ceph-osd[78159]: do_command 'log dump' '{prefix=log dump}'
Oct  2 09:34:11 np0005465987 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:34:11 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 09:34:11 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/38276567' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 09:34:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 09:34:12 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1045062937' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 09:34:12 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:12 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:12 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:12.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:12 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct  2 09:34:12 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3587561828' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  2 09:34:13 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:13 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:13 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:13.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:13 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct  2 09:34:13 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2273320171' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  2 09:34:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct  2 09:34:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/177922527' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  2 09:34:14 np0005465987 nova_compute[230713]: 2025-10-02 13:34:14.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:34:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct  2 09:34:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2037802205' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  2 09:34:14 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:14 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:14 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:14.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct  2 09:34:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3909806530' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  2 09:34:14 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct  2 09:34:14 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2154797546' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  2 09:34:15 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:15 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:15 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:15.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct  2 09:34:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2925061133' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  2 09:34:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct  2 09:34:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3831175002' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  2 09:34:15 np0005465987 nova_compute[230713]: 2025-10-02 13:34:15.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:34:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct  2 09:34:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1496143167' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  2 09:34:15 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct  2 09:34:15 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4099705086' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  2 09:34:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct  2 09:34:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3299708536' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct  2 09:34:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct  2 09:34:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3651288603' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  2 09:34:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  2 09:34:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2879993378' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  2 09:34:16 np0005465987 systemd[1]: Starting Hostname Service...
Oct  2 09:34:16 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:16 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000026s ======
Oct  2 09:34:16 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:16.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct  2 09:34:16 np0005465987 systemd[1]: Started Hostname Service.
Oct  2 09:34:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct  2 09:34:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2093367489' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct  2 09:34:16 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct  2 09:34:16 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1302999161' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct  2 09:34:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct  2 09:34:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/744023568' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct  2 09:34:17 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:17 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:17 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:17.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:17 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct  2 09:34:17 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/178438410' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct  2 09:34:18 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:18 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:34:18 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:18.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:34:18 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct  2 09:34:18 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/23752323' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct  2 09:34:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct  2 09:34:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/854352252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct  2 09:34:19 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:19 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:19 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:19.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:19 np0005465987 nova_compute[230713]: 2025-10-02 13:34:19.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:34:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct  2 09:34:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3202899176' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  2 09:34:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:34:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:34:19 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct  2 09:34:19 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/437354534' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct  2 09:34:20 np0005465987 nova_compute[230713]: 2025-10-02 13:34:20.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:34:20 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:34:20 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:34:20 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:20 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:20 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:20.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:20 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct  2 09:34:20 np0005465987 ceph-mon[80822]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3580891169' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct  2 09:34:21 np0005465987 radosgw[82824]: ====== starting new request req=0x7f1025df76f0 =====
Oct  2 09:34:21 np0005465987 radosgw[82824]: ====== req done req=0x7f1025df76f0 op status=0 http_status=200 latency=0.001000027s ======
Oct  2 09:34:21 np0005465987 radosgw[82824]: beast: 0x7f1025df76f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:21.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct  2 09:34:21 np0005465987 ceph-mon[80822]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
